Like Nuclear War, AI Could Exterminate Humanity: Geoffrey Hinton, Sam Altman and 100 Experts Sign Open Letter

Turing Award winners, CEOs of leading AI companies, professors at top universities, and hundreds of other experts with standing in their respective fields have signed an open letter that is brief but powerful:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.


The open letter was published by the Center for AI Safety (CAIS), which said the short statement was intended to open up discussion about the broad set of important and urgent risks posed by AI.

The list of signatories includes many familiar names:

  • Turing Award winners Geoffrey Hinton and Yoshua Bengio;

  • OpenAI CEO Sam Altman, Chief Scientist Ilya Sutskever, and CTO Mira Murati;

  • Google DeepMind CEO Demis Hassabis, along with many of its research scientists;

  • Anthropic CEO Dario Amodei;

  • and professors from UC Berkeley, Stanford and MIT.


In a related press release, CAIS said it hoped to "put up guardrails and set up institutions so that AI risks don't catch us off guard," likening today's warnings about AI to J. Robert Oppenheimer, the "father of the atomic bomb," warning about the potential effects of his creation.

However, some experts in AI ethics disagree. Dr. Sasha Luccioni, a machine learning research scientist at Hugging Face, likened the open letter to "sleight of hand."

Placing hypothetical existential risks from AI alongside very real risks such as pandemics and climate change makes the message intuitive to the public and easier to sell, she said.

But it's also misleading, "drawing the public's attention to one thing (future risks) so they don't think about another (current tangible risks such as bias, legal issues)."

Andrew Ng and Yann LeCun have long been enthusiastic proponents of AI technology. After the open letter was released, Ng shared his personal views on Twitter:

When I think about the existential risks to large parts of humanity:

  • the next pandemic;

  • climate change → massive population decline;

  • another asteroid.

AI will be a key part of our solution. So if you want humanity to survive and thrive for the next 1,000 years, let's make AI go faster, not slower.

Yann LeCun later retweeted this and joked that until we have even a basic design for dog-level AI (let alone human-level AI), discussing how to make it safe is premature.


Since the advent of large AI models such as ChatGPT and GPT-4, some AI safety researchers have begun to worry that a superintelligent AI far smarter than humans will soon emerge, escape human control, and take over or destroy human civilization.

Figure | An AI-generated image of "AI taking over the world"

While this so-called long-term risk weighs on some minds, others argue that signing a vague open letter on the topic is an easy way for companies that may be responsible for other AI harms, such as deepfakes, to deflect attention. According to Luccioni, "It makes the people who signed the letter the heroes of the story, because they are the ones who created this technology."

To critics such as Luccioni, AI technology is already far from harmless. They see the prioritization of hypothetical future threats as a diversion from existing AI harms, which raise thorny ethical questions that companies would rather forget about.

So even if AI might one day threaten humanity, these critics argue that focusing on an unspecified doomsday scenario in 2023 is neither constructive nor helpful. They point out that you can't study something that isn't real.

"Trying to solve imaginary tomorrow's problems is a complete waste of time. Solve today's problems and tomorrow's problems will be solved by the time we get there."

Reference links:

https://www.safe.ai/statement-on-ai-risk#open-letter

https://www.safe.ai/press-release


Source: https://blog.csdn.net/AMiner2006/article/details/130968114