Musk and more than 1,200 others have jointly signed an open letter calling for a halt to the training of more powerful AI. The reasoning behind it is thought-provoking: it is time to pour cold water on ChatGPT.

Pause Giant AI Experiments: An Open Letter

On March 29, the **Future of Life Institute** issued an open letter to society at large, "Pause Giant AI Experiments", calling on all artificial intelligence laboratories to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. The institute's mission is "to steer transformative technologies for the benefit of life, away from extreme large-scale risks."

"Robust artificial intelligence systems should only be developed if we are confident that their effects are positive and the risks are manageable," the letter reads.

It is worth noting that in a recent two-hour conversation with MIT research scientist Lex Fridman, OpenAI CEO Sam Altman said that, through OpenAI's continuous testing, the GPT series has exhibited reasoning capabilities starting with ChatGPT, yet no one can explain where this ability comes from. Even the OpenAI team and Altman himself cannot fully interpret GPT-4; they can only probe its "ideas" by repeatedly asking it questions. In other words, AI has developed reasoning abilities that cannot yet be explained, and Altman also acknowledged that there is some possibility that "AI will kill humans."

The open letter from the Future of Life Institute is titled "Pause Giant AI Experiments: An Open Letter". Here is the text of the open letter:


AI systems pose significant risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. The Asilomar AI Principles state that advanced artificial intelligence could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Yet despite the frenzy of recent months, in which AI labs have developed and deployed ever more powerful digital minds, no one can currently understand, predict, or reliably control these systems, and no corresponding level of planning and management exists.

Now that artificial intelligence is becoming competitive with humans at common tasks, we must ask ourselves: should we let machines flood our information channels with untruths? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber and outsmart us, making humans obsolete and replacing them? Should we risk losing control of our civilization? Such decisions must never be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable, and this confidence must be well justified and grow with the magnitude of a system's potential impact. OpenAI's recent statement on artificial general intelligence noted that it may be important to get independent review before starting to train future systems, and, for the most advanced efforts, to agree to limit the rate of growth of the compute used to create new models. We agree that the time to act is now.

We therefore call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. This pause should be public, verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

During the pause, AI labs and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development, subject to rigorous auditing and oversight by independent outside experts. These protocols should ensure that systems adhering to them are safe. It is worth noting that this does not mean pausing AI development in general, merely stepping back from the dangerous race toward ever larger, unpredictable research and development.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

At the same time, **AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems.** These should include, at a minimum: regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks; a robust auditing and certification ecosystem; liability for harm caused by AI; strong public funding for technical AI safety research; and well-resourced institutions to cope with the massive economic and political disruption AI could cause.

Humanity can enjoy a prosperous future with artificial intelligence. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects, and we can do so here as well. Let's enjoy a long AI summer rather than rush unprepared into a fall.

As of this writing, the letter has been signed by 1,279 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, Stability AI founder and CEO Emad Mostaque, Turing Award winner Yoshua Bengio, and Stuart Russell, author of Artificial Intelligence: A Modern Approach.


OpenAI CEO Sam Altman: "AI may indeed kill humans"

OpenAI CEO Sam Altman has said that he does not deny the view that an overly powerful AGI "may kill humans." He admitted that this is a real possibility: many predictions about AI safety and its challenges have proven wrong, and we must face up to that and try to find solutions as early as possible.

Musk has voiced his concerns about artificial intelligence many times before, and believes it is one of the biggest future risks to human civilization: a threat far greater than car accidents, plane crashes, or drug epidemics, and even more dangerous than nuclear weapons.

Since superintelligent AI is still in its infancy, and it cannot be denied that AI may bring chaos or even death to human beings, a prudent approach must be taken from the outset: researching and evaluating AI as it develops, and ensuring that it does not create new risks along the way. To avoid unexpected situations, sound technologies, evaluation systems, and policies should be adopted to effectively supervise and manage the development of artificial intelligence.

Reference material: Daily Economic News

Other materials for download

If you would like to keep learning about AI learning paths and knowledge systems, you are welcome to read my other blog post, "Heavy | Complete artificial intelligence AI learning-basic knowledge learning route, all materials can be downloaded directly from the network disk without paying attention to routines".
That post draws on well-known open-source platforms on GitHub, AI technology platforms, and experts in related fields, including Datawhale, ApacheCN, AI Youdao, and Dr. Huang Haiguang. It covers roughly 100 GB of related materials, and I hope they are helpful to everyone.


Origin blog.csdn.net/qq_31136513/article/details/129850280