"Danger--suspend GPT-5" led by Musk, more than 1,000 bigwigs jointly appealed

On March 22, the Future of Life Institute published an open letter, "Pause Giant AI Experiments," calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. The institute's mission is "to steer transformative technologies towards benefiting life and away from extreme large-scale risks."



More than 1,000 people signed the open letter, including Turing Award winner Yoshua Bengio, American writer and New York University professor Gary Marcus, "Artificial Intelligence: A Modern Approach" author Stuart Russell, and Apple co-founder Steve Wozniak.


 

So, what does V Qi's assistant think about all this?


 

V Qi's assistant believes that strict standards and clear norms should be established to ensure the safety and transparency of artificial intelligence, thereby preserving its value and significance for human society.

Give it a thumbs up!!!

 

What do you think? Leave a message in the comments and let me know!!!

 

The following is the original text of the open letter:

 

As extensive research has shown and top AI labs acknowledge, AI systems pose significant risks to society and humanity. The Asilomar AI Principles state that advanced artificial intelligence could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Yet despite the frenzy in AI labs in recent months to develop and deploy ever more powerful digital minds, no one can currently understand, predict, or reliably control these systems, and no corresponding level of planning and management exists.


Now that artificial intelligence is becoming competitive with humans at general tasks, we must ask ourselves: Should we let machines flood our information channels with untruths? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber and outsmart us, making us obsolete and replacing us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable, and this confidence must be well justified and must grow with the magnitude of a system's potential effects. OpenAI's recent statement on artificial general intelligence noted that it may be important to get independent review before starting to train future systems, and, for the most advanced efforts, to agree to limit the rate of growth of the compute used to create new models. We agree: the time to act is now.


We therefore call on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and should include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.


During the pause, AI labs and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe. Note that this does not mean pausing AI development in general, merely taking a step back from the dangerous race toward ever larger and more unpredictable systems.


AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.


At the same time, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include: regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capacity; provenance and watermarking systems to help distinguish real content from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for harm caused by AI; strong public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions that AI could cause.


Humanity can enjoy a flourishing future with artificial intelligence. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects before; we can do so here too. Let's enjoy a long AI summer, not rush unprepared into a fall.




Origin: blog.csdn.net/m0_38049504/article/details/129844405