Andrew Ng and Yann LeCun go live in person: GPT-5 research can't stop!


Editor|Meng Chen 

Source | Qubits

Large-model research must not stop! Andrew Ng and Yann LeCun even held a livestream together over the matter. After all, without action the situation keeps escalating: the call by Musk and others to halt GPT-5 development has grown from about a thousand signatures to more than 13,500. The two bluntly called a pause on AI research irrational:

A 6-month moratorium on AI research would cause real harm. It is AI products that should be regulated, not the research behind them.


The earlier open letter calling for a pause on AI experiments more powerful than GPT-4 has been signed by Yoshua Bengio, one of the pioneers of deep learning. Hinton did not sign, but remarked that "it will take longer than 6 months."

This time, Andrew Ng and LeCun not only explained their views live but also answered more of the questions netizens care about. Those who watched the livestream and its replay said the video offered more context and subtler shades of tone than tweets do.


Should we worry about AGI escaping the lab?

LeCun believes people's concerns and fears about AI fall into two categories:

1. Speculation about the future: AI spinning out of control, escaping the laboratory, even ruling over humans.

2. Concerns grounded in reality: AI's flaws in fairness and bias, and its impact on society and the economy.

For the first category, he argued that future AI is unlikely to be a ChatGPT-style language model, and you cannot write safety specifications for something that does not yet exist:

How do you design seat belts for a car that has not been invented yet?

Regarding the second category of concerns, Andrew Ng and LeCun both said that regulation is necessary, but not at the expense of research and innovation. Ng said that AI has created great value in education and healthcare and has helped many people.

Pausing AI research would hurt these people and slow value creation.

LeCun believes the doomsday narratives of "AI running amok" or "AI ruling humanity" also give people unrealistic expectations of AI. ChatGPT invites this idea because it is fluent in language, but language is not all there is to intelligence.

Language models have only a superficial understanding of the real world. Even though GPT-4 is multimodal, it still has no "experience" of reality, which is why it still confidently spouts nonsense.

LeCun had already addressed this question in a Scientific American article, "Don't Fear the Terminator," co-authored with Anthony Zador, a neuroscientist at Cold Spring Harbor Laboratory. In the livestream he reiterated its main point:

The drive to dominate appears only in social species, such as humans and other animals that must survive and evolve amid competition. We can design AI as a non-social species: non-dominant, submissive, or bound to specific rules, so that it serves the interests of humanity as a whole.


Andrew Ng drew a comparison with a milestone in the history of the biological sciences, the Asilomar Conference. In 1975, recombinant DNA technology was just emerging and its safety and effectiveness were in question. Biologists, lawyers, and government representatives from around the world met, and after public debate they reached a consensus to suspend or ban certain experiments and to propose guidelines for scientific research.


Ng believes the situation then differs from what is happening in AI today: DNA viruses escaping the laboratory was a real concern, but he sees no risk of today's AI escaping the laboratory, at least not for decades or even centuries. Answering an audience question, "Under what circumstances would you agree to pause AI research?", LeCun likewise said that "potential, real hazards" should be distinguished from "imagined hazards," and that regulatory measures should target products once real hazards emerge:

The first cars were not safe: there were no seat belts, no good brakes, no traffic lights. Past technologies became safer gradually, and AI is no different.

As for the question "What do you make of Yoshua Bengio signing the letter?", LeCun said he and Bengio have always been friends. In his view, Bengio's worry is that "it is inherently bad for for-profit companies to control the technology," which he himself does not share; where the two agree is that AI research should be conducted in the open.


Bengio also recently posted a detailed explanation on his personal website of why he signed:

With the arrival of ChatGPT, commercial competition has become more than ten times fiercer. The risk is that companies will rush to build enormous AI systems and abandon the open, transparent habits of the past decade or more.

After the livestream, Andrew Ng and LeCun kept engaging with netizens.


Asked "Why don't you believe AI will escape the lab?", LeCun said that it is very difficult to keep an AI system running on a specific hardware platform.


His response to "AI reaching a singularity and becoming abruptly uncontrollable" was that in the real world every process has friction, so exponential growth quickly flattens into a sigmoid curve.
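LeCun's "friction" argument has a standard mathematical form: logistic growth. The sketch below is an illustration added here, not from the livestream; the function names and parameter values are hypothetical choices. It compares frictionless exponential growth, dx/dt = r·x, with growth under a resource cap K, dx/dt = r·x·(1 − x/K): the two curves are nearly identical early on, but the capped one saturates instead of diverging.

```python
import math

def exponential(x0, r, t):
    # Frictionless growth dx/dt = r*x has the closed form x(t) = x0 * e^(r*t).
    return x0 * math.exp(r * t)

def logistic(x0, r, K, t):
    # Growth with a resource cap K ("friction"), dx/dt = r*x*(1 - x/K),
    # has the closed form x(t) = K / (1 + (K/x0 - 1) * e^(-r*t)).
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable; later the
# logistic curve saturates at K while the exponential keeps diverging.
for t in (0, 4, 8, 16):
    print(f"t={t:2d}  exp={exponential(1, 0.5, t):10.2f}"
          f"  logistic={logistic(1, 0.5, 100, t):7.2f}")
```

With x0 = 1, r = 0.5, K = 100, the exponential passes 2,900 by t = 16 while the logistic curve levels off just below 100, which is the shape LeCun's remark describes.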


One netizen joked that language models are often described as "parrots randomly spitting out words," but real parrots are far more dangerous: they have beaks, claws, and the intent to hurt people.


LeCun played along: Australian cockatoos are even more vicious, and "I call for a six-month ban on cockatoos."


One More Thing

More and more people are weighing in on the increasingly influential proposal for a 6-month AI moratorium.

"I don't think asking one particular group to pause will solve these problems. In a global industry, a pause is very difficult to enforce," Gates told Reuters.

According to Forbes, former Google CEO Eric Schmidt believes that "most people in regulation do not understand the technology well enough to properly regulate its development. Besides, a 6-month pause in the United States would only benefit other countries."

Meanwhile, another voice in the AI research community is gaining influence: a petition launched by the non-profit LAION-AI (provider of Stable Diffusion's training data) has gathered more than 1,400 signatures.

The project calls for building a publicly funded, international super AI infrastructure equipped with 100,000 state-of-the-art AI accelerator chips to safeguard both innovation and safety, in effect a "CERN for AI" (after CERN, the European Organization for Nuclear Research in particle physics). Supporters include well-known researchers such as Jürgen Schmidhuber, the father of LSTM, and Hugging Face co-founder Thomas Wolf.


Full video replay:
https://www.youtube.com/watch?v=BY9KV8uCtj4&t=33s

AI-generated transcript:
https://gist.github.com/simonw/b3d48d6fcec247596fa2cca841d3fb7a


[1]https://twitter.com/AndrewYNg/status/1644418518056861696

[2]https://blogs.scientificamerican.com/observations/dont-fear-the-terminator/
