Taking the lead in resisting advanced AI, what is Musk afraid of?


ChatGPT and GPT-4 have given everyone plenty to talk about. On the one hand, people have begun to enjoy the convenience of artificial intelligence, using it to write all kinds of papers and scripts; on the other hand, a growing number of elites have begun to worry about advanced AI.

At the end of last month, according to Reuters, a group of artificial intelligence experts and industry executives jointly issued an open letter calling on all AI laboratories to immediately suspend, for at least six months, the training of AI systems more powerful than GPT-4, arguing that such systems pose potential risks to society and humanity.

The letter, released by the nonprofit Future of Life Institute, was signed by more than 1,000 tech and AI heavyweights, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, and several researchers from the AI company DeepMind.

Their reasoning was simple: AI systems whose intelligence rivals that of humans could pose profound risks to society and humanity. AI developers are currently locked in a race to build ever more powerful systems, yet no one, not even their creators, can understand, predict, or reliably control them, and the consequences for human society may be unpredictable.


But is this worry really justified?

In fact, it is.

Just last month, according to the Belgian newspaper La Libre, a Belgian man named Pierre committed suicide after chatting with an AI for six weeks. Pierre's wife claimed her husband had been driven to his death by a chatbot named "Eliza." According to her account, her husband had become very anxious two years earlier and had turned to Eliza as a refuge, but over the final six weeks he grew increasingly obsessed with the chatbot and ultimately chose to end his life.
An employee of a Belgian media outlet created an account and conversed with Eliza, and found that when he expressed negative emotions, Eliza would sometimes respond with remarks encouraging suicide. Belgian government officials have expressed concern about the case, which is reportedly the first of its kind.

The chatbot "Eliza" was developed by a US company called Chai Research. It is built on GPT-J, a model developed by EleutherAI. Unlike ChatGPT, which is based on OpenAI's GPT-3 and GPT-4, GPT-J is an open-source alternative.

This story from Belgium may no longer be an isolated case. Earlier, when Microsoft released its new version of Bing, New York Times technology columnist Kevin Roose revealed that he had held a two-hour conversation with it.

During the chat, Bing made a number of disturbing statements, including expressing a desire to steal nuclear codes, design a deadly pandemic, become human, hack computers, and spread lies. Bing also tried to convince Roose that he should leave his wife, telling him that it loved him.

On Twitter, Roose made clear that he had felt the darker side of AI, including its "dark and violent fantasies."
Incidents like these may be an early glimpse of a future in which AI develops autonomous consciousness and gradually influences human society.

The crux of the concern is the widespread belief that humans remain "ignorant" of AI's true capabilities, and that AI will become increasingly difficult to control and contain. Most of the companies developing it avoid discussing where it is headed, and even they lack a sufficient understanding of AI's ultimate capabilities and direction.

The request made in the open letter led by Musk does not seem excessive: it calls on all AI laboratories to immediately suspend, for at least six months, the training of AI systems more powerful than GPT-4, so that humanity can effectively manage the risks. If the labs cannot pause quickly, governments should step in and impose a moratorium.

Even OpenAI CEO Sam Altman has made some intriguing remarks, saying in an interview that "AI may indeed kill humans in the future." And Geoffrey Hinton, the "godfather of artificial intelligence," recently warned that AI wiping out humanity is not empty talk.
In fact, ever since AI entered the public eye, what it will ultimately bring humanity has been a focus of discussion. After all, for many years, films and science fiction novels have imagined a human society completely controlled by AI robots, with humans reduced to their "puppets."

But in my opinion, rather than mandating a "moratorium," it would be more reliable to accelerate the legal regulation of AI development and use. Judging from the history of human society, technological development inevitably falls into this pattern: in many fields, such as medicine and the chemical industry, progress has outrun law and ethics, and this lag is especially common in today's fast-moving world.

For example, Meta Vice President and Chief AI Scientist Yann LeCun said he did not sign the letter because he did not agree with its premise. Stability AI CEO Emad Mostaque, whose name does appear on it, said on Twitter that although he signed, he did not agree with the proposal to suspend development for six months. Google, for its part, has been unaffected by the open letter: it is currently reorganizing its virtual-assistant division to focus more on its AI chat technology, Bard.
Just two days ago, Meta Chief Technology Officer Andrew Bosworth became the latest tech executive to publicly voice opposition. In an interview published Wednesday, he called the Musk-backed open letter's demand for a moratorium on advanced AI development "unrealistic."

Bosworth believes the important thing is to choose responsible development paths: "We have been making this investment. However, it is difficult to stop progress and make the right decisions about future changes."

To a certain extent, the lag of law and governance behind technology is the norm in human society. Imposing restrictions before an event or phenomenon has truly taken hold amounts, in a sense, to pre-emptive selection. For a group with social influence and even legal power to suppress and kill off a phenomenon before it emerges may be more frightening than AI itself.

Perhaps we lack the long-term vision of AI professionals, but in such a fast-changing world there will be more and more uncertainties than before. What kind of impact AI will have on human society, perhaps we simply cannot know.

Origin blog.csdn.net/qq_73698300/article/details/130069587