ChatGPT Boss Warns: AI Could Exterminate Humanity


Posted by Xiao Xiao from Aofei Temple
Reproduced from: QbitAI

Turing Award winners Yoshua Bengio and Geoffrey Hinton have also warned that AI could drive humanity to extinction!

Just last night, an open letter with as many as 350 signatories spread rapidly. Its core message is a single sentence:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”


The open letter was signed by the CEOs of the three major AI labs: Sam Altman of OpenAI (the company behind ChatGPT), Demis Hassabis of DeepMind, and Dario Amodei of Anthropic.


It also includes many professors from universities in China and abroad. In addition to Yoshua Bengio and Geoffrey Hinton, Zhang Yaqin, Dean of Tsinghua University's Institute for AI Industry Research (AIR), and Stanford professor Martin Hellman, himself a Turing Award winner, are among them.

Over this period, discussions large and small about the risks of AI have continued. OpenAI CEO Sam Altman recently argued that AI should be regulated the way nuclear facilities are.

So, who initiated this open letter, and do any AI experts oppose it?

Why sign this letter?

The organization behind the open letter is the Center for AI Safety (CAIS), a non-profit focused on AI safety.


At the beginning of the open letter, CAIS emphasizes:

A growing number of people, including AI experts, journalists, policymakers, and the general public, are discussing a range of important and urgent risks posed by AI.

Even so, it can be difficult to voice concerns about the most severe risks of advanced AI. This open letter aims to overcome that obstacle and open up discussion, so that more experts and public figures take these risks seriously.


In fact, many of the AI experts who signed the letter have recently published blog posts about AI risk or shared their views in interviews.

For example, Yoshua Bengio recently wrote a long post warning that "the human brain is a biological machine, and a superintelligent AI that surpasses it is therefore bound to appear."


In addition, Emad Mostaque, founder of Stability AI, and David Krueger, an assistant professor at the University of Cambridge, have also raised AI's potential harms in recent interviews.

Emad Mostaque, for example, believes that within 10 years, AI companies such as Stability AI, OpenAI, and DeepMind will grow even larger than Google and Facebook.

Precisely because of this, he argues, AI risk is something that must be considered:

We may be on the cusp of sweeping changes too large for any one company or country to manage.

What about the AI experts who didn't sign?

Of course, this wave of calls for AI regulation has also drawn plenty of opposing voices.

So far, AI experts including fellow Turing Award winner Yann LeCun, former Tesla AI director Andrej Karpathy, and Andrew Ng have not signed the letter.

Andrew Ng responded in a tweet:

When I think about most of the risks to human existence: pandemics, climate change leading to massive depopulation, asteroids...

AI will be a key part of our search for solutions. So, if you want humanity to survive and thrive for the next 1,000 years, let AI develop faster, not slower.


LeCun retweeted in agreement:

The reason "superhuman" AI isn't at the top of the risk list is largely because it doesn't exist yet. At least it's too early to discuss the safety of an AI that's as intelligent as a dog's (let alone a human's) has been designed.


In response, New York University professor Gary Marcus added:

Don't narrow the frame to just focus on the risk of human extinction. There are many other serious risks of AI, including the possibility of aiding in the development of biological weapons.


However, Gary Marcus also did not sign the 350-person open letter.

The opponents, many of them AI experts, mainly hold two views.

One view is that AI technology does carry risks, but the wording of this open letter is far too broad.

Some said they could still have signed, had the statement been more precise, for instance: “mitigating the potential risks of AI technologies should be a top priority for the tech industry, governments, and academic researchers in the field.”


Others felt the letter's content is simply exaggerated, given that AI can even help reduce risk in areas such as climate change.


The other view holds that the letter is nothing more than those who currently possess the most advanced AI technology trying to steer AI's future direction.

Take OpenAI CEO Sam Altman as an example:

If Sam Altman were really worried, he could shut down ChatGPT's servers right now.


So, given the current pace of progress, do you think AI could drive humanity to extinction?

Reference links:
[1]https://www.safe.ai/statement-on-ai-risk
[2]https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html?smtyp=cur&smid=tw-nytimes
[3]https://twitter.com/AndrewYNg/status/1663584330751561735
[4]https://www.reddit.com/r/MachineLearning/comments/13vls63/n_hinton_bengio_and_other_ai_experts_sign/



Source: blog.csdn.net/amusi1994/article/details/130998735