Kai-Fu Lee, Sam Altman, and Robin Li named global AI leaders on Time's list of the world's 100 most influential people in AI

Xifeng, reporting from Aofeisi
Qubit | Public account QbitAI

Time has released its first-ever list of the most influential people in AI.

It gathers 100 leaders from academia and industry.

They include Professor Andrew Ng, Professor Fei-Fei Li, Sinovation Ventures CEO Kai-Fu Lee, Baidu CEO Robin Li, OpenAI CEO Sam Altman, Elon Musk, NVIDIA CEO Jensen Huang, AlphaGo's "father" Demis Hassabis, and Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences.

It is worth noting that Musk made the list as the founder of xAI.


In addition, several science fiction writers and musicians appear on the list as "shapers." The youngest person on the list is just 18, and the oldest is 76-year-old Geoffrey Hinton.

Time AI 100

Time's list divides these influential figures in AI into four categories: leaders, innovators, shapers, and thinkers.

Andrew Ng, Sinovation Ventures' Kai-Fu Lee, Baidu's Robin Li, OpenAI CEO Sam Altman, Anthropic CEO and President Dario and Daniela Amodei, Elon Musk, and OpenAI co-founder Greg Brockman are all in the leaders category.

People who think about and explore how artificial intelligence affects society in creative ways are also recognized, including musician Grimes, Chinese-American science fiction writer Ted Chiang, and "Black Mirror" creator Charlie Brooker.

The list also honors scientists, professors, and researchers committed to AI ethics, bias, and safety. Zeng Yi of the Chinese Academy of Sciences, Fei-Fei Li, and the three "godfathers" of deep learning are among the thinkers.

What did the people on the list say?

Time also interviewed almost everyone on the list to hear what the most influential people in AI had to say.

Jensen Huang

Everyone is a programmer now. You just have to say something to the computer.


Jensen Huang co-founded Nvidia in 1993. In 2001, he was named to Fortune's list of the 40 richest people under 40; in 2020, he was included in the "2020 Forbes Global Billionaires List."

In May of this year, Nvidia became the first chip company to reach a market value of US$1 trillion, and the ninth company in history to join the trillion-dollar club.

Robin Li

Artificial intelligence now has logical reasoning capabilities that were previously out of reach.


Robin Li founded Baidu in 2000. In 2018, he appeared on the cover of Time's first issue of the year, becoming the first Chinese internet entrepreneur to make the magazine's cover.

Robin Li compared the current inflection point to the birth of the mobile internet, which spawned applications like Uber, WeChat, and TikTok; this one, he expects, will generate "millions of new, AI-oriented applications."

Kai-Fu Lee

More work needs to be done. Once AI becomes powerful enough to come up with things we never knew before, it could be used to devise new ways to harm others, create weapons, or manipulate people with misinformation for gain.


In 2018, Sinovation Ventures Chairman and CEO Kai-Fu Lee wrote that artificial general intelligence (AGI), a hypothetical future technology that can perform most cognitive tasks better than humans, was still decades away.

Now, however, Kai-Fu Lee tells Time that the rapid development of large language model (LLM) applications like ChatGPT means that "in some ways we have already achieved it, and in other ways it is right in front of our eyes."

Zeng Yi

I feel like I have a responsibility to let the world know.


Zeng Yi is a researcher at the Institute of Automation, Chinese Academy of Sciences. His research spans brain-inspired AI and the philosophy and ethics of AI.

Since 2016, Zeng Yi has paid increasing attention to the risks posed by AI systems and has spent more time working with policymakers to formulate rules conducive to AI's healthy development. He participated in drafting UNESCO's Recommendation on the Ethics of Artificial Intelligence.

Demis Hassabis

One of the most urgent things that needs to happen in the field of AI research is the development of the right benchmarks for assessing capabilities.


Demis Hassabis is the CEO of DeepMind. Leading teams of computer scientists, he has driven many breakthroughs in AI, such as cracking protein folding, and he is also known as the father of AlphaGo.

DeepMind was acquired by Google in 2014; in April 2023, Google reorganized its AI efforts and merged DeepMind with Google Brain. Hassabis said, "Both teams were already doing all these things before. But now I would say we are pushing with greater intensity and speed."

Elon Musk

“TruthGPT”


Musk is the CEO of Tesla, founder and CEO of SpaceX, owner of Twitter (X), and a founder of Neuralink.

After leaving OpenAI's board in 2018, Musk later pledged to build his own chatbot, "TruthGPT." Though a leader in the AI field himself, he remains worried that AI could pose a threat to humanity.

Sam Altman

We have a responsibility to tell policymakers and the public what we think is happening and what might happen, and to put the technology out into the world so people can see it.


Sam Altman is the CEO of OpenAI. In November 2022, OpenAI released ChatGPT, which shocked the world.

He told Time that humans are smart and adaptable enough to cope with the release of increasingly powerful AI into the world. As long as AI arrives safely and gradually, "society has the ability to adapt, because people are much smarter than many so-called experts think, and we can manage this."

Andrew Ng

The only way to build “AI applications” is to empower many people around the world to use these tools.


In 2012, Ng, then a professor at Stanford University, submitted a proposal to Google's leadership: the company should use massive amounts of computing power to train neural networks, putting it on a path toward artificial general intelligence (AGI).

Ten years ago, anyone discussing this topic would have been seen as a crank. Even so, Ng's attitude toward AGI at the time was "very optimistic."

And today he is doubling down: "I don't think we have any reason to believe that day won't arrive. Sometimes it feels very far away, but I'm very confident." Still, "if the only secret is scaling up existing transformer networks, I don't think that will get us there; we still need additional technological breakthroughs."

Fei-Fei Li

If we teleported ourselves back to any moment in history, the moment fire was discovered, the moment the steam engine was built, the moment electricity was discovered, I think our discussion would be very similar: technology is a double-edged sword.

Technology gives power, but with that power comes danger. I think the same is true for AI.


Professor Fei-Fei Li understands the promise and the dangers she describes better than most. Her research laid the foundation for today's AI image recognition systems and expanded AI's applications in healthcare.

Recently, as AI developers pour ever more computing power into training their systems, academia has come under financial strain. Li said she is "worried about the global resource gap between academia and industry."

Yann LeCun

I could stay silent, but that's not my style.


Time described LeCun this way: "He still dares to make bold, controversial remarks and argue with anyone who disagrees with him."

LeCun also said that while the current performance of large language models is surprising, it is not enough: we won't reach human-level intelligence just by making these systems bigger and training them on more data.

I've been trying to calm everyone's excitement down a little bit by saying, "Wait a minute, this isn't the end of the game."

Geoffrey Hinton

These things (AI) are going to become smarter than us and take over. If you want to know what that feels like, ask a chicken.


Hinton, 76, has spent his career building AI systems that mimic the human brain. He joined Google in 2013. He long believed the human brain was more powerful than the machines he and others were building, and that making machines more brain-like would make them more powerful.

But in February, he came to believe that "the digital intelligence we have now may already be more powerful than the brain; it just isn't big enough yet."

In May, Hinton resigned as vice president and engineering fellow at Google and subsequently gave a series of interviews, explaining that he had quit so he could speak freely about the dangers of AI, and that he regretted his role in advancing the technology.

Yoshua Bengio

If we can reduce the chance of an adverse event tenfold, then let's do it.


In May, Bengio, now 59, began speaking publicly about the risks of AI. He and Hinton believe that within the next 5 to 20 years, AI could be developed that surpasses humans at all tasks.

He voiced concern about the risks that AI's rapid development may bring: "It is psychologically very challenging, because you realize that the thing you have been working so hard on, believing it to be a great benefit to society, humanity, and science, could actually be disastrous."

Reference link:
[1]https://time.com/collection/time100-ai/?utm_source=roundup&utm_campaign=20230202


Origin blog.csdn.net/QbitAI/article/details/132769489