Fei-Fei Li, Andrew Ng, and others share their top ten AI predictions for 2024! With GPU compute in short supply, will AI agents take off within a year?

2023, the breakout year for large models, is drawing to a close. Looking ahead, Bill Gates, Fei-Fei Li, Andrew Ng, and others have each offered their predictions for how artificial intelligence will develop in 2024.

2023 can fairly be called the springtime of artificial intelligence.

Over the past year, ChatGPT has become a household name, and the twists and turns of AI and the companies behind it have by turns shocked us and supplied our dinner-table conversation.

Generative AI made major strides during the year, helping AI startups attract substantial funding.

Leading figures in the AI field have begun discussing the possibility of AGI, and policymakers are starting to take AI regulation seriously.

But in the eyes of leaders in AI and the broader technology industry, the AI wave may have only just begun, and each year ahead could prove more turbulent than the last.

Bill Gates, Fei-Fei Li, Andrew Ng, and others have all recently shared their views on where AI is headed.

All of them anticipate larger multimodal models, more exciting new capabilities, and more conversation about how we use and regulate this technology.

Bill Gates: 2 predictions, 1 lesson, 5 questions

Bill Gates published a lengthy article on his official blog, describing 2023, in his eyes, as the beginning of a new era.

Article address: https://www.gatesnotes.com/The-Year-Ahead-2024?WT.mc_id=20231218210000_TYA-2024_MED-ST_&WT.tsrc=MEDST#ALChapter2

As usual, the post starts from his work at the Gates Foundation and discusses the far-reaching changes that have occurred, or will occur, around the world.

Regarding the development of AI technology, he said:

If I had to make a prediction, in a high-income country like the United States, I would guess that we are still 18 to 24 months away from widespread use of AI by the general public.

In African countries, I expect to see similar levels of usage in three years or so. There's still a gap, but it's a much shorter lag than we've seen with other innovations.

Bill Gates believes that AI, as the most far-reaching innovative technology on earth, will completely sweep the world within three years.

Gates said in the post that 2023 was the first year he used artificial intelligence at work for "serious reasons."

Compared with previous years, the world now has a better understanding of what AI can do on its own and where it works best as an auxiliary tool.

But for most people, AI is still some way from playing its full role in the workplace.

Based on his own data and observational experience, he says the single most important lesson the industry should learn is: a product must work for the people who use it.

He gave a simple example: Pakistanis usually send each other voice messages instead of text messages or emails. Therefore, it makes sense to create an app that relies on voice commands rather than typing long queries.

Looking at the areas he cares about most, Gates raised five questions, hoping artificial intelligence can play a major role in each:

- Can artificial intelligence combat antibiotic resistance?

- Can artificial intelligence create personalized tutors for each student?

- Can artificial intelligence help treat high-risk pregnancies?

- Can artificial intelligence help people assess their risk of contracting HIV?

- Can artificial intelligence make it easier for every medical worker to obtain medical information?

If we invest wisely now, AI can make the world a fairer place. It could reduce or even eliminate the lag time between the rich world getting innovation and the poor world getting it.

Andrew Ng: LLMs can understand the world, and bad AI regulation is worse than none

Andrew Ng said recently in an interview with the Financial Times that AI doomsday scenarios are ridiculous, and that AI regulation will hinder the development of the technology itself.

In his view, current AI-related regulatory measures have done almost nothing to prevent the problems they target; beyond hindering technological progress, such ineffective regulation brings no positive benefit.

Therefore, in his view, no regulation at all is better than low-quality regulation.

He cited the recent example of the U.S. government asking big technology companies to voluntarily commit to watermarking AI-generated content as a way to combat problems such as misinformation.

He noted that since the White House secured those voluntary commitments, some companies have stopped watermarking text content, and he therefore argues that voluntary commitment has failed as a regulatory approach.
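For context, the most-discussed text watermarking schemes bias generation toward a pseudo-randomly chosen "green list" of tokens seeded by the preceding token; a detector then checks whether suspiciously many tokens are green. The sketch below shows only the detection side as a toy, not any company's actual scheme; the hash seeding and the 50% split are illustrative assumptions:

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically pick a 'green' subset of the vocabulary, seeded by the previous token."""
    ranked = sorted(
        vocab,
        key=lambda t: hashlib.sha256((prev_token + "|" + t).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens falling in the green list seeded by their predecessor.

    Unwatermarked text should hover near `fraction` (0.5 here); watermarked
    text, whose generator favored green tokens, scores noticeably higher.
    """
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

A real detector works over a model tokenizer's full vocabulary and converts the green count into a z-score before flagging text as machine-generated.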

If regulators carry this ineffective approach over to issues such as regulating open-source AI, it could stifle open-source development entirely and hand large technology companies a monopoly.

If AI regulation stays at its current level, there is really no point in regulating at all.

Ng reiterated that he would actually like governments to craft good regulation rather than the poor proposals we are seeing now, so he is not advocating a hands-off approach; but between bad regulation and no regulation, he would choose no regulation.

Ng also mentioned in the interview that today's LLMs show the beginnings of a world model.

"From the scientific evidence I have seen, AI models can indeed build models of the world. If an AI has a model of the world, then I tend to believe it does understand the world, though that is my own interpretation of the word 'understanding.'

If you have a model of the world, you understand how the world works and can predict how it will evolve under different scenarios. There is scientific evidence that LLMs can indeed build a model of the world after being trained on large amounts of data."

Fei-Fei Li and Stanford HAI release seven predictions

Knowledge worker challenges

Erik Brynjolfsson, director of the Stanford Digital Economy Lab, and others predict that AI companies will finally deliver products that genuinely improve productivity.

Knowledge workers will be affected as never before; the jobs of creative professionals, lawyers, and finance professionals, for example, will change dramatically.

These people have been largely untouched by the computer revolution over the past 30 years.

We should embrace the changes brought about by artificial intelligence, making our jobs better and allowing us to do new things that we couldn't do before.

Proliferation of false information

James Landay, a professor in Stanford's School of Engineering, and others believe we will see powerful new large multimodal models, especially in video generation.

So we must also be more vigilant about serious deepfakes, something we need to be aware of both as consumers and as people.

We will see companies like OpenAI, along with more startups, releasing the next generation of larger models.

We will still see plenty of debate over "Is this AGI? What is AGI?" But we need not worry about AI taking over the world; that is all hype.

What we should really be worried about is the harm happening now: disinformation and deepfakes.

GPU shortage

Stanford University professor Russ Altman and others expressed concern about the global GPU shortage.

Big companies are trying to bring AI capabilities in-house, and GPU manufacturers like Nvidia are already operating at full capacity.

GPUs, and the AI computing power they provide, represent the competitiveness of the new era, for companies and even for countries.

The fight for GPUs will also put tremendous pressure on innovators to come up with hardware solutions that are cheaper and easier to make and use.

Stanford University and many other research institutions are working on low-power alternatives to current GPUs.

This work still has a long way to go to achieve large-scale commercial use, but in order to democratize AI technology, we must continue to move forward.

More useful agents

Peter Norvig, Distinguished Education Fellow at Stanford HAI, believes agents will come into their own in the next year, as AI becomes able to connect to other services and solve real problems.

2023 was the year of chatting with AI; the relationship between people and AI was limited to interaction through typed input and output.

And in 2024, we will see agents gain the ability to do work for humans: making reservations, planning trips, and more.
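To make the idea concrete, an agent in this sense can be reduced to a loop that executes "tool calls" against a registry of outside services. The sketch below is purely illustrative: the tool names, their behavior, and the hard-coded plan are all invented, and in a real agent an LLM would produce the plan from the user's request:

```python
# Minimal sketch of a tool-calling agent; the "tools" are invented stubs
# standing in for real booking and travel APIs.

TOOLS = {
    "book_table": lambda restaurant, time: f"Booked {restaurant} at {time}",
    "find_flight": lambda origin, dest: f"Cheapest flight {origin}->{dest}: FL123",
}

def run_agent(plan):
    """Execute a model-produced plan: a list of (tool_name, kwargs) steps."""
    results = []
    for tool_name, kwargs in plan:
        tool = TOOLS.get(tool_name)
        if tool is None:
            results.append(f"Unknown tool: {tool_name}")
        else:
            results.append(tool(**kwargs))
    return results

# A real agent would ask an LLM to derive this plan from a request like
# "Fly me to New York and book dinner."
plan = [
    ("find_flight", {"origin": "SFO", "dest": "JFK"}),
    ("book_table", {"restaurant": "Luigi's", "time": "19:00"}),
]
print(run_agent(plan))
# → ["Cheapest flight SFO->JFK: FL123", "Booked Luigi's at 19:00"]
```

The hard parts in practice are exactly what this sketch omits: getting the model to emit valid plans, handling tool failures, and deciding when the agent should stop and ask the human.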

In addition, we will move towards multimedia.

So far the focus has been on language models, then image models. Eventually we will also have enough processing power to develop video models, which will be very interesting.

What we train on now is highly purposeful: people write pages and paragraphs about things they find interesting and important, and people point cameras at particular events they want to record.

But with video, there are cameras that run 24 hours a day, capturing whatever happens with no filtering and no purpose behind the framing.

AI models have never had this kind of data before, and it will give them a better understanding of everything.

Hope for regulation

Fei-Fei Li, co-director of Stanford HAI, said that in 2024, artificial intelligence policy will be worth watching.

Our policies should ensure that students and researchers have access to AI resources, data, and tools to provide more opportunities for AI development.

In addition, we need to develop and use artificial intelligence in ways that are safe, secure, and trustworthy.

Therefore, in addition to nurturing a vibrant AI ecosystem, policies should also focus on leveraging and managing AI technologies.

We need relevant legislation and executive orders, and the relevant public sectors should receive more investment.

Ask questions and give solutions

Ge Wang, senior HAI researcher at Stanford University, hopes that we will have enough funding to study what life, community, education and society can gain from artificial intelligence.

Increasingly, this generative AI technology will be embedded in how we work, play, and communicate.

We need to give ourselves time and space to think about what is allowed and where we should limit it.

As early as February this year, academic publisher Springer Nature issued a statement saying that large language models may be used when drafting articles, but may not be credited as co-authors on any publication. The reason it cited was accountability, which is very important.

Put something out there carefully, lay out the rationale, and say: this is our current understanding, and further improvements may be incorporated into the policy later.

Institutions and organizations must adopt this mindset and strive to put it on paper in 2024.

Companies will face complex regulations

Jennifer King, HAI privacy and data policy researcher at Stanford University, said that in addition to this year’s EU Artificial Intelligence Act, California and Colorado will pass regulations by mid-2024 to address automated decision-making in the context of consumer privacy.

While these regulations are limited to AI systems that are trained on or collect personal information, both give consumers the choice of whether to allow certain systems to use AI together with their personal information.

Companies will have to start thinking about what it will mean when customers exercise their rights, particularly collectively.

For example, if a large company uses artificial intelligence to assist with recruiting, what happens if hundreds of applicants refuse to let AI review them? Must those resumes be reviewed manually? What difference would that make? Would humans do better? We are only beginning to grapple with these questions.

References:

  • https://x.com/StanfordHAI/status/1736778609808036101?s=20

  • https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3

  • https://www.businessinsider.com/bill-gates-ai-radically-transform-jobs-healthcare-education-2023-12

Origin blog.csdn.net/English0523/article/details/135126453