Stanford's 2023 AI Index Report: Trends, Costs, Abuse, Funding, People, Environment, Legislation, Opinions...


This article is about 2,800 words; estimated reading time 5 minutes. It surveys the landscape of AI applications through 10 charts.

On April 3, 2023, the Stanford Institute for Human-Centered Artificial Intelligence (Stanford HAI) officially released the Artificial Intelligence Index Report 2023. This is the institute's sixth annual report analyzing the impact of artificial intelligence and the year's trends. At 302 pages, the report is almost 60% longer than the 2022 edition.


The new report reveals several key trends in the AI industry through 2022:

  • AI continues to post state-of-the-art results on many benchmarks, but year-over-year improvements are minimal in several areas, and benchmarks are reaching saturation faster than before. Many traditional benchmarks for measuring AI progress, such as ImageNet and SQuAD, now seem insufficient. New, more comprehensive benchmark suites such as BIG-bench and HELM have been released to challenge increasingly capable AI systems.

  • Generative AI models such as DALL-E 2, Stable Diffusion, and ChatGPT have become part of the zeitgeist. They display impressive capabilities while also raising a host of ethical questions. Text-to-image models are often biased along gender lines, and chatbots such as ChatGPT can produce misinformation or be used for nefarious purposes.

  • Recent AI advances have been fueled by large language models (LLMs), which keep getting bigger and more expensive. For example, PaLM, one of the flagship models released in 2022, cost 160 times more and is 360 times larger than GPT-2, one of the first LLMs, launched in 2019.

  • AI is helping to accelerate scientific progress. In 2022, AI models were used to control hydrogen fusion, improve the efficiency of matrix operations, and generate new antibodies. AI is also starting to build better AI: Nvidia used AI reinforcement learning agents to improve the design of the chips that power AI systems, and Google recently used PaLM, one of its LLMs, to suggest ways to improve that same model.

The 2023 report includes more original data and analysis from the AI Index team than ever before. This year's report also includes new analyses of foundation models, including their geopolitics and training costs, the environmental impact of AI systems, K-12 AI education, and trends in AI public opinion. The report also expanded its tracking of global AI legislation from 25 countries in 2022 to 127 in 2023.

The global landscape of AI applications in 10 charts

1. Training large language models is expensive

While the capabilities of large language models such as ChatGPT have grown significantly, so has the cost of training them. Among all machine learning systems, language models consume the most computing resources.


Figure 1 Training costs of various large language models
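To give a rough sense of why these costs run so high, training compute is commonly approximated as 6 × parameters × training tokens; dividing by hardware throughput and multiplying by a rental price yields an order-of-magnitude cost. This is a back-of-envelope sketch, not a method from the report, and all constants below (throughput, GPU-hour price) are hypothetical round numbers:

```python
# Back-of-envelope LLM training cost estimate (illustrative, not from the report).
# Assumes the common approximation: training FLOPs ≈ 6 * parameters * tokens.
def training_cost_usd(params, tokens, flops_per_sec=3e14, usd_per_gpu_hour=2.0):
    total_flops = 6 * params * tokens            # total training compute
    gpu_seconds = total_flops / flops_per_sec    # time at assumed throughput
    gpu_hours = gpu_seconds / 3600
    return gpu_hours * usd_per_gpu_hour

# Example: a hypothetical 100B-parameter model trained on 300B tokens
print(f"~${training_cost_usd(100e9, 300e9) / 1e6:.2f}M")
```

Real training runs cost far more than this lower bound suggests, since effective throughput, failed runs, and experimentation all inflate the bill.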

2. AI both contributes to and helps reduce carbon emissions

While estimating the carbon footprint of an AI system is not easy, the AI Index team produced best-effort estimates by considering the number of parameters in the model, the energy efficiency of the data center, and the carbon intensity of the electricity used. According to Luccioni et al. (2022), BLOOM's training run emitted 25 times more carbon than a one-way air passenger flying from New York to San Francisco. Meanwhile, new reinforcement learning models like BCOOLER show that AI systems can also be used to optimize energy use.


Figure 2 Carbon emission calculation of large language model
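The estimation approach described above can be sketched as a simple chain of multiplications: hardware energy use, scaled by datacenter overhead (PUE) and by the carbon intensity of the local grid. The constants here are illustrative assumptions, not figures from the report:

```python
# Minimal sketch of a training-run carbon estimate (all constants are
# illustrative assumptions, not figures from the AI Index Report).
def training_emissions_tco2(gpu_hours, watts_per_gpu=400, pue=1.2,
                            grid_kgco2_per_kwh=0.4):
    gpu_energy_kwh = gpu_hours * watts_per_gpu / 1000   # energy drawn by GPUs
    facility_kwh = gpu_energy_kwh * pue                 # datacenter overhead
    kg_co2 = facility_kwh * grid_kgco2_per_kwh          # grid carbon intensity
    return kg_co2 / 1000                                # tonnes of CO2

# Example: a hypothetical 1M GPU-hour training run
print(f"{training_emissions_tco2(1_000_000):.0f} tCO2")
```

This also shows why the same model can have very different footprints depending on where it is trained: the grid carbon intensity term varies by an order of magnitude between regions.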

3. Private investment in AI fell for the first time, while government investment continued to grow

Private capital investment in AI fell for the first time in a decade, dropping by about one-third from 2021 to $189.6 billion. Ray Perrault, co-director of the AI Index Steering Committee, noted that private investment in start-ups declined across the board in 2022.

On the positive side for AI research, government investment has increased, at least in the United States, according to the report. Non-defense U.S. government agencies allocated $1.7 billion for artificial intelligence research and development in 2022, an increase of 13.1% over 2021. The U.S. Department of Defense requested $1.1 billion for unclassified AI-specific research in fiscal year 2023, up 26.4% from 2022. "These numbers are hard to come by," Perrault said. The Stanford AI Index team used several different measurements and arrived at roughly similar figures, but was unable to gather comparable data from around the world.

There are several potential sources of this growth, Perrault noted. In the United States, for example, the National Security Commission on Artificial Intelligence released a report in 2021 recommending roughly US$1 billion in additional funding for AI itself and another US$1 billion for high-performance computing. The report's recommendations appear to have had some effect. AI used to be funded by a handful of agencies such as DARPA, the NSF, and parts of the DoD; now that AI is seen as a matter of broader interest, much like biology, it receives funding from agencies across many fields.


Figure 3 The investment amount of global private capital in the field of AI

4. Industry hires more AI PhD graduates than academia

According to the report's most recent data, from 2021, 65.4% of new AI PhD graduates went into industry, while 28.2% took academic positions; the rest were self-employed, unemployed, or reported as "other." The sectors were once nearly evenly split, and the divergence has widened steadily since 2011. Furthermore, in every U.S. sector for which data is available (except agriculture, forestry, fishing, and hunting), the share of job postings related to AI rose on average from 1.7% in 2021 to 1.9% in 2022. U.S. employers are increasingly looking for workers with AI-related skills.


Figure 4 Employment of PhD graduates in the field of artificial intelligence in North America

5. Academia increasingly lags industry in building large models

As more PhDs take jobs in industry, it should come as no surprise that industry now leads academia in creating new machine learning models. Until 2014, most new machine learning models came from academia, but industry has since pulled ahead rapidly. According to data collected by HAI, in 2022 industry produced 32 significant machine learning models, compared with only 3 from academia. The AI Index report notes that industry also has an advantage in access to the vast amounts of data, computing power, and funding needed to build state-of-the-art AI systems.

Given this trend, Perrault said, "a big question is to what extent universities will get the resources to build their own large models, rather than tinkering with models from outside."


Figure 5 The number of large language models created by each department

6. New large language models keep emerging

The AI Index Steering Committee selected the most important recent technical developments in large models and presented them in chronological order. This timeline is new this year: the team compiled the data itself rather than relying solely on research published by others, Perrault said. At present, the United States, China, and Canada are where most large language models originate.


Figure 6 New large language models continue to emerge

7. Incidents of AI misuse are on the rise

Drawing on data from the AIAAIC Repository, the AI Index Report finds that incidents of AI misuse are surging. Because incidents are vetted before being added, the data lags by about a year. The dataset also captures some events from early 2022, such as the deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and news that Intel had developed a system for monitoring students' emotions.

Figure 7 Number of AI risk incidents

8. Lawmakers increasingly focus on AI governance

According to the AI Index Report, the total number of AI-related laws passed across 127 countries is surging: just one AI law was passed in 2016, compared with 37 in 2022. Examples include an amendment to Latvia's national security law restricting organizations important to national security, including businesses that develop AI products, and a Spanish bill requiring AI algorithms used by public administration to meet bias-minimization criteria.


Figure 8 Statistics on the number of AI legislation

9. China is more optimistic about AI, while the United States and France lag behind

According to a survey by the global research firm IPSOS, 78% of Chinese respondents believe that the benefits of products and services using artificial intelligence outweigh the risks. In the United States, only 35% think the advantages of AI outweigh the disadvantages, and in France just 31% do. IPSOS also reported that men generally hold more positive attitudes toward AI than women.


Figure 9 Share of respondents by country who believe the benefits of AI development outweigh the drawbacks

10. NLP experts are concerned about both the potential and risks of AI

The AI Index team surveyed natural language processing (NLP) researchers to find out what AI experts think about AI research. While nearly 90 percent say the past and future benefits of AI outweigh the risks, they are not ignoring its power or dangers. A majority (73%) expect AI to lead to revolutionary social change soon, while 36% believe it could lead to a nuclear-scale catastrophe. Perrault called this a striking result, since the respondents are mostly technical experts; he also cautioned that the survey is a year old, and given recent changes in large language models, the findings might look different today.


Figure 10 NLP experts' attitudes toward the risks of AI applications

Note: The AI Index is an independent initiative of Stanford HAI. Since 2017, a group of experts and professors from MIT, OpenAI, Harvard, McKinsey, and other institutions, led by Stanford University, has published the AI Index annual report to comprehensively track the state and trends of artificial intelligence.

Editor: Yu Tengkai

Proofreading: Yang Xuejun



Origin blog.csdn.net/tMb8Z9Vdm66wH68VX1/article/details/130143271