Robin Li: Large models are about to change the world

Datawhale insights

Speaker: Robin Li (Li Yanhong); Editor: Dr. Fu Yiping

Fu Yiping commented:

Well said and easy to understand. I think the market for general-purpose large models will ultimately be dominated by one or two companies. For most industries and enterprises, the real race is not the general large model itself but domain models loaded with domain corpora. In that battle, the success of a domain large model depends on two factors: business scenarios and corpus data. Corpus data, in turn, tests an enterprise's level of digitalization and data governance. The value on the B (enterprise) side is very large.

There is also a flywheel effect between large models and the level of digitalization: the higher your level of digitalization, the more data you have and the better its quality. Amid the rush, anxiety is sprouting in all kinds of articles.

Robin Li, founder, chairman and CEO of Baidu, delivered a speech entitled "Large Models Change the World" at the 2023 Zhongguancun Forum. He said that we are at a new starting point, which is a new era of artificial intelligence centered on large models. Large models have changed artificial intelligence, and large models are about to change the world.

The following is the transcript of the speech:

Distinguished leaders and distinguished guests, hello everyone!

It is a great pleasure to participate in the 2023 Zhongguancun Forum. The Zhongguancun Forum is a national platform for global scientific and technological innovation exchanges and cooperation. The topic I share today also focuses on innovation. The theme is "Large Models Change the World".

Recently, artificial intelligence has once again become the focus of human innovation, and more and more people recognize that the fourth industrial revolution is coming, a revolution marked by artificial intelligence.

It has become the focus because of large models, which successfully compress human knowledge of the entire world and let us see a path toward artificial general intelligence.

At present, we are at a new starting point. This is a new era of artificial intelligence centered on large models. Large models have changed artificial intelligence, and large models are about to change the world.

Why do I say that large models have changed artificial intelligence?

Because with large compute, large models, and big data, intelligence "emerges". What is emergent intelligence? In the past, artificial intelligence learned only the skills we explicitly taught it: what it had been taught it might be able to do, but what it had not been taught it could not. With the "emergence" of intelligence in large models, they can also pick up skills they were never explicitly taught.

At the same time, the direction of artificial intelligence is shifting from discriminative to generative. What is discriminative AI? Search engines are a typical example. What is generative AI? Using AI for literary creation, writing reports, drawing posters, and so on is generative.

Generative AI will greatly improve work efficiency. Some research institutions believe that over the next 10 years, the efficiency of knowledge workers could increase fourfold.

So how do large models redefine artificial intelligence?

First, large models redefine human-computer interaction. Over the past few decades, the way humans interact with computers has changed three times. The command line was the main working interface when I was in college and graduate school: I typed a command and it gave me the response I wanted. I thought it was very efficient at the time, but most people cannot work that way.

What is a simpler way of human-computer interaction? The graphical user interface (GUI).

You just use the mouse to click through menus layer by layer. This is friendlier than the command line, and at least many more people can understand it, but it is still not the most natural way to interact.

The birth of artificial intelligence allows us to interact with computers using natural language.

For example, suppose I want to know, "Which of my company's products had a gross margin in April that exceeded their pre-pandemic level?" In the past, it might have taken my assistant half a day or even a full day to find out. Today, a computer that understands natural language can give you the table within a second.

Natural-language human-computer interaction will bring about a prompt revolution. In other words, future applications will be driven by invoking AI-native applications through natural-language prompts. This means that in the future your salary may depend on how well you write prompts, not on how well you write code.

I have made a prediction: ten years from now, half of the world's jobs will involve prompt engineering. Asking questions will be more important than solving them, and our education should teach children to ask questions, not just to solve them.
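To make the "prompt as the new interface" idea concrete, here is a minimal, editor-added sketch in Python. It is not Baidu's implementation: `call_llm` is a hypothetical stand-in for whatever large-model endpoint an application would actually call, and the toy `sales` table exists only so the example runs end to end. The point is the pattern: the natural-language prompt carries the task, and conventional software merely executes the structured output the model returns.

```python
import sqlite3

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real large-model API call.
    # Here it returns a canned SQL answer so the sketch runs end to end.
    return ("SELECT product FROM sales "
            "WHERE month = '2023-04' AND gross_margin > pre_pandemic_margin")

def answer_business_question(question: str, con: sqlite3.Connection):
    # The prompt describes the task in natural language.
    prompt = (
        "Table: sales(product, month, gross_margin, pre_pandemic_margin). "
        "Write one SQLite SELECT statement that answers: " + question
    )
    sql = call_llm(prompt)              # the model turns the question into a query
    return con.execute(sql).fetchall()  # ordinary software executes the result

# Toy data so the example is self-contained.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales(product TEXT, month TEXT, "
            "gross_margin REAL, pre_pandemic_margin REAL)")
con.execute("INSERT INTO sales VALUES ('Widget A', '2023-04', 0.42, 0.35)")
print(answer_business_question(
    "Which products had an April gross margin above their pre-pandemic level?", con))
```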

Second, large models will redefine marketing and customer service.

Whoever communicates with customers best will own the customer. That was true before large models appeared, but with a large model, even if you have 7 billion customers, every one of them can have a dedicated, all-knowing assistant serving them 24/7.

Third, large models will give birth to AI-native applications.

What do AI-native applications look like? DoNotPay, for example, uses AI to help people file lawsuits and draft legal documents, getting back money you should not have had to pay. Jasper helps companies and individuals write marketing and promotional copy with AI. Speak is a Korean app for learning foreign languages, in which the large model becomes a one-on-one teacher providing individualized education for each child.

Baidu is also rebuilding all of our products, services, and workflows with AI-native thinking. For example, our Ruliu intelligent work platform gives every employee a work assistant with rich professional knowledge that responds in real time. Its dialogue-understanding capability makes it possible to intelligently summarize chat histories. My colleagues' comments were "I was stunned" and "it really amazed me at the time".

There is a saying in the industry that now that the era of large models has arrived, every product is worth redoing. But who will actually redo them? Baidu intends to be the first company to redo all of its products: not integration, not merely plugging in, but a genuine redo and reconstruction!

On March 16, Wenxin Yiyan was released, making Baidu the first of the world's major technology companies to release a GPT-style large-model product. We could release it so quickly because of long-term accumulation and investment: as early as 2019 we released Wenxin Large Model 1.0 and have since iterated to 2.0 and 3.0. Soon we will launch Wenxin Large Model 3.5.

At present, market demand is very strong, and the enthusiasm of Chinese users for embracing new technology is unprecedented. Wenxin Yiyan is also improving rapidly. For example, its QPS (inference queries answered per second) has increased tenfold, which means the inference cost per query has dropped to roughly one tenth of what it was, since the same hardware now serves ten times as many queries.

In the future, all applications will be developed based on large models. Every industry should have its own large models, and the large models will be deeply integrated into the real economy.

The rules of the cloud computing game have been completely changed. When customers choose cloud vendors, they mainly look at your model and framework, rather than traditional capabilities such as computing power and storage.

Behind large models' transformation of artificial intelligence, the IT technology stack has also undergone a fundamental change. In the past, whether in the PC era or the mobile era, the IT stack had three layers: the chip layer, the operating-system layer, and the application layer.

In the era of artificial intelligence, the IT stack has become four layers. The bottom layer is still the chip layer, but the mainstream chip has shifted from the CPU to the GPU. Baidu's offering at the chip layer is the Kunlun chip, of which tens of thousands of units have already been mass-produced; the third-generation Kunlun chip is expected to enter mass production in early 2024.

Above the chip layer is the framework layer, that is, the deep-learning framework. Baidu's PaddlePaddle, Meta's PyTorch, and Google's TensorFlow all sit at this layer. PaddlePaddle ranks first in market share in China.
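For readers unfamiliar with what the framework layer actually provides, here is a minimal, illustrative sketch (an editor's addition, not Baidu's code) using PaddlePaddle's public dynamic-graph API: the framework supplies layers, automatic differentiation, and optimizers, on top of which the models at the layer above are trained. The toy network and random data are assumptions purely for illustration.

```python
import paddle
import paddle.nn as nn

# A tiny model defined with framework-layer building blocks.
class TinyNet(nn.Layer):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)

model = TinyNet()
optimizer = paddle.optimizer.Adam(learning_rate=1e-3, parameters=model.parameters())
loss_fn = nn.MSELoss()

# Random toy data; real training would stream batches from a dataset.
x = paddle.randn([64, 16])
y = paddle.randn([64, 1])

for step in range(3):
    pred = model(x)            # forward pass built from framework layers
    loss = loss_fn(pred, y)    # framework-provided loss function
    loss.backward()            # automatic differentiation
    optimizer.step()           # optimizer update
    optimizer.clear_grad()     # reset gradients for the next step
    print(f"step {step}: loss = {float(loss):.4f}")
```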

Above the framework layer is the model layer; ChatGPT and Wenxin Yiyan belong here. At the model layer, in addition to Wenxin, Baidu also has industry-specific large models for transportation, energy, and other sectors.

The top layer is the application layer: the AI-native applications we mentioned earlier.

Baidu has self-developed, leading products at each of these four layers. This full-stack layout enables end-to-end optimization and greatly improves efficiency.

Finally, I would like to talk about why Baidu can do this:

First, long-term investment in and accumulation of technology.

Second, unique advantages. Our four-layer architecture has genuinely improved efficiency in real applications. For example, by intelligently adjusting traffic-light timing, we can improve urban traffic efficiency by 15% to 30%. On the last working day before the May Day holiday, Beijing was badly congested from the Second Ring Road all the way to the Sixth Ring Road; the only area showing green was Yizhuang, because Baidu's AI traffic-signal control system is deployed at more than 300 intersections there.

Third, independence and control. Wenxin Yiyan's data, framework, and models are all under our own control, reflecting a high level of technological self-reliance in international competition. It can empower thousands of industries and help the Chinese economy create the next golden 30 years.

Today, the world is paying attention to the development of artificial general intelligence (AGI), which has also caused some controversy.

Are we worried that machines will replace humans? I think turning machines into humans should not be the direction of our efforts. Machines will be better than humans at many things, but they cannot become human and have no need to. Machines will become more and more intelligent, capable of doing more and more things with ever-higher efficiency. We need to coexist with machines rather than set ourselves up in binary opposition to them.

So how do we prevent things from getting out of control? As artificial intelligence develops rapidly, it could indeed move in directions that are not good for humanity. To prevent loss of control, countries with advanced AI technology need to work together and formulate rules from the perspective of a community with a shared future for mankind.

To take part in making the rules, one must first have a seat at the table, a voice in the discussion, and a ticket to the global competition.

Thank you all, and I wish the forum a complete success.

Source: blog.csdn.net/Datawhale/article/details/130939344