QCon 2023: The growth of technical people in the era of large models (a brief recap)

I currently work on operating system R&D; the company's goal is to build an intelligence-native operating system for the Internet of Things era. How do we make the operating system AI Native? With this question in mind, I attended QCon 2023 Beijing. The difference from QCon 2022 Beijing was my role: last time I was a speaker, this time a track producer. The track I produced was "The growth of technical people in the era of large models."

How do technical people grow in the era of large models? The track was organized around three questions:

  1. What are the characteristics of the large model era? What new demands does it place on individuals, and how should we respond?

  2. What is the new development paradigm of the large model era? What new opportunities can we discover through it, and how do we seize them?

  3. For our existing businesses, how can large models empower them? What should we pay attention to when applying large-model technologies?

The three speakers were all very capable, and the talks largely lived up to expectations: the room stayed full, with some attendees standing through the sessions.

Personal growth in the era of large models

The first speaker was Mr. Wu Jinsheng from Capital Online, on the topic of personal growth in the era of large models.

d6d6856e65b9072f3d8bf07f931a99f1.png

Starting from the development of domestic large models, he focused on the core capability of large models, emergence: abilities that only appear beyond certain parameter scales.

  • Translation: ~60B parameters

  • Math: ~60B parameters

  • In-context learning: ~130B parameters

  • Chain-of-thought reasoning: ~130B parameters

  • Knowledge combination: ~530B parameters

  • Emotion perception: ~530B parameters

He then walked through some common application scenarios worth knowing about:

ab752483c60b32068f6e0a136e431851.png

Large models place new demands on our technical skill set, for example:

5a0c0bdbdc455c0393056df0ac9682d2.png

Everyone is different, and each of us should think through our own situation and draw our own conclusions.

cca24b6c465d17710ff0a79af4a4acfc.png

With more time to explore the topic of technical growth in depth, I believe this talk could have helped the audience even more.

Insights into entrepreneurial opportunities in the era of large models

The second talk corresponded to our second question, but when I received Yibo's slides before the conference I was shocked: a 159-page deck was simply impossible to fit into our 45-minute slot. Thanks to Yibo for the substantial cuts; the live version was a streamlined edition, and the full deck is better suited to reviewing afterwards.

4876df970ba60159e2610fd8c3d015b8.png

The talk opened with a large number of real-world deployment scenarios, making clear that large models are no longer just talk: beyond ChatGPT, applications built on large models have already landed all around us.

64a9ef0ab7a3d73c00a4220413b78354.png

Large model training comes down to three elements: dataset size, training compute, and parameter count. There are also three modes of human-AI collaboration: embedding, copilot, and agent (the latter two are sketched below). But the most important thing is the change in the development paradigm.
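To make the copilot/agent distinction concrete, here is a toy, self-contained sketch; it is not from the talk, and the llm() stub and the calculator tool are hypothetical stand-ins. A copilot drafts output for a human to review, while an agent plans, calls tools, and loops until it decides the task is done.

```python
# Toy sketch of the "copilot" vs "agent" collaboration modes (illustrative only).
# The llm() stub follows a canned policy so the example runs without any API key;
# in practice it would be a real chat-completion call.

def llm(prompt: str) -> str:
    if "Tools:" in prompt and "calc(2+3)" not in prompt:
        return "CALL calc 2+3"          # agent decides to use a tool
    if "calc(2+3)" in prompt:
        return "DONE 2+3 equals 5"      # agent has the tool result, finishes
    return "Here is a draft reply you can edit before sending."

# Copilot: the model drafts, the human stays in the loop and decides what to send.
def copilot(request: str) -> str:
    return llm(f"Draft: {request}")     # surfaced for review, never auto-sent

# Agent: the model plans, calls tools, and iterates until it declares DONE.
def agent(task: str, tools: dict, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        decision = llm(f"Task: {context}\nTools: {list(tools)}\n"
                       "Say 'CALL <tool> <arg>' or 'DONE <answer>'.")
        if decision.startswith("DONE "):
            return decision[5:]
        _, tool, arg = decision.split(" ", 2)
        context += f"\n{tool}({arg}) = {tools[tool](arg)}"  # feed the tool result back
    return "step budget exhausted"

print(copilot("thank the speakers"))                       # human reviews this draft
print(agent("What is 2+3?", {"calc": lambda e: eval(e)}))  # prints: 2+3 equals 5
```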

5710f85fa6c512907f83839891f39568.png

Yibo explained the six levels of the large-model development paradigm in clear, accessible terms, and used LLMFarm as an example of how to apply this approach to building an AI app.

4bc1684543fa35a39ac5b5fe4d961249.png

The first principle: an AI-First application is one that cannot exist without a large model.

Starting a business in the era of large models: three suggestions for the far-sighted

The last speaker was my old friend Yile. Lanying IM was already a very good product before large models took off, so how can large models empower an IM product like this?

87b47079ff6d6708da3183f6a5a8771b.png

Yile carefully walked through the challenges of applying large models: emergent capabilities require very large parameter counts, and the models' fabrications (hallucinations) are hard to explain. He also covered the three modes of using large model services, Prompt-only, Embedding, and Fine-tune (a minimal sketch of these appears after the figures below). Most importantly, he pointed out common misconceptions in large model applications:

f80289b7110bb5d05cea9aa046cb308a.png

82cf5d30cc0cb889210d2f6898db141c.png
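To make the three usage modes concrete, here is a minimal sketch, assuming the pre-1.0 openai Python SDK (around v0.28): Prompt-only packs everything into the prompt, Embedding retrieves your own documents first and feeds them in as context, and Fine-tune adapts the model's weights on your own examples. The model names, file IDs, and retrieval logic below are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of the three modes, assuming the pre-1.0 openai Python SDK (~0.28).
# Model names, file IDs, and the retrieval step are illustrative stand-ins.
import numpy as np
import openai

openai.api_key = "sk-..."  # placeholder

# Mode 1: Prompt-only -- everything the model needs is packed into the prompt itself.
def prompt_only(question: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return resp["choices"][0]["message"]["content"]

# Mode 2: Embedding -- retrieve relevant private documents first, then answer with them as context.
def embed(texts: list[str]) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in resp["data"]])

def answer_with_context(question: str, docs: list[str]) -> str:
    doc_vecs = embed(docs)
    q_vec = embed([question])[0]
    best = docs[int(np.argmax(doc_vecs @ q_vec))]  # ada-002 vectors are unit length, so dot product = cosine
    return prompt_only(f"Answer using only this context:\n{best}\n\nQuestion: {question}")

# Mode 3: Fine-tune -- adapt the base model's weights on your own examples
# (a JSONL training file previously uploaded with purpose="fine-tune").
def start_fine_tune(training_file_id: str) -> str:
    job = openai.FineTuningJob.create(training_file=training_file_id, model="gpt-3.5-turbo")
    return job["id"]
```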

Building on this, Yile offered three suggestions:

Suggestion 1: Have the courage to expand overseas, but keep a cool head

Suggestion 2: AI First, but also AI Right

Suggestion 3: Only by seeing far can you go far; for every step you take, plan three steps ahead

Examples of Lanying IM's large model applications ran throughout the talk, including the eye-catching knowledge base federation architecture:

9719ce556e1bde44babe7c99251019ff.png

The three talks were quite dense, so I had to compress the Q&A sessions, which left each talk with some regrets.

ddbc2b98dc67871df9f30944aef42614.jpeg

Thanks to all the speakers for their strong support. All slide decks from this track are open to the public; interested readers can download them from the official QCon 2023 Beijing website under "The Growth of Technical People in the Era of Large Models". If you have questions, you can leave a comment or contact the speakers directly.

There was more to say than time allowed. The green hills do not change; until we meet again!


Origin blog.csdn.net/wireless_com/article/details/132680188