Are big models really going to replace programmers? The most dangerous position is...

Nowadays, as large models show increasingly powerful capabilities in programming, code generation, automated testing, and other fields, a thought-provoking question has emerged: will large models eventually replace programmers?

Some believe that no job in the world is absolutely safe, and that it is only a matter of time before machines replace humans. From GitHub Copilot and ChatGPT to Tongyi Lingma in China, tools that can stand in for programmers keep appearing: AI will do more and more, and humans will do less and less. Baidu CEO Robin Li once said that the profession of "programmer" will basically cease to exist, because anyone who can speak will have the ability to program.

But others hold a different view: "Low-end programmers will disappear, while creative programmers will thrive." "Programming is still fundamental and must be learned; if you cannot understand a program, no amount of creativity will help." 360 CEO Zhou Hongyi believes that demand for programmers will not weaken within ten years. Although everyone will use computers in the future, and in a sense everyone will be a programmer, the products different people create with computers are completely different. The AI era needs computer experts and programmers even more, and they may be the most sought-after in every industry.

So, what is the real situation? What jobs can large model technology replace, and how will it affect programmers' careers? As an ordinary programmer, how should you adapt to changes?

For the 12th issue of [Open Source Talk], we invited Yang Yanbo, head of agent research at the iFlytek AI Engineering Institute, Sun Yishen, a data scientist on the PingCAP AI Lab team, and Ma Gong, an Infra engineer, to discuss how the development of large models will shape the technology workplace of the future.

Guests:

Yang Yanbo

Head of agent research at the iFlytek AI Engineering Institute and a senior R&D engineer. He loves open source and is responsible for research on the large model fine-tuning platform (MaaS) and agent-related technologies.

 

Sun Yishen

Data scientist at PingCAP AI Lab. Since the striking release of ChatGPT, he has focused on exploring LLM application development, multi-agents, and related directions; he developed applications such as TiDB Bot and LinguFlow and has contributed to the AutoGen community.

 

Host:

Ma Gong

Infra engineer based in the Nordics and manager of the public account "Swedish Horseman". A regular guest on "Open Source Talk".

 

01 To what stage have large models developed? What jobs can they replace?

 

Ma Gong: To what stage have large models developed so far? How far has the replacement of programmers gone? Which products are already doing this well? Can you share your views?

 

Yang Yanbo: The topic today is whether large models will replace programmers, so let's first look at the definition of a programmer. Wikipedia says that "programmer" originally referred purely to software developers, but today programmers are clearly no longer just software developers, and their work is no longer limited to writing code. Jobs such as script writing and software testing may already be gradually replaced. In some simple scenarios, such as translation, document organization, and data annotation, large models combined with agent frameworks are already used to good effect, and positions in those scenarios are relatively easy for large models to replace.

 

Sun Yishen: When ChatGPT came out, it was genuinely amazing, a dimensionality-reduction strike against earlier AI applications. But once people got hands-on experience, they found that while it has bright spots, it also has many shortcomings. As a programmer, if I really need to use it in production, I don't look at its ceiling; I look at its average level, or where its floor is, because that is what guarantees the quality of your service.

Looking at it now, it can only do relatively rudimentary things in the text domain. For example, it is good at summarization and reading comprehension, but if you really want it to handle complex textual relationships, it is not that good at them, or its accuracy is not that high.

The field of programming is similar. It can do some basic tasks, but for advanced work it does not, in theory, have genuine logical reasoning ability. The reason it appears to reason logically is that logic is mostly embedded in language: in the process of learning language, as long as the training text is good enough, it naturally absorbs some of the logic contained in it, but it does not actually understand the logic itself. If you really ask it to develop something very complex, or something very new, it basically cannot do it.

 

Ma Gong: Yes, Copilot gives me code that is very useful and convenient, but you still have to read it yourself, otherwise it will be a disaster once it ships. And if the boss fires me, then he has to review the Copilot code himself; I believe he might as well keep me around to review it.

Then again, you say it cannot do advanced things, but how much of a programmer's daily work is actually advanced? Maybe 99% of our work is low-level stuff. What do you think?

 

Yang Yanbo: The large model itself offers some atomic capabilities, such as basic dialogue and the features on its own chat page. These are relatively simple ways to experience what large models can do.

For more complex tasks, we generally use large model APIs to build more advanced applications on the client side or programmatically. What we mean by large models today is not only the development of their atomic capabilities but also this year's most popular programming paradigm, the agent paradigm, which is itself part of the development of large models. In the future, complex tasks may increasingly be solved through the concept of intelligent agents. That is my view.

 

Ma Gong: What exactly is this paradigm shift? Does it mean that in the future we will no longer need programmers to write code, test it, and then deploy it to production, or...?

 

Yang Yanbo: Let me give an example. In the real world, developing a project usually involves a project leader, a project manager, development and testing, operations and deployment, and various other roles working together to get the project delivered. Now that large models are here, we can have large models play these roles, each pursuing its own goal within the project, and together they complete it. This is the intelligent agent platform everyone has been building recently. Programming, as a relatively well-structured environment, is in fact a very promising setting for coordinating multiple roles to write code.
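The role division described above can be sketched as a simple orchestration loop. This is a minimal illustration, not any particular framework's API; the role names, `run_project`, and the stubbed `call_llm` are all hypothetical.

```python
# A minimal sketch of multi-role agent collaboration on a project.
# `call_llm` is a stand-in for a real large-model API call.

def call_llm(role: str, task: str, context: str) -> str:
    # Placeholder: a real system would send role, task, and context
    # to an LLM API and return its generated text.
    return f"[{role}] output for: {task}"

ROLES = ["project manager", "developer", "tester", "ops"]

def run_project(task: str) -> list[str]:
    """Pass the task through each role in turn, accumulating context."""
    context = ""
    transcript = []
    for role in ROLES:
        output = call_llm(role, task, context)
        context += "\n" + output  # later roles see earlier roles' work
        transcript.append(output)
    return transcript

results = run_project("build a login page")
```

Each role sees the accumulated output of the roles before it, which is the essence of the "multiple roles coordinating on one project" idea; real agent platforms add tool calls, feedback loops, and retries on top of this skeleton.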

 

02 Will human programmers still be needed in the future?

 

Ma Gong: So what you are saying is that large models will not only replace programmers but wipe out the entire IT team. In other words, in the future, product managers will deal directly with agents, and neither the people nor the companies in between will be needed. Is that what you mean?

 

Yang Yanbo: My view may be more radical, but yes, something like that. Of course, after these positions are replaced, some new positions will certainly be created.

 

Ma Gong: What a terrifying scene.

 

Sun Yishen: Right, because today's topic asks whether programmers will eventually be replaced, and what happens in the end is hard to say. We have to attach a time frame to the question.

Speaking from my own experience, I have run a lot of experiments. As Yanbo said, having agents form a team is a good abstraction of the problem-solving process. I even added another layer of abstraction on top: how is a given task actually carried out? Each task has a corresponding SOP describing how to do it, and different SOPs form a complete workflow, so I can use the group to combine different roles to execute it.

But even after completing these two levels of abstraction, it was still difficult to meet my actual production needs. For example, when I ask it to do data analysis, in actual operation you may feed in a huge number of tokens, and during the process its context exchanges and interactions are very frequent. The entire context can easily exceed 100K tokens.

In that case, the probability that it can finally complete the task is actually quite small, because every link in the process can go wrong, and the errors compound multiplicatively. Multiply it all out and the overall success rate is still quite low.
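The multiplicative compounding is easy to quantify: if each step succeeds independently with probability p, a pipeline of n steps succeeds with probability p to the power n. The numbers below are illustrative, not measurements from any real agent system.

```python
# If each step in an agent pipeline succeeds with probability p,
# n independent steps all succeed with probability p ** n.
def pipeline_success(p: float, n: int) -> float:
    return p ** n

# Even a 95%-reliable step compounds quickly:
print(round(pipeline_success(0.95, 10), 3))  # 0.599
print(round(pipeline_success(0.95, 50), 3))  # 0.077
```

This is why a long multi-agent workflow with frequent context exchanges can end up with a low end-to-end success rate even when each individual interaction looks fairly reliable.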

When someone writes a paper, they will of course show you its best highlights. But run that highlighted case 100 times and it is hard to say whether 90% of the runs will look like that. In my actual use, once the context is long enough or the logic complex enough, it is quite difficult for a PM alone to communicate with it effectively.

And if a PM has the ability to review all the generated code, isn't he effectively a programmer? I think a PM basically cannot review a very complex project; he may not even be able to judge correctness just by looking at inputs and outputs.

 

Ma Gong: Is the phenomenon you describe just temporary, with the situation completely different in two or three years? For example, the 100K context you mentioned, maybe in the future there will be 10 MB of context? Or, when complexity rises and the work stops going smoothly, I can solve the problem by adding one more programmer, whereas before I might have needed 100. Isn't that equivalent to replacing 99% of programmers?

 

Sun Yishen: According to Sam Altman, there will be another leap after GPT-5, so this is still quite difficult to predict. But the current Transformer-based models are, to put it bluntly, closer to a Markov decision process: probability-based, predicting the next state from the previous state. If that does not change fundamentally, or if humans do not gain a complete understanding of our own cognitive science, then at least for the next few years programmers will not be replaced.
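The point that such models predict the next state from the previous one by probability can be illustrated with a toy Markov chain over words. The corpus and the always-take-the-most-likely-word rule here are invented purely for illustration; real language models condition on much longer contexts with learned weights, not raw counts.

```python
from collections import Counter, defaultdict

# Toy illustration: learn next-word frequencies from a tiny corpus,
# then "generate" by picking the most frequent next word.
corpus = "the model predicts the next word the model learns".split()

transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1  # count how often nxt follows prev

def most_likely_next(word: str) -> str:
    return transitions[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "model" (follows "the" twice vs. "next" once)
```

The chain "knows" nothing about what the words mean; any apparent logic comes entirely from statistical regularities in the text it counted, which is the gist of the argument above.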

Right now everyone is focused on the quality of large models as context grows, and on how to extend the context window. But above the 100K to 200K level, accuracy may currently be below 50%. Given that, the breakthrough may not come that fast.

In the first half of this year, I tried to have AutoGen form a group that could chat with people in an ordered sequence, and it couldn't do it. Now you're asking not only that the entire group's workflow meet the requirements, but also that the whole business SOP meet your operational needs. When all the steps are combined, the result, at least in my view, is very poor.

 

Ma Gong: Could that be because your requirements are too high, since you build a distributed database? If I were writing a medical information management system with very low demands on programmers, and applied your system to my medical application, would the results be acceptable?

 

Sun Yishen: The same. I think the problems are similar across fields. LLM as an assistant is definitely fine, but letting it truly drive autonomously is definitely not. Beyond the programming question there is another important point: business ethics, or rather, who bears the business responsibility?

In the medical industry, everyone knows that image recognition is used heavily for CT. AI can in fact draw conclusions when reading scans, but in the end a doctor still has to stamp and sign off. The same goes for programmers. A PM can hand a task to an AI, but in the end a person still has to sign off on it, and it is unlikely to be the PM. In the end a programmer still has to sign, and the programmer still has to check the result; if no one checks it, the boss who signs bears the commercial responsibility.

 

Ma Gong: This is a very interesting point. Large models cannot take the blame.

 

03 What kinds of positions are the most dangerous?

 

Yang Yanbo: I don't quite agree on the autonomous driving example. Many cities have already piloted some routes; Hefei, for example, has pilot driverless buses and driverless express delivery. Real-world deployments are already workable, though it will take some time before humans are truly replaced. The programmer profession is the same: it gets replaced step by step, starting with simple scenarios and gradually moving to more complex ones.

For example, some jobs in our company involve simple, repetitive code development that we used to hire interns for. After large models came out, we explored using agent technologies built on large models to replace that kind of work, which greatly reduced labor costs.

 

Ma Gong: What is this intelligent agent you mentioned earlier? How does it differ from ChatGPT?

 

Yang Yanbo: The concept of the intelligent agent existed before large models emerged; the English term is "agent". An agent, as the word suggests, means that to get something done you may not do it yourself, but have a tool or a person do it on your behalf. Today's agent does more than just call the model; it has some higher-order capabilities: it knows how to interact with the large model and how to call external tools relevant to the task at hand. That is the concept of the intelligent agent. It is still a large model at its core, but on top of the large model it encapsulates task planning or knowledge for specific domains, which makes it closer to real users.

 

Ma Gong: Here is how I understand it: before, when I called Ctrip, I had to tell the customer service agent to book a ticket for me. Now that customer service role may be replaced by an LLM; to me, that is an intelligent agent.

 

Sun Yishen: Yes, you can think of it as a controller, or a robot. What it mainly does is accept certain inputs and perceive an environment, then make decisions on its own based on that input, and finally output actions that change the environment.
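This perceive-decide-act framing maps onto a simple control loop. The sketch below is a generic illustration; the thermostat environment and all names are made up, not a real agent framework, and a real agent would put an LLM behind `decide`.

```python
# Generic agent loop: perceive -> decide -> act, repeated until done.
# A thermostat stands in for the "environment" purely for illustration.

class Thermostat:
    def __init__(self, temp: float, target: float):
        self.temp, self.target = temp, target

    def perceive(self) -> float:
        return self.temp  # the observation the agent receives

    def act(self, action: str) -> None:
        if action == "heat":
            self.temp += 1.0
        elif action == "cool":
            self.temp -= 1.0

def decide(observation: float, target: float) -> str:
    # The decision step; in an LLM agent this would be a model call.
    if observation < target:
        return "heat"
    if observation > target:
        return "cool"
    return "idle"

env = Thermostat(temp=18.0, target=21.0)
for _ in range(10):  # bounded loop instead of running forever
    action = decide(env.perceive(), env.target)
    if action == "idle":
        break
    env.act(action)  # the action changes the environment

print(env.temp)  # 21.0
```

The key property is the closed loop: each action changes the environment, and the next perception reflects that change, which is exactly the controller behavior described above.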

 

04 What should I do if I am replaced?

 

Ma Gong: Let me ask: if we programmers really are replaced, what is the way out? Yanbo just said new jobs will be created, but what are they? How should we prepare?

 

Yang Yanbo: What did we invent AI for in the first place? To liberate humans and improve efficiency. So as a large model's capability boundary grows and it can do more and more, what we need to watch is the quality of what it does for us: is the output safe and ethical, and how do we steer the model toward better output? In areas such as content safety for large models, new positions have already been created.

Another kind of position concerns using large models in more advanced ways. Developing agents, for example, is a new role: how do you build an agent so that novices can use large models? I think such positions will keep growing, agents will differ from field to field, and everyone's development methods are gradually changing.

 

Ma Gong: First of all, how many ordinary programmers have the ability to develop intelligent agents? Perhaps I can only be a little assistant to such an agent and review what it writes. In other words, today it is my assistant, but in a few years I will be its assistant. That doesn't sound like a very exciting journey to me.

 

Sun Yishen: Yanbo just said that large models can take over some basic abilities, and that is fine. It is still your assistant, helping with chores, or with tasks of low value and little added value. As a programmer, your job was never as simple as "take an input and output a program". More and more of the work will move toward the business side, and in a different field you may still need to do a great deal based on that field's domain knowledge.

Within CS itself, system architecture also keeps evolving, and on anything new, large models can be assumed to be weak, because in essence they are still summarizing existing knowledge. When they create on top of existing knowledge, their creativity is very unpredictable: occasionally a flash of inspiration gives you something useful, but most of the time it doesn't.

Also, its understanding of the laws of physics is effectively nonexistent. If I am working on an extremely complex BI scenario, or developing CAE software, and I cannot work out the calculus myself, the AI will not be able to work it out either. I think every field has a lot of professional knowledge that can still be mined.

 

Ma Gong: I see. So if I were a programmer wanting to ensure my career safety, I should push my leaders to adopt new languages as much as possible. Every time a new language comes out, I jump on it quickly, before the LLM has read enough text about it to be useful. Haha.

 



[Open Source Talk]

The OSCHINA video account talk show [Open Source Talk] tackles one technical topic per episode: a few experts sit down together to share their views and chat about open source, bringing you the latest industry frontiers, the hottest technical topics, the most interesting open source projects, and the sharpest exchanges of ideas. If you have new ideas or good projects to share with peers, please contact us. The forum is always open~
