Zhou Hongyi, Wang Xiaochuan, Huang Tiejun, and Zhang Peng discuss artificial intelligence: all the way to the singularity


A highly information-dense AI debate.

(This article comes from Jiazi Guangnian)

Today, we have a lot of consensus and non-consensus about artificial intelligence.

The consensus is that, like the Industrial Revolution, it will greatly change how humans produce and live; the non-consensus lies in how it will do so and what impact it will bring. When will the singularity of artificial general intelligence arrive? Is humanity ready for it?

On November 30, the first anniversary of the release of ChatGPT, China's technology industry think tank "Jiazi Guangnian" held the 2023 Jiazi Gravity Year-end Ceremony. At the summit forum of the conference, Jiazi Guangnian founder and CEO Zhang Yijia discussed with 360 Group founder Zhou Hongyi, Baichuan Intelligence founder and CEO Wang Xiaochuan, Zhiyuan Research Institute President Huang Tiejun, and Zhipu AI CEO Zhang Peng how artificial intelligence goes all the way to the singularity.

All four guests are senior practitioners in the artificial intelligence industry. Where will their views on its development clash?

The following is a transcript of the on-site exchange, compiled and abridged by "Jiazi Guangnian".

1. Talking about technology: The gap between us and OpenAI


Zhang Yijia: This summit forum has gathered 360, Baichuan, Zhipu AI, and Zhiyuan, which should count as a very information-dense lineup in the AI industry. My first question: in April this year, both Xiaochuan and Mr. Zhou judged that OpenAI was two to three years ahead of domestic models, and ChatGPT one year ahead. Now that half a year has passed, have domestic large models caught up with OpenAI? How far behind are we?

Zhou Hongyi: On the leaderboards, China's large models have pushed OpenAI out of the top 10, because we are all good problem solvers, adept at cracking whatever test we encounter. But Wang Xiaochuan and I both come from search. If you use GPT-4 seriously and give it very complex prompts, you will find its skills still run deep. However, because some domestic colleagues "accurately" predicted they would surpass GPT-4 on a certain day of a certain month, we were embarrassed to say otherwise, as if we had no confidence in the industry, so we spoke in generalities.


Zhou Hongyi, founder of 360 Group

But two things happened this year. One is that OpenAI launched the GPT Store. Many customized GPTs have complex prompt-handling capabilities; take those prompts and try them on large domestic models, and you will find the gap still exists. The second is the recent OpenAI boardroom battle. Although I don't really believe the Q* story, I have always believed that OpenAI must hold some cards it has not yet played, including how quickly it launched GPT-4V, the multimodal version.

Objectively speaking, I think there is still a gap. I just want to say that this gap does not prevent our country from building its own GPT and its own large-model industry. On the contrary, refusing to admit the gap and staying blindly optimistic will cause problems. When we released 360 Intelligent Brain, we humbly said our gap with OpenAI was about a year and a half. Then our peers said they had surpassed GPT-4, and customers concluded those peers were a year ahead of 360. So I no longer dare to predict the gap with OpenAI.

Wang Xiaochuan: There is a lot behind OpenAI that we cannot see. I think that by the end of the year, first-tier domestic large models will roughly reach the level of the GPT-3.5 released on November 30 last year. A gap with GPT-4 may remain, since the other side has upgraded to GPT-4V, so I expect it may take about a year and a half to catch up with GPT-4V. That is the gap visible to the naked eye; what lies unseen behind it may be further away. Overall I think the gap is still 2 to 3 years, of which 1 to 2 years is visible to the naked eye.


Wang Xiaochuan, founder and CEO of Baichuan Intelligence

Zhang Peng: I particularly agree with what Mr. Zhou and Xiaochuan just said. Comparison requires clear indicators, but everyone knows OpenAI has definitely not shown its full hand; we can only compare against the GPT-4 and GPT-4V it has made public. Judging from where things stood at the beginning of the year, we are closing the distance. Zhipu AI has also recently upgraded its technology and can approach them on some single points or a small number of indicators. We are doing our best to catch up, but a big gap remains in overall average ability, which is the source of both our pressure and our motivation.


Zhipu AI CEO Zhang Peng

Huang Tiejun: I spoke last so I could hear how confident the three leading companies are. If you ask me, my answer is simple: we are definitely closing the gap, not widening it. ChatGPT was released on November 30 last year, and society's enthusiasm and anxiety about this field spiked instantly. But because Zhiyuan started working on this earlier, we were not anxious then, we are not anxious now, and we will not be anxious in the future.

Why do I think the gap narrowed this year? Because more and more people are entering the field, more and more resources are being invested, and experience is gradually accumulating. I often say that artificial intelligence is a technology, while something like ChatGPT is a huge engineering project that depends on the resources and manpower invested. With so many companies and institutions now trying, I believe each of them has found some tricks, or gained their own experience and insights; the gap will keep shrinking, and one day it may even be surpassed.


Huang Tiejun, President of Zhiyuan Research Institute

Zhou Hongyi: Let me add a few words so we do not sound too pessimistic. First, OpenAI is seven years old. It inherited some of Google's achievements and a lot of prior experience. It only started releasing last year, which is basically when domestic companies began to catch up, and it took them one year to reach the current level. I think that is already development at super speed.

Second, there are now open source communities like Llama's, which have had a great impact on the whole industry. I also agree with Dean Huang: parts of a large model are algorithms and parts are infrastructure, but there are also parts that are like alchemy — engineering practices kept secret, such as how to do fine-tuning and how to screen training data. Models with longer context windows have been launched in China: Xiaochuan made 192K, GPT-4 Turbo made 128K, and new methods keep emerging. So I believe that on this basis, domestic development is still very fast.

I have another point: if we build enterprise-level, industry, and vertical applications, we do not have to wait for GPT-4-level capability before launching; current capability is more than enough. Looked at the other way, the continuous improvement of GPT-4 has a lot to do with its 100 million monthly users. If we stay in the laboratory with no traffic and no users, we cannot improve no matter what. Benchmark question-answering measures progress only in certain respects, but with hundreds of millions of users there are countless examples of dissatisfaction every day; only then do you know your own gaps and make adjustments, and progress comes faster.

2. Talking about self-research: It’s not a black and white choice.


Zhang Yijia: The next question is a little controversial: self-developed large models. There has been much discussion about self-research recently. A few people believe that only a large model written from the first line of code counts as self-developed. Others believe that since the open source ecosystem is quite mature and all large models are based on the transformer architecture, we need not reinvent the wheel. What exactly is self-research? At which level should what we usually call self-research start? Should we pursue so-called absolute self-research?

Huang Tiejun: This topic may go beyond large models themselves. One kind of self-research is original 0-to-1 innovation, such as the invention of the airplane, the invention of the compass, the proposal of the theory of relativity, or the spiking-camera principle I proposed. There is another kind: even if you are not the first to build something, because the original inventors keep it strictly confidential, latecomers can only work it out by their own methods. That is also 0-to-1 self-research, such as our "two bombs and one satellite".

Back to large models: must everything go from 0 to 1? I think that is a bit absolute, and unreasonable. Artificial intelligence has developed for many years — neural networks, deep learning, transformers, OpenAI's GPT — all of it evolved through history, containing both inheritance and 0-to-1 self-research.

Should a company go from 0 to 1? It can be pursued, but it cannot be a hard requirement. Some are willing to start from relatively mature experience, including code, and move forward from there; others are willing to do more self-research, even starting from the first line of code. All these paths are viable. One can judge whether a product or technology system has more or fewer self-developed components and higher or lower technical content.

My conclusion: encourage self-research, encourage 0 to 1, but do not demand that everything be 0 to 1 — that goes to the other extreme.

Zhou Hongyi: I think everyone has a misunderstanding about large models — the belief that every large model means rewriting the code. In fact, many of the low-level operators of large models are universal. Even OpenAI did not start from scratch: the transformer algorithm it uses was proposed by Google, and the training and inference frameworks everyone uses are mature. What we do is pre-train and fine-tune on various data on top of that base — that is the key to whether a large model is usable.

There is still a difference between today's open source large models and open source code. Linux and many big-data systems and operating systems release source code; an open source large model is actually not code but the weights of the neural network after training. As for original innovation, I agree with Dean Huang: wanting China to catch up quickly while also wanting to reinvent every wheel is unnecessary. Especially after the release of Llama 2, the open source large-model ecosystem has been greatly advanced. Starting from the same Llama 2, different companies train differently and end up with different model capabilities — like two children with the same brain who read different books and get different grades.

In fact, SFT (supervised fine-tuning) on top of open source models with one's own data, and the RLHF (reinforcement learning from human feedback) introduced by OpenAI, have no direct relationship with the underlying transformer code. Of course, past a certain level you may need to innovate on the model architecture. This is an upward spiral.

When doing SFT, everyone generates hundreds of thousands of questions, which takes a lot of manpower. Many companies now save themselves the trouble and use GPT-4 directly to generate fine-tuning data. Iteration is very fast, but because they train on GPT-4's outputs, the resulting intelligence can never surpass GPT-4.
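Zhou's point about distillation can be illustrated with a minimal sketch. The JSONL chat format below is one commonly used for supervised fine-tuning; `teacher_answer` is a hypothetical stand-in for calling a stronger model such as GPT-4, and the caveat he raises sits in the comment: a student trained only on a teacher's outputs inherits the teacher's ceiling.

```python
import json

def teacher_answer(question):
    # Hypothetical stand-in for a GPT-4 API call: in this kind of
    # distillation, the stronger model's output becomes the training label.
    return f"[teacher's answer to: {question}]"

def build_sft_record(question):
    # One supervised fine-tuning example in the common chat JSONL shape.
    # Because the label comes from the teacher, the student's quality is
    # capped by the teacher's quality: it can approach GPT-4, not surpass it.
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": teacher_answer(question)},
        ]
    }

def write_sft_dataset(questions, path):
    # Serialize one JSON object per line (JSONL), the usual SFT input format.
    with open(path, "w", encoding="utf-8") as f:
        for q in questions:
            f.write(json.dumps(build_sft_record(q), ensure_ascii=False) + "\n")

record = build_sft_record("What is supervised fine-tuning?")
```

This is only a sketch of the data-preparation step; the manpower Zhou mentions goes into writing or reviewing the questions and answers rather than into this plumbing.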

In short, my attitude toward open source is that there is no need for one-size-fits-all.

Zhang Yijia: Xiaochuan, Baichuan Intelligence has moved very fast this year, releasing a large model every 28 days on average. It must also be borrowing strength from the shoulders of predecessors. How do you weigh self-research against building on prior achievements?

Wang Xiaochuan: Because we move so fast, people have indeed asked me this. From the first-generation 7B, 13B, and 53B models through the second generation, we have issued technical reports disclosing our training methods, data slices, and how the model's parameters change at each training step, so that everyone can verify that we started from the raw materials.

Large models are different from the enterprise operating systems of the past. Building a large model is more like cooking: you need raw materials, meat and vegetables. Some people start by growing the vegetables in the garden; others start from cutting, washing, and portioning them. Training a model involves a lot of hand-work — the ratio of ingredients, the order they go into the pan; after frying you may add seasoning and stir-fry again. It is that kind of process.

Baichuan's choice is to start by growing the vegetables: we take public and open source data and re-clean it ourselves. From Llama through ChatGPT, Chinese data accounts for only about 1.5%; in ours, Chinese is more than one third, and the two thirds of English data is also processed by us. The cooking is likewise done ourselves, but we can see what raw materials others use and learn from good methods and experience.

Without making value judgments, we can roughly divide approaches into two categories: one is cooking yourself, starting from the raw materials; the other is taking dishes others have cooked, adding some seasoning, and turning them into your own. By that standard, Baichuan belongs to the first type.

Zhang Yijia: Zhipu AI particularly emphasizes its self-developed pre-training framework. Why did you make that choice back then? Does it count as reinventing the wheel?

Zhang Peng: Of course it is not reinventing the wheel, because what follows "self-developed" is the word "algorithm" — that is, the pre-training framework. We have made our own innovations and attempts at this level; their success or failure is for those who come after to judge.

What is the gap between us and the leaders? Think about it simply: if you only follow in your predecessors' footsteps, it is hard to surpass them — at best you close in from behind. If you want to surpass them in the shortest possible time, you must innovate.

I think so-called original innovation and self-research are actually two different concepts. Modern science and technology, including an engineering discipline like artificial intelligence, all advance on the shoulders of predecessors; almost no one creates a school from nothing, and OpenAI too stands on its predecessors' shoulders. There is no need to be absolute about this. But neither should it be conflated with simply following in others' footsteps. On the premise of appropriate inheritance, we must have a sense of original innovation — that is the goal of Zhipu AI's self-developed algorithm framework. I believe only this way can we catch up with, or even surpass, those ahead of us as quickly as possible.

Huang Tiejun: Let me add a few sentences. Scientific research actually has several levels. At the first level, many people at universities and research institutes write papers, not for commercial gain but as contributions to public knowledge. At the second, they apply for patents, which protect them for 20 years while being made public, so others can keep building on that basis. Another level is trade secrets (know-how) — the pitfalls a company has stepped in and the experience it does not share externally. After that comes the code itself, the concrete implementation, along with the model, data, fine-tuning, alignment, and so on. Any enterprise building a large model must think through these levels. Future industrial development will be systematic, and we need to analyze what is ours and what is others'. I am also confident that our contributions will grow richer and our level higher and higher.

Zhang Yijia: This discussion shows that self-research is not a simple black-and-white concept or an either/or choice; it is a complex, systematic project.

3. Talking about implementation: China is better at making super products


Zhang Yijia: The cost of large models has always been a big challenge. It is not like the Internet that Mr. Zhou and Xiaochuan knew so well back then: with large models, the more people use them, the more computing power is consumed and the higher the cost. There is no network effect, so it is not as sexy a business as the Internet. Some media have joked, "Using GPT-4 to summarize emails is like delivering pizza in a Lamborghini." How do you view and deal with these problems?

Zhou Hongyi: On the Internet, the cost of serving millions, tens of millions, even hundreds of millions of users could be driven very low, so free services funded by advertising prevailed. But each question a large model answers costs far more than a search query, so a new business model is born: the user-subscription model OpenAI now uses instead of the advertising model. This shift will also make Google more uncomfortable.

But every new thing has shortcomings when it first appears. Large models today are like the brick phones of the early mobile era: very expensive and far from ordinary households. We see a trend in Silicon Valley — OpenAI, Microsoft, Meta, and Amazon are all making chips. Asking around, I found that what everyone actually needs to solve first is inference chips. Apple's and Qualcomm's CPUs claim their memory can be fully used for inference.

So in the next year or two, if inference no longer has to rely on expensive GPUs and costs fall with the help of today's CPU upgrades, I personally think cost will no longer be an issue. Moreover, setting aside the Internet and to-C applications, private deployments for government and large-enterprise customers have no cost problem: the enterprise supplies its own computing power and the number of users is relatively small. So cost will not become a major obstacle to large-model development.

Wang Xiaochuan: In the Internet era, search already consumed a lot of computing power, but at the beginning of this year, each large-model request cost more than 40 times as much as a search query.

Two things can address this. First, pick high-value scenarios first. When Internet services were free, demands on service quality were not especially high, but some capabilities of large models offer better service in high-value scenarios such as law, medicine, and education; serve those well and you can make money.

Second, cost will eventually drop sharply through hardware upgrades and software iteration. OpenAI takes cost very seriously. When we visited them in the United States in the middle of this year, their goal for the second half was to cut GPT-4's cost by a factor of 4, with another factor of 4 next year. Going forward, software and hardware combined may reduce cost 10 times per year — a thousandfold in three years — until it becomes as low-threshold as getting online. In the long run, technological progress will overcome this difficulty.
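The compounding Wang describes is easy to check: two successive 4x cuts give 16x, and a 10x reduction per year compounds to 1000x over three years. A quick sketch (the factors are the ones quoted above, not measured costs):

```python
def compounded_reduction(factor_per_year, years):
    # Total cost-reduction factor when cost falls by `factor_per_year`
    # each year, compounding multiplicatively.
    return factor_per_year ** years

# Two successive 4x cuts (second half of this year, then next year).
two_cuts = 4 * 4
# 10x per year from combined software and hardware gains, over three years.
three_year = compounded_reduction(10, 3)
```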

Zhang Yijia: You previously mentioned launching a "super product" next year — I remember you said so when you founded the company in April. Everyone agrees ChatGPT counts as a killer app, but the products that followed may not qualify as super products. What is a super product? What will yours look like?

Wang Xiaochuan: Only something used by tens of millions of users can be called a super product. People today are a bit demanding about their progress — ChatGPT was released only a year ago, and this past year we have mainly been catching up on technology. Asking about super products now is a bit like asking a one-year-old whether he has gone to college or found a job. We will launch a super product next year, and I think that timetable already represents China speed. China's application capability is very strong, but the current model foundation is not yet enough; once models reach ChatGPT's level, China will produce many more super products.

Zhou Hongyi: What I have been thinking about recently is this: if you look carefully at what Adobe, Microsoft, and Google are doing, you will find that AI in the future may not be a technology that independently produces killer apps, but one that enhances every aspect of existing technologies and processes. That is why Microsoft adds AI to all its products and names it Copilot. In the future, whether to B or to C, all products will be reshaped by AI, but there may not be products that rely on AI alone; AI still needs to merge with traditional businesses and products — just as the PC was first used for typing and later became woven into all work and life.

4. Talk about trends: GPT Store is not the only solution


Zhang Yijia: From the early focus on technology and parameters to today's focus on application, deployment, and ecosystem, where will large models head next? At OpenAI's recent developer conference there were two interesting directions: the GPT Store, which imitates Apple's App Store, and GPTs, which resemble AI Agents. My question: are the GPT Store and AI Agents the next stage of AI competition?

Wang Xiaochuan: For AI applications, China and the United States may take different paths. OpenAI can push the technology to the extreme and then branch outward, without emphasizing optimization for any particular field; that is tied to its background, and it is a general-purpose technology. That is one path, but I do not think OpenAI is the only paradigm. In China, application scenarios will land faster. Some companies in China and the United States provide end-to-end services and are already able to apply the technology in certain scenarios. It will eventually reach thousands of industries and may break out at certain single points.

Zhang Peng: We are also following the recent hot topics closely, especially AI Agents. Going back to first principles, the emergence of new things and how they land all stem from innovation and upgrading of the technology itself. Whether the GPT Store, GPTs, or the API — including the killer apps just mentioned — they all essentially benefit from stronger base-model capabilities, which can indeed change many of the old paradigms of app development and application flow. So what we look forward to is an explosion in base-model capability.

Zhou Hongyi: I understand it a little differently. Many people think OpenAI is building an App Store, but ChatGPT rests on just a chat box, a chatbot; once people had played with it, they found it could not solve their problems and was hard to integrate with existing business. As I understand it, OpenAI built GPTs precisely to solve this, letting everyone do deep customization in their industries and business scenarios to find the killer apps. Even OpenAI itself is still looking.

My understanding of Agents has gone through several stages. First came Stanford's virtual town of 25 agents, which felt at the time like virtual robots. Later, AutoGPT made people think an Agent could decompose tasks automatically, but it fell short. This time, behind GPTs is the Agent idea, though OpenAI deliberately avoided the term. In my view, the value of an Agent is not yet letting GPT complete many tasks automatically — it has not reached that point. Its most important value is joining large models to real business work, because it can call APIs automatically and, driven by workflows, complete work tasks.

5. Talking about AI factions: e/acc, or EA?


Zhang Yijia: The last topic is AI's factional struggle. The debate over AI values is running hot in Silicon Valley, where people have split into two camps. One is e/acc, effective accelerationism, a relatively radical view that humans should accelerate technological innovation unconditionally and that a technological explosion must be good for humanity; OpenAI founder Sam Altman's approach is close to this school. The other is EA, effective altruism, which holds that we must ensure AI is benevolent toward humans and cannot harm them, and that when AI may threaten humans it should be stopped; OpenAI chief scientist Ilya Sutskever's position is close to this camp. Tell me, are you e/acc or EA?

Wang Xiaochuan: Once you sort people into two labeled camps, you constrain what they really think. When we officially launched on April 10, I wrote that we would use AGI to help humanity flourish and carry human civilization forward — focusing not on individual gains and losses but on how humanity as a whole, the great intelligent life form that has produced the laws of physics and our culture, can keep developing. Conservative or radical, whichever camp helps human civilization prosper, or even simply continue, is the meaningful one to us.

Zhang Yijia: That means you can move between the two factions with ease.

Zhou Hongyi: So you asked Wang Xiaochuan whether he is a bird or a beast, and he answered that he is a bat.

Zhang Yijia: Listen to Mr. Zhang Peng’s point of view. Don’t be as cunning as Xiaochuan.

Zhang Peng: That is not cunning, that is seeking truth from facts. My view is close to Xiaochuan's: do not label yourself lightly, especially with labels defined by others. In building AGI, we still want to make the world better and make technology serve humanity; the two camps simply weigh their priorities differently and choose different paths and rhythms, unlike we Chinese, whose thinking tends toward moderation.

The same goes for Zhipu AI. Viewed purely from a technical or engineering perspective, the technology is advancing by leaps and bounds, and there is a primal drive to push it forward. On the other hand, within the limits of our knowledge, we try to ensure that the technology's evolution does not endanger human interests.

Zhang Yijia: Teacher Huang should be e/acc, right?

Huang Tiejun: You may have it backwards. The theme of this forum, "All the way to the singularity," reflects the industry's current e/acc mood.

Ten years ago I was e/acc. It is not a matter of what I want: I believe it is a law of the world's development that artificial intelligence will surpass humans in every respect. Back then few people believed it, calling it science fiction; today the opposite is true. This is humanity's destiny.

I used to think AGI would have to wait until 2045; now OpenAI may achieve it within 10 years, and some say 2 to 5. Whatever the number, humanity is in fact not mentally prepared to be surpassed by a stronger intelligence.

Why do I say I am no longer e/acc? Although I think this is independent of human will, I now lean toward slowing down — finding a suitable road to coexistence before AGI is realized. But time is getting tight, and few people are thinking about how to do this.

My current view is that AGI must not be treated as the ideal everyone pursues, because it is a superhuman system, more comprehensively capable than we are. Once such a system can be built, it will control humans rather than be controlled by them. That does not make it an absolute evil, and it does not mean it will destroy humanity, but in any case it is a great reversal in the history of human evolution, and we must be extremely cautious.

Zhang Yijia: If at this moment there were an open letter calling on everyone to stop and think it through together, would you resolutely sign it?

Huang Tiejun: Stopping is never possible, but we can slow down a little, or invest more resources in thinking it through. Besides running forward, we can also think about the way out and possible schemes for coexistence. At present, far too little attention and resources go into that.

Zhang Yijia: Which faction are you in, Mr. Zhou?

Zhou Hongyi: I am a firm believer in development.

First, people sometimes act differently from what they think. Musk, for example, led the call for a signed letter asking everyone to stop and reflect, while quietly buying 10,000 graphics cards and founding xAI — he wanted his peers to pause so he could catch up. The same goes for the OpenAI boardroom fight: on the surface a struggle between AI factions, but I think a power-struggle element was also in play.

On a recent visit to Silicon Valley, I found that virtually every American startup is now working on AI, and many investors will not fund anything else. Why has Nvidia's stock risen so much? It has sold out its entire production capacity for next year. Beyond the major Internet companies buying cards like crazy for AI, startups and traditional companies are buying like crazy too. Everyone believes AI will be an industrial revolution more transformative than the Internet and the computer. The United States prevailed over Japan not merely through the Plaza Accord but by seizing the PC and Internet waves of industrial upgrading; this time it wants to seize the AI wave.

Without innovation today there is no incremental market to create, and many contradictions in the existing market can only be resolved through technological innovation. So my position remains: develop.

Zhang Peng just mentioned installing brakes, but the problem now is that we do not know how to build the brakes or where to install them. That is the mission of a company like 360. I think that as large models develop, they will turn from black boxes into something more and more transparent. So: can one set of large models be used to monitor another set? Can security auditing be added to the large-model Agent framework? Can this super tool be kept under human control? We must actively search for security solutions.
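Zhou's idea of one model monitoring another can be sketched as a guard loop: every candidate answer from a generator passes through an auditor before release. Both `generate` and `audit` below are hypothetical stand-ins for real model calls (the auditor here is a trivial blocklist); the point is the control flow, not the models.

```python
def generate(prompt):
    # Hypothetical stand-in for the primary large model.
    return f"draft answer to: {prompt}"

def audit(text):
    # Hypothetical stand-in for a second model that reviews output.
    # A trivial blocklist plays the auditor's role in this sketch.
    blocked = ("draft weapon", "leak credentials")
    return not any(term in text for term in blocked)

def guarded_answer(prompt):
    # One model monitors another: the generator's output is released
    # only if the auditor approves; otherwise a refusal is returned.
    candidate = generate(prompt)
    if audit(candidate):
        return candidate
    return "[withheld: failed safety audit]"

safe = guarded_answer("summarize this meeting")
```

In a real deployment both roles would be model calls, and the audit step is where the "security auditing in the Agent framework" Zhou describes would plug in.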

The takeaway I got from the movie "Oppenheimer" is actually quite positive, for at least two reasons. One is that the same technology gave us nuclear power plants, a genuinely green energy source. The other is that in the roughly 80 years since the atomic bomb there has been no large-scale nuclear war; the checks and balances formed after the war are what made globalization and the rapid development of China's economy possible.

In the United States, too, very few people advocate not developing AI. Some people just have to make shocking remarks to grab everyone's attention, while the whole world keeps developing.

The United States is already going all out to start the next industrial revolution. We should be more tolerant toward artificial intelligence research, development, and product scenarios, and, as with the Internet, develop first and govern later, giving AI a few years to develop fully in China and drive chips, big data, new industrialization, and so on. This is certain to be a focal point of future great-power competition, and there must be no deviation in direction.

Huang Tiejun: I would like to comment that what Mr. Zhou said represents the views and mentality of most people, and it is the mainstream view today. But this is the fate of human beings: we have to go down that road, and then head toward a point of no return.

Zhou Hongyi: Without AI, humans will run into many unsolvable problems, such as aging, climate change, and rising global temperatures, and ultimately human civilization may be destroyed by war. With AI, we may be able to find some opportunities.

Zhang Yijia: I can hear the differences among you. Xiaochuan takes a God's-eye perspective and wants to define his own values, Mr. Zhou takes an optimistic and positive attitude, while Teacher Huang takes a pessimistic yet compassionate attitude.

Zhou Hongyi: Everyone here is an AI practitioner, so let me offer a suggestion. Many self-media accounts love to spread anxiety online, but there are two matters of principle that should not be hyped. One is proclaiming the victory of silicon-based life over carbon-based life; I guarantee you will not see that in your lifetime. The other is claiming that AI will cause large-scale unemployment, which worries many policymakers and decision-makers, including the companies and governments that most want to adopt AI. We should work together to promote the development of AI in China so that everyone has business opportunities.

Huang Tiejun: If silicon-based life does replace carbon-based life, what will Mr. Zhou do?

Zhou Hongyi: I think it’s up to us hackers to hack them in the end.

Let me push back once more. Although the human brain is no match for GPT and cannot scale up computing power the way large models can, its biggest advantage is low power consumption: Teacher Huang runs on about 30 watts, and I run on about 20. Large models are good at everything, but their most fatal problem is energy. If ultra-large-scale data centers are built all over the world, the world's energy supply may simply not be enough. I don't think we need to worry about the day silicon-based life surpasses carbon-based life, because if energy technology cannot advance, the artificial intelligence industry may stall.

Huang Tiejun: First distinguish training from inference. The 20 to 30 watts of the human brain mainly cover inference.

Zhou Hongyi: Humans do training and inference in one.

Huang Tiejun: But we have been training ever since we were born.

Zhou Hongyi: On the other hand, if humans solve the energy problem, many other problems will be solved too. Otherwise, no matter how much Musk brags, interstellar travel will be impossible. Humanity has come this far, with science and technology advancing by leaps and bounds over the past 100 years, yet today we are still stuck on theoretical physics and fundamental materials, such as controllable nuclear fusion and room-temperature superconductivity. I firmly believe AI is the best breakthrough tool human civilization has ever created. Should human civilization itself take that leap? Is the leap dangerous? Can the danger be kept controllable? We need more people studying the safety of large models, not just building them.

Huang Tiejun: You talked about one stage but didn't go further. What you discussed is how to solve problems within a certain period, or how to keep safety under control, but not what happens after we go over the cliff. In fact, we have no idea whether we are already sliding down the cliff right now.

Zhou Hongyi: Do you two believe the legend of OpenAI's Q*?

Zhang Peng: From what I've read recently, I think it is probably more of a joke.

Huang Tiejun: I believe every company has its own secrets and tricks; that is their competitiveness. I think you are the same. But whether the trick is really that powerful, no one can say until it is verified.

Zhou Hongyi: If Q* is real, then your concerns carry more weight, because it would mean AGI has arrived before humanity is ready on safety. But I think Q* is fake, a story they made up to save face. I think AGI is still some time away, which leaves us more time to think about security issues.

Huang Tiejun: Ten years ago I said AGI could be realized by 2045. At that time there was no OpenAI and no Q* at all. With or without them, it does not change whether this will happen.

Zhang Yijia: Teacher Huang and Mr. Zhou actually hold very similar views on AGI. Mr. Zhou says that as long as we develop fast enough and my braking skills are good enough, we can stop the car before the cliff. Teacher Huang says that no matter how good the brakes are, we will all meet at the bottom of the cliff in the end. One is the "stop before the cliff" party, the other the "fall off the cliff" party. Today's exchange was very thorough. Thank you to all the guests for their wonderful sharing.


Origin blog.csdn.net/richerg85/article/details/134951716