An AI model that comes infinitely close to humans: why does its intelligence come down to computing power?

ChatGPT is a conversational AI language model released by OpenAI. GPT stands for Generative Pre-trained Transformer.

It upends traditional AI: it adjusts itself through reinforcement learning from feedback, learns more autonomously, and replies in a way much closer to how humans do. It genuinely earns the name "artificial intelligence".

After using it for a while, a few distinctive traits of ChatGPT stand out:

It can admit its own mistakes and adjust based on user feedback;

It analyzes whether a question itself makes sense and points out or corrects a flawed premise;

It says it cannot answer an unanswerable question, instead of forcing out a reply the way traditional AI assistants do;

It keeps track of context and can carry on multi-turn question-and-answer conversations.

I still remember when ChatGPT first went viral: everyone was stunned by how intelligent its answers were. A rumor even circulated online that ChatGPT was fake, not AI at all, and that it was really the company's engineers in India typing the replies in the background.

On top of that, ChatGPT passed 100 million users within two months, the fastest any product has reached that milestone. Not long ago, part of a speech by the President of Israel was written by ChatGPT. It can write ordinary program code, draft press releases, and take on other human work, which puts it in a completely different league from Siri or Xiaomi's Xiao AI assistant.

Someone gave the latest GPT-4 an IQ test, and it scored around 80, while an ordinary adult scores 90 or above. You can see that ChatGPT really is getting remarkably close to replying like a human.

AI models this close to human behavior deserve to be called real artificial intelligence, but underneath a model that comes "infinitely close to human" lies the support of enormous computing power.

How does this work, and why does getting infinitely close to human-level intelligence come down to computing power?


 

The challenges of combining AI and computing power

Anyone who has worked with deep learning knows that the more a model is trained, the more accurate its final results tend to be. This is the classic "quantitative change leads to qualitative change".

Today's AI gets its models through "deep learning + massive computing power". That is, a large model, also called a pre-trained model, is first trained on large-scale unlabeled data to learn general features and patterns, and is then fine-tuned on top of that base to complete specific tasks.
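To make this concrete, here is a minimal sketch of the "pre-train, then fine-tune" pattern (my own illustration, not OpenAI's pipeline), reusing the publicly available GPT-2 model from the Hugging Face transformers library; the two example texts and training settings are just placeholders:

```python
# A minimal sketch of "pre-train, then fine-tune" (illustrative only, not OpenAI's pipeline).
# Assumes: pip install torch transformers; the example texts below are made-up placeholders.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # reuse a publicly pre-trained model
model = GPT2LMHeadModel.from_pretrained("gpt2")        # this download replaces costly pre-training

texts = [
    "Q: What is GPT? A: A generative pre-trained transformer.",
    "Q: Why do large models need GPUs? A: Training requires massive parallel compute.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):                                 # tiny fine-tuning loop on task-specific data
    for text in texts:
        batch = tokenizer(text, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

The expensive part is the pre-training that produced the base model; the fine-tuning loop itself only nudges it toward the target task.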

ChatGPT's training model is the generative pre-trained transformer its name comes from. A simplified view breaks it into three steps:

First, pre-train on a huge amount of data. ChatGPT's training data covers Internet text up to 2021: official website content and ordinary users' posts alike all serve as training data.

Then, rank the model's candidate replies by how good they are and fine-tune repeatedly based on that ranking, so the model learns to produce better answers (this is the reinforcement-learning-from-feedback step mentioned earlier).

Finally, it keeps being trained by its huge base of users. Feedback on a large number of replies also trains the model: the more dialogues it holds, the more accurate it can become.
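Put together, the three steps look roughly like the schematic below. This is only an illustrative skeleton of my own, not OpenAI's code: every function body is a placeholder stub, and names such as web_corpus and ranked_replies are hypothetical.

```python
# Schematic outline of ChatGPT-style training (illustrative skeleton, not OpenAI's code).
# All function bodies are placeholder stubs; the data variables are hypothetical stand-ins.

def pretrain(web_corpus):
    """Step 1: learn general language patterns from huge amounts of unlabeled Internet text."""
    return {"name": "base_model", "documents_seen": len(web_corpus)}

def finetune_with_rankings(model, ranked_replies):
    """Step 2: fine-tune using human rankings of candidate replies (the feedback-based step)."""
    model["preference_tuned"] = True
    return model

def learn_from_users(model, dialogues):
    """Step 3: keep improving from feedback gathered across millions of user conversations."""
    model["dialogues_seen"] = len(dialogues)
    return model

if __name__ == "__main__":
    corpus = ["web page text ...", "forum post ..."]     # stand-in for Internet-scale data
    rankings = [("better reply", "worse reply")]          # stand-in for human preference pairs
    chats = ["user question -> model answer"]             # stand-in for live user dialogues

    model = pretrain(corpus)
    model = finetune_with_rankings(model, rankings)
    model = learn_from_users(model, chats)
    print(model)
```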

This is a new way of learning, rather than mechanically replying from a fixed set of internal training data. That is why large models are the development trend and the future of artificial intelligence.

The ChatGPT example also shows that three things determine how smart an AI model can become: 1. the amount of training data; 2. the algorithm and model design; 3. the scale of computing power.

First, the amount of training data. For the major Internet companies, data is not in short supply, so it cannot be the deciding factor.

The more critical factors are the algorithm/model and the computing power, both of which depend heavily on GPU chips. Put simply, whoever has more GPUs and a more refined algorithm can build a more powerful AI model.
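To get a feel for the scale of compute involved, here is a back-of-envelope estimate of my own (the parameter count, token count, and utilization figures are assumptions, not numbers from this article), using the common rule of thumb that training a transformer takes roughly 6 × parameters × tokens floating-point operations:

```python
# Back-of-envelope compute estimate (illustrative assumptions, not official figures).
# Rule of thumb: training FLOPs ≈ 6 * parameter_count * token_count.
params = 175e9          # assume a GPT-3-scale model with 175 billion parameters
tokens = 300e9          # assume roughly 300 billion training tokens
train_flops = 6 * params * tokens                  # ≈ 3.15e23 FLOPs

a100_flops = 312e12     # one A100 peaks around 312 TFLOPS with BF16 tensor cores
utilization = 0.3       # assume real training reaches ~30% of peak

gpu_seconds = train_flops / (a100_flops * utilization)
gpu_days = gpu_seconds / 86400
print(f"~{gpu_days:,.0f} A100-days, or roughly {gpu_days/10000:.0f} days of wall-clock time on 10,000 A100s")
```

Even under these rough assumptions the answer comes out around tens of thousands of A100-days per training run, which is why the GPU fleet, not the data, is the bottleneck.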

Now look at the state of GPU chips. The high-performance GPUs all come from Nvidia, and the A100 series mainly used for this work sells for up to 100,000 yuan per card. More importantly, the United States has banned its sale to China.

In fact, A100s are hard to come by even for American technology companies. Tech giants such as Tesla and Meta hold only about 7,000 A100s, which is nowhere near enough for a large-compute AI model.

OpenAI, the company behind ChatGPT, however, had Microsoft pour in a huge sum to buy tens of thousands of A100s in advance and build out its computing cluster. To put that in perspective, the A100s held by all other companies in the world barely add up to what OpenAI has. That gap in computing power is what lets ChatGPT stand alone.

Before envying OpenAI's monopoly-level GPU computing power, consider the price tag: according to Nvidia CEO Jensen Huang, a single training run of ChatGPT costs about 10 million US dollars.

That enormous training bill also shows, from another angle, that large-model training is already pressing against the ceiling of available computing power, which is exactly what makes it so expensive.

From every angle, developing AI takes serious financial backing: a single A100 chip costs about 100,000 yuan, and without billions of dollars behind it there will be no good results. Artificial intelligence still has a long way to go.
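Putting the numbers above together gives a rough sense of the hardware bill alone (a back-of-envelope sketch; the card count and exchange rate are assumptions):

```python
# Rough hardware-cost estimate using the figures mentioned above (illustrative only).
price_per_a100_yuan = 100_000      # ~100,000 yuan per A100, as cited above
gpu_count = 10_000                 # "tens of thousands" of cards; assume 10,000 as a lower bound
usd_per_yuan = 1 / 7               # assume roughly 7 yuan to the US dollar

hardware_cost_yuan = price_per_a100_yuan * gpu_count   # 1 billion yuan for the cards alone
hardware_cost_usd = hardware_cost_yuan * usd_per_yuan  # ≈ 140 million USD
print(f"GPUs alone: about {hardware_cost_yuan/1e9:.1f} billion yuan ≈ ${hardware_cost_usd/1e6:.0f} million")
# On top of that come data centers, power, networking, engineers, and ~$10M per training run,
# which is how total spending climbs toward the billions of dollars.
```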

There is no need to worry yet that ChatGPT will replace humans. At present, many of its answers to specialized questions are not entirely correct.

In fact, it is better regarded as a powerful AI engine that helps humans filter the results of their queries and summarize the candidate answers; in the end it is still up to us humans to analyze them.
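Used that way, calling ChatGPT as a summarizing "engine" can be as simple as the sketch below (it assumes the official openai Python package and an API key; the model name and prompt are placeholders, and the output still needs a human to judge it):

```python
# A minimal sketch of using ChatGPT as a summarizing "engine".
# Assumes: pip install openai, and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "Summarize the user's material into three key points."},
        {"role": "user", "content": "Paste the query results to be filtered and summarized here..."},
    ],
)
print(response.choices[0].message.content)  # a human still needs to check and analyze this output
```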

But that is fine; this is still a big step forward for artificial intelligence. As some people put it: it may calculate 1+1=3 today, but sooner or later it will get 1+1=2 right, and then far more complicated calculations.

ChatGPT today still has plenty of problems, staggering R&D costs, and frequent glitches in use, but I believe it is the leader of the new wave of artificial intelligence. It is much like the birth of the world's first computer: that machine was enormous, consumed astonishing amounts of power, and broke down constantly, yet in the end it changed the world.

Let us witness its development together!

For more, follow "Moyu IT": ChatGPT ignites demand for computing power


Origin blog.csdn.net/qrx941017/article/details/131387697