Musk announces a ChatGPT competitor, while OpenAI's CEO pours cold water on scaling; GPT-5 may change drastically

Mindlessly increasing model size is already outdated

When you type a question into ChatGPT or the new Bing, the AI calls on its cloud "brain," reasons over it, and generates an answer.

Throughout GPT's development, OpenAI repeatedly noted that parameter counts were ballooning and the models were growing more complex with each generation.

Training large language models requires GPU clusters with enormous computing power, and as user numbers grow exponentially, serving those users also consumes considerable compute.

Every enterprise that wants to enter AIGC first considers how to deploy a large-scale cloud computing center; the competition in AIGC seems to have become a competition for hardware resources.

OpenAI CEO Sam Altman. Picture from: Wired

A more powerful computing center can run larger language models and algorithms, and the resulting AI product may be "smarter."

However, in a recent talk at MIT, OpenAI CEO Sam Altman poured cold water on latecomers who hope that "brute force works miracles."

Altman: Mindless scaling of models is obsolete

"The era of large-scale models has come to an end. We need to use new ideas and methods to make AIGC achieve new progress."

Expanding model size, using more parameters, and drawing on greater computing power are essentially the iterative methods OpenAI has applied to GPT over the past few years.

GPT-2 has approximately 1.5 billion parameters, while GPT-3 has 175 billion. Although OpenAI has not officially confirmed figures for GPT-4, many institutions speculate that it was trained on trillions of words of text using tens of thousands of cloud servers, at a training cost exceeding $100 million.

As ChatGPT's influence grew, Microsoft used its underlying technology to launch the new Bing.

Subsequently, Google launched Bard and Adobe launched Firefly. Beyond these large companies, many well-funded Silicon Valley startups, such as Anthropic, AI21, Cohere, and Character.AI, are also investing heavily to build larger-scale models in an effort to catch up with ChatGPT and OpenAI.

Runway's second-generation model can create a "blockbuster" from a single sentence

The huge demand for hardware resources has also triggered a rush to buy NVIDIA's A100 and H100 GPUs.

On eBay, Nvidia's H100 has become a hot commodity, with prices reaching US$40,000 against an official list price of about US$33,000. Moreover, H100s are usually sold in sets of eight to build a server.

Currently, no third-party GPU competes with NVIDIA's in the open market. In the AI wave, whoever holds more NVIDIA GPUs seems to hold the key to winning in the AIGC industry.

Much like the capital monopolies of large enterprises in traditional industries, the pursuit of computing power has given rise to a "computing power monopoly."

Sam Altman also said that OpenAI has no plans to develop GPT-5 for now. The implication is that mindlessly expanding model size will not allow GPT to keep iterating indefinitely.

Currently, both ChatGPT and Microsoft's new Bing have suffered frequent outages and unstable service due to insufficient computing power.

At the same time, the new Bing is not yet open to all users, and a waitlist still exists.

This is one reason why Google has not been able to fully introduce similar generative AI into its search.

Nick Frosst, who once worked on AI at Google and is now a founder of Cohere, said Altman was prescient, adding that new AI model designs and architectures may be adjusted based on human feedback.

According to this idea, OpenAI may already be using new ideas to conceive GPT-5.

Musk: Poach people, buy graphics cards, and form a team to join the game

Sam Altman has publicly stated that, at this stage, continuing down OpenAI's old path of improving models by scaling them up will make it difficult to catch up with ChatGPT.

Yet Musk, an old friend of Altman's, has resolutely plunged into the AIGC industry.

According to the Wall Street Journal, Musk quietly registered a company called X.AI Corp, poached several researchers from Google, and bought thousands of GPUs from Nvidia.

Musk's purpose is clear: to compete with OpenAI and Google.

Shortly after these reports were published, Musk admitted in a Fox News interview that he wants to launch a ChatGPT-like product named TruthGPT.

Musk's intention is clear: TruthGPT will be a "maximally truth-seeking AI" that tries to understand the nature of the universe, hopefully bringing more benefit than harm.

Musk's interview responses are actually a bit contradictory. After all, given the current scale of large language models, X.AI Corp may not even match some Silicon Valley startup teams.

And the name TruthGPT also reads as a protest against ChatGPT.

Musk has long argued that GPT-4-style generative AI carries considerable risk, calling for a six-month pause on development and the introduction of corresponding regulations.

He even said "it has the potential to destroy civilization."

While calling for a pause on research, he founded X.AI Corp and entered AIGC himself. It is hard not to see this as hype.

Moreover, suspending research and development for six months looks more like freezing OpenAI in place to give himself six months to catch up.

It is undeniable that Musk's SpaceX and Tesla have each upended an industry and become among the most famous companies of the moment.

Musk and Starship

But leaving OpenAI, watching its current achievements, and then making controversial remarks all reveal Musk's long-standing unwillingness to miss this opportunity.

As for whether TruthGPT can live up to his words, we can only wait six months and see.

Jensen Huang: We need an app

Currently, generative AI lives only in dialog boxes, appearing on today's smart devices in a very old-fashioned form.

Whether through a plug-in or a third-party app, what the public ultimately interacts with is a dialogue text box.

This was also the most rudimentary form of human-computer interaction when computers first appeared.

On a podcast hosted by Nicolai Tangen, Nvidia CEO Jensen Huang discussed how AI will change the way people live and work.

The current rapid development of AI is inseparable from the help of NVIDIA's GPUs. NVIDIA has almost monopolized the cloud computing power market.

In Altman's view, current AI cannot escape its dependence on ultra-high computing power, ultra-large models, and massive algorithms.

That's a challenge for startups, but it's also a challenge for Nvidia.

NVIDIA needs to keep developing and manufacturing more powerful GPUs to keep pace with AI. And for now, AI still lives in huge data centers, requiring multiple supercomputers to supply its computing power.

It's unlike any application or software before. GPT-3 has 175 billion parameters; faced with that volume of computation, NVIDIA redesigned its AI GPUs from the ground up.

But for now, processing large-scale data and training large models still takes weeks; AI cannot yet be condensed into an app or a personal PC.

This explains why AI has so far entered software only in the form of cloud services and plug-ins.

Nvidia also said that AI has begun to penetrate the design of its own chips: "While the chip architect is sleeping, AI keeps iterating, optimizing, and improving the corresponding architecture."

"It helps us better design and manufacture chips."

In other words, AI participates in producing chips, and those chips in turn supply the cloud computing power that runs AI. You could call this AI self-sufficiency.

In addition, Jensen Huang believes AI will trigger the next industrial revolution, and that for now it will not awaken and replace humans.

On the contrary, it will greatly free up manpower and raise everyone's productivity. People will be able to program directly in natural language without learning complex languages such as Python, Java, and C.

He also cited Nvidia's own software engineers as an example: with AI's help, about 40 to 50 percent of their code is now supplied by generative AI, and the engineers mainly contribute suggestions and ideas.

Jensen Huang also estimates that AI will increase the productivity of Nvidia's software engineers tenfold.

GitHub's Copilot feature

AI's involvement lets people complete many tedious tasks thousands of times faster, indirectly boosting productivity.

Before GPT-4 emerged, the AI industry could sustain rapid iteration simply by scaling up models and adding hardware.

With computing power bottlenecks emerging, OpenAI has to rethink how to optimize and where AI's next direction lies.

Similarly, Jensen Huang clearly recognizes that AI's current demand for computing power is enormous: it must live in vast facilities packed with chips. Today's AI is like ENIAC, the world's first-generation computer.

The next step for AI is to reduce its resource requirements, slimming down until it can exist independently as an app or a piece of software.

At present this is not realistic, but the good news is that OpenAI has begun recruiting Android and iOS engineers, so a corresponding app should arrive soon.


Origin blog.csdn.net/2301_76935063/article/details/130231222