The "open source" LLaMA became the biggest winner

This article originates from an internal Google document leaked by a Google researcher. While the author makes some interesting points, the views are his own, not Google's, and many other researchers disagree with them.

Original: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

The Google employee claimed that in the current AI arms race, open-source AI will outperform Google and OpenAI and become the biggest winner. He also believes that neither Google nor OpenAI has a moat in the field of AI.

The document notes that after Meta's large language model LLaMA was leaked in early March this year, the open-source community got its first truly capable foundation model. Although LLaMA shipped without instruction tuning, conversation tuning, or RLHF, the community quickly recognized its importance. What followed was an explosion of innovation: within just a month there were LLaMA variants featuring instruction tuning, quantization, quality improvements, human evaluation, multimodality, RLHF, and more, many of them building on top of each other.

Further reading: The leaked large language model LLaMA has contributed to a series of ChatGPT open source alternatives

While the big companies' models still hold a slight edge in quality, the gap is closing at an alarming rate. Open-source models are faster, support deep customization, are more privacy-friendly, and are more capable pound for pound. Building on open-source models, developers can meet their needs with $100 and 13B parameters, while large companies struggle with $10 million budgets and 540B parameters. Not to mention that the former can complete tasks in weeks, not months.

In short, there is no "secret weapon" in large language models. Oversized parameter counts only slow things down; the best models are the ones that can be iterated quickly. The researcher believes that building on open-source models helps avoid reinventing the wheel.

Since the "open source" LLaMA came from Meta, the researcher believes Meta is one of the biggest beneficiaries of this AI competition, effectively gaining the free labor of programmers all over the world. Because most open-source AI innovation happens on top of Meta's architecture, there is nothing stopping Meta from integrating those efforts directly into its products.

This mirrors how Google successfully used the same paradigm with open-source products such as Chrome and Android. By owning the platform where innovation happens, Google cemented its position as a thought leader and direction setter, earning the ability to shape ideas larger than itself.

It can be seen that OpenAI has made the same mistake as Google in its attitude toward open-source AI, adopting a relatively closed policy, but this does not help it build a moat.

Origin: https://www.oschina.net/news/239488/google-we-have-no-moat-and-neither