Many companies have released AI products based on large models. Which company is best at putting large-model applications into practice?

https://m.mp.oeeee.com/a/BAAFRD000020230603805161.html

"There is no industry without AI, and there is no application without AI." With the implementation of AI (artificial intelligence) large-model technology, AI applications are blooming everywhere. In recent days, many companies have released AI application products based on large models. In the era of "Battle of Hundreds of Models", how to create domestic large-model application products? How to provide more inclusive computing power for large models and find more suitable scenarios?

On June 1, Alibaba Cloud disclosed the latest progress of its Tongyi large model and launched "Tongyi Tingwu," a new AI product focused on audio and video content and the first large-model application product in China to open for public beta testing. Some experts believe that cloud computing is the form best suited to building large models, and that the evolution of large models may set off a new round of transformation of traditional cloud computing architecture.

Alibaba Cloud's new AI product "Tongyi Tingwu" opens for public beta testing

The reporter learned at the launch event that the "Tongyi Tingwu" released by Alibaba Cloud is powered by the comprehension and summarization capabilities of the "Tongyi Qianwen" large model and is an AI product for work and study focused on audio and video. Unlike the output of traditional recording software, "Tongyi Tingwu" transcribes in real time, presenting the conversation to the user more intuitively as text and then archiving and summarizing it. For multilingual communication scenarios, "Tongyi Tingwu" will also launch a translation function in the future to bridge language gaps and make communication truly barrier-free.

Beyond real-time transcription of speech, "Tongyi Tingwu" can also process video on its own: it generates brief summaries, segments the content and extracts the main ideas, and locates the corresponding video clips according to the user's needs. It is worth mentioning that "Tongyi Tingwu" is also linked with Alibaba Cloud Disk, so users can easily upload videos from Alibaba Cloud Disk to "Tongyi Tingwu" for processing, which greatly improves work efficiency and the user experience.

Beyond the office and study needs of ordinary users, "Tongyi Tingwu" also offers customized functions for other groups: relying on a Chrome plug-in, foreign-language learners and the hearing-impaired can watch videos that lack subtitles via bilingual floating subtitles; for professionals who often face schedule conflicts, "Tongyi Tingwu" can serve as a "meeting stand-in," joining a meeting in a muted state while the AI records it and organizes the key points.

"We live in an era of technological change." Zhou Jingren, chief technology officer of Alibaba Cloud Intelligence, said: "With the development of AI, more and more AI assistants will be born. They will not only improve the efficiency of our work, but also Dramatically improve the experience of our lives.”

Domestic technology giants speed up their plans as competition among large AI models escalates

With "Tongyi Tingwu," large-model applications have entered the deployment stage, which has undoubtedly made waves in the industry. Alibaba Cloud, however, is not the only player on this track. An undercurrent of "disruption" keeps surging through China's Internet technology circles: more and more new large AI models are being born, and existing ones keep getting stronger. The reporter's survey found many other runners on this track eyeing the same piece of cake and the same blue ocean.

The first is the "giant camp" represented by Baidu and Alibaba. On March 16 this year, Baidu rushed out "Wenxin Yiyan," firing the first shot in the domestic large-language-model race; less than a month later, at the Alibaba Cloud Summit on April 11, Alibaba Cloud Intelligence chief technology officer Zhou Jingren officially announced the large language model "Tongyi Qianwen." As two of today's major Internet giants, Baidu and Alibaba see clearly the disruptive power AI can bring to the industry, and only by entering the game as early as possible can they seize the opportunity.

Followed by "Internet technology schools" such as Xiaomi, 360, and Zhihu. After Xiaomi Group stated in March this year that it was exploring AI large models, in the first quarter financial report conference call on the evening of May 24, Xiaomi President Lu Weibing said that the company had officially established an AI laboratory large model team in April. Currently, AI There are more than 1,200 people related to the field. Lu Wei said: "Xiaomi will actively embrace large models, but it will not make general large models like Open AI. Instead, it will deeply integrate and collaborate with its business and use AI technology to improve internal efficiency."

At the "2023 Zhihu Discovery Conference" in April, Zhihu released the large language model "Zhihaitu AI" and internally tested the first on-site large model application function "Hot List Summary". A month later, Zhihu brought another large-scale model application function "search aggregation" on the site at the "2023 Digital Expo"; at the 7th World Intelligence Conference on May 18, 360 Group CEO Zhou Hongyi, chairman of the board of directors, showed off two large-scale model products "360 Intelligent Brain" and the AI ​​drawing tool "360 Hongtu".

On May 24, Weimob released WAI, an AI application product based on large models. As of the release date, Weimob WAI had officially launched 25 practical application scenarios, including copywriting, SMS templates, product descriptions, product-recommendation ("seeding") notes, livestream scripts, official-account posts, and short-video copy.

In addition, there is the "steadfast camp" represented by iFlytek, SenseTime, and Yuncong. These companies have stood firm on the AI front through both the peaks and the troughs of the industry, and these veteran incumbents are bound to clash with the new forces trying to catch up. The reporter noted that iFlytek, the first domestic vendor to put large models into practice, has launched solutions for education, office work, automotive, and other industries.

A report released by the Institute of Scientific and Technical Information of China shows that, according to incomplete statistics, China released 79 large models with more than 1 billion parameters between 2020 and 2023. Industry insiders believe the rapid development of homegrown large models stems partly from companies responding to the "catfish effect" created by OpenAI, and partly from the long-term benefits and upgrading momentum that large models can bring to the industry. With contenders vying for the throne and every company unveiling big moves amid the market frenzy, AI is iterating and upgrading ever faster. As competition among large AI models escalates, the "battle of a hundred models" is about to begin.

Implementation is king: how can large-model products avoid being all flash and no substance?

As Alibaba CEO Zhang Yong put it, "In the AI era, all products are worth redoing with large models." Faced with the enormous opportunities of the large-model era, companies are racing to claim an ecological niche. But however loud the noise, a large model with no clear prospect of commercialization and no solid deployment capability has no chance of winning, no matter how good it is. For such an emerging product, how hard is deployment? Some analysts see two main problems standing between the concept of a large model and its productization. The first is market cultivation. Large models are still at the stage of educating the market and educating customers: as a new technology, the demand side does not yet have a clear sense of the boundaries of what large models can do, and customers do not yet understand the technical maturity of large models or their ability to handle specific niche scenarios. Closing this gap requires large-model companies and their customers to make progress together.

The emergence of ChatGPT has in effect given software users a crash course in AI, which to some extent has created more demand for commercial applications of large language models. The "Tongyi Tingwu" released by Alibaba Cloud is a good example of a large-model product adapting to scenario-based needs. With long-term use, users may even develop the habit of "working side by side" with AI, which represents a potential consumer market for enterprises.

The other issue is cost. AI deployed in different niche scenarios requires different training corpora, and producing a large model that works well and is easy to use means investing in sufficient, targeted corpora, which in turn demands heavy capital investment and deep technical accumulation. Tian Qi, chief scientist for artificial intelligence at Huawei Cloud, has said that developing and training a large model costs US$12 million per run. For consumers, the most direct reflection of these high capital and technical thresholds is the high price they must pay for services. For example, "iFlytek Hearing," built on the iFlytek Spark cognitive large model, offers machine fast-transcription speech-to-text packages ranging from 19.8 yuan for 2 hours to 888 yuan for 100 hours, and the hugely popular OpenAI charges an extra US$20 per month to upgrade from the GPT-3 model to the "smarter" GPT-4.

In addition, for domestic large AI models, computing power is another key issue. Building large models requires accessible, affordable computing infrastructure, and cloud computing is the form best suited to building them. But as large-model technology is deployed, it may in turn reshape traditional cloud computing architecture: more powerful compute nodes and storage devices will need to be added, data transfer speed and reliability will need to be optimized, and customized solutions will need to be provided.

The future development of large models also faces challenges of security and authenticity. In April this year, the "Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment)" issued by the Cyberspace Administration of China proposed that the state support independent innovation, promotion, application, and international cooperation in foundational technologies such as AI algorithms and frameworks, and encourage the priority adoption of secure and trustworthy software, tools, computing, and data resources. It also proposed that generative AI products must complete a security assessment filing before providing services.

Some insiders believe that while large-model technology brings opportunities for social development, it will also bring many governance challenges. The next step is not only to build an innovation ecosystem but also to guard against risks. Only once these problems are solved can large models truly realize their potential and achieve wide application across fields.

The large-model competition is a marathon: what matters is not who runs fastest now, but who goes furthest in the end. Who will create a domestic AI application comparable to ChatGPT? We will keep watching.
