"Explore Wenxin Qianfan Large-scale Model Platform: Code Writing Has Been Easier"

Foreword

The rapid development and wide application of artificial intelligence technology this year are changing the way we live and work. As a cutting-edge technology, generative AI (AIGC) is gradually making its mark in code writing, artistic creation, and content generation. After Baidu released "Wenxin Yiyan" (ERNIE Bot), the first domestically released large language model, it launched the Wenxin Qianfan large model platform to help enterprises and developers accelerate the adoption of large-model applications.

As an AI enthusiast, I often use GPT to help me write code, so I couldn't wait to apply for trial access to Wenxin Qianfan. Below, I'll walk you through my experience with Wenxin Qianfan and introduce its powerful features.

1. Getting to Know Wenxin Qianfan

As far as I know, Wenxin Qianfan is the world's first one-stop, enterprise-grade large model platform. Its visual interface provides full lifecycle management of models and simplifies the journey from data to deployed service, making large models easy to use and understand. The features that attract me most are its 33 built-in large models (including Wenxin Yiyan, BLOOMZ-7B, the Llama model family, and more) and its 103 Prompt templates.

1.1 Rich functionality

Calling it a one-stop large model deployment platform is no exaggeration. From data management to model training, evaluation & optimization, prediction services, and Prompt engineering, it covers everything:


Follow me to register and log in to experience it!

1.2 Register and log in

Open Wenxin Qianfan's official website and click the free registration button in the upper left corner: https://cloud.baidu.com/product/wenxinworkshop?track=csdn

The review takes about 3 hours, and once you pass, you receive a 20 yuan voucher that lets you try things out for free. Delightful:


After the application is approved, you can log in and start using the platform.

2. Built-in third-party large models

Log in to the Wenxin Qianfan console and select Preset Models under Model Management in the left-hand menu to view the list of preset models. The platform ships with 33 preset large language models ready for direct use, including Wenxin Yiyan and well-known open-source models from the industry:


All 33 of these preset models are already supported, and they are all leading models: the domestically leading Wenxin series and the internationally leading Llama family. Chinese and English models complement each other, offering a wide range of choices, and more large models will be added in the future.

2.1 ERNIE-Bot model

ERNIE-Bot (Chinese name: Wenxin Yiyan) is a large language model developed in-house by Baidu. Trained on massive amounts of Chinese data, it is especially strong at dialogue and Q&A as well as content creation and generation. It sits at the model layer of Baidu's full-stack, four-layer AI architecture: chips at the bottom, then the deep learning framework, large models, and search applications at the top.

On the latest SuperCLUE leaderboard, Baidu Wenxin's overall score exceeds GPT-3.5-Turbo and is second only to GPT-4, leading the domestic large models:

Across eight core indicators (algorithm model, general capability, innovation capability, platform capability, ecosystem cooperation, industry coverage, energy industry, and service capability), its comparison with other domestic large models is as follows:

2.2 ERNIE-Bot-turbo model

The main difference between ERNIE-Bot-turbo and ERNIE-Bot lies in responsiveness and performance. In general, ERNIE-Bot-turbo responds faster than ERNIE-Bot, and both offer powerful dialogue Q&A and content creation and generation capabilities.
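Beyond the web console, both models can also be called programmatically. Below is a minimal sketch of a single-turn chat request via the Qianfan REST API, based on my understanding of the documentation at the time of writing; the endpoint paths, credential setup, and response fields may differ for your account, so treat it as a starting point rather than a definitive reference.

```python
# Minimal sketch: call ERNIE-Bot / ERNIE-Bot-turbo through the Qianfan REST API.
# API_KEY / SECRET_KEY come from your own Qianfan application; endpoint paths
# reflect the documentation at the time of writing and may change.
import requests

API_KEY = "your-api-key"
SECRET_KEY = "your-secret-key"


def get_access_token() -> str:
    """Exchange the API key / secret key for a temporary access token."""
    resp = requests.post(
        "https://aip.baidubce.com/oauth/2.0/token",
        params={
            "grant_type": "client_credentials",
            "client_id": API_KEY,
            "client_secret": SECRET_KEY,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def chat(prompt: str, turbo: bool = False) -> str:
    """Send a single-turn chat message to ERNIE-Bot (or ERNIE-Bot-turbo)."""
    path = "eb-instant" if turbo else "completions"
    url = f"https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/{path}"
    resp = requests.post(
        url,
        params={"access_token": get_access_token()},
        json={"messages": [{"role": "user", "content": prompt}]},
    )
    resp.raise_for_status()
    return resp.json().get("result", "")


if __name__ == "__main__":
    print(chat("Implement selection sort in Python"))
```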

2.3 BLOOMZ-7B model

BLOOMZ-7B is a large pre-trained language model developed by BigScience. It understands 46 natural languages and can output text in 13 programming languages. Built on the Transformer architecture and trained on a large-scale corpus, it provides strong support for a variety of natural language processing tasks.

The BLOOMZ-7B model also handles text generation, text classification, and entity recognition, and can be used in application scenarios such as question answering systems, text generation, and chatbots. It was trained on public datasets and has been used and validated by many developers and researchers.
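On Qianfan the model is served through the platform, but for comparison, here is a minimal sketch of running BLOOMZ locally with Hugging Face transformers, assuming the public "bigscience/bloomz-7b1" checkpoint; note that a 7B model needs a sizeable GPU (or a lot of patience on CPU).

```python
# Minimal sketch: run BLOOMZ-7B locally with Hugging Face transformers.
# Assumes the public "bigscience/bloomz-7b1" checkpoint and enough GPU memory;
# the device_map="auto" option additionally requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloomz-7b1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Translate to English: Je t'aime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```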

2.4 The Llama model family

Model name and description:

Llama-2-7b-chat: Developed and open-sourced by Meta AI; performs well in scenarios such as coding, reasoning, and knowledge application. Llama-2-7b-chat is the high-performance version.
Llama-2-13b-chat: Developed and open-sourced by Meta AI; performs well in scenarios such as coding, reasoning, and knowledge application. Llama-2-13b-chat balances performance and quality.
Llama-2-70b-chat: Developed and open-sourced by Meta AI; performs well in scenarios such as coding, reasoning, and knowledge application. Llama-2-70b-chat is the high-accuracy version.

You can explore the other models yourself; I won't introduce them one by one here.

2.5 Online experience

After introducing so many models, it's time to actually try one. I tested the ERNIE-Bot model here; you can log in and experience the other models yourselves.

2.5.1 Code writing ability

As a programmer, the first thing to test is of course the model's ability to write code. Let me state up front that this is not about being lazy!

A simple problem

First, use "Python to implement selection sorting" as an example to test the performance of the model on simple programming problems. The Wenxin Yiyan model gives the correct code and comments, algorithm ideas, time complexity, and final suggestions, which can be solved in solving Let us get inspiration and guidance while asking questions:

A more complex problem

Next, the classic exercise "find the narcissus numbers in Python". Wenxin Yiyan explained what a narcissus number is and gave correct code and reasoning:
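For readers unfamiliar with the exercise, a three-digit narcissus (Armstrong) number equals the sum of the cubes of its digits. Here is a short sketch of the usual solution, again my own reconstruction rather than the model's exact output.

```python
# Find all three-digit narcissus numbers: n equals the sum of the cubes of its
# digits, e.g. 153 = 1**3 + 5**3 + 3**3.
def narcissus_numbers() -> list:
    result = []
    for n in range(100, 1000):
        hundreds, tens, ones = n // 100, n // 10 % 10, n % 10
        if hundreds ** 3 + tens ** 3 + ones ** 3 == n:
            result.append(n)
    return result


print(narcissus_numbers())  # [153, 370, 371, 407]
```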

Error resolution

Recently I ran into a bug that had troubled me for a long time; I searched for a lot of information but couldn't solve it. Figuring I had nothing to lose, I asked the Wenxin model, and to my surprise it gave the correct cause of the error and a solution, helping me quickly locate and fix the problem:

Adding comments

A former colleague of mine recently resigned, and some of the Python scripts he wrote have no comments at all; now I have to maintain them.

Wenxin Yiyan did a pretty good job, adding a corresponding comment to each line of code.

2.5.2 Logical judgment

I give Wenxin Yiyan 9 out of 10 for coding ability, holding back one point so it doesn't get complacent. Next, let's raise the difficulty with some logic questions:

Which came first, the chicken or the egg?

For this kind of classic, controversial question, Wenxin Yiyan gave a particularly clever answer, presenting two different viewpoints:

The chickens-and-rabbits problem

Question: A cage holds some chickens and rabbits. Counting them gives 14 heads and 38 legs.

How many chickens and how many rabbits are there?

Wenxin's reasoning here was flawless:
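For reference, the answer the model should arrive at: with c chickens and r rabbits, c + r = 14 and 2c + 4r = 38, so r = (38 - 2 × 14) / 2 = 5 and c = 9. A quick brute-force check in Python:

```python
# Brute-force check of the chickens-and-rabbits answer: expects 9 chickens, 5 rabbits.
for chickens in range(15):
    rabbits = 14 - chickens
    if 2 * chickens + 4 * rabbits == 38:
        print(chickens, rabbits)  # prints: 9 5
```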

3. First experience with Prompt engineering

3.1 What is a Prompt?

Prompt engineering is the AI technique of optimizing the structure, content, and other dimensions of prompts, constraining a large model's input to a specific scope so that its output can be better controlled. It can be applied in scenarios such as dialogue and communication, content creation, analysis and control, government services, financial services, and travel services.

Simply put, it is the skill of wording prompts so that the model's answers better match our expectations. The Wenxin Qianfan platform currently has 103 built-in Prompt templates covering a wide range of fields.
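As a rough illustration of how such a template works (a toy example of my own, not one of the platform's built-ins): a template is just parameterized prompt text whose slots are filled in before the request is sent to the model.

```python
# Toy illustration of a prompt template: fixed instructions plus named slots
# that are filled in at request time (not an actual Qianfan built-in template).
TEMPLATE = (
    "You are a senior Python reviewer. "
    "Explain the bug in the following code and provide a fixed version.\n"
    "Code:\n{code}\n"
    "Answer in {language}."
)

prompt = TEMPLATE.format(
    code="print(sum([1, 2, '3']))",
    language="Chinese",
)
print(prompt)  # the filled-in prompt is what actually gets sent to the model
```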


Let's see what the Prompt templates are like in practice.

3.2 Python parser

When you're on a business trip with no Python environment on the computer at hand and urgently need to test some code, the built-in "Python parser" template comes in handy. For example, a colleague sent me a string-extraction snippet that had a problem and asked me to fix it, so I used the Prompt template to check its output:
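As a purely hypothetical stand-in for the kind of snippet involved (I can't reproduce the actual code from that test here), a typical string-extraction mistake and its fix might look like this:

```python
# Hypothetical stand-in for the kind of buggy string-extraction snippet
# fed to the "Python parser" template (not the actual code from my test).
import re

text = "Order id: A12345, amount: 99.5 yuan"

# Buggy: re.match only matches at the start of the string, so it returns None here.
# order_id = re.match(r"A\d+", text).group()

# Fixed: re.search scans the whole string.
order_id = re.search(r"A\d+", text).group()
print(order_id)  # A12345
```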

3.3 Linux terminal

When you don't have a Linux environment but want to check what certain Linux commands would output, you can use the "Linux Terminal" template. I tested several commands and the results were correct.

Besides the coding-related Prompt templates, there are templates for many other domains. Let's try some of those too.

3.4 Movie Reviews

The recently released movie "In the Octagonal Cage" is very popular, so I asked Wenxin Yiyan to write a review of it.

As you can see, the plot, theme, and background the AI wrote about are all quite apt:

3.5 AI doctor

"AI doctor" means artificial intelligence doctor. The deep integration of medical treatment and technology has brought more convenience for the masses to seek medical treatment and provided support for disease diagnosis. Let’s test how the AI ​​doctor responds by pretending to have a cold:


As you can see, the reply covers many aspects and is fairly objective.

3.6 Diet plan

Young people today agonize over what to eat every day, beyond just ordering takeout. Let's use the Prompt template to plan a week of healthy meals:

The daily diet it proposed is balanced, with nutrients covering protein, carbohydrates, fat, fiber, vitamins, and minerals. Looks pretty good!

3.7 Route Planning

The holidays are here and many friends want to travel, so let's have the Wenxin Yiyan model write a travel route plan:

It gave two routes, including the total distance and how long the self-driving trip would take. Quite detailed!

There are many other interesting templates that you can try out for yourself!

3.8 Custom template

Of course, if none of the built-in Prompt templates is what you want, you can also create a custom template as follows.

(1) Create a template:

(2) After creation, go to the online test page and reference the template for testing. Here is the summarization template I created:

(3) I picked a random piece of text and asked for a ten-word summary:

Jinli Ancient Street is Chengdu's oldest and yet most modern pedestrian street, packed with bars, nationally famous restaurants, and more; after 8 p.m. the lighting is simply stunning. Located in Wuhou District, Jinli is a very well-known local commercial street whose buildings imitate Ming and Qing dynasty styles. Here you can fully immerse yourself in western Sichuan culture; stepping through Jinli's gate feels like entering another world. Jinli Ancient Street combines eating, drinking, sightseeing, and shopping in one place, fully showcasing the leisurely, easygoing attitude of Chengdu locals. Whether it's food, clothing, lodging, or getting around, you can find an excellent spot for it in Jinli.


The summary it produced is concise and clear:


4. Summary

My first impression from using the Wenxin Qianfan large model platform is that it is simple and convenient: the first time I used it, I just followed the documentation and everything was created successfully. It provides a one-stop service covering dataset management, model training, service release, and monitoring, and its visual interface offers full lifecycle management of models, simplifying the path from data to deployed service while staying easy to use and understand.

Second, the Wenxin Qianfan platform has 33 different large models built in, and more open-source large models will be connected in the future, so users can choose the one that best fits their needs.

Finally, the Prompt engineering feature has 103 built-in prompt templates covering a wide range of fields; it is currently the platform with the richest set of prompt templates!

If you are also interested in artificial intelligence and large models, apply for a trial of Wenxin Qianfan and experience its powerful features for yourself!
