Eating fish without raising fish: how to apply large language models (LLMs) without training them

A large language model typically needs on the order of 65B parameters or more before strong reasoning ability emerges, which makes training or fine-tuning one a serious resource and manpower challenge. Is there a way to make full use of a large language model's abilities without training the model at all? There is: give the model an advisory group and assistants.

The core structure centers on the LLM as the central scheduler, with LangChain acting as the advisory group (supplying business-related information) and Tools acting as assistants that the LLM calls to obtain specific capabilities. This approach compensates for the large model's lack of real-time knowledge and of specific business capabilities (such as complex data calculation), while still exploiting the large model's reasoning and summarization abilities.
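The scheduler-plus-assistants pattern above can be sketched without any framework. The following is a minimal, self-contained Python sketch: a stub function stands in for the LLM's tool-selection step, and the names (`choose_tool`, `dispatch`) and the keyword routing rule are illustrative assumptions, not LangChain's actual API.

```python
# Minimal sketch of the "LLM as central scheduler" pattern.
# A real system would ask the LLM which tool to call; here a stub
# routes by keyword so the control flow is easy to see.

def calculator(expression: str) -> str:
    """Assistant tool: exact arithmetic, which LLMs do unreliably."""
    return str(eval(expression, {"__builtins__": {}}, {}))

def knowledge_lookup(query: str) -> str:
    """Assistant tool: stands in for real-time or business knowledge."""
    facts = {"exchange rate": "7.09 CNY per USD (example value)"}
    return facts.get(query, "no entry")

TOOLS = {"calculator": calculator, "lookup": knowledge_lookup}

def choose_tool(user_input: str) -> str:
    """Stub for the LLM's reasoning step: pick a tool by keyword."""
    return "calculator" if any(ch.isdigit() for ch in user_input) else "lookup"

def dispatch(user_input: str) -> str:
    """Central scheduling: route the request to the chosen assistant."""
    return TOOLS[choose_tool(user_input)](user_input)

print(dispatch("12 * (3 + 4)"))   # the calculator tool handles the math
print(dispatch("exchange rate"))  # the lookup tool supplies fresh knowledge
```

In a real deployment the `choose_tool` stub is replaced by a prompt that shows the LLM each tool's description and asks it to pick one; the dispatch loop stays the same.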

LangChain:   The Complete Guide to LangChain: Building Powerful Applications with Large Language Models - zhihu.com

tool learning:   OpenBMB/BMTools: Tool Learning for Big Models, Open-Source Solutions of ChatGPT-Plugins (github.com)

System architecture diagram: (image not reproduced in this text version)

Essentially, the LangChain toolkit also provides some functionality similar to Tool Learning.
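What the two share is the shape of a tool: both LangChain and BMTools register a tool as a name, a natural-language description (which the LLM reads when deciding which assistant to invoke), and a callable. A hedged sketch of that common shape; the `ToolSpec` class and `word_count` tool here are illustrative, not either library's real types:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolSpec:
    """Illustrative tool record: a natural-language description
    (read by the LLM) paired with a callable (run by the scheduler)."""
    name: str
    description: str
    func: Callable[[str], str]

def word_count(text: str) -> str:
    # A trivial example capability exposed as a tool.
    return str(len(text.split()))

registry = [
    ToolSpec("word_count", "Counts the words in the given text.", word_count),
]

# The prompt shown to the LLM simply lists each tool's name and
# description, so tool selection becomes text the model can reason over.
tool_menu = "\n".join(f"{t.name}: {t.description}" for t in registry)
print(tool_menu)
print(registry[0].func("eat fish without raising fish"))
```

Because the description is plain text, adding a new business capability is just appending another entry to the registry; no retraining of the model is involved, which is the whole point of the approach.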

Tool Learning architecture diagram: (image not reproduced in this text version)

Origin: blog.csdn.net/znsoft/article/details/130570988