Practice of LLM-Based SQL Application Development (2)
16.2 Using the LangChain SQL Agent
Returning to the case application itself, we run it again using "Run All" so that more of its internal state becomes visible, as shown in Figure 16-5: in the VS Code editor, you can inspect the Jupyter variables of the currently running application.

Figure 16-5 Inspecting Jupyter variables
These instantiated variables are very helpful for understanding how the program actually runs. For example, the ZeroShotAgent variable reflects both the current application and the running state of the framework itself. We also looked at the AgentExecutor, which is where execution actually happens: through the AgentExecutor, the agent calls a tool and receives the tool's concrete return value.
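As a rough illustration (not the book's exact code), the sketch below shows how these objects are typically created with the classic LangChain API of this period; `Chinook.db` is a placeholder database path, and the printed class names assume LangChain's default zero-shot SQL agent:

```python
# A minimal sketch of how the objects seen in the Jupyter variable view
# are typically created with the classic LangChain API (circa 0.0.x, 2023).
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # assumed sample database
llm = OpenAI(temperature=0)

toolkit = SQLDatabaseToolkit(db=db, llm=llm)

# create_sql_agent returns an AgentExecutor; by default its .agent attribute
# is a ZeroShotAgent, which is why both appear as Jupyter variables.
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)

print(type(agent_executor))        # ...agents.agent.AgentExecutor
print(type(agent_executor.agent))  # ...agents.mrkl.base.ZeroShotAgent
```

Running this in a Jupyter notebook and opening the variable view in VS Code shows exactly the kind of instantiated objects discussed above.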
Figure 16-6 shows a schematic diagram of how AutoGPT operates. Looking at the process as a whole, there are three cores: the language model, the tools, and the agent, which coordinates the tools, the language model, and context management.

Figure 16-6 Schematic diagram of AutoGPT operation
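To make the three cores concrete, here is a minimal, hypothetical sketch using LangChain's classic `initialize_agent` API; the `word_length` tool is invented purely for illustration and is not part of the original example:

```python
# The three cores: a language model, a list of tools, and an agent that
# mediates between them while managing the intermediate context.
from langchain.llms import OpenAI
from langchain.agents import AgentType, Tool, initialize_agent

llm = OpenAI(temperature=0)  # core 1: the language model

def word_length(word: str) -> str:
    """Toy tool: return the number of characters in a word."""
    return str(len(word))

tools = [  # core 2: the tools the agent may call
    Tool(
        name="word_length",
        func=word_length,
        description="Returns the number of characters in a word.",
    )
]

# core 3: the agent, which decides when to call the LLM and when to call a tool
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

agent.run("How many letters are in the word 'LangChain'?")
```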
Depending on how the prompts are designed, many intermediate steps may be involved, and each of these steps is an interaction with either the large model or a tool. Under normal circumstances, whichever component a step interacts with, the exchange follows the same action-and-observation pattern, as the sketch below shows.
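If you want to inspect those intermediate steps yourself, the AgentExecutor can be asked to return them; this sketch assumes the `agent_executor` from the earlier snippet and uses LangChain's `return_intermediate_steps` option:

```python
# Ask the AgentExecutor to return each (AgentAction, observation) pair it
# produced, so every model/tool interaction becomes visible.
agent_executor.return_intermediate_steps = True

result = agent_executor({"input": "How many tables are in the database?"})

for action, observation in result["intermediate_steps"]:
    print("Tool called:", action.tool)
    print("Tool input :", action.tool_input)
    print("Observation:", observation)
```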
