[Large Model Knowledge Base] (4): Run the ChatGLM3 model with dify + FastChat in a local environment, and call the chatglm3 model through the chat/completions interface.

1. Video demonstration address

https://www.bilibili.com/video/BV18i4y1a78u/?vd_source=4b290247452adda4e56d84b659b0c8a2


2. About Dify

https://github.com/langgenius/dify/blob/main/README_CN.md

Dify is an LLM application development platform; more than 100,000 applications have been built on Dify.AI. It combines the concepts of Backend as a Service and LLMOps, covering the core technology stack required to build generative-AI-native applications, including a built-in RAG engine. With Dify, you can self-deploy capabilities similar to the Assistants API and GPTs on top of any model.

3. Project startup script

The startup scripts are published on Gitee:
https://gitee.com/fly-llm/dify-docker-compose
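For reference, the FastChat side of this setup typically involves three processes: a controller, a model worker that loads the ChatGLM3 weights, and an OpenAI-compatible API server. A minimal sketch of those commands follows; the ports and the model path THUDM/chatglm3-6b are assumptions here, so check the Gitee repository for the exact script:

```bash
# Start the FastChat controller that coordinates model workers
python -m fastchat.serve.controller --host 0.0.0.0 --port 21001

# Start a model worker that loads ChatGLM3
# (the Hugging Face id below is an assumption; a local weights directory also works)
python -m fastchat.serve.model_worker \
    --model-path THUDM/chatglm3-6b \
    --controller-address http://localhost:21001 \
    --host 0.0.0.0 --port 21002

# Expose an OpenAI-compatible REST API, including /v1/chat/completions
python -m fastchat.serve.openai_api_server \
    --controller-address http://localhost:21001 \
    --host 0.0.0.0 --port 8000
```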

4. After the services start successfully, configure the local model in Dify's settings.

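Before wiring the model into Dify, you can verify the endpoint directly. A minimal sketch, assuming FastChat's OpenAI-compatible server listens on localhost:8000 and the worker registered the model under the name chatglm3-6b:

```bash
# Ask the locally served ChatGLM3 for a chat completion
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "chatglm3-6b",
        "messages": [
          {"role": "user", "content": "Hello, please introduce yourself."}
        ]
      }'
```

If this returns a normal completion, Dify's model provider settings should only need the same base URL (for example http://host.docker.internal:8000/v1 when Dify runs inside Docker and FastChat runs on the host) and the model name.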

Then you can chat with the model in the Dify UI.

You can also quickly configure a prompt, test it, and publish the application directly once the test passes.

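Once published, the application can also be called programmatically through Dify's service API instead of the web UI. A sketch, assuming a local Dify instance reachable on port 80 and an app API key issued from the application's settings (the key below is a placeholder):

```bash
# Send one chat message to a published Dify application
curl http://localhost/v1/chat-messages \
  -H "Authorization: Bearer app-xxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
        "inputs": {},
        "query": "Hello, what can you do?",
        "response_mode": "blocking",
        "user": "demo-user"
      }'
```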

5. Summary

Dify is already a very complete product: a chat application can be configured and developed quickly. It also supports configuring prompts, which is very convenient, and it ships with a knowledge base feature that can be configured and used.

Original post: blog.csdn.net/freewebsys/article/details/135072438