The initial release of this article and subsequent updates: https://mwhls.top/4500.html . If pictures, the table of contents, or formatting are missing or broken here, please go to the initial release page, and check mwhls.top for newly updated content. Questions and criticism are welcome, thank you very much!
This article was originally written for a competition (one of several AI-deployment posts in a row), and since I won't know which version to update it to by the time the competition ends and the model is released, I'm publishing it as-is. The title image is just a placeholder for now; I'll generate one with a text-to-image model later. Hee hee, riding the hype a little. If you run into problems, I recommend the relevant GitHub Issues; of course, you can also ask me in the comments.
2023/05/06 update: added the API call section.
ChatGLM
Tsinghua Open Source Large Model-ChatGLM-2023/04/29
- ChatGLM_GitHub:THUDM/GLM: GLM (General Language Model)
- ChatGLM-6B_GitHub: THUDM/ChatGLM-6B: ChatGLM-6B: An Open Bilingual Dialogue Language Model | Open source bilingual dialogue language model
- Contact the author about commercial use: About the commercial use of the ChatGLM model · Issue #799 · THUDM/ChatGLM-6B
ChatGLM-6B environment configuration and startup-2023/04/29
- GitHub official tutorial
- Create a virtual environment:
conda create --name ChatGLM python=3.8
- Enter the virtual environment:
conda activate ChatGLM
- Download source code: https://github.com/THUDM/ChatGLM-6B/archive/refs/heads/main.zip
- Enter this directory and install the dependencies:
pip install -r requirements.txt
- Install Torch:
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch
- These are the versions that work in my environment; if they don't work for you, look up the PyTorch install command for your setup (the official PyTorch site has a selector). A quick GPU check follows below.
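- A quick way to confirm that this build sees the GPU (plain PyTorch calls, nothing ChatGLM-specific):
import torch
print(torch.__version__)          # expect 1.12.0
print(torch.cuda.is_available())  # True if the CUDA 11.3 build matches your driver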
- Download the model
- Note: I downloaded the int8-quantized model.
- Tsinghua Cloud
- The Tsinghua Cloud mirror is missing some files: after downloading from it, you also need to download every file except the largest weight file from HuggingFace (that one duplicates what Tsinghua Cloud provides, so there is no need to fetch it again). A sketch of doing this with huggingface_hub follows after this list.
- HuggingFace
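- A minimal sketch of grabbing those remaining files with the huggingface_hub package; the repo id below is an assumption based on the folder name I use, so adjust it to whichever quantized model you actually downloaded:
from huggingface_hub import snapshot_download

# Assumed repo id - replace it with the quantized model repo you actually use.
# ignore_patterns skips the large *.bin weight already downloaded from Tsinghua Cloud.
folder = snapshot_download(repo_id="THUDM/chatglm-6b-int8", ignore_patterns=["*.bin"])
print(folder)  # copy these files into the folder holding the Tsinghua Cloud weights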
- File modification: modify the first few lines of web_demo.py as below, where pretrain_path is the absolute path to the model folder.
- After the change, run web_demo.py; it starts successfully.
pretrain_path = r"F:\0_DATA\1_DATA\CODE\PYTHON\202304_RJB_C4\ChatGLM\chatglm-6b-int8"
tokenizer = AutoTokenizer.from_pretrained(pretrain_path, trust_remote_code=True)
model = AutoModel.from_pretrained(pretrain_path, trust_remote_code=True).half().cuda()
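- Before launching web_demo.py, a quick sanity check in a Python shell confirms the model loads and responds; this mirrors the usage in the official README, with pretrain_path pointing to the same model folder as above:
from transformers import AutoTokenizer, AutoModel

pretrain_path = r"F:\0_DATA\1_DATA\CODE\PYTHON\202304_RJB_C4\ChatGLM\chatglm-6b-int8"
tokenizer = AutoTokenizer.from_pretrained(pretrain_path, trust_remote_code=True)
model = AutoModel.from_pretrained(pretrain_path, trust_remote_code=True).half().cuda()
model = model.eval()
# model.chat returns the reply and the updated conversation history.
response, history = model.chat(tokenizer, "你好", history=[])
print(response)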
ChatGLM Web demo source code reading-2023/04/30
- Some of this is guesswork from trial and error, so parts may not be accurate.
- I added two modes, one precise and one creative, similar to New Bing.
from transformers import AutoModel, AutoTokenizer
import gradio as gr
import mdtex2html
import os
pretrain_path = "chatglm-6b-int8"
pretrain_path = os.path.abspath(pretrain_path)
tokenizer = AutoTokenizer.from_pretrained(pretrain_path, trust_remote_code=True)
model = AutoModel.from_pretrained(pretrain_path, trust_remote_code=True).half().cuda()
model = model.eval()
"""Override Chatbot.postprocess"""
# Interesting, so self can be used like this; learned something new
def postprocess(self, y):
if y is None:
return []
for i, (message, response) in enumerate(y):
y[i] = (
None if message is None else mdtex2html.convert((message)),
None if response is None else mdtex2html.convert(response),
)
return y
gr.Chatbot.postprocess = postprocess
def parse_text(text):
# Convert markdown code to HTML; I guess my blog's plugin does the same thing
"""copy from https://github.com/GaiZhenbiao/ChuanhuChatGPT/"""
lines = text.split("\n")
lines = [line for line in lines if line != ""]
count = 0
for i, line in enumerate(lines):
if "```" in line:
count += 1
items = line.split('`')
if count % 2 == 1:
lines[i] = f'<pre><code class="language-{items[-1]}">'
else:
lines[i] = f'<br></code></pre>'
else:
if i > 0:
if count % 2 == 1:
line = line.replace("`", "\`")
line = line.replace("<", "<")
line = line.replace(">", ">")
line = line.replace(" ", " ")
line = line.replace("*", "*")
line = line.replace("_", "_")
line = line.replace("-", "-")
line = line.replace(".", ".")
line = line.replace("!", "!")
line = line.replace("(", "(")
line = line.replace(")", ")")
line = line.replace("$", "$")
lines[i] = "<br>"+line
text = "".join(lines)
return text
# def set_mode(temperatrue_value, top_p_value):
# return gr.Slider.update(value=temperatrue_value), gr.update(value=top_p_value)
def set_mode(radio_mode):
# Not sure why a plain button click can't pass the value, but this (a Radio change event) can
mode = {
"创造性": [0.95, 0.7],
"精准": [0.01, 0.01]}
return gr.Slider.update(value=mode[radio_mode][0]), gr.update(value=mode[radio_mode][1])
def predict(input, chatbot, max_length, top_p, temperature, history):
# Append a new entry (the user input with an empty response for now)
chatbot.append((parse_text(input), ""))
for response, history in model.stream_chat(tokenizer, input, history, max_length=max_length, top_p=top_p,
temperature=temperature):
# Update that new entry with the inference result
chatbot[-1] = (parse_text(input), parse_text(response))
yield chatbot, history
def reset_user_input():
return gr.update(value='')
def reset_state():
return [], []
with gr.Blocks() as demo:
gr.HTML("""<h1 align="center">ChatGLM</h1>""")
# Chat history display
chatbot = gr.Chatbot()
# Everything below sits within the same row
with gr.Row():
# First column
with gr.Column(scale=4):
# First column, first row: the input box
with gr.Column(scale=12):
user_input = gr.Textbox(show_label=False, placeholder="Input...", lines=10).style(
container=False)
# First column, second row: the submit button
with gr.Column(min_width=32, scale=1):
submitBtn = gr.Button("Submit", variant="primary")
# Second column
with gr.Column(scale=1):
with gr.Row():
# with gr.Column():
# button_accuracy_mode = gr.Button("精准")
# with gr.Column():
# button_creative_mode = gr.Button("创造性")
radio_mode = gr.Radio(label="对话模式", choices=["精准", "创造性"], show_label=False)
# The four inputs on the right
emptyBtn = gr.Button("清除历史")
max_length = gr.Slider(0, 4096, value=2048, step=1.0, label="Maximum length", interactive=True)
top_p = gr.Slider(0, 1, value=0.7, step=0.01, label="Top P", interactive=True)
temperature = gr.Slider(0, 1, value=0.95, step=0.01, label="Temperature", interactive=True)
# Conversation history state
history = gr.State([])
# On click, run predict, feeding it the components in the second argument [...] and writing its results to the components in the third argument [...]
submitBtn.click(predict, [user_input, chatbot, max_length, top_p, temperature, history], [chatbot, history], show_progress=True)
submitBtn.click(reset_user_input, [], [user_input])
# Pass reset_state's return values to outputs
emptyBtn.click(reset_state, outputs=[chatbot, history], show_progress=True)
radio_mode.change(set_mode, radio_mode, [temperature, top_p])
# button_accuracy_mode.click(set_mode, [0.95, 0.7], outputs=[temperature, top_p])
# button_creative_mode.click(set_mode, [0.01, 0.01], outputs=[temperature, top_p])
# I have no idea how to use the debugger with this
demo.queue().launch(share=False, inbrowser=False)
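- To start the demo:
python web_demo.py
- Since launch() does not set server_port, Gradio serves at its default http://127.0.0.1:7860; share=False means no public link is created, and inbrowser=False means a browser window is not opened automatically.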
ChatGLM API call-2023/05/05
- Official document: THUDM/ChatGLM-6B: ChatGLM-6B: An Open Bilingual Dialogue Language Model | Open source bilingual dialogue language model
- Modify ChatGLM/api.py:
- Model storage location.
- Host address: in the last line, change host='0.0.0.0' to host='localhost', i.e.
uvicorn.run(app, host='localhost', port=8000, workers=1)
- After these changes, the end of api.py looks like this (if api.py does not already import os, add import os at the top, since os.path.abspath needs it):
if __name__ == '__main__':
pretrain_path = "chatglm-6b-int8"
pretrain_path = os.path.abspath(pretrain_path)
tokenizer = AutoTokenizer.from_pretrained(pretrain_path, trust_remote_code=True)
model = AutoModel.from_pretrained(pretrain_path, trust_remote_code=True).half().cuda()
model.eval()
uvicorn.run(app, host='localhost', port=8000, workers=1)
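- Then start the service:
python api.py
- With the settings above it listens at http://localhost:8000.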
- Call example using the Python requests package:
import requests
url = "http://localhost:8000"
data = {"prompt": "你好", "history": []}
headers = {'Content-Type': 'application/json'}
response = requests.post(url, json=data, headers=headers)
print(response.text)
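- The official api.py returns JSON that includes response and history fields, so multi-turn chat just means sending the returned history back with the next request. A minimal sketch; the chat helper below is my own, not part of the repo:
import requests

def chat(prompt, history):
    # history starts as [] and is the list returned by the previous call
    r = requests.post("http://localhost:8000",
                      json={"prompt": prompt, "history": history},
                      headers={'Content-Type': 'application/json'})
    data = r.json()
    return data["response"], data["history"]

history = []
reply, history = chat("你好", history)
print(reply)
reply, history = chat("请再详细介绍一下你自己", history)
print(reply)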
- I also wrote a small wrapper class for this, but it has not been open-sourced on GitHub yet; it may be added in a few days: asd123pwj/asdTools: Simple tools for simple goals
Wenda (闻达)
- I never got this fully working, so feel free to skip the rest of this section.
Introduction-2023/05/02
- l15y/wenda: Wenda: an LLM invocation platform, designed around knowledge-base lookup plugged into small models, achieving generation ability no weaker than that of large models
- While watching videos last night I came across the idea of having GPT read local documents, found the Fess-based search solution, and then discovered Wenda. Impressive.
Lazy package (one-click bundle)-2023/05/02
- The author provides a lazy package, and there are also tutorials on Bilibili. I have never used it, though, since I have no data allowance left; it all went to downloading games, deleting games, downloading games, and deleting games again.
Deployment-2023/05/02
- Anyone skipping the lazy package presumably knows how to deploy manually, so I will only briefly describe my deployment steps.
- Copy example.config.xml as config.xml and modify it accordingly.
- Modify environment.bat: comment out the lazy-package paths and set your own Python path, as below (the commented-out parts are omitted).
chcp 65001
title 闻达
set "PYTHON=F:\0_DATA\2_CODE\Anaconda\envs\ChatGLM\python.exe "
:end
I'm useless, so I'm using the lazy package after all-2023/05/02
- I couldn't figure out how to get local document search and web search working, so I downloaded the lazy package after all.
- But why is there still nothing?
Fess installation-2023/05/02
- Wenda requires Fess, but it doesn't seem to be included in the lazy package?
- Installation: Elasticsearch
- It seems this has to be paid for commercial use. Forget it, I won't use Wenda; I'll just take a look at how its source code is written instead. Exhausted. I'm useless.