A comparison of open-source Chinese large models (continuously updated)

Large model download: Interlink Hi-Tech

ClueAI/PromptCLUE-base-v1-5 at main (huggingface.co): supports multi-task generation and Chinese, but does not support multi-turn dialogue. Online demo: ClueAI (cluebenchmarks.com)

A model further trained on top of PromptCLUE-base: ClueAI/ChatYuan-large-v1 at main (huggingface.co). It supports multi-task generation, Chinese, and simple dialogue.
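As a quick way to try ChatYuan locally, here is a minimal sketch that loads the checkpoint with the standard transformers T5 classes, which is how the model card describes it; the prompt format and generation settings below are illustrative assumptions, so check the current model card before relying on them.

# Minimal sketch: load ChatYuan-large-v1 with transformers' T5 classes and run one prompt.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("ClueAI/ChatYuan-large-v1")
model = T5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1")

# Single-turn prompt; the 用户/小元 role format follows the model card's dialogue convention (verify against the card).
prompt = "用户：你好\n小元："
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))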

About downloading Hugging Face models:

Manual download: https://mirrors.tuna.tsinghua.edu.cn/hugging-face-models/hfl/
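The TUNA page above is for browsing and downloading files by hand. For a scripted download, a minimal sketch with huggingface_hub's snapshot_download is shown below; it assumes the huggingface_hub package is installed, and whether any particular mirror actually carries a given repository has to be checked separately.

# Minimal sketch: download a whole model repository programmatically.
# To route traffic through a mirror, export HF_ENDPOINT=<mirror URL> in the shell
# before running this script (setting it after import may have no effect).
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="ClueAI/ChatYuan-large-v1")
print("model files downloaded to:", local_path)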

Download via code:


import llama  # local module/fork exposing the LLaMA tokenizer and model classes

# MODEL = '/home/guo/llama_test/llama_model'  # alternatively, load from a local path
MODEL = 'decapoda-research/llama-7b-hf'
# MODEL = 'decapoda-research/llama-13b-hf'
# MODEL = 'decapoda-research/llama-30b-hf'
# MODEL = 'decapoda-research/llama-65b-hf'

# Alternative: pass the mirror shorthand instead of the full URL.
# tokenizer = llama.LLaMATokenizer.from_pretrained(MODEL, mirror='tuna')
# model = llama.LLaMAForCausalLM.from_pretrained(MODEL, mirror='tuna', low_cpu_mem_usage=True)

# Pull the weights through the TUNA mirror rather than huggingface.co.
tokenizer = llama.LLaMATokenizer.from_pretrained(MODEL, mirror='https://mirrors.tuna.tsinghua.edu.cn/hugging-face-models')
model = llama.LLaMAForCausalLM.from_pretrained(MODEL, mirror='https://mirrors.tuna.tsinghua.edu.cn/hugging-face-models', low_cpu_mem_usage=True)
model.to('cpu')  # keep the model on CPU

batch = tokenizer("Yo mama", return_tensors="pt")
print(tokenizer.decode(model.generate(batch["input_ids"], max_length=100)[0]))

For cloning GitHub repositories through a domestic mirror, see: "git clone source change / GitHub domestic mirrors" (Mianli Duojiatang's blog on CSDN):


https://gitclone.com
# Server located in Hangzhou (currently available)
Usage: original git URL:  https://github.com/junegunn/vim-plug
       mirror clone URL:  https://gitclone.com/github.com/junegunn/vim-plug
# Hong Kong server https://doc.fastgit.org is currently unavailable
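To avoid rewriting every URL by hand, git can redirect github.com HTTPS clones to the mirror automatically via its url.<base>.insteadOf setting. The commands below are a sketch using the gitclone.com prefix quoted above; whether the mirror serves a particular repository is not guaranteed.

# Rewrite all github.com HTTPS URLs to the gitclone.com mirror
git config --global url."https://gitclone.com/github.com/".insteadOf "https://github.com/"
# After this, a normal clone command goes through the mirror transparently
git clone https://github.com/junegunn/vim-plug
# Remove the rewrite rule when it is no longer needed
git config --global --unset url."https://gitclone.com/github.com/".insteadOf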

Feel free to leave a comment.


Source: blog.csdn.net/sslfk/article/details/129416787