ChatGPT-style Large Model Leaderboard - SuperCLUE Langya Bang (琅琊榜) TOP16

SuperCLUE uses the Elo rating system to score the relative performance of models. The SuperCLUE Langya Bang releases updated rankings regularly.
Last update: 2023-05-29 18:22:35
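The Elo update mentioned above can be sketched as follows. This is a minimal illustration of the standard Elo formula; SuperCLUE does not publish its exact parameters, so the K-factor of 32 and the base rating of 1500 here are illustrative assumptions.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """Return new (rating_a, rating_b) after one pairwise comparison.

    score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a tie.
    """
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - ea))
    return new_a, new_b

# Example: two models start at the base rating and A wins one comparison.
a, b = elo_update(1500.0, 1500.0, 1.0)
print(a, b)  # 1516.0 1484.0 — A gains 16 points, B loses 16
```

Repeating this update over many pairwise model comparisons is what produces the relative scores behind rankings like the one below.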

1、anthropic-claude

Developed by Anthropic, based on Anthropic's research on training helpful, honest, and harmless AI systems.

2、MiniMax-abab5

Developed by MiniMax; it helps people write efficiently, spark creativity, acquire knowledge, and make decisions.

3、gpt-3.5-turbo

Developed by OpenAI; currently the most frequently used version.

4、ChatGLM-130B

Developed by Tsinghua University and Zhipu AI, it is an open bilingual dialogue model.

5、ChatGLM-6B

Developed by Tsinghua University and Zhipu AI, it is an open source bilingual dialogue model.

6、phoenix-inst-chat-7b

Developed by The Chinese University of Hong Kong (Shenzhen), it is a multilingual chat assistant obtained by fine-tuning BLOOMZ.

7、moss-moon-003-sft

Developed by the Natural Language Processing Laboratory of Fudan University, it is an open-source dialogue language model that supports both Chinese and English as well as various plug-ins.

8、Longjing-7B

A new large language model branch under ChatYuan.

9、idea-jiangziya

Developed by IDEA Research Institute's CCNL team, it rebuilds the Chinese vocabulary on top of LLaMA-13B and performs further pre-training on roughly 100 billion tokens, giving the model native Chinese capability.

10、vicuna-13b

Developed by LMSYS; based on LLaMA and fine-tuned on user-shared conversations.

11、Belle-13B

Developed by LianjiaTech and optimized for Chinese on top of BLOOM and LLaMA; model tuning uses only data generated by ChatGPT, providing better support for Chinese instructions.

12、LMFlow-Robin-7B

Developed by the HKUST Statistics and Machine Learning Laboratory team, it is a version of pinkmanlove/llama-7b-hf fine-tuned on a custom dataset.

13、Linly-ChatFlow-7B

Developed by the National Engineering Laboratory of Big Data System Computing Technology, it is obtained by fine-tuning a Chinese base model on a 4-million-example instruction dataset.

14、RWKV-4-Raven-7B

Developed by the RWKV Foundation, it is a language model that combines RNN and Transformer architectures. It is well suited to long texts, runs faster, fits data well, uses less GPU memory, and takes less time to train.

15、Chinese-Alpaca-Plus-13B

A version of the Chinese Alpaca model that expands the Chinese vocabulary on top of the original LLaMA model and performs secondary pre-training on Chinese data, further improving basic Chinese semantic understanding.

16、Bloomz-7b1-mt

Part of the BLOOMZ & mT0 series from BigScience, a family of models capable of following human instructions in dozens of languages.

Check out the latest rankings on the official SuperCLUE Langya Bang website.


Origin blog.csdn.net/bfhelin/article/details/131143027