180 billion parameters, supports Chinese, 3.5 trillion training tokens! An open-source ChatGPT-like model


The Technology Innovation Institute (TII) in Abu Dhabi, UAE, has announced on its official website the release of Falcon 180B, one of the most powerful open-source large language models to date.

TII states that Falcon 180B has 180 billion parameters and was trained on 4,096 GPUs over a dataset of 3.5 trillion tokens, one of the largest pre-training datasets used for an open-source model. Falcon 180B is released as both a base model and a chat model, and both can be used commercially.

On multiple authoritative benchmarks covering reasoning, programming, and knowledge testing, Falcon 180B surpasses Meta's latest Llama 2 70B and OpenAI's GPT-3.5; it is comparable to Google's PaLM 2-Large and second only to GPT-4.

Base model: https://huggingface.co/tiiuae/falcon-180B

Chat model: https://huggingface.co/tiiuae/falcon-180B-chat

Online demo: https://huggingface.co/spaces/tiiuae/falcon-180b-demo
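
For readers who want to try the weights themselves rather than the online demo, below is a minimal sketch of loading Falcon 180B through the Hugging Face transformers library. The model IDs come from the links above; the generation settings and hardware assumptions (several GPUs with enough combined memory for the 180B weights in bfloat16, plus a Hugging Face account that has accepted the model's license) are illustrative assumptions on my part, not part of the original announcement.

```python
# Sketch: loading Falcon 180B with Hugging Face transformers.
# Assumes you have accepted the model license on Hugging Face and
# have multi-GPU hardware with enough memory for the 180B weights.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "tiiuae/falcon-180B"  # use "tiiuae/falcon-180B-chat" for the chat variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights to reduce memory use
    device_map="auto",           # spread layers across all available GPUs
)

prompt = "The Technology Innovation Institute in Abu Dhabi"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short continuation of the prompt.
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern works for the chat model by swapping in the chat model ID and formatting the prompt as a conversation.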


Source: blog.csdn.net/huapeng_guo/article/details/132813119