Code LLMs
Model | Parameters | Model size | Accuracy (Pass@1) | Release date | License | Organization | GPU consumption | Repository
---|---|---|---|---|---|---|---|---
CodeGen-16B-multi | 16 billion | 27.5GB | 19.2 | 2022-04-01 | Free commercial license | Salesforce | | https://huggingface.co/Salesforce/codegen-16B-multi/tree/main https://github.com/salesforce/CodeGen
CodeGeeX-13B | 13 billion | | 22.9 | 2022-09-30 | Open source | Tsinghua University | | https://github.com/THUDM/CodeGeeX https://huggingface.co/spaces/THUDM/CodeGeeX
Codex-12B | 12 billion | | 28.8 | | Not open source | OpenAI | |
CodeT5Plus-16B-mono | 16 billion | 41GB | 30.9 | 2023-05-13 | Free commercial license | Salesforce | | https://github.com/salesforce/CodeT5 https://huggingface.co/Salesforce/codet5p-16b
Code-Cushman-001 | | | 33.5 | | Not open source | OpenAI | |
LLaMA-65B | 65 billion | 120GB | 23.7 | 2023-02-24 | Open source, non-commercial use only | Meta | | https://github.com/facebookresearch/llama
LLaMA2-70B | 70 billion | 129GB | 29.9 | 2023-07-18 | Free commercial license | Meta | | https://github.com/facebookresearch/llama https://huggingface.co/meta-llama/Llama-2-70b
CodeGen2.5-7B-mono | 7 billion | 27GB | 33.4 | 2023-07-07 | Free commercial license | Salesforce | | https://github.com/salesforce/CodeGen https://huggingface.co/Salesforce/codegen25-7b-multi
StarCoder-15B | 15 billion | 64GB | 33.2 | 2023-05-05 | Free commercial license | BigCode | | https://huggingface.co/bigcode/starcoder https://github.com/bigcode-project/starcoder/tree/main
CodeGeeX2-6B | 6 billion | 12.5GB | 35.9 | 2023-07-25 | Free commercial license | Tsinghua University | GPU > 13GB, RAM 14GB | https://github.com/THUDM/CodeGeeX2
GPT-3.5 | 175 billion | | 48.1 | 2022-11-30 | Not open source | OpenAI | |
WizardCoder-15B | 15 billion | 31GB | 57.3 | 2023-06-14 | Free commercial license | Microsoft | RAM 40GB |
PanGu-Coder2-150B | 150 billion | | 61.64 | 2023-07-27 | Not open source | Huawei | | https://arxiv.org/pdf/2307.14936.pdf
GPT-4 | 175 billion | | 67.0 | 2023-03-14 | Not open source | OpenAI | | https://cdn.openai.com/papers/gpt-4.pdf
Qwen-7B | 7 billion | 15.4GB | | 2023-08-03 | Free commercial license | Alibaba | GPU > 23GB | https://huggingface.co/Qwen/Qwen-7B https://github.com/QwenLM/Qwen-7B
ChatGLM-6B | 6.2 billion | 8GB | | 2023-03-14 | | Tsinghua University | | https://github.com/THUDM/ChatGLM-6B https://huggingface.co/THUDM/chatglm-6b
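The Pass@1 figures above come from HumanEval-style evaluation: generate n samples per problem, count the c that pass the unit tests, and apply the unbiased pass@k estimator from the Codex paper (which reduces to c/n when k = 1). A minimal sketch of that estimator (function name `pass_at_k` is ours, not from any of the linked repos):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total samples generated per problem
    c: samples that passed the unit tests
    k: budget of attempts being scored
    """
    if n - c < k:
        # Fewer failures than the budget: at least one success is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k = 1 the estimator is simply the empirical success rate c / n:
print(pass_at_k(200, 50, 1))  # → 0.25
```

Per-problem scores are then averaged over the benchmark (164 problems for HumanEval) to produce a single Pass@1 percentage like those in the table.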
References:
- https://github.com/abacaj/code-eval
- Chatbot Arena Leaderboard (Hugging Face Space by lmsys)
- https://huggingface.co/WizardLM/WizardLM-30B-V1.0
- https://github.com/QwenLM/Qwen-7B
- https://github.com/THUDM/ChatGLM2-6B