Event Registration | How to Train a 100-Billion-Parameter Language Model from Scratch on a ¥700K Budget


Yequan Wang (王业全)

Head of the Cognitive Model team at the Beijing Academy of Artificial Intelligence (BAAI), Ph.D. from Tsinghua University, and member of the Technical Committee on Affective Computing of the Chinese Information Processing Society of China. In 2022 he was named an AI 2000 Most Influential Scholar in Artificial Intelligence (Natural Language Processing). His research focuses on large language models and natural language processing; representative work includes FLM-101B, FreeLM, Mu-Scaling, MSG, and ATAE-LSTM.

He has published numerous results at top international conferences, with over 2,500 Google Scholar citations. His ATAE-LSTM and RNN-Capsule papers were rated most influential papers by PAPER DIGEST, and his work has repeatedly appeared in Google Scholar Metrics rankings.

How to Train a 100-Billion-Parameter Language Model from Scratch on a ¥700K Budget

Large language models (LLMs) such as the GPT series have achieved remarkable success in NLP and multimodal tasks, but their high cost constrains further rapid progress, bringing both opportunities and challenges for academia and industry. To break down this cost barrier, FLM-101B employs a growth strategy that successfully lowers the cost of training a 100B-parameter dense model to ¥700,000 CNY.

Additionally, to evaluate LLMs more systematically and rationally, an IQ test for LLMs, a concept partially borrowed from psychology, is proposed to complement existing knowledge-based assessments. Experimental results show that the model trained on the ¥700K budget achieves performance comparable to powerful, well-known models and demonstrates impressive capabilities. We believe the growth strategy offers new possibilities for breakthroughs in training dense models at the 1T+ scale.
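To make the growth strategy concrete: the idea is to train a small dense model first, then enlarge it in a function-preserving way so training continues at the new size without discarding what was already learned. The PyTorch snippet below is a minimal, hypothetical sketch of width growth for a single linear layer; FLM-101B's actual growth operators and schedule (e.g., MSG) differ in detail.

```python
import torch
import torch.nn as nn

def grow_linear(old: nn.Linear, new_in: int, new_out: int) -> nn.Linear:
    # Embed the trained smaller layer into a larger one; zero-init the new
    # rows/columns so the grown layer initially behaves identically on the
    # original dimensions (function-preserving growth). Illustration only,
    # not FLM-101B's exact procedure.
    new = nn.Linear(new_in, new_out)
    with torch.no_grad():
        new.weight.zero_()
        new.bias.zero_()
        new.weight[: old.out_features, : old.in_features].copy_(old.weight)
        new.bias[: old.out_features].copy_(old.bias)
    return new

# Stage 1: train a narrow layer; Stage 2: grow it and keep training wider.
small = nn.Linear(256, 256)
large = grow_linear(small, new_in=512, new_out=512)

x = torch.randn(4, 512)
x[:, 256:] = 0.0  # input confined to the old subspace...
assert torch.allclose(large(x)[:, :256], small(x[:, :256]), atol=1e-6)  # ...outputs match
```

The IQ-test idea can be illustrated similarly. One component of this style of evaluation is symbolic mapping: natural-language class labels are replaced with arbitrary symbols, so the model must infer the label mapping from in-context examples rather than recall memorized label words. The sketch below is a hypothetical version of that setup; the helper name and symbol tokens are our own, not the FLM-101B benchmark itself.

```python
import random

def symbolize(examples, labels, seed=0):
    # Replace label words with arbitrary symbols so a model must infer the
    # mapping in context instead of relying on memorized label words.
    rng = random.Random(seed)
    symbols = rng.sample(["<sym_a>", "<sym_b>", "<sym_c>", "<sym_d>"], len(labels))
    mapping = dict(zip(labels, symbols))
    return [(text, mapping[lab]) for text, lab in examples], mapping

demos = [("great movie", "positive"), ("terrible plot", "negative")]
remapped, mapping = symbolize(demos, ["positive", "negative"])
print(mapping)   # e.g. {'positive': '<sym_b>', 'negative': '<sym_a>'}
print(remapped)  # few-shot prompts now carry symbols, not label words
```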

Time: Thursday, September 21, 14:30-15:30

Format: Online livestream. Scan the QR code below to register.

[QR code for registration]

Click "Read the original" to chat with the speaker online.


Reposted from blog.csdn.net/BAAIBeijing/article/details/133054140