Large Language Model (LLM) Paper Catalog

Continuously being updated!
Related link: LLM resources, fundamentals, and leaderboards


| Title & Year | Authors | Source | Summary |
| --- | --- | --- | --- |
| Artificial General Intelligence: Concept, State of the Art, and Future Prospects, 2014 | Ben Goertzel | Journal of Artificial General Intelligence | A 2014 AGI survey covering the definition, taxonomy, and evaluation methodology of AGI; the author apparently now serves as an editor for the AGI conference. |
| Towards artificial general intelligence with hybrid Tianjic chip architecture, 2020 | Jing Pei | Nature | A paper on hardware for realizing AGI: it implements a chip that runs both MLP-like (ANN) and SNN networks in the same hardware environment. |
| AGI Brain II: The Upgraded Version with Increased Versatility Index, 2021 | Mohammadreza Alidoust | AGI 2021 | 1) Proposes an AGI metric (the versatility index); 2) replaces the original neural-network (NN) representation of explicit memory with Mamdani fuzzy-inference associative memory. |
| Training language models to follow instructions with human feedback, 2022 | Long Ouyang et al. | OpenAI | InstructGPT: adds human feedback and reinforcement learning on top of a large language model, greatly improving model behavior. |
| A Path Towards Autonomous Machine Intelligence, 2022 | Yann LeCun | OpenReview (position paper) | Proposes an architecture and training paradigm for autonomous intelligent agents. |
| GPT-4 Technical Report, 2023 | OpenAI | OpenAI | GPT-4: a multimodal large language model with a degree of common-sense and cognitive ability. |
| ChatGLM, 2023 | Aohan Zeng, Zhengxiao Du, et al. | ICLR 2023 | ChatGLM: with model quantization, ChatGLM-6B can be deployed locally on consumer GPUs (as little as 6 GB of VRAM at the INT4 quantization level); a deployment sketch follows this table. |
| LLaMA: Open and Efficient Foundation Language Models, 2023 | Hugo Touvron | preprint | LLaMA: Meta AI's family of LLMs spanning 7B to 65B parameters; LLaMA-13B outperforms GPT-3 (175B) on most benchmarks with roughly 1/10 of the parameters. Open source. |
| A Survey of Large Language Models, 2023 | Wayne Xin Zhao | preprint | A very detailed survey of large language models; good for seeing the big picture. |
| ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory, 2023 | Chenxu Hu | preprint | ChatDB: a Tsinghua team's improvement to LLMs' long-term memory, coupling the model with a database as its symbolic memory. Open source. |
| LongNet: Scaling Transformers to 1,000,000,000 Tokens, 2023 | Jiayu Ding | preprint | LongNet: Microsoft's work on long-context learning and long-term memory for LLMs. Open source. |
| Focused Transformer: Contrastive Training for Context Scaling, 2023 | Szymon Tworkowski | preprint | LongLLaMA: a Google DeepMind research team proposes FoT, a Transformer architecture with focused attention. |
| Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models, 2023 | Qingyu Tan, Hwee Tou Ng, Lidong Bing | ACL 2023 (main conference) | Teaching LLMs to understand change over time: DAMO Academy and NUS propose a temporal-reasoning dataset and a time-aware training paradigm. |
| UnIVAL: Unified Model for Image, Video, Audio and Language Tasks, 2023 | Mustafa Shukor | preprint | UnIVAL: unifies text, image, video, and audio tasks in a single model with only about 0.25B parameters, without relying on huge datasets or billion-parameter models. |
| Graph of Thoughts: Solving Elaborate Problems with Large Language Models, 2023 | Maciej Besta | preprint | Graph of Thoughts: models the information generated by an LLM as an arbitrary graph in which units of information are vertices and edges represent dependencies between them; a data-structure sketch follows this table. |
| The Rise and Potential of Large Language Model Based Agents: A Survey, 2023 | Zhiheng Xi | preprint | A survey of LLM-based agents. |
| NExT-GPT: Any-to-Any Multimodal LLM, 2023 | Shengqiong Wu | preprint | NExT-GPT: a multimodal LLM that supports conversion between arbitrary modalities. |
| Toolformer: Language Models Can Teach Themselves to Use Tools, 2023 | Timo Schick | preprint | Toolformer: combines GPT-style models with various external tools. |
| The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision), 2023 | Zhengyuan Yang | preprint | An evaluation report on GPT-4V. |
| Efficient Streaming Language Models with Attention Sinks, 2023 | Guangxuan Xiao | preprint | StreamingLLM: extends LLM context length without bound via streaming inference; a cache-policy sketch follows this table. |
| Improving Image Generation with Better Captions, 2023 | James Betker | OpenAI | DALL-E 3: the image-generation model integrated with ChatGPT. |
| Instruction Tuning for Large Language Models: A Survey, 2023 | Linfeng Zhang | preprint | A survey of instruction tuning for LLMs. |
| RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models, 2023 | Zekun Moore Wang | preprint | Role-playing with large language models. |
| A Survey on Multimodal Large Language Models, 2023 | Chaoyou Yin | preprint | A survey of multimodal large language models. |
| Visual Instruction Tuning, 2023 | Haotian Liu | preprint | LLaVA: a vision-language model; with visual instruction tuning it supports chat grounded in images. |
| ChatGLM3, 2023 | Zhipu AI, Tsinghua | web | ChatGLM3. |
| AI Alignment: A Comprehensive Survey, 2023 | Jiaming Ji | preprint | A survey of AI alignment techniques: how to make AI conform to human intentions and values. |
| RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation, 2023 | Yufei Wang | preprint | A representative work in embodied intelligence. |
| A Comprehensive Overview of Large Language Models, 2023 | Humza Naveed | arXiv | A comprehensive review of large language models. |
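To make the ChatGLM-6B entry concrete, here is a minimal local-deployment sketch following the usage pattern from the ChatGLM-6B README. The repo id `THUDM/chatglm-6b-int4` and the `chat()` helper come from the model's bundled remote code, so treat the exact names as assumptions rather than a guaranteed current API.

```python
# Minimal sketch: local ChatGLM-6B chat at INT4 quantization (assumes the
# THUDM/chatglm-6b-int4 checkpoint and its bundled remote code are available).
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "THUDM/chatglm-6b-int4"  # pre-quantized INT4 weights, runs in ~6 GB of VRAM

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True).half().cuda()
model = model.eval()

# chat() is a convenience method defined in the model's remote code,
# not part of the generic transformers API.
response, history = model.chat(tokenizer, "Introduce yourself in one sentence.", history=[])
print(response)
```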

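The Graph of Thoughts row describes LLM outputs as vertices connected by dependency edges. The following sketch is my own illustration of one way such a thought graph could be represented, not code from the paper's repository.

```python
# Illustrative sketch of a "graph of thoughts": each vertex stores one unit of
# LLM-generated information, and edges record which thoughts a vertex depends on.
from dataclasses import dataclass, field


@dataclass
class Thought:
    id: int
    text: str            # the LLM-generated content of this thought
    score: float = 0.0   # e.g. a value assigned by an LLM-based evaluator


@dataclass
class ThoughtGraph:
    thoughts: dict[int, Thought] = field(default_factory=dict)
    edges: dict[int, set[int]] = field(default_factory=dict)  # child id -> parent ids

    def add(self, thought: Thought, parents: set[int] = frozenset()) -> None:
        self.thoughts[thought.id] = thought
        self.edges[thought.id] = set(parents)

    def parents(self, thought_id: int) -> list[Thought]:
        return [self.thoughts[p] for p in self.edges[thought_id]]


# Example: two partial solutions are aggregated into a third thought,
# a dependency pattern a plain chain or tree of thoughts cannot express.
g = ThoughtGraph()
g.add(Thought(1, "sorted first half"))
g.add(Thought(2, "sorted second half"))
g.add(Thought(3, "merged fully sorted list"), parents={1, 2})
print([t.text for t in g.parents(3)])
```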
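For the attention-sinks entry, here is a small sketch of the cache policy the summary alludes to: keep a few initial "sink" tokens plus a sliding window of recent tokens and evict everything in between. This is my own simplification, not the StreamingLLM implementation.

```python
# Simplified illustration of a StreamingLLM-style KV-cache policy:
# always keep the first `num_sink` positions (the "attention sinks") and the
# most recent `window` positions, evicting everything in between.
def streaming_cache_positions(seq_len: int, num_sink: int = 4, window: int = 8) -> list[int]:
    if seq_len <= num_sink + window:
        return list(range(seq_len))                      # nothing to evict yet
    sinks = list(range(num_sink))                        # initial tokens kept forever
    recent = list(range(seq_len - window, seq_len))      # rolling window of recent tokens
    return sinks + recent


if __name__ == "__main__":
    # With 20 tokens generated so far, only 4 sinks + 8 recent positions stay cached.
    print(streaming_cache_positions(20))
    # -> [0, 1, 2, 3, 12, 13, 14, 15, 16, 17, 18, 19]
```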

Reposted from blog.csdn.net/a1920993165/article/details/134228111