AI TIME welcomes every AI enthusiast to join!
Bilibili live channel
Scan the QR code to follow the AI TIME official Bilibili account and reserve the livestream
13:30—13:50
Yan Jun
Virtual Prompt Injection for Instruction-Tuned Large Language Models
13:50—14:10
Ning Xuefei
SoT: An Attempt to Accelerate LLMs with Parallel Decoding
14:10—14:30
Zhang Ruiqi
Trained Transformers Learn Linear Models In-Context
14:30—14:50
Wei Lai
InstructionGPT-4: A 200-Instruction Paradigm for Fine-Tuning MiniGPT-4
14:50—15:10
Yuan Zheng
Scaling Relationship on Learning Mathematical Reasoning with Large Language Models
15:10—16:00
Panel
1. How can the capabilities of large language models be comprehensively evaluated? How should their capability be balanced against their safety?
2. Can fine-tuning methods effectively improve a large model's performance on specific tasks? What are their potential limitations in practical applications?
3. How do large models perform on specific abilities, such as mathematical reasoning? How can their strengths in these abilities be fully exploited?
Guest introduction
Yan Jun
A fifth-year doctoral student in the Department of Computer Science at the University of Southern California, advised by Professor Xiang Ren. His research field is trustworthy natural language processing, with a current focus on the security of large language models, including data poisoning attacks and model robustness.
Personal homepage: https://junyann.github.io/
Ning Xuefei
Postdoctoral fellow at Tsinghua University, co-supervised by Professor Yu Wang. The research area is efficient machine learning, with a current focus on the compression and acceleration of generative models.
Personal homepage: https://www.ningxuefei.cc/
Zhang Ruiqi
A second-year doctoral student in the Department of Statistics at the University of California, Berkeley, working mainly under the supervision of Professor Peter L. Bartlett. His research areas are deep learning theory and reinforcement learning theory, with a current focus on Transformers, large language models, and the theory of in-context learning.
Personal homepage: https://rqzhangberkeley.github.io/
Wei Lai
A fourth-year undergraduate student at Shanghai Jiao Tong University. His research fields are multimodal large models and natural language processing.
Personal homepage: https://waltonfuture.github.io/
Yuan Zheng
He received his PhD from the Center for Statistical Science at Tsinghua University and is a senior algorithm engineer at Alibaba DAMO Academy. His main research directions are alignment and logical reasoning for large models.
About AI TIME
AI TIME was founded in 2019 with the aim of promoting the spirit of scientific inquiry, inviting people from all walks of life to explore fundamental questions in artificial intelligence theory, algorithms, and applications, encouraging the collision of ideas, and connecting AI scholars, industry experts, and enthusiasts worldwide. Through debate, it explores the tension between artificial intelligence and the future of humanity, and the future direction of the AI field itself.
To date, AI TIME has invited more than 1,300 speakers at home and abroad, held more than 600 events, and been watched by more than 6 million people.