The "ByteDance Scholars Program" is an annual talent development initiative launched by ByteDance in 2021. It provides each award-winning student with a grant of RMB 100,000, aiming to help innovative students in science and technology apply their professional knowledge to solving practical problems, give back to society through technology, and lead the future.
The second ByteDance Scholars Program was launched in May 2022, attracting applications from more than 300 outstanding young students at 34 Chinese colleges and universities, spanning 22 technical fields. After preliminary, secondary, and final review by an expert panel, the following 10 students each won a 100,000 RMB scholarship for their outstanding academic achievements and personal practice:
Bao Fan (Tsinghua University)
Li Xin (University of Science and Technology of China)
Liu Minghuan (Shanghai Jiao Tong University)
Meng Zili (Tsinghua University)
Qin Haotong (Beihang University)
Wang Yulin (Tsinghua University)
You Kaichao (Tsinghua University)
Zhou Kun (Renmin University of China)
Zhou Xuanhe (Tsinghua University)
Zhou Zhe (Peking University)
(The winners above are listed in alphabetical order by name.)
Below, we introduce the academic backgrounds and research achievements of the 10 scholarship recipients.
Bao Fan
Tsinghua University Statistical Artificial Intelligence and Learning Group
Research field: machine learning, deep learning
Mentors: Zhang Bo, Zhu Jun
Bao Fan has made outstanding contributions to diffusion probabilistic models. His first-author paper "Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models" won the ICLR 2022 Outstanding Paper Award, making it the conference's first and only award-winning paper completed independently by a Chinese institution. The work has been widely influential and was applied as a core technique in DALL·E 2, the ultra-large-scale cross-modal generative model recently released by OpenAI.
He actively explores application scenarios for diffusion probabilistic models and has produced nearly ten papers on the acceleration, controllable generation, and basic architecture of diffusion models. He proposed an energy-guided stochastic differential equation framework and applied diffusion probabilistic models to image translation and inverse molecular design. On the architecture side, he has explored feasible replacements for UNet and proposed UViT, which offers better computational parallelism. His work has made outstanding contributions to practical applications.
Li Xin
MOE-Microsoft Key Laboratory of Multimedia Computing and Communication, University of Science and Technology of China; Intelligent Media Computing Lab (IMCL)
Research field: intelligent media data processing, coding, model generalization
Mentor: Chen Zhibo
Li Xin is committed to cross-domain research spanning intelligent media data coding, enhancement, and quality assessment, as well as mining the scientific problems common to them, touching on fundamental areas such as domain generalization and causal inference.
He has published more than 10 papers in top international journals and conferences such as IEEE TIP, IEEE TCSVT, ECCV, and AAAI, and has served as a reviewer for many renowned international journals and conferences. He has won honors including first prize in the CVPR CLIC international challenge, third prize in the National Artificial Intelligence Competition, and the National Scholarship for Postgraduates.
In 2021, Li Xin published a paper in IEEE TIP, the top journal in image processing, which for the first time used reinforcement learning to optimize the semantic coding of traditional encoders for different intelligent tasks, greatly improving the prospects of traditional encoders in intelligent applications. The results have been adopted in several practical applications.
Liu Minghuan
Shanghai Jiao Tong University APEX Lab
Research Field: Reinforcement Learning
Mentor: Zhang Weinan
Liu Minghuan's research focuses on reinforcement learning and offline data-driven reinforcement learning methods, including imitation learning, offline reinforcement learning, and multi-agent learning. He has won a National Scholarship for doctoral students.
During his doctoral studies, Liu Minghuan has published more than 10 academic papers at top conferences, 8 of them as first author. His results include imitation learning that directly recovers rewards using an energy function over offline data (AAMAS 2020), curriculum reinforcement learning strategies based on offline data (NeurIPS 2021), and research on policy decoupling, which gives policy modules transfer capability (ICML 2022). These works have improved reinforcement learning's utilization of existing offline data from different perspectives.
Meng Zili
Network Architecture Research Office, Network Research Institute, Tsinghua University
Research field: real-time audio and video transmission
Mentor: Xu Mingwei
Meng Zili has in recent years researched real-time audio and video transmission on the Internet and related fields. The core problem he aims to solve is how to reduce latency for real-time audio and video applications such as video conferencing, cloud gaming, and VR, so as to provide a better user experience. Much of his work analyzes and optimizes latency across all layers of the Internet architecture, and has been published at the top networking conferences SIGCOMM and NSDI.
Beyond research, Meng Zili actively participates in international exchange. He has visited and studied at MIT and Carnegie Mellon University and has collaborated extensively with many teams. He has also interned at 3 companies, where a number of his audio and video transmission latency optimization techniques have been successfully tested and deployed in industry. Going forward, he will continue to work on reducing the interactive latency of real-time audio and video transmission, providing better network support for future-oriented applications such as VR.
Qin Haotong
State Key Laboratory of Software Development Environment, Beihang University
Research Field: Neural Network Quantization Compression
Mentors: Li Wei, Liu Xianglong
Qin Haotong is committed to research on the quantization of hardware-friendly neural networks. During his doctoral studies, he has published 16 papers at venues including ICLR, IJCV, and CVPR, 8 of them as first author, with more than 500 citations. He is currently a joint-training doctoral student at the Computer Vision Lab (CVL) of ETH Zürich in Switzerland.
His "Bi" series of binary quantization work has advanced extreme-bit-width compression for a variety of neural architectures. Among them, IR-Net introduced information theory into binary quantization research and has become a popular baseline in the field; BiPointNet solved the binarization-compatibility problem of large-scale aggregation units and achieved nearly 20x compression and acceleration; his latest results, BiBERT and BiFSMN, demonstrate the great potential of fully binary quantization on Transformers and other architectures, while DIR-Net pushes binary CNNs further toward practicality.
In addition, he has organized academic activities at AAAI, CVPR, PRCV, and other conferences to promote exchange in the field. The Practical-DL workshop series at AAAI, organized by Qin Haotong, has now been held twice, promoting more efficient deep learning.
Wang Yulin
LEAP Laboratory, Department of Automation, Tsinghua University
Research field: efficient deep learning, computer vision
Mentors: Wu Cheng, Huang Gao
Wang Yulin's research interest is efficient deep learning, with a focus on designing dynamic training and inference paradigms for deep learning models in computer vision, so as to improve their data efficiency and computational efficiency. His representative works are an implicit semantic data augmentation algorithm and a spatially adaptive, efficient dynamic inference framework for visual data such as images and videos.
He has published academic papers as first author in authoritative international journals and conferences such as T-PAMI, NeurIPS, ICLR, CVPR, ICCV, and ECCV, with more than 600 Google Scholar citations. He has won honors including the CCF-CV Academic Emerging Scholar Award, CVPR Outstanding Reviewer, and the National Scholarship. He also contributes actively to the open-source community; the open-source code for his research has received more than 1,000 stars on GitHub.
You Kaichao
National Engineering Research Center for Big Data System Software, School of Software, Tsinghua University
Research field: machine learning, deep learning
Mentor: Long Mingsheng
You Kaichao has worked continuously on deep transfer learning, providing efficient solutions for universal domain adaptation, model selection for unsupervised domain adaptation, deep model transfer methods, and metrics for evaluating pre-trained model transferability. He has published 11 top-conference papers, 9 of them as first author or co-first author, and has won honors including the 2019 Tsinghua University Special Scholarship, the 2021 National Scholarship, and a NeurIPS 2021 Outstanding Reviewer Award. His work has been cited more than 700 times on Google Scholar.
Kaichao enjoys using machine learning principles to guide deep learning practice, actively explores the boundaries of machine learning research, and is happy to share machine learning knowledge with a wider readership. He is active in various communities, with more than 50,000 followers on Zhihu and more than 5,000 stars on GitHub.
Zhou Kun
Beijing Key Laboratory of Big Data Management and Analysis Method Research, Renmin University of China
Research field: natural language processing, information retrieval
Mentors: Wen Jirong, Zhao Xin
Zhou Kun studies under Professor Wen Jirong and Professor Zhao Xin, focusing on natural language processing and information retrieval. He has published more than 10 papers as first author, with 550+ citations in total. Among them, two first-author papers at KDD 2020 and CIKM 2020 were selected by PaperDigest as among the most influential papers at KDD and CIKM.
Zhou Kun's current main research direction is improving the quality of representations of sequence data (such as text and user behavior sequences) and endowing them with the ability to solve complex knowledge reasoning problems. He has proposed various solutions to technical pain points that improve results across multiple downstream scenarios, and some of his research results have been put into practical application.
Zhou Xuanhe
Database Team, Department of Computer Science, Tsinghua University
Research field: autonomous databases, AI engineering
Mentors: Li Guoliang, Feng Jianhua
Zhou Xuanhe focuses on three challenging problems: intelligent prediction and optimization, automatic query rewriting, and autonomous system design. He has published more than ten papers in CCF-A conferences and journals, with more than 350 Google Scholar citations, and his QTune work has been selected as one of the most-cited database papers of the past five years.
In addition, Zhou Xuanhe actively participates in academic and community service, serving as a reviewer for top international conferences and journals such as VLDB Journal and JCST, and giving academic talks.
His related open-source projects have received more than 2,000 stars on Gitee and GitHub.
Zhou Zhe
Center for Energy Efficient Computing and Applications, Peking University
Research field: computer system architecture
Mentor: Sun Guangyu
Zhou Zhe is committed to optimizing the inference and training performance of deep learning applications from the perspective of system and hardware architecture design. He has published 5 papers at top conferences/journals in computer architecture, computer systems, and electronic design automation.
He actively explores near-memory computing to accelerate memory-intensive deep learning applications, reasoning about and improving performance at three levels: application, system, and hardware architecture. His paper optimizing the communication bottleneck in near-memory computing systems was accepted by HPCA, a top conference in computer architecture. He also works on applying near-memory computing to a wider range of scenarios, exploring what future data centers might look like in light of practical problems the industry cares about.
Congratulations to the above students on winning the 2nd ByteDance Scholarship! ByteDance hopes to use this program to spark the creativity of outstanding technical talent, grow together with outstanding students, solve real problems in a pragmatic spirit, and give back to society together.
Review of the first "ByteDance Scholars Program"
ByteDance 2023 campus recruitment is underway
Join ByteDance technical team