The TOP100 list of NeurIPS ten-year highly cited scholars is released! These top researchers deserve your attention!

On December 8th (this Sunday), the long-awaited NeurIPS 2019 will officially kick off in Vancouver, Canada.

As one of the most important conferences in machine learning, NeurIPS has long been highly influential and is considered one of the top venues in neural computation.

With the rise of deep learning in recent years, NeurIPS has not only become a rising star in academia but has also attracted great attention from industry. The number of registered attendees has jumped from a few hundred a few years ago to nearly 10,000 this year.

According to statistics from the AMiner data platform, NeurIPS has an H5 index of 149 and a 10H value of 34,641, ranking second among artificial intelligence conferences.
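For reference, the H5 index is simply the h-index computed over a venue's papers from the last five years. A minimal sketch of the underlying h-index computation (illustrative only; the input citation counts below are made up):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i          # the i-th best paper still has >= i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

An H5 index of 149 therefore means NeurIPS published at least 149 papers in the last five years with at least 149 citations each.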

Based on a citation analysis of papers accepted by NeurIPS over the past ten years, we have compiled the TOP100 list of NeurIPS highly cited scholars.


These 100 scholars are all top figures in the field of machine learning, with impressive achievements in both academia and industry.

Among these 100 scholars, Google and DeepMind together account for one-fifth, an absolute dominance. Eight are from Facebook and seven from the University of California, Berkeley, while five are from Stanford University, four each from MIT and OpenAI, and three each from New York University and the University of Montreal.

Among them, about 16 Chinese scholars are on the list, for example former Baidu chief scientist Andrew Ng and computer vision star Kaiming He; but only a few work in mainland China, such as Jifeng Dai of SenseTime, Jian Sun of Megvii, and Shaoqing Ren of Momenta.

The star scholars of NeurIPS

Let’s take a look at the top 10 of the NeurIPS highly cited scholars list.

The top three on the list of highly cited scholars are Geoffrey Hinton, the "Father of Neural Networks", and his students Ilya Sutskever and Alex Krizhevsky.


Geoffrey Hinton, ranked second on the list of highly cited scholars, is a master of deep learning known as the "Godfather of Artificial Intelligence". His name carries enormous weight in today's AI research community. He invented the Boltzmann machine and was the first to apply backpropagation to multilayer neural networks; on top of that, he trained outstanding students such as Yann LeCun and Ilya Sutskever.

Hinton holds a bachelor's degree in experimental psychology from Cambridge University and a PhD in artificial intelligence from the University of Edinburgh. He is currently a vice president and engineering researcher at Google, teaches at the University of Toronto, and serves as chief scientific advisor of the Vector Institute. In 2012, Hinton also won the Killam Prize, Canada's highest science award, known as the "Canadian Nobel Prize".

Professor Hinton is a pioneer of machine learning, which allows computers to devise programs and solve problems on their own. Most importantly, he opened up a subfield of machine learning known as "deep learning": letting machines learn like a toddler, mimicking the brain's neural networks. He brought neural networks into a boom of research and application, turning deep learning from a fringe subject into a core technology that Internet giants such as Google rely on.

Hinton has published 60 papers in the more than 30 years since NeurIPS was founded, almost one every year. Between 2009 and 2019, Hinton published 16 papers at NeurIPS, with 47,482 citations.


Ilya Sutskever was not only Hinton's doctoral student but also a postdoctoral fellow under Andrew Ng. He was once a top artificial intelligence expert at Google and later co-founded the non-profit AI research company OpenAI.

Sutskever's h-index is 56. In 2015, he was named an "Innovator Under 35" in the Visionaries category by MIT Technology Review.

Although he published only 11 papers at NeurIPS in ten years, they have drawn 67,457 citations, ranking him first on the list of highly cited scholars.

Sutskever, obsessed with computers, did his undergraduate studies at the University of Toronto, where he met Professor Hinton. Hinton gave him a research project: improving the stochastic neighbor embedding (SNE) algorithm. The project was the start of their collaboration, and Sutskever naturally went on to join Hinton's group as a PhD student.

After graduating in 2012, Sutskever spent two months as a postdoctoral fellow with Professor Andrew Ng at Stanford University. He then returned to the University of Toronto and joined DNNResearch, the research company Hinton had founded. Four months later, Google acquired DNNResearch, and Sutskever officially joined Google Brain.

During his two years at Google, Sutskever contributed to the development of the open-source deep learning framework TensorFlow. He also assisted DeepMind with the epoch-making Go AI AlphaGo; the AlphaGo paper was published in Nature in 2016, with Sutskever as a co-author.

In December 2015, Sutskever left Google and co-founded OpenAI with Greg Brockman (now OpenAI's CTO). OpenAI's goal is to make artificial intelligence technology open to everyone. Over the past few years, OpenAI has achieved many impressive results. Sutskever has remained at the forefront of the AI revolution, working with his team toward strong artificial intelligence.


Alex Krizhevsky, third on the list, is also a doctoral student of Hinton. Alex keeps a lower profile, and little information about him is available online.

In 2012, under Hinton's guidance, Alex Krizhevsky and Sutskever collaborated to develop the sensational AlexNet. The paper, "ImageNet Classification with Deep Convolutional Neural Networks", has been cited 44,218 times. It is also the only paper Alex published at NeurIPS.

AlexNet debuted at NeurIPS with a novel neural network architecture comprising five convolutional layers and three fully connected layers. The paper is widely regarded as truly pioneering work because it was the first to show that a deep neural network trained on GPUs could take image recognition to a new level.

The AlexNet network has had a very important impact on the development of neural networks. The subsequent ImageNet champions all adopted convolutional neural network structures, making the CNN architecture the core model for image classification and opening a new wave of deep learning. The convolution + pooling + fully-connected architecture it used is still one of the most important network structures in deep learning today.
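As a rough illustration of that five-conv, three-FC architecture, the spatial size flowing through AlexNet's convolution and pooling stages can be traced with simple arithmetic. This is a sketch of the shape bookkeeping only, not the network itself; it uses the commonly cited 227×227 effective input size (the paper states 224, a well-known discrepancy):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a conv/pool layer: floor((n - k + 2p) / s) + 1."""
    return (size - kernel + 2 * pad) // stride + 1

size = 227
size = conv_out(size, 11, stride=4)   # conv1 (96 filters)  -> 55
size = conv_out(size, 3, stride=2)    # max pool            -> 27
size = conv_out(size, 5, pad=2)       # conv2 (256 filters) -> 27
size = conv_out(size, 3, stride=2)    # max pool            -> 13
size = conv_out(size, 3, pad=1)       # conv3 (384 filters) -> 13
size = conv_out(size, 3, pad=1)       # conv4 (384 filters) -> 13
size = conv_out(size, 3, pad=1)       # conv5 (256 filters) -> 13
size = conv_out(size, 3, stride=2)    # max pool            -> 6

flattened = 256 * size * size         # 9216 features feed fc1 (4096 -> 4096 -> 1000)
print(size, flattened)
```

The 9,216-dimensional flattened output then passes through the three fully connected layers (4096, 4096, and 1000 units, the last matching ImageNet's 1000 classes).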

Alex has since stayed out of the limelight. He co-founded the startup DNNResearch with Professor Hinton and Sutskever, where they developed solutions that greatly improved object recognition, and later joined Google together with Sutskever.

Alex later joined the Canadian startup Dessa as its chief machine learning architect. Early this year, Dessa unveiled a speech synthesis system, RealTalk; unlike previous systems that learn a human voice from audio input, it can generate speech strikingly close to a real person's from text input alone.


Ranked fourth on the list is Yoshua Bengio, one of the "Big Three of artificial intelligence". He won the Turing Award in 2018 along with Geoffrey Hinton and Yann LeCun.

Yoshua Bengio is a pioneer of natural language processing in artificial intelligence. Born in France in 1964, Bengio grew up in Canada and now lives in Montreal, where he is a professor in the Department of Computer Science and Operations Research at the University of Montreal. Bengio received his PhD in computer science from McGill University. Together with Geoffrey Hinton and Yann LeCun, he is regarded as one of the three people who advanced deep learning through the 1990s and early 2000s. In October 2016, Bengio co-founded Element AI, an artificial intelligence incubator based in Montreal.

Bengio is also the one among the three who values academic purity the most. While serving as an advisor to Microsoft, he is also co-director of CIFAR's machine and brain learning program, a full professor in the Department of Computer Science and Operations Research, and holder of the research chair in statistical learning algorithms at the University of Montreal.

Bengio has published a total of 74 papers at NeurIPS, including 9 bearing his name at this year's conference alone.

In the past ten years, he has published 40 papers at NeurIPS, with 18,714 citations.

The most cited is "Generative Adversarial Nets", published in 2014 and co-authored with his PhD student Ian Goodfellow, with 10,618 citations. This paper proposed the famous Generative Adversarial Networks (GANs).

GANs are an interesting way to "teach" computers to do what humans do well. Over the past five years, GANs have made major breakthroughs in image generation and can now produce highly realistic synthetic images of animals, landscapes, and human faces, fully demonstrating the potential of unsupervised learning.

At the same time, they address a long-standing problem in machine learning at the theoretical level: how to steer training results in the direction humans want. In 2015, GAN technology was still little known; by 2016 it was everywhere, and experts even described it as "the coolest idea in machine learning in the last 20 years".
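The adversarial setup behind GANs can be written compactly as a two-player minimax game. In the notation of the 2014 paper, the generator G and discriminator D optimize:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here D(x) is the probability the discriminator assigns to x being a real sample, and G(z) maps a noise vector z to a synthetic one; training alternates gradient steps between the two networks until the generator's outputs become hard to tell apart from real data.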


Speaking of GANs, one must mention Ian Goodfellow, the "Father of GANs". He is ranked ninth on the list of highly cited scholars. In ten years, he has published 10 papers at NeurIPS, with a total of 13,480 citations.

Goodfellow is one of the most closely watched young scholars in machine learning. He did his undergraduate and master's degrees at Stanford University under Andrew Ng, and his PhD in machine learning under Yoshua Bengio. His most notable achievement is the proposal of Generative Adversarial Networks (GANs) in June 2014.

After graduation, Goodfellow joined Google as a member of the Google Brain research team. He then left Google for the newly established OpenAI, and in March 2017 returned to Google Research. In April of this year, Goodfellow joined Apple in a director-level role, leading a "machine learning special projects team" there.

Scholars ranked fifth to eighth on the list of highly cited scholars are Greg Corrado, Jeffrey Dean, Kai Chen, and Tomas Mikolov from Google.

Greg Corrado and Jeffrey Dean each published three papers at NeurIPS in ten years, each with a total of 17,218 citations. Tomas Mikolov also published three papers, with 15,407 citations in total, and Kai Chen published two, with 16,139 citations.

Together with Ilya Sutskever, they published "Distributed Representations of Words and Phrases and their Compositionality" in 2013, which has been cited 14,087 times.

This paper is a companion to "Efficient Estimation of Word Representations in Vector Space". It describes training the Skip-gram model with Hierarchical Softmax, and introduces Negative Sampling as a faster-training alternative to Hierarchical Softmax. The paper also proposes subsampling of frequent words, along with a method for identifying phrases and learning their representations.
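To make the Skip-gram idea concrete, here is a minimal sketch of how (center, context) training pairs are generated from a sentence: each word predicts its neighbors within a window. The function name and window size are illustrative, not taken from the paper's code:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs as in the Skip-gram model."""
    pairs = []
    for i, center in enumerate(tokens):
        # Every word within `window` positions of the center is a context word.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs(["the", "quick", "brown", "fox"], window=1)
print(pairs)
```

In the full model, each pair drives an update pushing the center word's vector toward its context word's vector; with Negative Sampling, a handful of randomly drawn "noise" words are pushed away at the same time, avoiding a softmax over the whole vocabulary.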


Greg Corrado is a senior research scientist in machine learning at Google. His main research areas are artificial intelligence, computational neuroscience, and scalable machine learning. He is also one of the founders of the Google Brain project.


Google's AI luminary Jeff Dean is Google's chief architect, a senior researcher at Google Research, and head of Google Brain, Google's artificial intelligence team. Jeff Dean holds a PhD from the University of Washington and is a member of the American Academy of Engineering, an ACM Fellow, an AAAS Fellow, and a computer science advisor to the AI Research Institute of Tsinghua University. He has been responsible for many large-scale projects at Google, including the ultra-large-scale computing framework MapReduce; the landmark machine learning framework TensorFlow was also developed under his leadership.


Tomas Mikolov previously worked at Google and is currently a research scientist at the Facebook AI Research lab. He is a prolific author of high-quality papers, from RNNLM and Word2Vec to the recently popular fastText.


Ranked tenth is Kaiming He, the renowned computer vision researcher and Facebook AI research scientist. His research interests are computer vision and deep learning. In the past ten years, he has published 3 papers at NeurIPS, with a total of 12,605 citations.

The most famous is "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", published in 2015 and cited 11,093 times.

The TOP100 list includes many more towering figures in machine learning, such as Michael I. Jordan, a father of machine learning, member of the American Academy of Sciences, and Berkeley professor, and Ruslan Salakhutdinov, Apple's first AI director. Although we have not profiled them one by one, their unparalleled standing in the field is beyond question.

To help everyone follow the latest news and developments from NeurIPS 2019, this weekend we will launch the NeurIPS 2019 conf-plus page. We welcome everyone to check it out!


Past review

NeurIPS 2019 | Interpretations of 8 papers from the Chinese Academy of Sciences, Wuhan University, Microsoft, and others

Interpretations! A collection of 8 NeurIPS 2019 papers, from BUPT, Xidian University, DeepMind, and others

NeurIPS 2019 | Huawei, Peking University, and others jointly propose a convolutional network compression method based on positive and unlabeled (PU) samples


Origin blog.csdn.net/AMiner2006/article/details/103421686