A decade of artificial intelligence in review: CNN, AlphaGo, GAN... this is how they changed the world

A rundown of the important breakthroughs AI has made over the past decade.

Over the past decade, artificial intelligence has advanced by leaps and bounds, and scenes once confined to the wildest science fiction have become an indispensable part of our lives. Ten years ago, AI was mostly a matter of theory and experiment; since then it has turned practical and gone mainstream. Courses, platforms, libraries, frameworks, hardware: the whole ecosystem has fallen into place. It is no exaggeration to say that the achievements of the past ten years have laid the foundation for what comes next.

This article will take stock of the important breakthroughs that AI has made in the past decade.

Convolution

2012 was a landmark year in the history of deep learning. That year, the convolutional neural network (CNN) shone at the famous ImageNet challenge: "AlexNet", the CNN designed by Alex Krizhevsky and his collaborators, won the competition by a wide margin, posting a visual recognition error rate of 15.3% on the ImageNet dataset, roughly half that of the runner-up. In the same period, neural networks reached 74.8% accuracy at detecting cats and 81.7% accuracy at detecting faces in YouTube videos.
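
To make the idea concrete, here is a minimal sketch of a convolutional image classifier in PyTorch. It is purely illustrative and far smaller than AlexNet; the `TinyConvNet` name, the 32x32 input size, and the ten output classes are our own choices.

```python
# A minimal convolutional classifier: stacked convolution + pooling layers
# extract local visual features, then a linear layer maps them to classes.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # (N, 32, 8, 8) for 32x32 inputs
        return self.classifier(x.flatten(1))

# Classify a batch of four 32x32 RGB images (CIFAR-sized).
logits = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```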

The face recognition features in today's phones and shopping malls can be traced back to this 2012 work, and the steady rise in recognition accuracy has given researchers the confidence to deploy models for medical imaging.

Dialogue with AI

The " Attention Is All You Need " published by Vaswani et al. in 2017 brings a cascading effect that enables machines to understand language in an unprecedented way. Thanks to the Transformer architecture, AI can now write fake news, tweets, and may even cause political unrest. After Transformer, Google introduced the BERT model , which is used for keyword prediction and SEO ranking. BERT has now become the de facto standard in the field of natural language processing, and companies such as Microsoft and NVIDIA have begun to accumulate more parameters to catch up with the model.

NVIDIA's Megatron has 8 billion parameters, and Microsoft's Turing-NLG has 17 billion. OpenAI's GPT models arrived later still, and the 175-billion-parameter GPT-3 currently holds the record.

GPT-3, also an extension of the Transformer and for now the largest model of all, can write code, compose prose, and generate business ideas, including ones humans might never have come up with.

Beating humans at their own game

AI had already defeated humans at chess, and more complex human games such as Jeopardy!, Go, and Texas Hold'em have not stopped the algorithms either. The most celebrated AI event of recent years was AlphaGo defeating top human players at Go, the most complex of board games. In the same decade, IBM's Watson beat two human champions in a Jeopardy! final: Watson took home $77,147 in prize money, while the two humans received $24,000 and $21,600 respectively.

Pluribus, the Texas Hold'em AI jointly developed by Facebook and Carnegie Mellon University, defeated five expert human players, accomplishing what its predecessor Libratus could not; the research was published in Science in 2019. In December 2020, DeepMind's MuZero went further, allowing a single model to master a variety of games, including shogi, chess, and Go.
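
All of these systems pair search over game states with learned evaluation or counterfactual reasoning. As a far simpler taste of the search half, here is a toy negamax solver for the game of Nim; this is entirely our own example and bears no relation to the actual systems' internals.

```python
# Nim: players alternately take 1-3 stones; whoever takes the last stone wins.
# Negamax explores the game tree; a position's value is the best move's value
# negated, because a good position for my opponent is a bad one for me.
def negamax(stones: int) -> int:
    """Value for the player to move: +1 = forced win, -1 = forced loss."""
    if stones == 0:
        return -1  # the previous player took the last stone; we have lost
    return max(-negamax(stones - take)
               for take in (1, 2, 3) if take <= stones)

print(negamax(4))   # -1: with 4 stones, the player to move always loses
print(negamax(5))   # +1: take 1 stone, leaving the opponent with 4
```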

Decode life

The behavior of every organism can be traced back to its proteins. Proteins carry the secrets of life, and deciphering them may even help defeat the COVID-19 pandemic. But protein structures are extremely complex, and predicting them requires endless simulation. DeepMind took on this problem: its deep learning system "AlphaFold" cracked the fifty-year-old challenge of protein folding. Just as computer vision has proven useful in diagnosis, solving protein folding could help researchers develop new drugs.

AI: An artist and a liar

Last year, a video circulated in which the Prime Minister of Belgium spoke about the urgent need to tackle the economic and climate crises. It later emerged that the video was a deepfake: machine learning had been used to manipulate the Belgian Prime Minister's voice and facial expressions, fabricating a speech about the impact of global warming.

Behind such forgeries lies a cleverly designed algorithm: the generative adversarial network (GAN). Proposed in 2014, GANs have been widely adopted and have even encroached on the last bastion of human work: creativity. These networks can generate faces that never existed, swap faces between people, and make a head of state appear to talk nonsense. A painting generated by a GAN even sold at a Christie's auction for more than $400,000, a record for AI art. The dark side of GANs is their use for malicious purposes, which has forced companies such as Adobe to research new techniques for identifying fabricated content. GANs will remain a subject of intense discussion in the decade ahead.
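
The adversarial idea itself fits in a few lines: a generator maps random noise to samples, a discriminator tries to tell them apart from real data, and the two are trained against each other. Below is a minimal PyTorch sketch on toy 2-D data; real image GANs use deep convolutional networks and many additional training tricks.

```python
# A toy GAN: G learns to turn noise into points resembling the "real" cloud.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, data_dim) + 3.0        # stand-in "real" data cloud
for step in range(200):
    fake = G(torch.randn(64, latent_dim))
    # Discriminator: push real toward 1, fake toward 0.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator: fool the discriminator into outputting 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
print(G(torch.randn(5, latent_dim)))          # samples drift toward the cloud
```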

Secret weapon: silicon

The concept of the neural network is half a century old, and backpropagation, so widespread today, has been around for thirty years. What we long lacked was hardware capable of running these computations. Over the past ten years, more than a dozen companies have been building specialized machine learning chips. Chip technology has advanced enormously, and we can now perform millions of operations on palm-sized devices. These chips already power data centers, behind everything from Netflix streaming to smartphone features. Next, AI chips tailored specifically for edge devices represent billions of dollars in business opportunity.

Companies such as Apple have developed custom machine learning chips (such as the A14 Bionic) to deliver smart features. Even AWS, which has long relied on NVIDIA and Intel, is slowly entering the chip business. As chips keep shrinking, the trend will only become more pronounced: with the NVIDIA Jetson AGX Xavier developer kit, for example, you can easily create and deploy end-to-end AI robotics applications for manufacturing, retail, smart cities, and more, while Google's Coral toolkit brings machine learning to edge devices. Safe, real-time inference is the theme of the moment.
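
As a small taste of edge deployment, here is a sketch of running an already-converted model with the TensorFlow Lite interpreter; Coral's Edge TPU uses this same interpreter API with a hardware delegate added. The file name `model.tflite` is a placeholder for a model you have converted yourself.

```python
# Load a TFLite model and run one inference with dummy input.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a zero tensor with the shape and dtype the model expects.
x = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```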

Open source culture gradually matures

*Source: MIT Tech Review*

In 2015, TensorFlow was open-sourced; a year later, Facebook AI open-sourced its Python-based deep learning framework, PyTorch. Today these two have become the most widely used frameworks, and through continuous releases Google and Facebook have made life far easier for the machine learning community. The explosive growth of custom libraries, packages, frameworks, and tools has drawn more people into the AI field and brought more talent into AI research.
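
Part of the convenience these frameworks brought is that a complete training loop now fits in a dozen lines. A minimal PyTorch example fitting a linear model to synthetic data (the data and hyperparameters are our own toy choices):

```python
# Fit y = X w + noise with gradient descent; autograd handles the gradients.
import torch

X = torch.randn(256, 3)
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(256)

model = torch.nn.Linear(3, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y)
    loss.backward()                # compute gradients for all parameters
    opt.step()
print(model.weight.data)           # close to [1.0, -2.0, 0.5]
```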

Open source has been a defining feature of recent years. Open tools and an ever-growing pool of freely available resources (such as arXiv and Coursera) have propelled the AI revolution. Another catalyst is Kaggle, the popular competition platform; together, Kaggle and GitHub have nurtured a generation of high-caliber AI developers.

More learning, fewer rules

Meta-learning, a concept Professor Schmidhuber proposed in the early 1990s, has only recently gained wide attention. It refers to making machine learning models learn new skills and adapt to changing environments from only a limited number of training examples. Optimizing a model for a specific task by hand-tuning hyperparameters demands a great deal of user input and quickly becomes tedious; meta-learning eases that burden considerably by automating the optimization step. Automated optimization has in turn spawned a new industry: MLaaS (Machine Learning as a Service).
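
In its simplest form, the automation this trend points toward is hyperparameter search without hand-tuning. Here is a minimal sketch using scikit-learn's `RandomizedSearchCV`; this is plain random search rather than meta-learning proper, and the model and search space are our own example.

```python
# Automated hyperparameter optimization: random search with cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=0)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [3, 5, 10, None],
    },
    n_iter=8,       # try 8 random configurations instead of hand-tuning
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```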

Future direction

Some experts predict that the following areas may play a major role:

  • Reproducibility
  • Differential privacy
  • Geometric deep learning
  • Neuromorphic computing
  • Reinforcement learning

Although AI has reached into areas we never imagined, it has yet to arrive in some of the most anticipated applications, such as self-driving cars. The remaining challenge goes beyond the mathematics: we have algorithms that can make accurate decisions and processors that can run them, but when they can be dependably deployed in real applications is still unknown. Whether in medicine or autonomous driving, AI still has progress to make, and that progress will come only once transparency and reproducibility are established.

Original link: https://analyticsindiamag.com/ai-top-decade-2010-2020-breakthroughs/
Source: Heart of the Machine
