Will Yann LeCun be the next Marie Curie?

Author | Florian Douetteau

Translator | Champagne Supernova

Produced by | CSDN (ID: CSDNnews)

 

 

A parallel in history: looking for a precedent

Over the past few years, the hype around deep learning has kept escalating, and the field has delivered some substantial successes. But does that really mean AI will grow exponentially over the next few years?

 

Futurist Roy Amara proposed what is now known as Amara's Law:

"People always overestimate the short-term benefits of a technology brings, but underestimate its long-term effects."

To know what will happen in the future, we need to figure out not only how far along deep learning is on its own development path, but also to look at the bigger picture of the field of artificial intelligence.

 

The Gartner hype cycle

 

"Gartner hype curve" can to some extent predict the next five to ten years for enterprise adoption of a technology. But it is unclear whether this model is applicable to AI industry --AI not only "just another enterprise technologies", and is itself a science, so it's more likely to cycle in the range of 50-100 years.

 

A more appropriate vision of AI would no longer treat it as a field that endlessly invents new technologies, but rather as a discovery: humans slowly (and very empirically) exploring the patterns of behavior of self-learning systems.

 

From this angle, we can compare AI to earlier scientific discoveries, especially those involving complex systems: the solar system, evolution, electricity... but why not compare it to the discovery of the atom?

 

In this little lesson in historical precedents, I will try to answer the most important question: where is artificial intelligence headed? Is it already quite advanced and approaching maturity, or is it still in its infancy? Thirty years from now, when our descendants look back at this period of our history, will they find that the AI experiments we ran with our limited technical tools were remarkable? Or will they find them a bit naive, or even dangerous? To put it another way: will Yann LeCun be the Richard Feynman of a new era, or its Marie Curie? Or a bit of both?

 

A Brief History of Nuclear Physics

 

Radioactivity was discovered by Becquerel in 1896. The discovery itself was fairly accidental: Becquerel was initially studying the phosphorescence of uranium salts and their ability to emit X-rays after exposure to light. But he soon discovered that uranium emits rays without any input of external energy.

 

Marie Curie took over this line of research, studied radioactivity more carefully, and made a series of breakthroughs, including isolating other naturally radioactive substances besides uranium.

 

The discovery of radioactivity sparked public enthusiasm. A new substance had been found that could magically emit new kinds of rays: X-rays. It gives you "superpowers"! (Note that this was before the era of comic books, so "superpowers" was not the word actually used.)

 

Meanwhile, radioactivity was a new phenomenon that needed both research and an atomic theory to explain its nature. Einstein proposed the famous mass-energy equation in 1905; a few years later, Rutherford bombarded thin metal foils (usually gold) with alpha particles and studied the trajectories of the collisions, which led to the first model of the atom (a nucleus with electrons orbiting around it).

 

We should note that the scientific community spent a full fifteen years without a good model of what the atom actually was; the neutron had not yet been discovered.

 

"Spin" is still the model for our modern view of atomic structure and the strong nuclear force, which is only two hypotheses, respectively, in 1929 and 1935 was only proposed it.

 

While the theory kept advancing, progress in chemical engineering improved the purity of radioactive compounds and pushed them toward practical (useful) applications. In 1939 came the first successful use of isotopes for cancer therapy. The first research nuclear reactor followed in 1942, leading to the first large-scale nuclear power plant, completed in 1956.

 

Artificial Neural Networks: The Beginnings

 

Neural networks are old... very, very old. The idea predates Woodstock. The original goal was to write algorithms that mimic the behavior of synapses (or at least what people thought synapses did). The first perceptron appeared in 1957, and the first multi-layer perceptron in 1965.

 

In the 1960s, when the concept of neural networks was just sprouting, computers were very slow, so even a very simple network could take days to train. Neural networks basically did not work better than other techniques, so for the next few decades they were not widely used.

 

But that did not stop people from thinking about them and testing them. Werbos discovered back-propagation in 1974, the first breakthrough in the field of neural networks. Back-propagation relies on the idea that the operations in a neural network are differentiable (and, in a sense, invertible), so that when the network makes an error, the error itself can be propagated back through each layer of the network to help it self-correct.
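As a rough illustration of that idea (a minimal NumPy sketch on assumed toy data, not Werbos's original formulation), the forward pass computes the output, and the error is then pushed back through each layer via the chain rule so that every layer can correct its own weights:

```python
import numpy as np

# Minimal sketch: one hidden layer, squared-error loss, plain gradient descent.
# The toy data, layer sizes and learning rate are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))             # 64 toy samples, 3 features
y = rng.normal(size=(64, 1))             # toy regression targets

W1 = rng.normal(scale=0.1, size=(3, 8))  # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 1))  # hidden -> output weights
lr = 0.01

for step in range(1000):
    # Forward pass
    h = np.tanh(X @ W1)                  # hidden activations
    y_hat = h @ W2                       # network output
    err = y_hat - y                      # prediction error

    # Backward pass: push the error back through each layer (chain rule)
    grad_W2 = h.T @ err / len(X)
    grad_h = err @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h ** 2)) / len(X)

    # Each layer corrects itself using its share of the error
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```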

 

Back-propagation is where artificial neural networks parted ways with biological neurons, because back-propagation is implausible for biological neurons (as a paper by Yoshua Bengio et al. discusses), and in a sense it marked the beginning of what we now call deep learning.

 

A few years later, Kunihiko Fukushima introduced the Neocognitron, inspired by studies of how cells in the visual cortex perceive. It introduced what we today call convolutional neural networks (CNNs).

 

Lacking fertile ground for practical applications, and short on (labeled) training data and compute, deep learning (and AI more broadly) faded away for years. Deep learning was waiting for the modern GPU (and Google) to arrive.

 

Yann Le Cun & Al: a beam of light

 

Yann LeCun applied back-propagation and CNNs to recognizing postal codes on envelopes and routing mail, which became the first ray of dawn in the AI winter. This mattered: it worked, and it had a real application! Still, it would take about another 20 years before deep learning really went mainstream.

 


The Three Gs: Google, GANs and GPUs

 

In 2014, in a bar, Ian Goodfellow and some of his colleagues from the University of Montreal were having a heated debate. The question was whether it is possible to automatically generate realistic images, and how to teach a neural network to do it. Over a beer, Ian came up with a crazy idea: let two neural networks "fight" each other, with one network responsible for generating images and the other responsible for "training" the first.

 

His friends (vehemently, and somewhat angrily) rejected the idea. They believed it could never work, because it would be like training without any training material; you simply cannot make a neural network generate something out of "nothing".

 

(Picture this: a century earlier, some physicists may have had a similarly heated debate about radioactivity in a Paris bar, arguing that energy cannot be created out of "nothing".)

 

In a sense, we still do not know why generative adversarial networks (GANs), the modern name for Goodfellow's "fighting" neural networks, work at all. Why running two neural networks in parallel should, in theory, be a better idea than having one large network perform both tasks is not clear; it is still being debated, researched and contested.
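To make the "two networks fighting each other" idea concrete, here is a minimal sketch of an adversarial training loop in PyTorch. Everything in it (the toy 2-D "real" data, the tiny network sizes, the learning rates) is an illustrative assumption, not Goodfellow's original setup.

```python
import torch
import torch.nn as nn

# Toy setup: G maps noise to 2-D "samples", D scores how real a sample looks.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, 2) * 0.5 + 2.0   # stand-in for a dataset of real samples

for step in range(1000):
    real = real_data[torch.randint(0, 256, (64,))]
    fake = G(torch.randn(64, 8))

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator label its samples as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```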

 

GANs are one example of the machine learning techniques that have emerged in recent years; there are many others:

 

  • Learning with exploration and curiosity. A fundamental problem in machine learning is that AI systems tend not to explore many possibilities, and therefore fail to learn anything new. New techniques such as Random Network Distillation compensate for this by rewarding the network for exploring situations that are "unpredictable" (to another network). This is very powerful, and it works (though we are not sure why).

  • Deep Double Q-Learning (DDQN), in which a deep network tries to learn a strategy (for example, playing Atari Pong). The premise is to use two networks: each one independently assesses the value of a given move, and the two networks cross-check each other's results, because a single "brain" tends to be overly optimistic (or so my better half tells me). A rough sketch of this idea follows the list.

  • The YOLO (You Only Look Once) object detection algorithm, which detects objects in an image in an unusual way. Instead of first detecting bounding boxes and then trying to identify each object, the algorithm "only" does its best to classify objects over a fixed grid laid on the image. YOLO was the first general object recognition algorithm fast enough to run on video (over 40 frames per second).
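To illustrate the "two brains" idea behind Double DQN, here is a minimal, hypothetical sketch of the target computation (the function name and batch layout are assumptions for illustration): the online network selects the best next action, and the target network evaluates it.

```python
import torch

def double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Hypothetical helper: Double DQN targets for a batch of transitions.

    The online network *selects* the best next action; the target network
    *evaluates* it. Splitting the two roles between separate 'brains' reduces
    the over-optimism that appears when one network both picks and scores actions.
    """
    with torch.no_grad():
        best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_values = target_net(next_states).gather(1, best_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_values
```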

 

When you look back at the basic ideas behind neural networks (back-propagation, CNNs, GANs, RNNs, LSTMs, and so on), it is hard to resist the urge to draw an analogy with atoms. We study the concepts like chemists, try combining and recombining them in different ways, then upload them to our GPUs and wait to see whether they glow in the dark.

 

 

What will the future look like?

 

Let's imagine what AI will look like in 20 or 30 years. Maybe we will have built some form of general AI by then, maybe not. The truth is that the question is far too broad to be covered in a simple blog post.

 

But based on past scientific discoveries, let's try to make a few assumptions about what would be needed for truly significant progress in AI:

 

  • We need more theory. Right now, our mental map of AI resembles an atomic model that is missing the nucleus, the neutron and spin. Perhaps future research will establish a proper theory of learning, one that covers drivers such as curiosity and generalization and integrates them into a theory of "what can be learned".

  • We need scale, and engineering requires reusable components. It would have been impossible to build a nuclear industry without parts and tools that could be used widely and repeatedly, starting with something as simple as an electrometer. In deep learning, embeddings and reusable representations are becoming a trend, but they are nowhere near easy to use (notably, training, sharing and reusing them is still hard). Lacking the pipes and fittings that would let us plug neural networks back in and reuse them is a problem; with them, everything would be much easier. Imagine that at some point in the future, neural networks have learned to understand speech, the notion of what is "good", the emotional associations of colors, a common aesthetic, human preferences about body shape, color harmony, and so on. Now imagine a practical application in which you combine everything that has been learned to build a clothing shop assistant.

  • We need connections to real life. The challenging part of engineering radioactivity was making it work in a controlled way, so that people could get not just X-rays but practical, controlled power. For AI, the challenge is to describe the elements of real life in a corresponding formalism. AI operates mainly in the digital world rather than the real one, which is an obstacle for some real-life applications. When you build AI for a specific domain (such as cars), sensors, cameras and well-designed human-machine interfaces can bridge the gap between the real world and the virtual one. But complex systems remain subject to constraints such as manufacturing processes (whatever the process is, making hamburgers or making cars, the issue is the same): a process that cannot itself be rendered in digital or logical form simply cannot be handled by AI. New concepts are emerging, such as the "digital factory twin", a virtual twin of a factory that AI can run against in order to provide an optimized view of it. In the future, you can imagine most business processes (and the software that supports them) exposing some kind of "AI interface" that describes how the process or software works, so that AI can understand it and operate on it.

  • We need the hardware. Radioactivity was discovered in an era when great scientific advances could be made with an electrometer (which is, in fact, what Marie Curie's husband, Pierre Curie, worked on). Can AI be developed on current hardware (GPUs and TPUs), or will it take a new kind of hardware (quantum or related) for AI to advance?

 

Imagine that two or more of the "predictions" above come true. What will AI experts think of us 30 years from now? They might say: "They kept wiring neural networks together without understanding why they worked." Or: "Starting every project from scratch must have been really annoying!" Or: "Since they had no digital formalism for most things, it was as if they operated in a make-believe world and then tried, in some strange way, to push the results back into the real one." Or: "Damn, they had no real hardware. I don't understand how they got anything done with so little compute! And to think that the famous scientists of the day even built their own computers!" Or: "It's funny that back then they tried to patent different variants of learning, on paper no less. Can you imagine?"

 

So yes, when they look back at the deep learning research of the early 2000s, they may well feel both admiration and surprise. And perhaps Yann LeCun really is the Marie Curie of a new era.

Original link:

https://medium.com/ai-musings/is-yann-le-cun-the-new-marie-curie-52538f87237c

This article was translated by CSDN; please credit the source when reprinting.

【End】
