Artificial intelligence development and model customization trends

  The concepts of artificial intelligence and machine learning now come up on all kinds of occasions, and the era after the mobile Internet is widely predicted to be the era of artificial intelligence. So where did artificial intelligence come from, and what will it bring to our future? To answer this question, it helps to briefly review the history of its development.


  In fact, artificial intelligence can be traced back much further. In Turing's era, scientists already tried to handle complex tasks that only humans could complete by simulating human consciousness and thinking, and the Turing test was proposed to judge whether a machine possesses real "intelligence." With the invention of the computer, the problems of storing and processing information were solved, and artificial intelligence gained the possibility of being put into practice. At the Dartmouth Conference in 1956, the concept of artificial intelligence was formally put forward by McCarthy, Minsky and the other participants. Neural network models inspired by neuroscience, together with the maturing of suitable programming languages, then pushed the field toward a more practical direction of development.

  The essence of a neural network is the activation of and feedback between neurons, which is also the basis of human thought, and simulating the brain was for a long time the main idea behind artificial intelligence. Two years after Dartmouth, the computer scientist Rosenblatt proposed the perceptron, the simplest neural network: two layers of neurons used to classify data. The scientific community saw the first glimmer of artificial intelligence, and more people began to pay attention to and devote themselves to the field. Yet artificial intelligence did not become popular. In 1969 Minsky proved that a perceptron can only handle linearly separable classification problems and cannot even classify the simple XOR problem correctly. This result became a nightmare that scholars in the field could not get around, and artificial intelligence, by then one of the least popular subjects, stalled for some twenty years.
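
To make this limitation concrete, here is a minimal sketch of a single-layer perceptron (illustrative Python, not from the original article): trained with the classic perceptron rule, it learns the linearly separable AND function, while its predictions on XOR stay wrong no matter how long it trains.

```python
import numpy as np

# Minimal single-layer perceptron (illustrative sketch).
# A single layer can only realize a linear decision boundary,
# so it learns AND but can never represent XOR.

def train_perceptron(X, y, epochs=100, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = target - pred
            w += lr * err * xi   # classic perceptron update rule
            b += lr * err
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
for name, y in [("AND", np.array([0, 0, 0, 1])),   # linearly separable
                ("XOR", np.array([0, 1, 1, 0]))]:  # not linearly separable
    w, b = train_perceptron(X, y)
    preds = [1 if x @ w + b > 0 else 0 for x in X]
    print(name, "targets:", list(y), "predictions:", preds)
```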

  It was not until 1986 that Geoffrey Hinton and his colleagues popularized the backpropagation algorithm, which broke the deadlock. The method effectively overcomes the limitation to linear classification and is widely used to train multi-layer neural networks, setting off the wave of deep learning. To obtain higher accuracy, network structures were made ever deeper, but as the number of layers grows, the earlier layers gradually stop learning effectively: the vanishing-gradient problem in backpropagation can no longer be ignored, and many researchers turned back to shallow machine learning methods to solve practical problems. It was not until 2006 that Hinton proposed a way around vanishing gradients, restarting the deep learning boom. This time the boom swept from academia into industry, and more and more companies and institutions applied deep learning to speech recognition and image classification, the very fields where it first showed clear advantages over traditional shallow methods. After 2012, new network architectures and tuning methods greatly improved the performance of deep learning, but even with ever stronger algorithms and computing power, training still takes dozens of hours and massive amounts of data, which turned many people away.
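
To illustrate what backpropagation buys, the sketch below (a toy example under the same assumptions as above, not code from any cited work) adds a single hidden layer of sigmoid units and trains it with hand-written backpropagation; the XOR mapping that defeats a single perceptron is now learnable.

```python
import numpy as np

# Two-layer network trained with backpropagation (toy sketch).
# The hidden layer of nonlinear units lets the network represent XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # typically close to [[0], [1], [1], [0]]
```

The h * (1 - h) factors in the backward pass are the sigmoid derivatives; when many layers are stacked, such factors multiply together and shrink toward zero, which is exactly the vanishing-gradient problem described above.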

  To address migration to domains with little training data, transfer learning came into being. It carries over what a model has learned in a source domain to a target domain, reusing the learned parameters to greatly shorten training time, and it is regarded by many as the future of artificial intelligence algorithms.
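
The mechanics can be sketched with a public framework (Keras is used purely for illustration here; it is not the Huawei service or its API): reuse a network pretrained on a large source dataset, freeze its learned parameters, and train only a small new classification head on the target domain's few samples.

```python
import tensorflow as tf

# Transfer-learning sketch: a pretrained feature extractor is frozen
# and only a small task-specific head is trained on the new domain.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the source-domain parameters fixed

num_target_classes = 5  # hypothetical number of classes in the target domain
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_target_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(small_target_dataset, epochs=5)  # only the new head is updated
```

Because only the small head is optimized, a modest amount of labeled target-domain data is often enough to reach usable accuracy, which is the reduction in cost and data that the paragraph above refers to.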

  After long years of accumulation and development, the algorithms of the artificial intelligence field can now solve a wide range of complex problems. Using deep neural networks to solve the problems of each vertical domain at the lowest possible cost has become the key to the coming explosion of artificial intelligence. The profound changes that artificial intelligence brings to humanity are likely to arrive soon: just as the Internet+ model has changed every aspect of food, clothing, housing and transport, the AI+ model will sweep across the vertical segments of every industry in the same way. Connected vehicles, home appliances, healthcare, agriculture, manufacturing and other industries all need more precise models to help humans handle complex tasks.


  Based on this idea, Huawei Machine Learning Service has launched a custom model service. It uses transfer learning to help developers easily define their own models: only a small amount of domain data is needed to obtain a domain-specific model, which greatly lowers the threshold of deep learning. In the future, artificial intelligence will no longer be a tool for the few; it can be applied to all walks of life, bringing smarter and more personalized experiences to every aspect of human society.


Original link:
https://developer.huawei.com/consumer/cn/forum/topic/0204428045740460728?fid=18
Author: timer
