2020-1-23 Deep learning study notes 1 - Introduction

Chapter 1 Introduction

Official website of the book

Deep learning is an approach to machine learning that draws heavily on our knowledge of the human brain, statistics, and applied mathematics.

  • Using machine learning to discover the representation itself, and not just the mapping from representation to output, is known as representation learning. Learned representations often perform better than hand-designed representations.

  • Whether we design features by hand or design algorithms that learn features, our goal is usually to separate out the factors of variation that explain the observed data. For example, when analyzing a speech recording, the factors of variation include the speaker's age, gender, accent, and the words they are saying.

  • A major source of difficulty in many real-world AI applications is that many factors of variation influence every single piece of data we are able to observe. For example, in an image containing a red car, the individual pixels may be very close to black at night.

  • Deep learning lets the computer build complex concepts out of simpler concepts, which solves the central problem of representation learning.

  • The quintessential example of a deep learning model is the deep feedforward network, or multilayer perceptron (MLP). An MLP is just a mathematical function that maps a set of input values to output values, formed by composing many simpler functions; we can think of each application of a different mathematical function as providing a new representation of the input (a minimal sketch of this composition follows below).
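
To make the idea of function composition concrete, here is a minimal NumPy sketch of a two-layer MLP forward pass. The weights are random placeholders and the names (relu, f1, f2, mlp) are purely illustrative, not anything defined in the book:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical random weights, only to make the sketch self-contained.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # first layer: 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # second layer: 4 hidden -> 2 outputs

def f1(x):
    """First simple function: an affine transform followed by a nonlinearity."""
    return relu(x @ W1 + b1)

def f2(h):
    """Second simple function, applied to the representation produced by f1."""
    return h @ W2 + b2

def mlp(x):
    """The MLP is the composition f2(f1(x)); each layer yields a new representation."""
    return f2(f1(x))

x = np.array([0.5, -1.0, 2.0])
print(f1(x))   # intermediate representation of the input
print(mlp(x))  # final output values
```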

  • There are two main ways of measuring the depth of a model:

    • The depth of the computational graph: the number of sequential instructions that must be executed to evaluate the architecture, i.e. the length of the longest path through the flowchart of computations (see the sketch after this list).
    • The depth of the graph describing how concepts are related to each other, which is the view used by deep probabilistic models. For example, an AI system observing an image of a face with one eye in shadow may at first only see one eye; once it detects that a face is present, it can infer that a second eye is probably present as well. Here the graph of concepts has only two layers (a layer for eyes and a layer for faces).
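
A toy sketch of the first way of measuring depth: the length of the longest path through a computational graph. The graph below is a made-up example (input → affine → relu → affine → output), not one taken from the book:

```python
# A toy computational graph: each node maps to the nodes it feeds into.
graph = {
    "x":       ["affine1"],
    "affine1": ["relu1"],
    "relu1":   ["affine2"],
    "affine2": ["output"],
    "output":  [],
}

def depth(node):
    """Length (in edges) of the longest path starting at `node`."""
    if not graph[node]:
        return 0
    return 1 + max(depth(child) for child in graph[node])

# Depth of the whole model = longest path from the input node.
print(depth("x"))  # 4 sequential steps must be evaluated
```
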
  • Compared with traditional ML, the models studied in DL involve a greater number of compositions of learned functions or learned concepts.

  • DL is a technique that enables computer systems to improve with experience and data. It represents the world as a nested hierarchy of concepts: complex concepts are defined through their relations to simpler concepts, and more abstract representations are computed from less abstract ones.
    [Figure: Venn diagram of the relationships between the AI disciplines. DL is a kind of representation learning, which is in turn a kind of ML.]

[Figure: flowchart showing how the different parts of an AI system relate to one another within different AI disciplines. The shaded boxes indicate components that are able to learn from data.]

  • Deep learning has gone through three waves of development:
    • 1940s–1960s: cybernetics.

    • 1980s–1990s: connectionism.

      • Connectionism, or parallel distributed processing, emerged against the background of cognitive science. Cognitive science is an interdisciplinary approach to understanding the mind that combines several different levels of analysis.
      • The central idea of connectionism is that intelligent behavior can arise when a large number of simple computational units are networked together.
      • One of connectionism's main insights is that an animal becomes intelligent when many of its neurons work together; an individual neuron or a small collection of neurons is not particularly useful.
    • 2006 onward: the current wave of development, under the name deep learning.

A neural network called the deep belief network can be trained effectively using a strategy known as greedy layer-wise pretraining. The same strategy can be used to train many other kinds of deep networks and systematically improves generalization on test examples (a rough sketch of the idea follows).

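The sketch below only illustrates the greedy layer-wise idea; it uses simple tied-weight sigmoid autoencoders as stand-ins for the restricted Boltzmann machines actually used in deep belief networks, and all sizes, learning rates, and function names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder_layer(data, hidden_dim, lr=0.1, epochs=50):
    """Train one tied-weight sigmoid autoencoder layer with plain gradient
    descent; return its encoder parameters and the encoded data."""
    n, d = data.shape
    W = rng.normal(scale=0.1, size=(d, hidden_dim))
    b = np.zeros(hidden_dim)
    c = np.zeros(d)
    for _ in range(epochs):
        h = sigmoid(data @ W + b)        # encode
        recon = sigmoid(h @ W.T + c)     # decode (tied weights)
        err = recon - data               # reconstruction error
        # Backprop through decoder and encoder; tied weights give two terms.
        d_recon = err * recon * (1 - recon)
        d_h = (d_recon @ W) * h * (1 - h)
        grad_W = data.T @ d_h + (h.T @ d_recon).T
        W -= lr * grad_W / n
        b -= lr * d_h.sum(axis=0) / n
        c -= lr * d_recon.sum(axis=0) / n
    return W, b, sigmoid(data @ W + b)

def greedy_pretrain(data, layer_sizes):
    """Greedy layer-wise pretraining: train one layer at a time on the
    previous layer's codes, then stack the learned encoders."""
    weights, current = [], data
    for size in layer_sizes:
        W, b, current = train_autoencoder_layer(current, size)
        weights.append((W, b))
    return weights

if __name__ == "__main__":
    X = rng.random((200, 32))            # toy unlabeled data
    stack = greedy_pretrain(X, [16, 8])  # two pretrained layers
    print([W.shape for W, _ in stack])   # [(32, 16), (16, 8)]
```
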
  • As a rough rule of thumb, a supervised deep learning algorithm will generally achieve acceptable performance with around 5,000 labeled examples per category, and will match or exceed human performance when trained on a dataset containing at least 10 million labeled examples.
  • Since the introduction of hidden units, artificial neural networks have roughly doubled in size every 2.4 years.
    Deep learning has been successful in image object recognition, speech recognition, pedestrian detection, and image segmentation, and has surpassed human performance on traffic sign classification.
  • Recurrent neural networks model the relationships between sequences and other sequences, rather than just functions of a fixed-size input. Application: machine translation (a minimal recurrence sketch appears after the software list below).
  • The neural Turing machine was introduced; it can learn to read from and write arbitrary content to memory cells, and can learn simple programs from examples of the desired behavior, such as sorting. Application: self-programming.
  • Another expansion has been in the field of reinforcement learning, where an autonomous agent must learn to perform tasks by trial and error, without guidance from a human operator. Deep learning has significantly improved the performance of reinforcement learning for robotics. Example: learning to play games and competing against humans.
  • DL-related software infrastructure:
    • Theano
    • PyLearn2
    • Torch
    • Caffe
    • MXNet
    • TensorFlow
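
As referenced in the recurrent-network bullet above, here is a minimal NumPy sketch of a single recurrence step. The dimensions and weight values are arbitrary assumptions; a real machine-translation model would add an encoder/decoder structure and learned output layers on top of this recurrence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 5-dimensional inputs, 8-dimensional hidden state.
W_xh = rng.normal(scale=0.1, size=(5, 8))
W_hh = rng.normal(scale=0.1, size=(8, 8))
b_h  = np.zeros(8)

def rnn_step(h_prev, x_t):
    """One recurrence step: the new state depends on the previous state and
    the current input, so the network can relate whole sequences."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

sequence = rng.normal(size=(10, 5))   # a toy sequence of 10 time steps
h = np.zeros(8)
for x_t in sequence:
    h = rnn_step(h, x_t)
print(h.shape)  # (8,) -- a fixed-size summary of the variable-length sequence
```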