Deep Learning (1) --- Deep Neural Network Analysis

I have never taken the time to summarize what I have learned. This time I am determined to record things as I learn them, so that my knowledge becomes more systematic instead of staying at the stage of "I think I get it"; I should also be able to express it.

First, let's sort out the relationship between artificial intelligence, machine learning, and deep learning, using the following diagram:

[Figure: nested relationship of artificial intelligence ⊃ machine learning ⊃ deep learning]

Artificial intelligence is the largest and broadest of the three concepts. Machine learning refers to a program whose performance on a task improves with experience; in other words, the program can learn from experience. Deep learning is a deeper and narrower subset of machine learning: it is based on deep artificial neural networks, which automatically combine simple features into complex ones and use these combined features to solve the problem.

Artificial intelligence is commonly classified into three levels:

Weak artificial intelligence: matches human-level performance on specific tasks. Today's image recognition, speech recognition, and natural language processing belong to this stage.

General artificial intelligence: possesses human-level intelligence and can solve general problems.

Super artificial intelligence: surpasses human intelligence and can exceed ordinary humans in creativity. It is currently estimated that reaching this stage will still take a long time.

Types of machine learning:

Supervised learning: training on a labeled training set.

Unsupervised learning: automatically mining patterns from an unlabeled training set.

Reinforcement learning: learning through feedback, i.e., a reward-and-punishment mechanism, such as a machine learning to play a game. A short code sketch contrasting supervised and unsupervised learning follows this list.
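To make the contrast concrete, here is a minimal sketch in Python, assuming scikit-learn and NumPy are available (the original post uses no code or specific library); the toy data and model choices are purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]])  # features
y = np.array([0, 0, 1, 1])                                      # labels

# Supervised: learn from (feature, label) pairs.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.1, 0.1]]))   # -> [0]

# Unsupervised: discover structure in the features alone, no labels given.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                  # cluster ids, e.g. [0 0 1 1] (not class labels)

The supervised model needs the label vector y, while the clustering step sees only X; that difference is the whole distinction between the first two categories above.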

Having sorted out the relationship between artificial intelligence, machine learning, and deep learning, let's look at where deep learning is currently applied. There are three main fields: speech recognition, computer vision, and natural language processing, where speech recognition also relies on natural language processing. At present the most difficult of these is speech recognition. Beyond these three fields, deep learning is also used heavily in recommendation systems.

The main advantage of deep learning over traditional machine learning shows up as the amount of data grows: deep learning keeps improving with more data, whereas traditional machine learning improves only up to a point, after which its performance plateaus and no longer increases. This is shown in the figure.

Of course, as in basically any field, there is a standard set of tools. Deep learning has three basic network architectures, namely (a code sketch of all three follows the list below):

DNN: deep neural network. Recommendation systems mostly use DNN variants.

CNN: convolutional neural network. Image problems mostly use CNN variants, with applications such as computer vision (CV).

RNN: recurrent neural network. Sequence problems mostly use RNN variants, for example speech recognition.
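As a rough illustration of how these three architectures differ in the shape of data they consume, here is a small sketch using PyTorch; the framework choice, layer sizes, and toy input shapes are all assumptions, since the post itself names no library.

import torch
import torch.nn as nn

x_vec = torch.randn(8, 16)           # batch of 8 feature vectors (DNN input)
x_img = torch.randn(8, 3, 32, 32)    # batch of 8 RGB 32x32 images (CNN input)
x_seq = torch.randn(8, 20, 16)       # batch of 8 sequences of length 20 (RNN input)

dnn = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))   # fully connected layers
cnn = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)         # recurrent layer

print(dnn(x_vec).shape)              # torch.Size([8, 1])
print(cnn(x_img).shape)              # torch.Size([8, 8, 1, 1])
print(rnn(x_seq)[0].shape)           # torch.Size([8, 20, 32])

The point is only that a DNN works on flat feature vectors, a CNN on grid-shaped data such as images, and an RNN on ordered sequences.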

Now that we have mentioned neural networks, what does a neural network's structure actually look like? The artificial neural network (here I mean the artificial neural network people usually talk about; I spell it out to distinguish it from the biological neural network) was originally inspired by the biological neural network. We will not explain the structure of biological neural networks in detail here. Simply put, the smallest unit of a biological neural network is the neuron, which passes a stimulus signal from one end to the other, a process that goes from input (a chemical or electrical signal) to output (a chemical or electrical signal). A neural network forms when one neuron passes its stimulus signal on to another neuron; in other words, it is a network structure made up of many neurons.

The artificial neural network also has a smallest unit, the perceptron (in everyday usage we may also just say "neuron" without deliberately distinguishing the two). A perceptron (neuron) consists of input (a vector) ----> operation (a linear transformation plus a nonlinear transformation) ----> output (a scalar). Each neuron's operation therefore comprises a linear transformation (a weighted sum) and a nonlinear transformation (a nonlinear activation function). That is, each neuron can be seen as a small composite function, and the entire neural network can be seen as one large composite function.

Written as a formula: a = f(x_1, x_2, ..., x_n)

Splitting it into the linear transformation and the nonlinear transformation:

z = x_1 w_1 + x_2 w_2 + ... + x_n w_n + b     (linear transformation)

a = g(z)     (nonlinear transformation; g is what is usually called the activation function)

The simplest neural network is a single neuron, as shown in the figure below.

[Figure: a single-neuron network]
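A single neuron like the one in the figure can be written directly from the two formulas above. The following NumPy sketch uses a sigmoid as g, which is an assumed choice since the post does not fix a particular activation function, and the input values, weights, and bias are made up for illustration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    z = np.dot(w, x) + b      # linear transformation: weighted sum plus bias
    a = sigmoid(z)            # nonlinear transformation: activation function g
    return a

x = np.array([0.5, -1.2, 3.0])   # input vector
w = np.array([0.4, 0.1, -0.6])   # weights
b = 0.2                          # bias
print(neuron(x, w, b))           # scalar output a = g(z)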

That wraps up the basic concepts of neural networks in deep learning. The next article in this trilogy will explain how a deep neural network is trained.

Origin: blog.csdn.net/qq_27575895/article/details/90479412