Deep Learning for Beginners: From the Basics to Hands-On Practice

I am currently studying deep learning; if you would like this course, leave a comment and I will share it, free of charge.

Deep Learning with Neural Networks (CNN / RNN / GAN):
Algorithm Principles + Hands-On Practice

  • Chapter 1 Course Introduction

    An introductory guide to the course. It covers the scope of deep learning applications, the demand for talent, and the major algorithms, and introduces the course's lessons, syllabus, target audience, prerequisites, and the level students should reach on completion, so that students gain a basic understanding of the curriculum.

    •  1-1 Course guide (free preview)
  • Chapter 2 Getting Started with Neural Networks

    A practical introduction to the course. It opens with machine learning and deep learning, illustrating recent developments in deep learning through several example projects. Through lectures and hands-on practice it covers the basic building block of a neural network, the neuron, and its extension, the logistic regression model. All of the fundamentals are fully explained along the way, including neurons, activation functions, objective functions, gradient descent, the learning rate, TensorFlow basics, and the model's TensorFlow code.

    •  2-1 Introduction to machine learning and deep learning
    •  2-2 The neuron: the binary logistic regression model
    •  2-3 Neurons with multiple outputs
    •  2-4 Gradient descent
    •  2-5 Data processing and computation-graph construction (1)
    •  2-6 Data processing and computation-graph construction (2)
    •  2-7 Implementing a neuron (the binary logistic regression model)
    •  2-8 Implementing a neural network (the multi-class logistic regression model)
  • Chapter 3 Convolutional Neural Networks

    This chapter has two parts. The first part gives a complete account of neural networks, including their structure, forward propagation, backpropagation, and gradient descent. The second part fully explains the basic structure of a convolutional neural network, including convolution, pooling, and fully connected layers. Particular emphasis is placed on the details of the convolution operation, including the structure of the convolution kernel, the convolution computation, and counting a kernel's parameters, and a basic convolutional network architecture is presented.

    •  3-1 Neural networks in depth
    •  3-2 Convolutional neural networks (1) (free preview)
    •  3-3 Convolutional neural networks (2)
    •  3-4 Convolutional neural network hands-on practice
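The convolution computation and parameter counting described above can be sketched in a few lines of plain NumPy (an illustrative sketch, not the course's TensorFlow code; the layer sizes are made up for the example):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2D convolution (no padding, stride 1): slide the kernel over
    the image and take the elementwise product-sum at each position."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 5x5 image convolved with a 3x3 kernel gives a 3x3 output.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0            # simple averaging kernel
print(conv2d_valid(image, kernel).shape)  # (3, 3)

# Parameter count of a conv layer: kh * kw * in_channels * out_channels (+ biases)
kh, kw, c_in, c_out = 3, 3, 64, 128
print(kh * kw * c_in * c_out + c_out)     # 73856
```

Note how the parameter count depends only on the kernel and channel sizes, not on the image size; that weight sharing is what makes convolutional layers so much cheaper than fully connected ones.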
  • Chapter 4 Advanced Convolutional Neural Networks

    This chapter explains advanced convolutional network architectures, including AlexNet, VGGNet, ResNet, InceptionNet, MobileNet, and so on, as well as how they evolved. For each architecture, the course explains one by one the problem it solves, its core idea, and the important techniques and substructures used in the model. After completing this chapter, students will be able to flexibly build convolutional neural networks of different kinds.

    •  4-1 Advanced convolutional networks (AlexNet)
    •  4-2 Advanced convolutional networks (VGGNet, ResNet)
    •  4-3 Advanced convolutional networks (InceptionNet, MobileNet)
    •  4-4 VGG and ResNet hands-on practice (1)
    •  4-5 VGG and ResNet hands-on practice (2)
    •  4-6 Inception and MobileNet hands-on practice (1)
    •  4-7 Inception and MobileNet hands-on practice (2)
  • Chapter 5 Tuning Convolutional Neural Networks

    This chapter gives a systematic review and summary of commonly used techniques for tuning convolutional networks ("alchemy"), and explains the principles behind several important ones. The tuning techniques include gradient descent variants, the learning rate, activation functions, parameter initialization, batch normalization, data augmentation, analyzing the training process through visualization, fine-tuning, and more; many of them also apply to other networks. After completing this chapter, students can call themselves "alchemists".

    •  5-1 AdaGrad and Adam
    •  5-2 Activation function tuning techniques (1)
    •  5-3 Activation function tuning techniques (2)
    •  5-4 TensorBoard hands-on practice (1)
    •  5-5 TensorBoard hands-on practice (2)
    •  5-6 Fine-tuning hands-on practice
    •  5-7 Activation, initializer, and optimizer hands-on practice
    •  5-8 Image augmentation with the API
    •  5-9 Image augmentation hands-on practice
    •  5-10 Batch normalization hands-on practice (1)
    •  5-11 Batch normalization hands-on practice (2)
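Batch normalization, one of the tuning techniques listed above, can be sketched as a minimal NumPy version of the training-time forward pass (real layers also keep running statistics for inference; the data here is random):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization: standardize each feature over the batch axis,
    then rescale and shift with the learned parameters gamma and beta."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 4) * 10 + 5       # batch of 32 samples, 4 features
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6))            # ~0 for every feature
print(y.std(axis=0).round(2))             # ~1 for every feature
```

Keeping each layer's inputs at roughly zero mean and unit variance is what stabilizes training and allows larger learning rates.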
  • Chapter 6 Image Style Transfer

    This chapter is an applied convolutional-network project: implementing an image style transfer algorithm with a pre-trained VGG model. The material covers feature extraction with a convolutional network, defining content features and style features, and reconstructing a picture from features. Beyond the basic style transfer algorithm, the course also introduces two further improved versions of it.

    •  6-1 Applications of convolutional neural networks
    •  6-2 The capacity of convolutional neural networks
    •  6-3 Image style transfer, algorithm V1
    •  6-4 The VGG16 pre-trained model format
    •  6-5 Wrapping a function that loads the VGG16 pre-trained model
    •  6-6 Wrapping VGG16 model loading and graph construction in a class
    •  6-7 Style transfer: defining the inputs and calling VGG-Net
    •  6-8 Building the style-transfer computation graph and the loss function
    •  6-9 Implementing the style-transfer training loop
    •  6-10 Style transfer results showcase
    •  6-11 Image style transfer, algorithm V2
    •  6-12 Image style transfer, algorithm V3
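The style features used by such algorithms are commonly computed as Gram matrices of feature maps. A minimal sketch, with random arrays standing in for real VGG feature maps:

```python
import numpy as np

def gram_matrix(features):
    """Style representation used in style transfer: for a feature map of
    shape (H, W, C), the Gram matrix is the C x C matrix of channel
    correlations, discarding spatial layout and keeping texture statistics."""
    H, W, C = features.shape
    F = features.reshape(H * W, C)
    return F.T @ F / (H * W)

content = np.random.rand(7, 7, 16)   # stand-in for a VGG feature map
style = np.random.rand(7, 7, 16)
# Style loss: squared difference between the two Gram matrices.
style_loss = np.sum((gram_matrix(content) - gram_matrix(style)) ** 2)
print(gram_matrix(style).shape)      # (16, 16)
```

Because the Gram matrix throws away spatial positions, matching it transfers texture ("style") while a separate content loss on raw feature maps preserves layout.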
  • Chapter 7 Recurrent Neural Networks

    This chapter explains recurrent neural networks: the sequence problems they solve, their basic structure, and variants such as multilayer, bidirectional, and residual structures, as well as truncated backpropagation of gradients. It focuses on a detailed treatment of the most common variant, the long short-term memory (LSTM) network. It also explains and compares recurrent and convolutional models for text classification, including TextRNN, TextCNN, and HAN (the hierarchical attention network, which introduces the attention mechanism).

    •  7-1 Sequence-style problems
    •  7-2 Recurrent neural networks (free preview)
    •  7-3 Long short-term memory (LSTM) networks
    •  7-4 LSTM-based text classification models (TextRNN and HAN)
    •  7-5 The CNN-based text classification model (TextCNN)
    •  7-6 Text classification combining RNN and CNN
    •  7-7 Data preprocessing: word segmentation
    •  7-8 Data preprocessing: building the vocabulary and category tables
    •  7-9 Walkthrough of the hands-on code modules
    •  7-10 Defining the hyperparameters
    •  7-11 Wrapping the vocabulary and the category table
    •  7-12 Wrapping the dataset
    •  7-13 Computation graph: defining the inputs
    •  7-14 Computation graph: the implementation
    •  7-15 Metric computation and the gradient operators
    •  7-16 Implementing the training loop
    •  7-17 Implementing the internal structure of the LSTM cell
    •  7-18 Implementing TextCNN
    •  7-19 Recurrent neural networks: summary
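The internals of an LSTM cell (lesson 7-17) can be sketched in NumPy. The gate layout below is one common convention, and the weights are random stand-ins, not trained values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One step of an LSTM cell: forget, input, and output gates plus a
    candidate state, all computed from the concatenated [h, x] vector."""
    z = np.concatenate([h, x]) @ W + b       # all four gates in one matmul
    H = h.size
    f = sigmoid(z[0:H])                      # forget gate
    i = sigmoid(z[H:2 * H])                  # input gate
    o = sigmoid(z[2 * H:3 * H])              # output gate
    g = np.tanh(z[3 * H:4 * H])              # candidate cell state
    c_new = f * c + i * g                    # update the cell state
    h_new = o * np.tanh(c_new)               # new hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
x_dim, h_dim = 8, 4
W = rng.standard_normal((h_dim + x_dim, 4 * h_dim)) * 0.1
b = np.zeros(4 * h_dim)
h = c = np.zeros(h_dim)
for t in range(5):                           # run over a length-5 sequence
    h, c = lstm_step(rng.standard_normal(x_dim), h, c, W, b)
print(h.shape)  # (4,)
```

The additive update `c_new = f * c + i * g` is the key design choice: it gives gradients a nearly linear path through time, which is why LSTMs suffer far less from vanishing gradients than plain RNNs.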
  • Chapter 8 Image Captioning: Generating Text from Images

    This chapter combines convolutional and recurrent neural networks. Several model variants are explained, including the Multi-Modal RNN, Show and Tell, Show Attend and Tell, and more. The final lesson describes the inverse problem, generating images from text, which leads into generative adversarial networks. After completing these chapters, students should have a very deep understanding of how convolutional and recurrent neural networks are applied.

    •  8-1 Introduction to the image captioning problem (free preview)
    •  8-2 Evaluation metrics for image captioning
    •  8-3 The Encoder-Decoder framework and beam search for text generation
    •  8-4 The Multi-Modal RNN model
    •  8-5 The Show and Tell model
    •  8-6 The Show, Attend and Tell model
    •  8-7 The Bottom-Up Top-Down attention model
    •  8-8 Image captioning models: comparison and summary
    •  8-9 Data walkthrough and vocabulary generation
    •  8-10 Image feature extraction (1): parsing the caption files
    •  8-11 Image feature extraction (2): extracting features with the pre-trained InceptionV3 model
    •  8-12 Defining the input and output files and the default parameters
    •  8-13 Loading the vocabulary
    •  8-14 Converting captions to ID representations
    •  8-15 Wrapping the ImageCaptionData class: reading image features
    •  8-16 Wrapping the ImageCaptionData class: generating batches
    •  8-17 Building the computation graph: helper functions
    •  8-18 Building the computation graph: image and word embeddings
    •  8-19 Building the computation graph: the RNN, the training operator, and the loss function
    •  8-20 The training loop code
    •  8-21 Lesson summary: from image captioning to generating images from text
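The beam search idea from lesson 8-3 can be sketched with a toy model standing in for a trained decoder (the probabilities and tokens here are made up for illustration):

```python
import math

def beam_search(step_fn, start, beam_width=2, max_len=4):
    """Toy beam search: keep the beam_width best partial sequences by
    log-probability, extending each with every candidate token per step."""
    beams = [([start], 0.0)]                     # (sequence, log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for token, p in step_fn(seq):        # p = P(token | seq)
                candidates.append((seq + [token], score + math.log(p)))
        candidates.sort(key=lambda sc: sc[1], reverse=True)
        beams = candidates[:beam_width]          # prune to the best few
    return beams[0][0]

# A stand-in "language model": after any prefix, 'a' has prob 0.6, 'b' 0.4.
fake_model = lambda seq: [("a", 0.6), ("b", 0.4)]
print(beam_search(fake_model, "<s>"))  # ['<s>', 'a', 'a', 'a', 'a']
```

Greedy decoding is beam search with `beam_width=1`; widening the beam lets the decoder recover when the locally best word leads to a poor caption overall.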
  • Chapter 9 Generative Adversarial Networks

    This chapter explains one of the latest developments in deep learning: generative adversarial networks (GANs). It covers the adversarial idea and two concrete adversarial networks, the deep convolutional generative adversarial network (DCGAN) and the image translation model (Pix2Pix). The concepts involved include the generator G, the discriminator D, deconvolution, U-Net, and more.

    •  9-1 Principles of generative adversarial networks
    •  9-2 The deep convolutional GAN, DCGAN (1)
    •  9-3 Deconvolution (transposed convolution)
    •  9-4 The deep convolutional GAN, DCGAN (2)
    •  9-5 Image translation: Pix2Pix
    •  9-6 Unpaired image translation: CycleGAN (1)
    •  9-7 Unpaired image translation: CycleGAN (2)
    •  9-8 Multi-domain image translation: StarGAN
    •  9-9 Generating images from text: Text2Img
    •  9-10 Generative adversarial networks: summary
    •  9-11 Introduction to the DCGAN hands-on project (free preview)
    •  9-12 Implementing the data generator
    •  9-13 Implementing the DCGAN generator
    •  9-14 Implementing the DCGAN discriminator
    •  9-15 Building the DCGAN computation graph and implementing the loss functions
    •  9-16 Implementing the DCGAN training operator
    •  9-17 Implementing the training loop and showing the results
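The opposing objectives of G and D can be sketched numerically. This uses the standard non-saturating GAN losses; the probabilities below are made-up stand-ins for discriminator outputs:

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """Standard (non-saturating) GAN losses, where d_* is the
    discriminator's probability that a sample is real. D wants
    d_real -> 1 and d_fake -> 0; G wants d_fake -> 1."""
    d_loss = -np.log(d_real) - np.log(1.0 - d_fake)
    g_loss = -np.log(d_fake)
    return d_loss, g_loss

# A confident discriminator: low D loss, high G loss.
print(gan_losses(d_real=0.9, d_fake=0.1))
# At the theoretical equilibrium, D outputs 0.5 everywhere.
print(gan_losses(d_real=0.5, d_fake=0.5))
```

Training alternates: one step lowering `d_loss` with respect to D's weights, one step lowering `g_loss` with respect to G's weights, until neither side can improve.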
  • Chapter 10 Automatic Machine Learning (AutoML)

    This chapter explains another of the latest developments in deep learning: automatic machine learning. AutoML uses a recurrent neural network to automatically search for the network architectures that would otherwise have to be tuned by hand, achieving better results than a human "alchemist". The course explains three recent AutoML algorithms, each a progressive refinement of the last, which automatically search out the convolutional architectures that currently perform best in image classification.

    •  10-1 Introduction to AutoML
    •  10-2 Neural architecture search, algorithm 1
    •  10-3 Distributed training for architecture search algorithm 1
    •  10-4 Neural architecture search, algorithm 2
    •  10-5 Neural architecture search, algorithm 3
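The core idea, sampling candidate configurations, scoring each, and keeping the best, can be sketched with plain random search (real AutoML trains an RNN controller with reinforcement learning; the scoring function below is a made-up stand-in for training and validating a network):

```python
import random

random.seed(0)

def evaluate(config):
    """Stand-in for 'train this network and report validation accuracy';
    here just a fake score that prefers moderate depth and width."""
    depth, width = config
    return -abs(depth - 8) - abs(width - 64) / 16

# Sample 100 random (depth, width) architectures and keep the best scorer.
best = max(((random.randint(2, 20), random.choice([16, 32, 64, 128]))
            for _ in range(100)), key=evaluate)
print(best)
```

The expensive part in practice is that every call to `evaluate` means training a full network, which is why the search strategy (controller, weight sharing, progressive search) matters so much.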
  • Chapter 11 Course Summary

    An overall review of the course.

    •  11-1 Course summary

 

Deep learning for beginners:

From the figure we can see that artificial intelligence contains machine learning, and machine learning in turn contains deep learning: a nested, superset-and-subset relationship.

 

A collection of deep learning algorithms:

Convolutional neural networks: image generation, style transfer. (Mainly used in the field of computer vision.)

Recurrent neural networks: used to process variable-length data (such as text classification). Applied mainly in NLP, among other areas.

Other algorithms:

The last algorithm in the figure is an especially popular one these days.

AlphaGo used convolutional neural networks (CNNs).

Deep learning basics: neural networks

 

The neuron is the smallest structural unit of a neural network; combining multiple neurons forms a neural network.

With some configuration, a neuron becomes a logistic regression model.


Here x is a vector of extracted features.

As shown above, the neuron has multiple inputs and a single output.

Each input x1, x2, x3 is weighted, finally giving the value W*x. The b in the formula is the 1*b term (the intercept: where the decision line or decision surface crosses the coordinate axes); the 1 can be treated as one more input in x, and b as an extension of the parameters W.

The process a neuron follows from input to output: weight each input, sum with the bias, then apply the activation function.

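The neuron's input-to-output computation can be sketched in NumPy (the input and weight values are illustrative only):

```python
import numpy as np

def neuron(x, w, b):
    """A single neuron: weighted sum plus bias, then a sigmoid activation.
    This is exactly a binary logistic regression model."""
    z = np.dot(w, x) + b          # W*x + b
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 2.0, 3.0])     # inputs x1, x2, x3
w = np.array([0.5, -0.2, 0.1])    # one weight per input
b = -0.1                          # the bias (the 1*b term above)
print(neuron(x, w, b))            # a single probability between 0 and 1
```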

The multiple-output case is equivalent to taking the inner product of x with a weight matrix W.


Here the label at the end is converted into the one-hot vector [0, 0, 0, 1, 0].

 

The second of the two loss functions is better suited for multi-class classification.
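A multi-class loss is typically softmax followed by cross-entropy against the one-hot label; a minimal sketch with made-up logits:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # subtract the max for numerical stability
    return e / e.sum()

def cross_entropy(probs, one_hot):
    """Multi-class loss: negative log-probability of the true class."""
    return -np.sum(one_hot * np.log(probs))

logits = np.array([1.0, 2.0, 0.5, 3.0, 0.0])
target = np.array([0, 0, 0, 1, 0])          # the one-hot label from above
probs = softmax(logits)
print(probs.round(3), cross_entropy(probs, target))
```

The loss is small when the probability assigned to the true class is near 1, and grows without bound as that probability approaches 0, which is what gives cross-entropy its strong training signal.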

 

In the figure above, a denotes the step size (the learning rate).
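The effect of the step size can be seen on the simplest possible objective, f(w) = w^2 (an illustrative sketch, not course code):

```python
# Gradient descent on f(w) = w^2: repeat w <- w - a * f'(w), step size a.
def grad_descent(a, steps=50, w=5.0):
    for _ in range(steps):
        w = w - a * 2 * w      # f'(w) = 2w
    return w

print(grad_descent(a=0.1))     # converges toward the minimum at w = 0
print(grad_descent(a=1.1))     # step size too large: the iterates diverge
```

Each update multiplies w by (1 - 2a), so convergence requires |1 - 2a| < 1; too small a step is merely slow, while too large a step overshoots further each iteration.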


With imperative programming you write all the variables yourself, and you also have to write the derivative functions yourself.

 

Now we move on to the coding stage:

Implementing the neuron (the logistic regression model): to be continued in future updates!


Origin blog.csdn.net/qq_41479464/article/details/93604234