Summary of common algorithm ideas in machine learning and deep learning (unfinished)

This article summarizes the ideas behind common machine learning and deep learning algorithms; for each one, I also try to link to a well-explained blog post found online.

Machine Learning Mathematical Fundamentals

Linear Regression Algorithm

Logistic Regression Algorithm

Softmax Algorithm

Maximum Likelihood Estimation

Naive Bayesian Classification Algorithm

Support Vector Machine Algorithm

Idea: Find the separating hyperplane that correctly partitions the training dataset and has the largest geometric margin.
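
The margin maximization above is usually solved as a quadratic program (the SMO algorithm below does exactly that). As a rough sketch only, the same objective can be written as a regularized hinge loss and minimized by stochastic subgradient descent (Pegasos-style); the function names, step-size schedule, and toy data here are illustrative, not from the source:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent on the hinge loss.
    y must be in {-1, +1}; the bias is folded in via an appended 1."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # fold bias into the weights
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)               # decaying step size
            if y[i] * (w @ Xb[i]) < 1:          # inside the margin: hinge subgradient
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:                               # outside: only the regularizer acts
                w = (1 - eta * lam) * w
    return w

def svm_predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)
```

On linearly separable data this converges to a separator close to the max-margin one; the textbook solution would instead solve the dual QP exactly.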

SMO Algorithm

ID3 Algorithm

Idea: Starting from the root node, compute the information gain of every candidate feature for that node and select the feature with the largest information gain as the node's splitting feature. Create child nodes according to the different values of that feature, then recursively apply the same method to each child node to build the decision tree. Stop when every remaining feature's information gain is below a threshold or no features are left to select.
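
The steps above can be sketched in a few stdlib-only functions, assuming categorical features given as column indices into tuples (all names here are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, feature):
    """Reduction in entropy from splitting on column `feature`."""
    n = len(rows)
    cond = 0.0
    for v in set(r[feature] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[feature] == v]
        cond += len(idx) / n * entropy([labels[i] for i in idx])
    return entropy(labels) - cond

def id3(rows, labels, features, min_gain=1e-9):
    """Return a leaf label, or (feature, {value: subtree}) for internal nodes."""
    if len(set(labels)) == 1:                       # pure node
        return labels[0]
    if not features:                                # no features left: majority vote
        return Counter(labels).most_common(1)[0][0]
    gains = {f: info_gain(rows, labels, f) for f in features}
    best = max(gains, key=gains.get)
    if gains[best] < min_gain:                      # too little gain: stop
        return Counter(labels).most_common(1)[0][0]
    branches = {}
    for v in set(r[best] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        branches[v] = id3([rows[i] for i in idx], [labels[i] for i in idx],
                          [f for f in features if f != best])
    return (best, branches)
```

C4.5 differs mainly in using the information gain *ratio* instead of the raw gain at the selection step.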

CART Regression Tree Algorithm

Idea: Traverse all input features and their candidate split points, and use the minimum squared error criterion to find the optimal feature j and optimal split point s. The pair (j, s) divides the input space into two regions; then repeat the same process on each region.
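
A minimal sketch of the (j, s) search above; the recursion over the two resulting regions is omitted, and the function name is illustrative:

```python
def best_split(X, y):
    """Search every feature j and threshold s for the split minimizing
    the summed squared error of the two resulting regions."""
    def sse(vals):                      # squared error around the region mean
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = (None, None, float('inf'))   # (j, s, error)
    for j in range(len(X[0])):
        for s in sorted(set(row[j] for row in X)):
            left = [y[i] for i, row in enumerate(X) if row[j] <= s]
            right = [y[i] for i, row in enumerate(X) if row[j] > s]
            err = sse(left) + sse(right)
            if err < best[2]:
                best = (j, s, err)
    return best
```

Each region would then predict the mean of its y values, and `best_split` would be applied recursively until a stopping condition (depth, region size) is met.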

AdaBoost Algorithm

Idea: Learn one basic classifier per iteration. At each iteration, increase the weights of the data misclassified by the previous classifier and decrease the weights of the correctly classified data. Finally, AdaBoost takes a linear combination of the basic classifiers as the strong classifier, giving a large weight to basic classifiers with a small classification error rate and a small weight to those with a large error rate.
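
The reweighting loop can be sketched as follows, using decision stumps as the basic classifiers (an assumption for illustration; AdaBoost works with any weak learner, and all names here are mine):

```python
import math

def stump_predict(x, j, thresh, sign):
    """Threshold classifier on feature j, output in {-1, +1}."""
    return sign if x[j] <= thresh else -sign

def fit_stump(X, y, w):
    """Pick the stump minimizing the *weighted* error rate."""
    best = None
    for j in range(len(X[0])):
        for thresh in set(row[j] for row in X):
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict(xi, j, thresh, sign) != yi)
                if best is None or err < best[0]:
                    best = (err, j, thresh, sign)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n                       # uniform initial weights
    ensemble = []
    for _ in range(rounds):
        err, j, thresh, sign = fit_stump(X, y, w)
        err = max(err, 1e-12)
        if err >= 0.5:                      # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)   # small error -> large alpha
        ensemble.append((alpha, j, thresh, sign))
        # raise weights on mistakes, lower them on correct points, renormalize
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, j, thresh, sign))
             for xi, yi, wi in zip(X, y, w)]
        Z = sum(w)
        w = [wi / Z for wi in w]
    return ensemble

def ada_predict(ensemble, x):
    s = sum(a * stump_predict(x, j, t, sg) for a, j, t, sg in ensemble)
    return 1 if s >= 0 else -1
```

The final `ada_predict` is exactly the weighted vote described above: each stump contributes its vote scaled by its alpha.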

K-Means Algorithm

Principal Component Analysis Algorithm

Idea: Map n-dimensional features to k dimensions (k < n); these k dimensions are new, mutually orthogonal features, the principal components. They are k reconstructed features, not simply the result of discarding n − k of the original n features.
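
A minimal sketch via the eigendecomposition of the covariance matrix, assuming samples are rows and features are columns (the function name is illustrative):

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)              # center each feature
    cov = np.cov(Xc, rowvar=False)       # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)     # eigh: covariance is symmetric
    order = np.argsort(vals)[::-1][:k]   # indices of the k largest eigenvalues
    components = vecs[:, order]          # the new orthogonal axes
    return Xc @ components, components
```

For data that actually lies in a k-dimensional subspace, projecting and mapping back (`Z @ components.T`) reconstructs the centered data exactly; in general it is the best rank-k approximation in the squared-error sense.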

Gradient Boosting Tree Algorithm (GBDT)

Idea: Each iteration reduces the residual left by the previous round: a regression tree is fitted in the gradient direction that decreases the residual. The negative gradient of the loss function at the current model is used as an approximation of the residual.
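
A minimal sketch under strong simplifying assumptions: squared loss (so the negative gradient is literally the residual), one-split regression trees as base learners, and scalar inputs; all function names are illustrative:

```python
def fit_stump(X, y):
    """Best single-split regression tree on scalar inputs."""
    best = None
    for s in sorted(set(X)):
        left = [yi for xi, yi in zip(X, y) if xi <= s]
        right = [yi for xi, yi in zip(X, y) if xi > s]
        lm = sum(left) / len(left)
        rm = sum(right) / len(right) if right else 0.0
        err = (sum((yi - lm) ** 2 for yi in left)
               + sum((yi - rm) ** 2 for yi in right))
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    return best[1], best[2], best[3]

def fit_gbdt(X, y, rounds=20, lr=0.5):
    f0 = sum(y) / len(y)                       # initial constant model
    pred = [f0] * len(X)
    trees = []
    for _ in range(rounds):
        # for squared loss the negative gradient is exactly the residual
        resid = [yi - pi for yi, pi in zip(y, pred)]
        s, lm, rm = fit_stump(X, resid)        # fit a tree to the residuals
        trees.append((s, lr * lm, lr * rm))    # shrinkage folded into the leaves
        pred = [pi + (lr * lm if xi <= s else lr * rm)
                for xi, pi in zip(X, pred)]
    return f0, trees

def gbdt_predict(model, x):
    f0, trees = model
    return f0 + sum(lm if x <= s else rm for s, lm, rm in trees)
```

For other losses (absolute error, Huber, log loss) the trees are fitted to the negative gradient instead of the raw residual, which is the general GBDT recipe.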

Backpropagation Algorithm (BP)

Convolutional Neural Network

Recurrent Neural Network

R-CNN

Fast R-CNN

Faster R-CNN

Transfer Learning

Idea: Transfer knowledge learned in a source domain to a similar target domain.


References:

1. Li Hang, "Statistical Learning Methods"

