Key Algorithms Review (Part 1) - A Summary of Machine Learning

Machine Learning Overview

1. Categories of Machine Learning

1.1 Supervised Learning

Supervised learning is the process of using a set of samples with known classes to adjust a classifier's parameters until it reaches the required performance; it is also called supervised training or learning with a teacher. It requires a training set that contains both inputs and outputs (labels), and is mainly used for classification and prediction.
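A minimal sketch of this workflow, assuming scikit-learn is available (the toy data and the choice of logistic regression are illustrative, not from the original post): the classifier's parameters are fit on samples with known classes, then used to predict classes for new inputs.

```python
from sklearn.linear_model import LogisticRegression

# Labeled training set: every input comes with a known class.
X_train = [[0.0, 1.0], [1.0, 1.5], [3.0, 4.0], [4.0, 3.5]]
y_train = [0, 0, 1, 1]

clf = LogisticRegression()
clf.fit(X_train, y_train)                       # adjust parameters on the known classes
print(clf.predict([[0.5, 1.2], [3.5, 3.8]]))    # predict classes for new inputs
```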

1.2 Unsupervised Learning

Unsupervised learning discovers structure hidden in an unlabeled data set: it extracts the structural characteristics of the sample data and determines which samples are similar to each other.
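A minimal sketch under the same assumption (scikit-learn, made-up data): k-means receives no labels and groups samples purely by similarity.

```python
from sklearn.cluster import KMeans

# Unlabeled data: the algorithm has to discover the grouping on its own.
X = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # cluster assignment per sample: similar samples share a label
```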

1.3 Semi-Supervised Learning

Semi-supervised learning combines supervised and unsupervised learning: the training phase uses both unlabeled and labeled data, so the model not only learns the structural relationships between attributes but also produces a classification model that can make predictions.
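A minimal sketch, assuming scikit-learn: LabelPropagation trains on labeled and unlabeled samples together (its convention of marking unlabeled samples with -1 is the library's, not from the original post).

```python
from sklearn.semi_supervised import LabelPropagation

# Two labeled samples plus four unlabeled ones (label -1).
X = [[0.0, 0.0], [0.3, 0.2], [0.1, 0.4], [5.0, 5.0], [5.2, 4.8], [4.9, 5.3]]
y = [0, -1, -1, 1, -1, -1]

model = LabelPropagation().fit(X, y)
print(model.transduction_)   # labels inferred for all samples, including the unlabeled ones
```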

1.4 Reinforcement Learning

Reinforcement Learning (RL), also known as evaluative or reward-based learning, is one of the paradigms and methodologies of machine learning. It describes and solves the problem of an agent learning a policy through interaction with an environment in order to maximize cumulative reward or achieve a particular goal.
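A minimal sketch of the idea: tabular Q-learning on a made-up five-state corridor where the agent moves left or right and is rewarded only for reaching the rightmost state (the environment and hyperparameters are illustrative assumptions).

```python
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))   # value of each action in each state
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for _ in range(200):                  # episodes of interaction with the environment
    s = 0
    while s != n_states - 1:
        # epsilon-greedy policy: mostly exploit the current Q, sometimes explore
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # update toward the reward plus the discounted best future value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1)[:-1])   # learned policy: non-terminal states should prefer "right"
```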

2. Machine Learning Algorithms

2.1 Linear Algorithms

Linear Regression, Lasso Regression, Ridge Regression, Logistic Regression
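All of these expose the same fit/predict interface in scikit-learn; a minimal sketch of the three regressors on made-up data (the regularization strengths are arbitrary):

```python
from sklearn.linear_model import Lasso, LinearRegression, Ridge

X = [[1.0], [2.0], [3.0], [4.0]]
y = [2.1, 4.0, 6.2, 7.9]              # roughly y = 2x

for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)):
    model.fit(X, y)
    print(type(model).__name__, model.coef_, model.intercept_)
```

Logistic Regression follows the same pattern but predicts class labels rather than continuous values.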

2.2 Decision Trees

ID3, C4.5, CART
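scikit-learn's DecisionTreeClassifier implements an optimized CART-style tree; a minimal sketch on made-up data:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]                      # label 1 only when both features are 1

tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree))              # the splits the tree actually learned
```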

2.3 Support Vector Machine (SVM)
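A minimal sketch, assuming scikit-learn (the toy data and RBF kernel are illustrative):

```python
from sklearn.svm import SVC

X = [[0.0, 0.0], [0.5, 0.5], [3.0, 3.0], [3.5, 3.2]]
y = [0, 0, 1, 1]

svm = SVC(kernel="rbf", C=1.0).fit(X, y)
print(svm.predict([[0.2, 0.3], [3.2, 3.1]]))
print(svm.support_vectors_)           # the samples that define the decision boundary
```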

2.4 Naive Bayes Algorithms

Naive Bayes, Gaussian Naive Bayes, Multinomial Naive Bayes, Bayesian Belief Networks (BBN), Bayesian Networks (BN)
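A minimal sketch of the two most common variants, assuming scikit-learn (the data is made up): Gaussian NB for continuous features, Multinomial NB for count features.

```python
from sklearn.naive_bayes import GaussianNB, MultinomialNB

y = [0, 0, 1, 1]

# Gaussian Naive Bayes: continuous features modeled with per-class normal distributions.
X = [[1.0, 2.0], [1.2, 1.9], [5.0, 6.0], [5.1, 5.8]]
print(GaussianNB().fit(X, y).predict([[1.1, 2.1]]))

# Multinomial Naive Bayes: count features, e.g. word counts in a document.
X_counts = [[3, 0, 1], [2, 0, 0], [0, 4, 2], [0, 3, 3]]
print(MultinomialNB().fit(X_counts, y).predict([[2, 0, 1]]))
```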

2.5 k-Nearest Neighbors (kNN)
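A minimal from-scratch sketch with NumPy (toy data, k = 3): the predicted class is the majority class among the k training points closest to the query.

```python
import numpy as np

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
y_train = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)    # distance to every training point
    nearest = y_train[np.argsort(dists)[:k]]       # labels of the k closest points
    return np.bincount(nearest).argmax()           # majority vote

print(knn_predict(np.array([0.15, 0.1])), knn_predict(np.array([5.05, 5.0])))
```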

2.6 Clustering Algorithms

k-Means, k-Medians, Expectation Maximization (EM), Hierarchical Clustering
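A minimal sketch of two of these, assuming scikit-learn: agglomerative (hierarchical) clustering and a Gaussian mixture fit by EM, both on the same made-up data.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.mixture import GaussianMixture

X = [[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.8]]

print(AgglomerativeClustering(n_clusters=2).fit_predict(X))            # hierarchical clustering
print(GaussianMixture(n_components=2, random_state=0).fit_predict(X))  # mixture fit by EM
```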

2.7 Random Forest
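A minimal sketch, assuming scikit-learn (made-up data): a random forest is an ensemble of decision trees trained on bootstrap samples with random feature subsets, voting on the final class.

```python
from sklearn.ensemble import RandomForestClassifier

X = [[1.0, 2.0], [1.5, 1.8], [1.2, 2.2], [5.0, 8.0], [6.0, 9.0], [5.5, 8.5]]
y = [0, 0, 0, 1, 1, 1]

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict([[1.3, 2.1], [5.8, 8.8]]))
```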

2.8 Dimensionality Reduction Algorithms
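A minimal sketch of one such method, PCA, assuming scikit-learn (the 3-D toy data is illustrative): project the data onto the directions of greatest variance.

```python
from sklearn.decomposition import PCA

X = [[2.0, 0.1, 1.0], [4.0, 0.2, 2.1], [6.0, 0.1, 2.9], [8.0, 0.3, 4.2]]

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                   # (4, 2): same samples, fewer dimensions
print(pca.explained_variance_ratio_)     # variance captured by each component
```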

2.9 Gradient Boosting Algorithms

GBM, XGBoost, LightGBM, CatBoost
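A minimal sketch with scikit-learn's GradientBoostingClassifier (toy data, arbitrary hyperparameters): trees are added sequentially, each one correcting the errors of the ensemble so far. XGBoost, LightGBM, and CatBoost offer broadly similar fit/predict interfaces.

```python
from sklearn.ensemble import GradientBoostingClassifier

X = [[1.0], [2.0], [3.0], [6.0], [7.0], [8.0]]
y = [0, 0, 0, 1, 1, 1]

gbm = GradientBoostingClassifier(n_estimators=50, learning_rate=0.1).fit(X, y)
print(gbm.predict([[2.5], [7.5]]))
```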

2.10 Deep Learning Algorithms

Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory Networks (LSTM), Stacked Auto-Encoders, Deep Boltzmann Machines (DBM), Deep Belief Networks (DBN)
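None of the architectures above is reproduced here; as a minimal stand-in, assuming scikit-learn, a small fully connected network shows the basic train/predict cycle shared by neural models (the data and layer size are made up).

```python
from sklearn.neural_network import MLPClassifier

X = [[0.0, 0.2], [0.3, 0.1], [0.2, 0.4], [5.0, 5.1], [5.3, 4.9], [4.8, 5.2]]
y = [0, 0, 0, 1, 1, 1]

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, y)
print(mlp.predict([[0.1, 0.3], [5.1, 5.0]]))
```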

3. Machine Learning Loss Functions

3.1 0-1 Loss Function


The 0-1 loss penalizes a prediction whenever it differs from the true label:

L(Y, f(X)) = 1 if Y ≠ f(X), otherwise 0

The condition can also be relaxed so that the prediction is treated as correct as long as |Y − f(X)| < T for some threshold T:

L(Y, f(X)) = 1 if |Y − f(X)| ≥ T, otherwise 0

3.2 Absolute Value Loss Function

L(Y, f(X)) = |Y − f(X)|

3.3 Squared Loss Function

L(Y, f(X)) = (Y − f(X))²

3.4 Logarithmic Loss Function

L(Y, P(Y|X)) = −log P(Y|X)
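To make the definitions above concrete, a minimal NumPy sketch of the four losses, each averaged over a batch of samples (the names y_true, y_pred, and the threshold T are illustrative):

```python
import numpy as np

def zero_one_loss(y_true, y_pred, T=None):
    # If T is given, a prediction within T of the target counts as correct.
    if T is None:
        return np.mean(y_true != y_pred)
    return np.mean(np.abs(y_true - y_pred) >= T)

def absolute_loss(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def squared_loss(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def log_loss(p_true_class):
    # p_true_class: probability the model assigned to the correct class of each sample.
    return np.mean(-np.log(p_true_class))

y_true, y_pred = np.array([1.0, 0.0, 1.0]), np.array([0.9, 0.2, 0.4])
print(squared_loss(y_true, y_pred), absolute_loss(y_true, y_pred))
```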

4. Machine Learning Optimization Methods

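As a hedged illustration, assuming gradient descent as one representative optimization method (the choice is an assumption, not taken from the original post), the sketch below minimizes the squared loss from Section 3.3 with plain batch gradient descent.

```python
import numpy as np

# Batch gradient descent for linear regression under the squared loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

w = np.zeros(3)
lr = 0.1                                     # learning rate (step size)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)    # gradient of the mean squared loss
    w -= lr * grad                           # step opposite the gradient

print(w)   # should approach the true coefficients [2.0, -1.0, 0.5]
```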

Source: blog.csdn.net/Moby97/article/details/103898430