Machine Learning Technologies (October 20)

Linear regression
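A minimal illustrative sketch, assuming NumPy and synthetic data (true slope 3, intercept 2): ordinary least squares fits the line that minimizes the squared error and can be solved in closed form.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))               # one feature
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 1, 100)     # y = 3x + 2 + noise

# Closed-form least squares: append a bias column, then solve min ||Xb w - y||^2.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print("slope, intercept:", w)                        # approximately [3.0, 2.0]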

 

SVM (Support Vector Machines)

Advantages

·Effective in high dimensional spaces.

·Still effective in cases where number of dimensions is greater than the number of samples.

·Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.

·Versatile: different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels (see the sketch after this list).

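A minimal sketch of the kernel point above, assuming scikit-learn's SVC and a synthetic dataset; both a built-in kernel and a custom kernel callable are shown.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Built-in kernels are selected by name.
clf_rbf = SVC(kernel="rbf").fit(X, y)

# A custom kernel is any callable that returns the Gram matrix.
def linear_kernel(A, B):
    return A @ B.T

clf_custom = SVC(kernel=linear_kernel).fit(X, y)

# Only the support vectors are stored and used in the decision function.
print("number of support vectors:", clf_rbf.support_vectors_.shape[0])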

Disadvantages

·If the number of features is much greater than the number of samples, the method is likely to give poor performance.

·SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation (see the sketch after this list).

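To illustrate the probability point above, a small sketch assuming scikit-learn's SVC: probability=True enables predict_proba at the cost of the extra cross-validated calibration, so fitting becomes noticeably slower.

from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

# probability=True triggers the internal cross-validated calibration,
# which makes fitting noticeably slower than a plain SVC.
clf = SVC(kernel="rbf", probability=True).fit(X, y)
print(clf.predict_proba(X[:3]))   # per-class probability estimates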

 

Linearly separable

Linearly non-separable (soft-margin maximization)
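An illustrative sketch of the two cases, assuming scikit-learn and a synthetic two-ring dataset: a linear boundary fails on data that is not linearly separable, while an RBF kernel with a soft margin handles it.

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles are not linearly separable in the input space.
X, y = make_circles(n_samples=300, noise=0.05, factor=0.4, random_state=0)

linear_svm = SVC(kernel="linear", C=1.0).fit(X, y)   # soft margin, linear boundary
rbf_svm = SVC(kernel="rbf", C=1.0).fit(X, y)         # kernel trick

print("linear training accuracy:", linear_svm.score(X, y))   # about 0.5
print("rbf training accuracy:", rbf_svm.score(X, y))         # close to 1.0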

SVM consumes a lot of memory during data analysis, but it is fast enough.

SVM performs well on small datasets but poorly in big-data applications. It has very good generalization ability.

 

Logistic regression algorithm

Objective loss function
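The objective is the negative log-likelihood, i.e. the log loss (cross-entropy). A minimal sketch, assuming scikit-learn's LogisticRegression and synthetic data, computing the loss both by hand and with log_loss:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression().fit(X, y)

# Log loss: L = -(1/N) * sum_i [ y_i*log(p_i) + (1 - y_i)*log(1 - p_i) ]
p = clf.predict_proba(X)[:, 1]
manual = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(manual, log_loss(y, p))   # the two values agree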

 

Kernel methods (KMs)
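An illustrative sketch, assuming NumPy only: kernel methods work with a kernel (Gram) matrix instead of explicit feature maps, for example the RBF kernel k(x, z) = exp(-gamma * ||x - z||^2).

import numpy as np

def rbf_kernel_matrix(X, Z, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||X[i] - Z[j]||^2)
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Z**2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-gamma * sq_dists)

X = np.random.default_rng(0).normal(size=(5, 3))
K = rbf_kernel_matrix(X, X)
print(K.shape)   # (5, 5); the diagonal entries are all 1.0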
