SVM and LR (Logistic Regression)

How SVMs work

https://www.youtube.com/watch?v=1NxnPkZM9bc

Both linear SVM and logistic regression (LR) are linear classifiers.

Note: SVM can be either a linear or a nonlinear classifier, depending on the kernel function used.
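The kernel is what decides this: a linear kernel is just a dot product, while something like the RBF kernel corresponds to an implicit (infinite-dimensional) feature map. A minimal sketch of the two kernel functions (function names and the toy vectors are illustrative, not from any particular library):

```python
import numpy as np

def linear_kernel(x, z):
    # Linear kernel: plain dot product -> linear decision boundary
    return x @ z

def rbf_kernel(x, z, gamma=1.0):
    # RBF (Gaussian) kernel: implicit infinite-dimensional feature map
    # -> nonlinear decision boundary in the original space
    return np.exp(-gamma * np.sum((x - z) ** 2))

x = np.array([1.0, 2.0])
z = np.array([2.0, 0.0])
print(linear_kernel(x, z))   # 2.0
print(rbf_kernel(x, z))      # exp(-5.0) ~ 0.0067
```

Swapping the kernel changes the geometry of the decision boundary without changing the SVM optimization itself.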

Why does the SVM algorithm generally not overfit?

Although the kernel trick maps x (which may not be linearly separable in the original low-dimensional space) into a very high-dimensional, even infinite-dimensional, feature space, as long as the classifier keeps a large margin the effective complexity of the model stays low: maximizing the margin bounds the VC dimension, so SVM automatically picks the simplest separator consistent with the data.
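The quantity being maximized is the geometric margin: the distance from the hyperplane w·x + b = 0 to the closest training point. A small numpy sketch on toy data (the separator w, b is assumed, not learned):

```python
import numpy as np

# Toy linearly separable points with labels +1 / -1
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.array([1.0, 1.0])  # assumed separating direction
b = 0.0

# Geometric margin of each point: y_i * (w . x_i + b) / ||w||.
# The SVM objective maximizes the minimum of these distances.
geo_margins = y * (X @ w + b) / np.linalg.norm(w)
print(geo_margins.min())  # distance of the closest points (the support vectors)
```

The points attaining that minimum are exactly the support vectors; everything farther away could move without changing the solution.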

Why does the LR algorithm tend to underfit?

(Personal understanding; I have not found an authoritative source for this.) Logistic regression assigns samples with score greater than 0 to one class and samples with score less than 0 to the other. Samples whose scores fall near 0 are easily misclassified during training, which produces a relatively large bias, i.e., underfitting.
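To see why the region around 0 is fragile: LR pushes the score through a sigmoid, and scores near 0 map to probabilities near 0.5, so a tiny perturbation flips the predicted label. A quick illustration:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps a raw score to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Scores far from 0 give confident probabilities; scores near 0 hover
# around 0.5, so borderline samples flip class under small noise.
for z in [-4.0, -0.1, 0.1, 4.0]:
    print(z, sigmoid(z))
```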

SVM is well suited to high-dimensional, sparse data with few samples. [The model parameters depend only on the support vectors, which are few in number, so fewer samples are needed; and since the number of parameters is independent of the dimension, it can handle high-dimensional problems.]
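The dual form makes this concrete: the decision function f(x) = Σ αᵢ yᵢ K(xᵢ, x) + b sums only over points with αᵢ > 0, i.e., the support vectors. A sketch with hand-assumed multipliers (the α values here are illustrative, not the output of actually solving the dual problem):

```python
import numpy as np

X = np.array([[1.0, 1.0], [2.0, 3.0], [-1.0, -1.0], [-3.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = np.array([0.25, 0.0, 0.25, 0.0])  # non-support vectors have alpha = 0
b = 0.0

def decision(x):
    # Only support vectors (alpha > 0) enter the sum, so model size is
    # governed by their count, not by the input dimension.
    sv = alpha > 0
    return np.sum(alpha[sv] * y[sv] * (X[sv] @ x)) + b

print(decision(np.array([2.0, 2.0])))  # positive score -> class +1
```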

Multi-class classification for LR: softmax (multinomial logistic regression).
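Softmax generalizes the sigmoid to K classes: each class gets its own LR-style score, and softmax normalizes the scores into a probability distribution. A minimal sketch (the score values are made up for illustration):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; outputs sum to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Raw class scores, e.g. z_k = w_k . x + b_k for K = 3 classes
scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs, probs.argmax())  # the predicted class is the argmax
```

With K = 2, softmax reduces to the ordinary sigmoid on the score difference.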




