Lecture 2 - Linear Classification

Linear classifiers are the basic building blocks of neural networks.

Image captioning can be tackled with a Lego-like combination of Convolutional Neural Networks (which handle the image) and Recurrent Neural Networks (which handle the text).

k-nearest neighbors has no parameters to learn, whereas linear classification is one of the simplest parametric models.
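The parametric score function can be sketched in a few lines. This is a minimal NumPy sketch assuming CIFAR-10-like shapes (10 classes, 32x32x3 images flattened to 3072 entries); the random weights stand in for trained parameters.

```python
import numpy as np

def linear_scores(x, W, b):
    """Linear classifier: one score per class, f(x) = Wx + b."""
    return W @ x + b

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 3072)) * 0.01  # one row (template) per class
b = np.zeros(10)                            # one bias per class
x = rng.standard_normal(3072)               # a flattened 32x32x3 image

scores = linear_scores(x, W, b)
print(scores.shape)  # (10,) -- one score per class
```

The predicted class is simply `scores.argmax()`.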

In multi-class problems, each entry of the bias b in the last layer reflects the classifier's prior willingness to assign inputs to the corresponding class. Each row of the weight matrix acts as a template for one class, learned roughly as an average over that class's training examples. A linear model can learn only one template per class; deep networks are not limited in this way, and therefore perform better.

The inner product of a class template with a data vector measures their similarity, and hence how likely the data is to be an object of that class.
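This template-matching view can be seen in a toy example. The templates and input below are hypothetical 4-dimensional vectors, chosen so that the input points in nearly the same direction as the first template.

```python
import numpy as np

# Two toy class "templates" (rows of W) in a 4-dimensional feature space.
cat_template = np.array([1.0, 0.0, 1.0, 0.0])
dog_template = np.array([0.0, 1.0, 0.0, 1.0])
W = np.stack([cat_template, dog_template])

# An input that mostly resembles the cat template.
x = np.array([0.9, 0.1, 0.8, 0.2])

scores = W @ x          # inner product of each template with x
print(scores)           # [1.7 0.3] -- higher score for the matching template
print(scores.argmax())  # 0 -> class "cat"
```

The class whose template is most aligned with the input receives the highest score.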


Each row of the weight matrix (a template) has the same number of entries as the image data, so it can be reshaped back into a picture; the learned templates for the CIFAR-10 classes can be visualized this way.
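Restoring a template row to an image is a reshape plus a rescale. A sketch, assuming CIFAR-10 shapes (32x32x3 = 3072 entries per row); the random row stands in for a trained template, and values are rescaled to [0, 255] for display.

```python
import numpy as np

row = np.random.default_rng(1).standard_normal(3072)  # stand-in for a trained template row

# Reshape the flat vector back into image dimensions.
img = row.reshape(32, 32, 3)

# Rescale to the displayable [0, 255] range.
img = (img - img.min()) / (img.max() - img.min()) * 255.0
img = img.astype(np.uint8)

print(img.shape, img.dtype)  # (32, 32, 3) uint8
```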


The derivations for k-nearest neighbors and linear classifiers are relatively simple, but they deserve careful review and practice in the homework.
