CS231N assignment1 SVM


The linear classifier SVM consists of two parts:
1. a score function that maps the raw data to class scores, i.e. the function f(x, W)
2. a loss function that quantifies the agreement between the predicted scores and the ground truth labels
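The score function is just a linear map. Written out as in the course notes:

$$ f(x_i, W) = W x_i $$

In the assignment code the examples are stored as row vectors, so the scores are computed as `X.dot(W)`.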

margin:

The SVM loss function wants the score of the correct class yi to be larger than the incorrect class scores by at least Δ (delta). If this is not the case, we accumulate loss.
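Formally, with $s = f(x_i, W)$ the score vector for example $i$, the multiclass SVM loss is

$$ L_i = \sum_{j \neq y_i} \max(0,\; s_j - s_{y_i} + \Delta) $$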

example

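The image here presumably showed the worked example from the course notes: suppose $s = [13, -7, 11]$, the correct class is the first one ($y_i = 0$), and $\Delta = 10$. Then

$$ L_i = \max(0, -7 - 13 + 10) + \max(0, 11 - 13 + 10) = 0 + 8 = 8 $$

The first term is zero because class 1 is already scored more than $\Delta$ below the correct class; only class 2 violates the margin and contributes.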

1. loss function

In cs231n/classifiers/linear_svm.py:

svm_loss_naive

The SVM wants the score of the correct class to exceed the scores of the incorrect classes by at least a fixed margin Δ.
How the SVM loss is computed:

  Inputs:
  - W: A numpy array of shape (D, C) containing weights.
  - X: A numpy array of shape (N, D) containing a minibatch of data.
  - y: A numpy array of shape (N,) containing training labels; y[i] = c means
    that X[i] has label c, where 0 <= c < C.
  - reg: (float) regularization strength

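The per-example gradient that the loop below accumulates is the standard result from the course notes: with $\mathbb{1}(\cdot)$ the indicator function,

$$ \nabla_{w_j} L_i = \mathbb{1}\big(s_j - s_{y_i} + \Delta > 0\big)\, x_i \qquad (j \neq y_i) $$

$$ \nabla_{w_{y_i}} L_i = -\Big( \sum_{j \neq y_i} \mathbb{1}\big(s_j - s_{y_i} + \Delta > 0\big) \Big)\, x_i $$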

  num_classes = W.shape[1]
  num_train = X.shape[0]
  loss = 0.0
  dW = np.zeros(W.shape)  # gradient accumulator, same shape as W
  for i in range(num_train):
    scores = X[i].dot(W)  # class scores for example i, shape (C,)
    correct_class_score = scores[y[i]]
    for j in range(num_classes):
      if j == y[i]:
        continue
      margin = scores[j] - correct_class_score + 1  # note delta = 1
      if margin > 0:
        loss += margin
        dW[:, j] += X[i]      # incorrect class j violated the margin
        dW[:, y[i]] -= X[i]   # correct class loses one x_i per violation
  # average over the batch and add regularization
  loss /= num_train
  dW /= num_train
  loss += 0.5 * reg * np.sum(W * W)
  dW += reg * W
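A quick way to sanity-check shapes once the function is filled in (the toy sizes here are made up for illustration; the signature matches the Inputs docstring above):

  import numpy as np
  from cs231n.classifiers.linear_svm import svm_loss_naive

  W = np.random.randn(4, 3) * 0.01     # D = 4 features, C = 3 classes
  X = np.random.randn(5, 4)            # N = 5 examples
  y = np.random.randint(0, 3, size=5)  # labels in [0, C)
  loss, dW = svm_loss_naive(W, X, y, reg=1e-3)
  print(loss, dW.shape)                # scalar loss and a (4, 3) gradient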

svm_loss_vectorized

loss

  num_train = X.shape[0]
  scores = X.dot(W)                            # (N, C) class scores
  yi_scores = scores[np.arange(num_train), y]  # (N,) correct-class scores
  margins = np.maximum(0, scores - yi_scores[:, np.newaxis] + 1)  # delta = 1
  margins[np.arange(num_train), y] = 0         # correct class contributes no loss
  loss = np.mean(np.sum(margins, axis=1))
  loss += 0.5 * reg * np.sum(W * W)
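The gradient can be vectorized with the same margins matrix, following the binarized-coefficient trick from the mlxai reference below (a sketch; `margins` and `num_train` come from the snippet above):

  coeff = (margins > 0).astype(W.dtype)  # 1 where a class violated the margin
  # each violation also subtracts one x_i from the correct class column:
  coeff[np.arange(num_train), y] = -np.sum(coeff, axis=1)
  dW = X.T.dot(coeff)  # (D, C): one matrix multiply replaces the double loop
  dW /= num_train
  dW += reg * W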

#### References
cs231n linear classifier SVM notes: https://cs231n.github.io/linear-classify/
Vectorized implementation of SVM loss and gradient: https://mlxai.github.io/2017/01/06/vectorized-implementation-of-svm-loss-and-gradient-update.html
