Softmax Loss Function and Gradient Derivation

Softmax and SVM classifiers are very similar and are often compared. The SVM loss applies a hinge function, max(0, ·), to the scores s = Wx, whereas softmax gives the scores a probabilistic interpretation via the softmax function and then computes the loss with cross entropy.
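As a quick illustration of the contrast (a minimal sketch with made-up scores, assuming the common multiclass hinge loss with margin 1; the softmax probabilities here are the ones defined in equation (1) below):

import numpy as np

scores = np.array([3.2, 5.1, -1.7])   # s = Wx for 3 classes (made-up values)
y = 0                                  # index of the true class

# SVM: hinge on the score differences, summed over the wrong classes.
margins = np.maximum(0, scores - scores[y] + 1.0)
margins[y] = 0
svm_loss = margins.sum()

# Softmax: turn the scores into probabilities, then take cross entropy.
p = np.exp(scores) / np.sum(np.exp(scores))
softmax_loss = -np.log(p[y])

print(svm_loss, softmax_loss)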

The softmax function maps scores to probabilities: p_i=\frac{e^{f_{i}}}{\sum_{k}e^{f_k}} \quad (1), where f_i=W_ix and the subscript i denotes the i-th class.

For a single sample X_i, the loss function is L_i = -\sum_{j}y_j\log(p_j) \quad (2).
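Since y is one-hot (exactly one y_j equals 1), equation (2) collapses to a single term. Writing c for the index of the true class:

L_i = -\log(p_c) = -\log\frac{e^{f_c}}{\sum_{k}e^{f_k}} = -f_c + \log\sum_{k}e^{f_k}

This last form is the one used in the code further below.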

(Notes:

1. To keep the notation simple, all formulas below assume a single sample, so L_i is written simply as L.

2. The formulas omit the numerical-stability shift; in practice the scores must be shifted to keep the exponentials from overflowing.)
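A small sketch of that shift (the numbers are arbitrary): subtracting the maximum score leaves the probabilities unchanged, but every shifted score becomes non-positive, so the exponentials stay in (0, 1] and cannot overflow.

import numpy as np

f = np.array([1000.0, 1001.0, 999.0])         # np.exp(f) alone would overflow to inf
shift_f = f - np.max(f)                        # all entries now <= 0
p = np.exp(shift_f) / np.sum(np.exp(shift_f))
print(p)                                       # well-defined probabilities summing to 1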

We need the derivative (gradient) of the loss function with respect to W, which comes down to applying the chain rule.

Starting from the innermost term: \frac{\partial p_i}{\partial f_j}=\frac{\partial \frac{e^{f_{i}}}{\sum_{k}e^{f_k}}}{\partial f_j} \quad (3). Let g_i=e^{f_i} and h_i=\sum_{k}e^{f_k}, and recall the quotient rule \frac{\mathrm{d} \frac{g(x)}{h(x)} }{\mathrm{d} x}=\frac{{g}'(x)h(x)-{h}'(x)g(x)}{h^2(x)} \quad (4).

We also have \frac{\partial g_i}{\partial f_j}= \begin{cases} e^{f_i} & \text{ if } i=j \\ 0 & \text{ if } i\neq j \end{cases} \quad (5) \qquad \text{and} \qquad \frac{\partial h_i}{\partial f_j}=e^{f_j} \text{ for all } j \quad (6)

Then, using (4), (5), and (6), equation (3) can be written as (note: below, \sum is used as shorthand for h):

\frac{\partial p_i}{\partial f_j}= \begin{cases} \frac{e^{f_i}\sum-e^{f_j}e^{f_i}}{\sum ^2}=\frac{e^{f_i}}{\sum} \frac{\sum-e^{f_j}}{\sum}= p_i(1-p_j) & \text{ if } i=j \\ \frac{0-e^{f_j}e^{f_i}}{\sum^2} = - \frac{e^{f_j}}{\sum} \frac{e^{f_i}}{\sum}=-p_jp_i & \text{ if } i\neq j \end{cases}
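The two cases can be written together as \frac{\partial p_i}{\partial f_j}=p_i(\delta_{ij}-p_j), i.e. the Jacobian J=\mathrm{diag}(p)-pp^T. A small sketch that checks this against central finite differences (the scores are arbitrary):

import numpy as np

f = np.array([2.0, -1.0, 0.5])
p = np.exp(f) / np.sum(np.exp(f))

# Analytic Jacobian: J[i, j] = dp_i / df_j = p_i * (delta_ij - p_j).
J_analytic = np.diag(p) - np.outer(p, p)

# Numerical Jacobian by central differences.
eps = 1e-6
J_numeric = np.zeros((3, 3))
for j in range(3):
    f_plus, f_minus = f.copy(), f.copy()
    f_plus[j] += eps
    f_minus[j] -= eps
    p_plus = np.exp(f_plus) / np.sum(np.exp(f_plus))
    p_minus = np.exp(f_minus) / np.sum(np.exp(f_minus))
    J_numeric[:, j] = (p_plus - p_minus) / (2 * eps)

print(np.max(np.abs(J_analytic - J_numeric)))  # should be around 1e-10 or smaller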

By the chain rule (note that \sum_ky_k=1; y is a one-hot vector with exactly one entry equal to 1 and the rest 0, where y_i=1 when i is the true class):

\frac{\partial L}{\partial f_i}=\sum_k\frac{\partial L}{\partial p_k}\frac{\partial p_k}{\partial f_i}=-\sum_k y_k \frac{1}{p_k}\frac{\partial p_k}{\partial f_i}\\ = -y_i\frac{1}{p_i}p_i(1-p_i) -\sum_{k\neq i}y_k \frac{1}{p_k}(-p_kp_i)\\ = -y_i(1-p_i)+\sum_{k\neq i}y_kp_i\\ =-y_i+y_ip_i+\sum_{k\neq i}y_kp_i\\ =p_i\left(\sum_ky_k\right)-y_i=p_i-y_i
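A quick numerical check of \frac{\partial L}{\partial f}=p-y (a sketch with an arbitrary score vector and a one-hot label):

import numpy as np

f = np.array([2.0, -1.0, 0.5])
y = np.array([0.0, 1.0, 0.0])          # one-hot label, true class = 1

def loss(f):
    p = np.exp(f - np.max(f)) / np.sum(np.exp(f - np.max(f)))
    return -np.sum(y * np.log(p))

p = np.exp(f - np.max(f)) / np.sum(np.exp(f - np.max(f)))
grad_analytic = p - y

# Central finite differences over each component of f.
eps = 1e-6
grad_numeric = np.array([
    (loss(f + eps * np.eye(3)[i]) - loss(f - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])
print(np.max(np.abs(grad_analytic - grad_numeric)))  # should be tiny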

For the final step, note that f_i=W_ix, where i again denotes the i-th class.

Therefore: \frac{\partial L}{\partial W_i}=\frac{\partial L}{\partial f_i} \frac{\partial f_i}{\partial W_i}=(p_i-y_i)x (we assumed a single sample x above; in practice there are n samples, so the input is a matrix X rather than a single vector).

The formulas above can be expressed in code as follows (this is the per-sample loop from inside a loss function, so variables such as scores, loss, dW, reg, and W are assumed to be defined already):

  for ii in range(num_train):
    current_scores = scores[ii, :]

    # Shift the scores for numerical stability: after subtracting the max,
    # every entry is <= 0, so the exponentials stay in (0, 1].
    shift_scores = current_scores - np.max(current_scores)

    # Loss for this example: L = -f_y + log(sum_k exp(f_k)), which is
    # equation (2) with a one-hot y.
    loss_ii = -shift_scores[y[ii]] + np.log(np.sum(np.exp(shift_scores)))
    loss += loss_ii

    for jj in range(num_classes):
      softmax_score = np.exp(shift_scores[jj]) / np.sum(np.exp(shift_scores))

      # Gradient: dL/dW_j = (p_j - y_j) * x, so the true class gets
      # (p_j - 1) * X[ii] and every other class gets p_j * X[ii].
      if jj == y[ii]:
        dW[:, jj] += (-1 + softmax_score) * X[ii]
      else:
        dW[:, jj] += softmax_score * X[ii]

  # Average over the batch and add the regularization term.
  loss /= num_train
  loss += reg * np.sum(W * W)

  # Average over the batch and add the derivative of the regularization term.
  dW /= num_train
  dW += 2 * reg * W
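The loops above can also be replaced by matrix operations. Below is a vectorized sketch of the same computation (the function name is just for illustration), assuming X has shape (N, D), W has shape (D, C), and y holds integer class labels; it uses the batch form \frac{\partial L}{\partial W}=\frac{1}{N}X^T(P-Y) of the per-class result derived above, plus the regularization gradient.

import numpy as np

def softmax_loss_vectorized_sketch(W, X, y, reg):
  num_train = X.shape[0]

  scores = X.dot(W)                                               # (N, C)
  shift_scores = scores - np.max(scores, axis=1, keepdims=True)   # numerical stability
  exp_scores = np.exp(shift_scores)
  probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)  # p for every sample

  # Cross-entropy loss averaged over the batch, plus L2 regularization.
  loss = -np.sum(np.log(probs[np.arange(num_train), y])) / num_train
  loss += reg * np.sum(W * W)

  # dL/df = p - y (one-hot), so dW = X^T (P - Y), averaged over the batch.
  dscores = probs.copy()
  dscores[np.arange(num_train), y] -= 1
  dW = X.T.dot(dscores) / num_train
  dW += 2 * reg * W

  return loss, dW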

Reposted from blog.csdn.net/normol/article/details/84322626