UFLDL Tutorial - Softmax Regression

Notes from a beginner; if anything here is wrong, corrections are welcome.
The course website is http://ufldl.stanford.edu/tutorial/
This section took me a long time. It is actually very similar to logistic regression, except that here there are multiple classes, e.g. the ten digits 0-9. The code for this section was fiddly and I spent a long time debugging it; what I ended up with is half loops, half vectorized computation. I've seen many people online use the provided hint functions to make the code very concise, but since I'm not fluent in MATLAB that approach didn't occur to me... I just need to keep practicing. Enough talk; the code and results are below (I've also added a sketch of a fully vectorized version after my code).
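For reference, the objective and gradient the code has to produce (with the parameters of the last class fixed at zero, so theta has num_classes-1 free columns) are:

J(\theta) = -\sum_{i=1}^{m} \sum_{k=1}^{K} 1\{y^{(i)} = k\} \log \frac{e^{\theta_k^\top x^{(i)}}}{\sum_{j=1}^{K} e^{\theta_j^\top x^{(i)}}}

\nabla_{\theta_k} J(\theta) = -\sum_{i=1}^{m} x^{(i)} \left( 1\{y^{(i)} = k\} - P(y^{(i)} = k \mid x^{(i)}; \theta) \right)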

function [f, g] = softmax_regression(theta, X, y)
  %
  % Arguments:
  %   theta - A vector containing the parameter values to optimize.
  %       In minFunc, theta is reshaped to a long vector.  So we need to
  %       resize it to an n-by-(num_classes-1) matrix.
  %       Recall that we assume theta(:,num_classes) = 0.
  %
  %   X - The examples stored in a matrix.  
  %       X(i,j) is the i'th coordinate of the j'th example.
  %   y - The label for each example.  y(j) is the j'th example's label.
  %
  m=size(X,2);
  n=size(X,1);

  % theta is a vector;  need to reshape to n x (num_classes-1).
  theta=reshape(theta, n, []);
  num_classes=size(theta,2)+1;

  % initialize objective value and gradient.
  f = 0;
  g = zeros(size(theta));
  %
  % TODO:  Compute the softmax objective function and gradient using vectorized code.
  %        Store the objective function value in 'f', and the gradient in 'g'.
  %        Before returning g, make sure you form it back into a vector with g=g(:);
  %
%%% YOUR CODE HERE %%%
  theta = [theta, zeros(n,1)];   % append the fixed zero column for the last class
  mid = theta' * X;              % num_classes x m score matrix, mid(k,i) = theta_k' * x_i
  mid2 = exp(mid);
  b = sum(mid2, 1);              % per-example normalizer sum_j exp(theta_j' * x_i)
  tag = zeros(m, num_classes);   % indicator matrix, tag(i,k) = 1{y(i) = k}
  for i = 1:m
      tag(i, y(i)) = 1;
  end
  h = zeros(1, m);               % h(i) = exp score of example i's true class
  for i = 1:m
      h(i) = tag(i,:) * mid2(:,i);
  end
  h2 = log(h ./ b);              % per-example log-probability of the true class
  f = -sum(h2);                  % negative log-likelihood

  p = zeros(m, num_classes);     % p(i,k) = P(y(i) = k | x(:,i)); last column unused
  for j = 1:num_classes-1
    for i = 1:m
      p(i,j) = mid2(j,i) / b(i);
    end
  end
  g = X * (p - tag);             % n x num_classes; column k is -sum_i x_i (1{y(i)=k} - p(i,k))
  g = g(:, 1:num_classes-1);     % drop the last column (theta(:,num_classes) is fixed at 0)
  g = g(:);                      % make gradient a vector for minFunc
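
For comparison, here is a sketch of what I imagine the fully vectorized solutions look like. This is my own guess, not the tutorial's reference code: sub2ind picks out the true-class entries, sparse builds the indicator matrix in one call, and subtracting the per-example maximum score before exp() guards against overflow without changing the probabilities.

function [f, g] = softmax_regression_vec(theta, X, y)
  % Fully vectorized sketch (my assumption; I haven't run it against the grader).
  m = size(X, 2);
  n = size(X, 1);
  theta = reshape(theta, n, []);          % n x (num_classes-1)
  num_classes = size(theta, 2) + 1;

  scores = [theta' * X; zeros(1, m)];     % num_classes x m, last row is the fixed zero class
  scores = bsxfun(@minus, scores, max(scores, [], 1)); % stability shift, leaves p unchanged
  expsc = exp(scores);
  p = bsxfun(@rdivide, expsc, sum(expsc, 1));          % p(k,i) = P(y(i) = k | x(:,i))

  idx = sub2ind(size(p), y(:)', 1:m);     % linear indices of the true-class probabilities
  f = -sum(log(p(idx)));                  % negative log-likelihood

  tag = full(sparse(y(:), (1:m)', 1, num_classes, m)); % tag(k,i) = 1{y(i) = k}
  g = X * (p - tag)';                     % n x num_classes gradient
  g = g(:, 1:num_classes-1);              % drop the fixed last column
  g = g(:);                               % minFunc expects a vector
end

This replaces every loop in my version with one indexing or matrix operation, which is presumably where the concise solutions get both their brevity and their speed.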

Results
(screenshot of the run output)
The speed is nothing special, but the accuracy is about what the tutorial author reports, so it should be correct. Comments and corrections are welcome.
In the next post I'll go over the section 2 logistic regression exercise, which took five hours to run.

Reposted from blog.csdn.net/sinat_25882019/article/details/63033914