Andrew Ng Machine Learning Notes (5) -- Multivariate Linear Regression

Based on: Andrew Ng's Machine Learning course.

1. Multiple Features

Linear regression with multiple variables is also known as “multivariate linear regression”.

Notation:

  • $x_j^{(i)}$ : value of feature $j$ in the $i$th training example
  • $x^{(i)}$ : the input (features) of the $i$th training example
  • $m$ : the number of training examples
  • $n$ : the number of features
  • The multivariable form of the hypothesis function accommodating these multiple features is as follows:
    $h_\theta(x) = \theta_0 x_0 + \theta_1 x_1 + \theta_2 x_2 + ... + \theta_n x_n$    (where $x_0 \equiv 1$)

  • Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as follows (see the sketch after this list):
    $h_\theta(x) = \left[ \begin{matrix} \theta_0 & \theta_1 & ... & \theta_n \end{matrix} \right] \left[ \begin{matrix} x_0 \\ x_1 \\ ... \\ x_n \end{matrix} \right] = \theta^T x$
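As a quick illustration of the vectorized hypothesis, here is a minimal Octave sketch; the numbers are made-up example values, not the course data:

% X is an m-by-(n+1) design matrix whose first column is all ones (x_0 = 1);
% theta is an (n+1)-by-1 parameter vector.
X = [1 2104 5;
     1 1416 3;
     1 1534 3];          % example rows: [x_0, size, bedrooms]
theta = [10; 0.5; 20];   % example parameter values
h = X * theta;           % h(i) equals theta' * x^{(i)} for every training example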

2. Gradient Descent For Multiple Variables

The gradient descent equation itself is generally the same form; we just have to repeat it for our ‘n’ features:

  • repeat until convergence: {
      $\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$
      simultaneously for $j := 0 ... n$ (see the vectorized sketch below)
    }
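In vectorized form this update takes only a few lines of Octave. A sketch, assuming X already contains the $x_0 = 1$ column; the function name gradientDescent and its argument order are illustrative, not the course's starter code:

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
  % X: m-by-(n+1) design matrix, y: m-by-1 targets, theta: (n+1)-by-1 parameters
  m = length(y);
  J_history = zeros(num_iters, 1);
  for iter = 1:num_iters
    errors = X * theta - y;                       % h_theta(x^{(i)}) - y^{(i)} for all i
    theta = theta - (alpha / m) * (X' * errors);  % simultaneous update of every theta_j
    J_history(iter) = (1 / (2 * m)) * sum((X * theta - y) .^ 2);  % track the cost
  end
end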

1) Feature Scaling

We can speed up gradient descent by having each of our input values in roughly the same range. This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.

  • The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same (see the normalization sketch below). Ideally:
    • $-1 \leq x_i \leq 1$
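One common way to bring features into a comparable range is mean normalization: subtract each feature's mean and divide by its standard deviation (dividing by the range is another option). A minimal Octave sketch; the name featureNormalize is illustrative:

function [X_norm, mu, sigma] = featureNormalize(X)
  % Scale each column (feature) of X to roughly zero mean and unit spread.
  mu = mean(X);                  % 1-by-n row vector of column means
  sigma = std(X);                % 1-by-n row vector of column standard deviations
  X_norm = (X - mu) ./ sigma;    % Octave broadcasts mu and sigma across the rows
end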

2) Learning Rate

  • This is the gradient descent algorithm:

    • $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$
  • We need to adjust the value of $\alpha$ so that gradient descent can converge.

  • If $\alpha$ is too small: slow convergence.
  • If $\alpha$ is too large: $J(\theta)$ may not decrease on every iteration and thus may not converge (see the sketch below).
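A practical way to check $\alpha$ is to record $J(\theta)$ after every iteration and confirm it keeps decreasing. A sketch that reuses the gradientDescent function sketched above; it assumes X (with its leading column of ones) and y are already loaded:

alphas = [0.001, 0.01, 0.1, 1];   % candidate learning rates
num_iters = 400;
hold on;
for k = 1:length(alphas)
  theta0 = zeros(size(X, 2), 1);
  [theta, J_history] = gradientDescent(X, y, theta0, alphas(k), num_iters);
  plot(1:num_iters, J_history);   % the curve should fall on every iteration if alpha is small enough
end
xlabel('iteration'); ylabel('J(\theta)');
legend('alpha = 0.001', 'alpha = 0.01', 'alpha = 0.1', 'alpha = 1');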

3. Polynomial Regression

Our hypothesis function need not be linear (a straight line) if that does not fit the data well.

  • For example (see the feature-construction sketch below):
    • $h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2$
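Polynomial regression can reuse the same machinery: treat $x$, $x^2$, ... as separate features and then feature-scale them, since their ranges differ enormously. A sketch using the featureNormalize function above; the matrix data and its layout are illustrative assumptions:

x = data(:, 1);                       % a single raw feature, e.g. house size
X_poly = [x, x .^ 2];                 % add x^2 as a second feature
[X_poly, mu, sigma] = featureNormalize(X_poly);   % scaling is important: x^2 has a much larger range
X_poly = [ones(length(x), 1), X_poly];            % prepend the x_0 = 1 column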

4. Normal Equation

We can use the normal equation to compute the optimal value of $\theta$ directly.

  • $\theta = (X^T X)^{-1} X^T Y$

In Octave or MATLAB:

pinv(X'*X)*X'*Y

The function pinv() computes the pseudo-inverse of a matrix, so we still get the correct result whether or not $X^T X$ is invertible.
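Putting the normal equation together on a tiny dataset; the numbers below are illustrative example values, not the course data:

data = [2104 5 460;
        1416 3 232;
        1534 3 315;
         852 2 178];                          % columns: size, bedrooms, price
X = [ones(size(data, 1), 1), data(:, 1:2)];   % design matrix with the x_0 = 1 column
Y = data(:, 3);
theta = pinv(X' * X) * X' * Y;   % normal equation: no feature scaling, no alpha, no iterations
price = [1, 1650, 3] * theta;    % predict the price of a 1650 sq-ft, 3-bedroom house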

So what is the difference between gradient descent and the normal equation?

| Difference              | Gradient Descent | Normal Equation |
| ----------------------- | ---------------- | --------------- |
| Need to choose $\alpha$ | Yes              | No              |
| Need many iterations    | Yes              | No              |
| When $n$ is large       | Works well       | Works slowly    |
