Machine Learning: Wrong-Answer Review (Week 2)

Which of the following are reasons for using feature scaling?

A. It prevents the matrix XᵀX (used in the normal equation) from being non-invertible (singular/degenerate).

B. It speeds up gradient descent by making it require fewer iterations to get to a good solution.

C. It speeds up gradient descent by making each iteration of gradient descent less expensive to compute.

D. It is necessary to prevent the normal equation from getting stuck in local optima.


The correct answer is B.

It speeds up gradient descent by making it require fewer iterations to get to a good solution.

[Explanation] Feature scaling speeds up gradient descent by avoiding the many extra iterations that are required when one or more features take on much larger values than the rest. Option D is wrong because the cost function J(θ) for linear regression has no local optima. Option C is wrong because the magnitude of the feature values is insignificant in terms of the computational cost of each iteration.
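To make the "fewer iterations" point concrete, here is a minimal sketch (my own toy example, not part of the course material; the dataset, learning rates, and iteration budget are all illustrative assumptions). It runs the same batch gradient descent on raw features and on mean-normalized features:

```python
import numpy as np

def gd(X, y, lr, iters):
    """Batch gradient descent on J(theta) = (1/2m) * ||X @ theta - y||^2."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        theta -= lr * (X.T @ (X @ theta - y)) / m
    return theta

def cost(X, y, theta):
    return np.mean((X @ theta - y) ** 2) / 2

rng = np.random.default_rng(0)
m = 200
x1 = rng.uniform(0, 1, m)        # small-range feature
x2 = rng.uniform(0, 1000, m)     # large-range feature
y = 3.0 * x1 + 0.05 * x2 + rng.normal(0.0, 0.1, m)

X_raw = np.column_stack([np.ones(m), x1, x2])

# Mean-normalize each feature: subtract its mean, divide by its std.
X_scaled = np.column_stack([
    np.ones(m),
    (x1 - x1.mean()) / x1.std(),
    (x2 - x2.mean()) / x2.std(),
])

# Without scaling, any learning rate much above ~1e-6 makes GD diverge
# here, so after 1000 iterations the cost is still far from optimal;
# with scaling, the same 1000 iterations are enough to converge.
theta_raw = gd(X_raw, y, lr=1e-6, iters=1000)
theta_scl = gd(X_scaled, y, lr=0.1, iters=1000)
print("cost without scaling:", cost(X_raw, y, theta_raw))
print("cost with scaling:   ", cost(X_scaled, y, theta_scl))
```

With these numbers the scaled run reaches the noise floor while the raw run is still far away, which is exactly what option B describes.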


Sorry to dig out a year-old blog post again; I'm adding some new understanding.

Combining this with Hung-yi Lee's lecture slides, here is how I understand it:

Say you have two input features x1 and x2 (y = w1x1 + w2x2 + b). If the two features have very different ranges, it is best to apply feature scaling.

In other words, the goal of feature scaling is to make different features have the same scale.
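In practice, "same scale" usually means z-score standardization: x ← (x − mean) / std. A one-line sketch, assuming scikit-learn is available (the post itself names no library):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features with very different ranges: x1 ~ O(1), x2 ~ O(1000).
X = np.array([[0.2, 150.0],
              [0.5, 900.0],
              [0.9, 420.0]])

X_scaled = StandardScaler().fit_transform(X)  # each column: mean 0, std 1
print(X_scaled.mean(axis=0))  # ~[0. 0.]
print(X_scaled.std(axis=0))   # ~[1. 1.]
```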

Why is that?

Let's continue with the example above.

If x1 is small and x2 is large, then changing w1 has little effect on y, while even a slight change to w2 has a large effect on y. The resulting loss function therefore looks like a flat, elongated ellipse: the contours are stretched in the w1 direction and squeezed in the w2 direction.

[Figure from the slides: loss contours without feature scaling (left) and with it (right).]


The left panel shows the situation described above. Gradient descent is harder there, because the two parameter directions effectively demand very different learning rates.

The right panel is much easier: since the contours are close to circles, gradient descent heads toward the center (the minimum) from wherever it starts.

So the point of feature scaling is to make the contours of the loss L closer to circles, which makes gradient descent much more efficient.
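We can back this picture up with a quick numerical check (the feature ranges below are my illustrative assumptions). For squared loss, the curvature of L along each weight wi is proportional to mean(xi²), so the x2 direction is vastly steeper before scaling and comparable after:

```python
# For J = (1/2m) * sum((w1*x1 + w2*x2 + b - y)^2), the second
# derivative of J along w_i is mean(x_i^2): the curvature of the
# loss surface in that direction.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(0, 1, 200)      # small-range feature
x2 = rng.uniform(0, 1000, 200)   # large-range feature

print("curvature along w1:", np.mean(x1 ** 2))  # ~0.33
print("curvature along w2:", np.mean(x2 ** 2))  # ~333000 -> flat ellipse

# After standardization both curvatures are ~1, so the contours of
# the loss are near-circular and one learning rate works everywhere.
x1s = (x1 - x1.mean()) / x1.std()
x2s = (x2 - x2.mean()) / x2.std()
print("after scaling:", np.mean(x1s ** 2), np.mean(x2s ** 2))  # ~1.0, ~1.0
```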


Reposted from blog.csdn.net/jesmine_gu/article/details/74614273