Bobo's Machine Learning Notes, Lesson 6 - Debugging Gradient Descent

When training a model with gradient descent, we sometimes want a rough sanity check on the parameters before committing to a full training run. So how do we debug the gradient computation? Bobo shared a method for this, which I pass along here.

1. Main Idea

The idea comes straight from the definition of the derivative. The gradient is just the vector of partial derivatives of a multivariate cost function with respect to each individual parameter, so we can also obtain the gradient numerically from the definition of the derivative.

In the original figure (omitted here), the derivative at the red point is approximated from the cost values at two nearby blue points, one on each side:

$$\frac{dJ}{d\theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2\varepsilon}$$

Here $\varepsilon$ is a fairly small number; 0.01 is a common choice.

Extending this to multiple parameters, each component of the gradient is approximated the same way, perturbing one parameter at a time:

$$\frac{\partial J}{\partial \theta_i} \approx \frac{J(\theta_1, \ldots, \theta_i + \varepsilon, \ldots, \theta_n) - J(\theta_1, \ldots, \theta_i - \varepsilon, \ldots, \theta_n)}{2\varepsilon}$$

For readers whose grasp of derivatives is shaky, see this blog post.
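Before the full implementation, here is a minimal standalone sketch of the idea on a one-variable function (the toy cost J and the evaluation point below are made up for illustration):

def numerical_derivative(J, theta, epsilon=0.01):
    # Central difference: approximate dJ/dtheta from two nearby points
    return (J(theta + epsilon) - J(theta - epsilon)) / (2 * epsilon)

J = lambda theta: theta ** 2  # toy cost; the true derivative is 2 * theta
print(numerical_derivative(J, 3.0))  # ~6.0 (exact for quadratics, up to floating-point error)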

2. Implementation

Building on the code from the previous article, we add a dj_debug function to LineRegression that computes the gradient numerically for debugging.

import numpy as np

class LineRegression(object):
    def __init__(self):

        self._ethta = None
        self.coef_ = None
        self.intercept_ = None

    def fit_gd(self, x_train, y_train, eta=0.01, n_iters=1e4):

        def J(X_b, y, ethta):
            # Mean squared error cost function
            return np.sum((X_b.dot(ethta) - y) ** 2) / len(X_b)

        def DJ(X_b, y, ethta):
            # Analytic gradient of the cost, derived in the previous article
            return 2 * X_b.T.dot(X_b.dot(ethta) - y) / len(X_b)

        def dj_debug(X_b, y, ethta, epsilon=0.01):
            """
            Approximate the gradient numerically, for debugging.
            Each parameter is perturbed by +/- epsilon in turn, and the
            central difference of the cost gives that gradient component.
            """
            res = np.empty(X_b.shape[1])
            for i in range(len(ethta)):
                ethta_1 = ethta.copy()
                ethta_1[i] += epsilon
                ethta_2 = ethta.copy()
                ethta_2[i] -= epsilon

                res[i] = (J(X_b, y, ethta_1) - J(X_b, y, ethta_2)) / (2 * epsilon)
            return res

        def gradient_descent(X_b, y, init_ethta, eta, n_iters, epsilon=1e-16):

            ethta = init_ethta
            cur_iter = 0
            while cur_iter < n_iters:
                last_ethta = ethta
                # gradient = DJ(X_b, y, ethta)
                gradient = dj_debug(X_b, y, ethta)  # train with the debug gradient to verify it
                ethta = ethta - eta * gradient
                # Stop early once the cost barely changes between iterations
                if abs(J(X_b, y, ethta) - J(X_b, y, last_ethta)) < epsilon:
                    break
                cur_iter += 1

            return ethta

        X_b = np.hstack([np.ones((len(x_train), 1)), x_train])  # prepend a column of ones for the intercept term

        init_ethta = np.zeros(X_b.shape[1])

        self._ethta = gradient_descent(X_b, y_train, init_ethta, eta, n_iters)

        print('ethta:', self._ethta)

        self.intercept_ = self._ethta[0]
        self.coef_ = self._ethta[1:]
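To try it out, the script below generates noisy data from y = 3x + 4, standardizes the features, and fits the model with the debug gradient (linearregession is the module from the previous article):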

from linearregession.linearregession import LineRegression
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import numpy as np

if __name__ == '__main__':

    m = 100000
    x = np.random.normal(size=m)
    X = x.reshape(-1, 1)
    y = 3 * x + 4.0 + np.random.normal(0, 3, size=m)
    X_train, X_test, y_train, y_test = train_test_split(X, y)


    stdscaler = StandardScaler()
    stdscaler.fit(X_train)
    x_train_standard = stdscaler.transform(X_train)

    lrg = LineRegression()
    lrg.fit_gd(x_train_standard, y_train)

Output:

ethta: [4.02166099 2.98053422]

Judging from this output, the parameters recovered with the debug gradient are close to the true values (intercept 4, slope 3). The drawback is efficiency: every gradient evaluation calls the cost function twice per parameter, so the debug version runs noticeably slower than the analytic one.
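In practice dj_debug is not meant for training at all; you use it once to confirm that the analytic gradient DJ is correct, then train with DJ. A minimal sketch of such a check, with J, DJ, and dj_debug lifted to module level (mirroring the nested functions above) and a small made-up data set:

import numpy as np

def J(X_b, y, theta):
    # Mean squared error cost
    return np.sum((X_b.dot(theta) - y) ** 2) / len(X_b)

def DJ(X_b, y, theta):
    # Analytic gradient
    return 2 * X_b.T.dot(X_b.dot(theta) - y) / len(X_b)

def dj_debug(X_b, y, theta, epsilon=0.01):
    # Numerical gradient via central differences
    res = np.empty(len(theta))
    for i in range(len(theta)):
        theta_1, theta_2 = theta.copy(), theta.copy()
        theta_1[i] += epsilon
        theta_2[i] -= epsilon
        res[i] = (J(X_b, y, theta_1) - J(X_b, y, theta_2)) / (2 * epsilon)
    return res

np.random.seed(666)
X_b = np.hstack([np.ones((100, 1)), np.random.random((100, 3))])
y = X_b.dot(np.array([4.0, 3.0, 2.0, 1.0])) + np.random.normal(size=100)
theta = np.random.random(4)

# The two gradients should agree to high precision for a quadratic cost
print(np.max(np.abs(DJ(X_b, y, theta) - dj_debug(X_b, y, theta))))

Once the two agree, switch gradient_descent back to the commented-out DJ line for speed.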



Reposted from blog.csdn.net/sxb0841901116/article/details/83477177