2-6 Perceptron - Dual Form - Procedure of the Gradient Descent Algorithm

Input:
Training data set $T = \{(x_1, y_1), (x_2, y_2), \cdots, (x_n, y_n)\}$, where
$x_i \in \mathbb{R}^n$, $y_i \in \mathcal{Y} = \{-1, +1\}$, $i = 1, 2, \cdots, n$;
learning rate $\eta$
Output:
$a$, $b$;
perceptron model $f(x) = \mathrm{sign}\left(\sum_{j=1}^{n} a_j y_j x_j \cdot x + b\right)$
Procedure:

  1. Initialize $a \leftarrow 0$, $b \leftarrow 0$ (every $a_j$ starts at zero)
  2. Select a data point $(x_i, y_i)$ from the training set
  3. If $y_i\left(\sum_{j=1}^{n} a_j y_j x_j \cdot x_i + b\right) \le 0$, then
    $\begin{cases} a_i \leftarrow a_i + \eta \\ b \leftarrow b + \eta y_i \end{cases}$
  4. Go to step 2, and repeat until the training set contains no misclassified points

In the dual form, the training data appear only through inner products. For convenience, the inner products of the feature vectors in the training set can therefore be computed in advance and stored as a matrix. This matrix is the so-called Gram matrix:
$G = [x_i \cdot x_j]_{n \times n}$
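For example, for the three points $x_1 = (3, 3)^T$, $x_2 = (4, 3)^T$, $x_3 = (1, 1)^T$ from Example 2.2 of the book (this data is added here purely for illustration):
$G = \begin{bmatrix} 18 & 21 & 6 \\ 21 & 25 & 7 \\ 6 & 7 & 2 \end{bmatrix}$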

Code:
https://github.com/windmissing/LiHang-TongJiXueXiFangFa/blob/master/Chapter2/perceptron-2.ipynb

import numpy as np

def calcGramMatrix(X):
    # Precompute the Gram matrix G[i, j] = x_i · x_j for all sample pairs.
    m = X.shape[0]
    gram = np.zeros((m, m))
    for i in range(m):
        for j in range(i, m):
            # The matrix is symmetric, so fill both entries at once.
            gram[i, j] = X[i].dot(X[j])
            gram[j, i] = gram[i, j]
    return gram
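
A minimal check of this helper on the three example points above (this driver line is not part of the original notebook):

print(calcGramMatrix(np.array([[3, 3], [4, 3], [1, 1]])))
# [[18. 21.  6.]
#  [21. 25.  7.]
#  [ 6.  7.  2.]]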

def calcI(X, y, a, b, i, gram):
    # Compute y_i * (sum_j a_j * y_j * (x_j · x_i) + b) via the Gram matrix;
    # a non-positive value means sample i is misclassified.
    total = 0
    for j in range(X.shape[0]):
        total += a[j] * y[j] * gram[j, i]
    return (total + b) * y[i]
        
# Perceptron, dual form
def perceptron(X, y, eta):
    # One coefficient a[i] per training sample; bias b starts at zero.
    a, b = np.zeros(X.shape[0]), 0
    gram = calcGramMatrix(X)
    isFinished = False
    while not isFinished:
        isFinished = True
        for i in range(X.shape[0]):
            # Misclassified sample: raise its coefficient and shift the bias.
            if calcI(X, y, a, b, i, gram) <= 0:
                isFinished = False
                a[i] += eta
                b += eta * y[i]
    # Decision function f(x) = sign(sum_j a_j y_j (x_j · x) + b).
    def f(x):
        total = 0
        for j in range(X.shape[0]):
            total += a[j] * y[j] * X[j].dot(x)
        return np.sign(total + b)
    return a, b, f
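
A quick sanity check on the three-point dataset from the book's Example 2.2 (this driver code is not in the original notebook):

X = np.array([[3, 3], [4, 3], [1, 1]])
y = np.array([1, 1, -1])
a, b, f = perceptron(X, y, eta=1)
print(a, b)                  # [2. 0. 5.] -3, matching the book's dual-form solution
print(f(np.array([3, 3])))   # 1.0: the point (3, 3) is classified as positive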


Reposted from blog.csdn.net/mishifangxiangdefeng/article/details/104506058