Chapter 3, Section 7: Logistic Regression (Code Example: Multi-class Classification)

Code Example: Multi-class Classification

We use the One-vs-All strategy for multi-class classification: for each class, we train a separate binary logistic regression classifier that treats that class as positive and all other classes as negative.
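
In the notation of the earlier sections (with $g$ the sigmoid hypothesis used throughout this chapter), One-vs-All fits one parameter vector $\theta^{(c)}$ per class $c$ and, at prediction time, picks the class whose classifier reports the highest probability:

$$h^{(c)}(x) = g\left((\theta^{(c)})^\top x\right), \qquad \hat{y} = \arg\max_{c}\, h^{(c)}(x)$$

The training and prediction methods below are added to the existing logistic regression module: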

# coding: utf-8
# logical_regression/logical_regression.py

# ...

def oneVsAll(X, y, options):
    """One-vs-All multi-class classification.

    Args:
        X: samples
        y: labels
        options: training configuration
    Returns:
        Thetas: weight matrix, one row of parameters per class
    """
    # Distinct class labels, sorted so that row idx of Thetas
    # reliably corresponds to label classes[idx]
    classes = sorted(set(np.ravel(y)))
    # Parameter matrix: one row of fitted weights per class
    Thetas = np.zeros((len(classes), X.shape[1]))
    # Treat each class in turn as the positive class and all other
    # samples as negative, then run binary logistic regression
    for idx, c in enumerate(classes):
        newY = np.zeros(y.shape)
        newY[np.where(y == c)] = 1
        result, timeConsumed = gradient(X, newY, options)
        thetas, errors, iterations = result
        # Keep the final (converged) parameter vector for this class
        Thetas[idx] = thetas[-1].ravel()
    return Thetas

def predictOneVsAll(X, Thetas):
    """One-vs-All multi-class prediction.

    Args:
        X: samples
        Thetas: weight matrix
    Returns:
        H: class probabilities, one row per class and one column per sample
    """
    H = sigmoid(Thetas * X.T)
    return H
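
The module body elided above (# ...) is assumed to already define sigmoid and gradient from the earlier sections of this chapter. For reference, a minimal sigmoid consistent with the calls above might look like this (a sketch, not necessarily the module's exact implementation):

import numpy as np

def sigmoid(z):
    # Logistic function, applied element-wise; works for scalars,
    # arrays, and np.mat inputs alike
    return 1.0 / (1.0 + np.exp(-z))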

The test program below applies this multi-class classifier to the handwritten digits dataset and reports the training accuracy:

# coding: utf-8
# logical_regression/test_onevsall.py
"""OneVsAll 多分类测试
"""
import numpy as np
import logical_regression as regression
from scipy.io import loadmat

if __name__ == "__main__":
    data = loadmat('data/ex3data1.mat')
    X = np.mat(data['X'])
    y = np.mat(data['y'])
    # 为X添加偏置
    X = np.append(np.ones((X.shape[0], 1)), X, axis=1)
    # 采用批量梯度下降法
    options = {
        'rate': 0.1,
        'epsilon': 0.1,
        'maxLoop': 5000,
        'method': 'bgd'
    }
    # 训练
    Thetas = regression.oneVsAll(X,y,options)
    # 预测
    H = regression.predictOneVsAll(X, Thetas)
    pred = np.argmax(H,axis=0)+1
    # 计算准确率
    print 'Training accuracy is: %.2f%'%(np.mean(pred == y.ravel())*100)

Running the program produces:

Training accuracy is: 89.26%
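
Note how the prediction is decoded: H has one row per class and one column per sample, so np.argmax(H, axis=0) picks the row (class index) with the highest probability for each sample, and the +1 shifts the 0-based index to the dataset's 1-based labels (in ex3data1.mat the digits are labeled 1 through 10, with the digit 0 stored as label 10). A toy example with three classes and two samples (hypothetical scores):

import numpy as np

H = np.mat([[0.1, 0.8],   # class 1 scores for samples 1 and 2
            [0.7, 0.1],   # class 2 scores
            [0.2, 0.3]])  # class 3 scores
pred = np.argmax(H, axis=0) + 1
print pred  # [[2 1]]: sample 1 -> class 2, sample 2 -> class 1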
