ML16: Logistic Regression

1. Prediction function
g(z) = 1 / (1+e^-z)
z = t0 + t1x1 + t2x2 + … + tnxn
h(x1, …, xn) = 1 / (1+e^-(t0 + t1x1 + t2x2 + … + tnxn))
J(t0, …, tn) = Σ(h(x1, …, xn) - y)^2 / (2m)
Goal: find the parameters t0, …, tn that minimize J(t0, …, tn).
If it can be shown that K(t0, …, tn) and J(t0, …, tn) attain their extrema at the same parameter values, then the model parameters (t0, …, tn) that give K its extremum can stand in for the parameters that minimize the cost function J.
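In practice, the convex cross-entropy (log) loss typically plays the role of such a surrogate K. Below is a minimal NumPy sketch of the prediction function h and of that loss; it is purely illustrative, and the function names and argument layout are assumptions, not from the original.

import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^-z)
    return 1.0 / (1.0 + np.exp(-z))

def h(X, t):
    # Prediction function: g(t0 + t1*x1 + ... + tn*xn)
    # t[0] is the intercept t0; t[1:] are the feature weights t1..tn
    return sigmoid(t[0] + X @ t[1:])

def cross_entropy_cost(X, y, t):
    # Convex surrogate cost: -mean(y*log(h) + (1 - y)*log(1 - h))
    p = h(X, t)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))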
2. Regularization
J(t) + r||t||  (L1 or L2 norm; L2 is the default)
||t|| (L1): the sum of the absolute values of the elements of the vector t
||t|| (L2): the sum of the squares of the elements of the vector t
r||t|| is called the regularization term. It prevents overfitting caused by an overly large t vector. r is the regularization strength: the larger r is, the stronger the regularization and the heavier the penalty on t, and vice versa.
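As a rough sketch (the helper names and the regularized_cost signature below are illustrative, not part of any library), the two penalties and the regularized cost could be computed like this:

import numpy as np

def l1_penalty(t):
    # L1: sum of the absolute values of the elements of t
    return np.sum(np.abs(t))

def l2_penalty(t):
    # L2 penalty: sum of the squares of the elements of t
    return np.sum(t ** 2)

def regularized_cost(J, t, r=1.0, norm='l2'):
    # J(t) + r*||t||: a larger r penalizes a large t vector more heavily
    penalty = l1_penalty(t) if norm == 'l1' else l2_penalty(t)
    return J + r * penalty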

Code

import numpy as np
import sklearn.linear_model as lm
import matplotlib.pyplot as mp
x = np.array([
    [3,  1],
    [2,  5],
    [1,  8],
    [6,  4],
    [5,  2],
    [3,  5],
    [4,  7],
    [4, -1]])
y = np.array([0, 1, 1, 0, 0, 1, 1, 0])
# Create the logistic regression classifier
# solver='liblinear': optimization algorithm used to fit the model (well suited to small datasets)
# C=1: inverse of the regularization strength; the smaller C is, the stronger the regularization
model = lm.LogisticRegression(solver='liblinear',
                              C=1)
model.fit(x, y)
# Build a dense grid covering the feature space (h and v are the grid steps)
l, r, h = x[:, 0].min() - 1, x[:, 0].max() + 1, 0.005
b, t, v = x[:, 1].min() - 1, x[:, 1].max() + 1, 0.005
grid_x = np.meshgrid(
    np.arange(l, r, h),
    np.arange(b, t, v))
# Classify every grid point, then reshape the predictions to the grid shape
flat_x = np.c_[grid_x[0].ravel(), grid_x[1].ravel()]
flat_y = model.predict(flat_x)
grid_y = flat_y.reshape(grid_x[0].shape)
mp.figure(num='Logistic Classification',
          facecolor='lightgray')
mp.title('Logistic Classification', fontsize=20)
mp.xlabel('x', fontsize=14)
mp.ylabel('y', fontsize=14)
mp.tick_params(labelsize=10)
# Decision regions as a background mesh; training samples on top, colored by label
mp.pcolormesh(grid_x[0], grid_x[1], grid_y,
              cmap='gray')
mp.scatter(x[:, 0], x[:, 1], s=80, c=y, cmap='gray_r')
mp.show()
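Once fitted, the same model can also score unseen samples; a brief example (the query point below is made up for illustration):

# Classify a new sample and inspect the class probabilities
new_point = np.array([[3, 6]])
print(model.predict(new_point))        # predicted class label (0 or 1)
print(model.predict_proba(new_point))  # probabilities [P(y=0), P(y=1)]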

Reposted from blog.csdn.net/weixin_38246633/article/details/80595116