Machine Learning (5): Multivariate Linear Regression

Linear Regression

Multivariate Linear Regression (Multiple Features)

$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n$$

When the target value $y$ is influenced by more than one factor, a multivariate linear regression model is used.
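
As a minimal sketch (this helper is illustrative, not part of the original post), the hypothesis can be evaluated as a single dot product once a constant feature $x_0 = 1$ is prepended to each sample:

import numpy as np

def hypothesis(theta, x):
    # theta holds the n+1 parameters; x holds the n raw features
    x_aug = np.concatenate(([1.0], x))  # prepend x_0 = 1 for the intercept
    return np.dot(theta, x_aug)

print(hypothesis(np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0])))  # 1 + 2*4 + 3*5 = 24.0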

Cost Function (Loss Function)

  • Compute the error by least squares

$$J(\theta_0,\theta_1,\dots,\theta_n) = \frac{1}{2m}\sum_{i=1}^{m}\left(y^{(i)} - h_\theta(x^{(i)})\right)^2$$
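
A vectorized sketch of this cost (assuming the design matrix X already carries the constant $x_0 = 1$ column; the name `cost` is illustrative, not from the original post):

import numpy as np

def cost(theta, X, y):
    # J(theta) = 1/(2m) * sum((y - X @ theta)^2)
    m = len(y)
    residuals = y - X @ theta
    return float(residuals @ residuals) / (2 * m)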

Gradient Descent

  • Update rule

$$\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j}J(\theta_0,\dots,\theta_n) \qquad (j = 0,\dots,n)$$

Taking the partial derivative of $J$ gives $\frac{\partial}{\partial\theta_j}J = \frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)})-y^{(i)}\bigr)x_j^{(i)}$, so with the convention $x_0^{(i)} = 1$ the parameters are updated simultaneously:

$$\theta_0 := \theta_0 - \alpha\frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)})-y^{(i)}\bigr)x_0^{(i)}$$

$$\theta_1 := \theta_1 - \alpha\frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)})-y^{(i)}\bigr)x_1^{(i)}$$

$$\theta_2 := \theta_2 - \alpha\frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)})-y^{(i)}\bigr)x_2^{(i)}$$

$$\vdots$$

  • Code implementation
import numpy as np

# Two-feature linear regression trained by gradient descent.
# x_data (an m x 2 numpy array) and y_data (length m) are assumed
# to be loaded elsewhere.

# Learning rate
lr = 0.0001
# Parameters: intercept and two weights, initialized to zero
theta0 = 0
theta1 = 0
theta2 = 0
# Maximum number of iterations
epochs = 1000

# Least-squares cost, matching J(theta) = 1/(2m) * sum of squared errors
def compute_error(theta0, theta1, theta2, x_data, y_data):
    totalError = 0
    for i in range(0, len(x_data)):
        totalError += (y_data[i] - (theta1 * x_data[i,0] + theta2 * x_data[i,1] + theta0)) ** 2
    return totalError / (2.0 * float(len(x_data)))

def gradient_descent_runner(x_data, y_data, theta0, theta1, theta2, lr, epochs):
    # Total number of samples
    m = float(len(x_data))
    # Repeat for epochs iterations
    for i in range(epochs):
        theta0_grad = 0
        theta1_grad = 0
        theta2_grad = 0
        # Accumulate the gradient over all samples, averaged by m
        for j in range(0, len(x_data)):
            theta0_grad += (1/m) * ((theta1 * x_data[j,0] + theta2 * x_data[j,1] + theta0) - y_data[j])
            theta1_grad += (1/m) * x_data[j,0] * ((theta1 * x_data[j,0] + theta2 * x_data[j,1] + theta0) - y_data[j])
            theta2_grad += (1/m) * x_data[j,1] * ((theta1 * x_data[j,0] + theta2 * x_data[j,1] + theta0) - y_data[j])
        # Update theta0, theta1, theta2 simultaneously
        theta0 = theta0 - (lr * theta0_grad)
        theta1 = theta1 - (lr * theta1_grad)
        theta2 = theta2 - (lr * theta2_grad)
    return theta0, theta1, theta2
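
A usage sketch of the two functions above; the synthetic data here is illustrative, not from the original post:

import numpy as np

# Synthetic data drawn from y = 1 + 2*x1 + 3*x2 plus a little noise
rng = np.random.default_rng(0)
x_data = rng.uniform(0, 10, size=(100, 2))
y_data = 1 + 2 * x_data[:, 0] + 3 * x_data[:, 1] + rng.normal(0, 0.1, size=100)

print("initial cost:", compute_error(theta0, theta1, theta2, x_data, y_data))
theta0, theta1, theta2 = gradient_descent_runner(x_data, y_data, theta0, theta1, theta2, lr, epochs)
print("final cost:  ", compute_error(theta0, theta1, theta2, x_data, y_data))
print("theta:", theta0, theta1, theta2)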

Solving with the sklearn library

Use the LinearRegression estimator from the sklearn library, which solves the least-squares problem in closed form (the normal-equation approach) rather than by gradient descent.

import numpy as np
from numpy import genfromtxt
from sklearn import linear_model
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# x_data (m x 2) and y_data are assumed to be loaded beforehand,
# e.g. with genfromtxt from a CSV file

# Create and fit the model
model = linear_model.LinearRegression()
model.fit(x_data, y_data)

# Scatter the raw data as red circles
ax = plt.figure().add_subplot(111, projection='3d')
ax.scatter(x_data[:,0], x_data[:,1], y_data, c='r', marker='o', s=100)

x0 = x_data[:,0]
x1 = x_data[:,1]
# Build a grid over the two features
x0, x1 = np.meshgrid(x0, x1)
z = model.intercept_ + x0 * model.coef_[0] + x1 * model.coef_[1]
# Plot the fitted plane in 3D
ax.plot_surface(x0, x1, z)

# Label the axes
ax.set_xlabel('Miles')
ax.set_ylabel('Num of Deliveries')
ax.set_zlabel('Time')

# Show the figure
plt.show()
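
To make the normal-equation remark concrete, here is a minimal check (illustrative, not from the original post) of the closed-form solution $\theta = (X^TX)^{-1}X^Ty$ against the fitted model:

import numpy as np

# Augment the features with a column of ones for the intercept
X = np.column_stack([np.ones(len(x_data)), x_data])
# Solve the normal equations (X^T X) theta = X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y_data)

print("normal equation:", theta)                       # [intercept, coef for x1, coef for x2]
print("sklearn:        ", model.intercept_, model.coef_)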

Reprinted from blog.csdn.net/weixin_45781143/article/details/107706044