Polynomial Regression and a Python Implementation

How polynomial regression works

I have already covered univariate linear regression and its Python implementation; see that earlier post for details.

Polynomial regression works on essentially the same principle as multivariate linear regression.

As before, let's start with an example:

There is a single feature X with m = 500 samples, and Y is the observed value. The goal is to find a curve that minimizes the distance between the data points and the curve, as shown in the figure below:

First, hypothesize a curve:

h(x) = \theta_0 + \theta_1 x + \theta_2 x^2

Here, for simplicity, we only assume a quadratic hypothesis.

Write down its loss function with respect to y:

J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h(x^{i})-y^{i}\right)^2

As with linear regression, all we need to do is find the \theta that minimizes J(\theta).
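To make the loss concrete, here is a minimal NumPy sketch of this cost function (the names theta, x and y are illustrative placeholders, not taken from the implementation further down):

import numpy as np

def cost(theta, x, y):
    # hypothesis h(x) = theta0 + theta1*x + theta2*x^2, evaluated for every sample
    h = theta[0] + theta[1] * x + theta[2] * x ** 2
    # J(theta) = (1/2m) * sum of squared errors
    return np.sum((h - y) ** 2) / (2 * len(y))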

Gradient descent

It is not hard to see that

h(x) = \theta_0 + \theta_1 x + \theta_2 x^2

can be rewritten as:

h(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2

where x_1 = x and x_2 = x^2. The problem instantly becomes multivariate linear regression, and all that remains is to take the partial derivatives of J(\theta):
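In code, this substitution is nothing more than adding x^2 as an extra column, after which the usual design matrix of multivariate linear regression applies. A small sketch (variable names are illustrative):

x2 = x ** 2                                     # new feature: x squared
X = np.column_stack((np.ones_like(x), x, x2))   # columns: 1, x, x^2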

\frac{\partial J(\theta)}{\partial \theta_0}=\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x^{i})-y^{i})

\frac{\partial J(\theta)}{\partial \theta_1}=\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x^{i})-y^{i})\, x_1^{i}

\frac{\partial J(\theta)}{\partial \theta_2}=\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x^{i})-y^{i})\, x_2^{i}

Then update \theta:

\theta_0 := \theta_0 - \alpha\,\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x^{i})-y^{i})

\theta_1 := \theta_1 - \alpha\,\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x^{i})-y^{i})\, x_1^{i}

\theta_2 := \theta_2 - \alpha\,\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x^{i})-y^{i})\, x_2^{i}

After enough iterations, the fit is complete.
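The three update rules can be collapsed into a few vectorized lines. The following is only a sketch, assuming the design matrix X from the snippet above, a target vector y of matching length, and a learning rate alpha; it is not the exact loop used in the implementation below:

alpha = 0.0001
theta = np.zeros(3)
for _ in range(3000):
    h = X.dot(theta)                  # predictions for all m samples
    grad = X.T.dot(h - y) / len(y)    # [dJ/dtheta0, dJ/dtheta1, dJ/dtheta2]
    theta -= alpha * grad             # simultaneous update of all three parameters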

Python implementation

# -*- coding: utf-8 -*-
"""
Created on Thu Jul 26 16:32:55 2018

@author: 96jie
"""
import numpy as np
from matplotlib import pyplot as plt
# Generate 500 noisy samples of y = x^2 + 2x + 5
a = np.random.standard_normal((1, 500))
x = np.arange(0,50,0.1)
y = x**2 + x*2 + 5
y = y - a*100
y = y[0]
x1 = x * x   # second feature: x squared
#Feature scaling: centre each feature on its mean and divide by its range
def scaling(x,x1):
    n_x = (x - np.mean(x))/50        # x lies in [0, 50)
    n_x1 = (x1 - np.mean(x1))/2500   # x^2 lies in [0, 2500)
    return n_x,n_x1

def Optimization(x,x1,y,theta,learning_rate):
    # Run gradient descent for a fixed number of iterations
    for i in range(iter):
        theta = Updata(x,x1,y,theta,learning_rate)
    return theta

def Updata(x,x1,y,theta,learning_rate):
    # One gradient-descent step over all m samples
    m = len(x1)
    sum0 = 0.0
    sum1 = 0.0
    sum2 = 0.0
    n_x,n_x1 = scaling(x,x1)
    alpha = learning_rate
    for i in range(m):
        # the hypothesis is evaluated on the raw features...
        h = theta[0] + theta[1] * x[i] + theta[2] * x1[i]
        # ...while the gradient terms use the scaled features
        sum0 += (h - y[i])
        sum1 += (h - y[i]) * n_x[i]
        sum2 += (h - y[i]) * n_x1[i]
    theta[0] -= alpha * sum0 / m
    theta[1] -= alpha * sum1 / m
    theta[2] -= alpha * sum2 / m
    return theta
#Hyper-parameters and initial parameters

learning_rate = 0.0001
theta = [0,0,0]
iter = 3000
theta = Optimization(x,x1,y,theta,learning_rate)

#Use a font that can render the Chinese labels in the plots
plt.rcParams['font.sans-serif']=['SimHei']
plt.rcParams['axes.unicode_minus'] = False
'''
plt.figure(figsize=(35,35))
plt.scatter(x,y,marker='o')
plt.xticks(fontsize=40)
plt.yticks(fontsize=40)
plt.xlabel('特征X',fontsize=40)
plt.ylabel('Y',fontsize=40)
plt.title('样本',fontsize=40)
plt.savefig("样本.jpg")
'''
# Evaluate the fitted curve on a grid for plotting
b = np.arange(0,50)
c = theta[0] + b * theta[1] + b ** 2 * theta[2]

# Plot the samples together with the fitted curve
plt.figure(figsize=(35,35))
plt.scatter(x,y,marker='o')
plt.plot(b,c)
plt.xticks(fontsize=40)
plt.yticks(fontsize=40)
plt.xlabel('特征X',fontsize=40)
plt.ylabel('Y',fontsize=40)
plt.title('结果',fontsize=40)
plt.savefig("结果.jpg")

When I first wrote this, I accidentally used the normalized values to compute h(\theta) as well; the results came out absurdly large, and it took me a long time to track down the mistake. I felt pretty silly.


Reposted from blog.csdn.net/i96jie/article/details/81252198