Gradient Descent Algorithm

Gradient
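The formulas this section refers to appear to have been lost in extraction. Reconstructed to match the code below, the cost is the mean squared error over the training set, and its derivative with respect to the weight $w$ is:

```latex
\mathrm{cost}(w) = \frac{1}{N}\sum_{n=1}^{N}\left(x_n \cdot w - y_n\right)^2
\qquad
\frac{\partial\,\mathrm{cost}}{\partial w} = \frac{1}{N}\sum_{n=1}^{N} 2\,x_n\left(x_n \cdot w - y_n\right)
```

Each term of the sum is the squared error of one sample; the gradient term-by-term matches the `2 * x * (x * w - y)` expression in the `gradient` function below.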

Updating the weight with the gradient
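The update rule (reconstructed from the training loop below, where the learning rate $\alpha$ is 0.01) moves $w$ a small step against the gradient:

```latex
w \leftarrow w - \alpha \cdot \frac{\partial\,\mathrm{cost}}{\partial w}
```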

Code

Prepare the training data

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

Initialize the weight

w = 1.0

Define the model

def forward(x):
  return x * w
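As a quick sanity check (my own example, not from the original post): with the initial weight w = 1.0, the linear model simply returns its input.

```python
w = 1.0

def forward(x):
    return x * w  # linear model y_hat = x * w, no bias term

print(forward(4.0))  # with w = 1.0 the prediction equals the input: 4.0
```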

Define the cost function

def cost(xs, ys):
  total = 0  # accumulate squared errors (renamed so it doesn't shadow the function name)
  for x, y in zip(xs, ys):
    y_pred = forward(x)
    total += (y_pred - y) ** 2
  return total / len(xs)
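A hand check of the cost at the starting weight (my own worked example): with w = 1.0 the squared errors are (1−2)², (2−4)², (3−6)², so the mean is 14/3 ≈ 4.667.

```python
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = 1.0

def forward(x):
    return x * w

def cost(xs, ys):
    total = 0.0
    for x, y in zip(xs, ys):
        total += (forward(x) - y) ** 2
    return total / len(xs)

# ((1-2)^2 + (2-4)^2 + (3-6)^2) / 3 = (1 + 4 + 9) / 3 = 14/3
print(cost(x_data, y_data))  # 4.666...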

Define the gradient function

def gradient(xs, ys):
  grad = 0
  for x, y in zip(xs, ys):
    grad += 2 * x * (x * w - y)  # d/dw of (x*w - y)^2
  return grad / len(xs)
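The gradient at the starting weight can also be verified by hand (my own worked example): with w = 1.0 the per-sample terms are 2·1·(1−2), 2·2·(2−4), 2·3·(3−6), averaging to −28/3 ≈ −9.333. The negative sign means w must increase, which matches the update rule.

```python
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = 1.0

def gradient(xs, ys):
    grad = 0.0
    for x, y in zip(xs, ys):
        grad += 2 * x * (x * w - y)
    return grad / len(xs)

# (2*1*(1-2) + 2*2*(2-4) + 2*3*(3-6)) / 3 = (-2 - 8 - 18) / 3 = -28/3
print(gradient(x_data, y_data))  # -9.333...
```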

Print the prediction before training

print('Predict (before training)', 4, forward(4))

Define lists to store the data for plotting with matplotlib

cost_list = []
epoch_list = []

Train

for epoch in range(100):
  cost_val = cost(x_data, y_data)
  grad_val = gradient(x_data, y_data)
  w -= 0.01 * grad_val  # gradient descent step, learning rate 0.01
  print('Epoch:', epoch, 'w=', w, 'loss=', cost_val)
  epoch_list.append(epoch)    # store the epoch number
  cost_list.append(cost_val)  # store the cost value
print('Predict (after training)', 4, forward(4))

Training results

..........

The loss approaches 0 by the end of training: w converges to 2.0, so the prediction for x = 4 approaches 8.0.

Plot the curve

import matplotlib.pyplot as plt

plt.plot(epoch_list, cost_list)
plt.ylabel('cost')
plt.xlabel('epoch')
plt.show()
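For comparison, the same training loop can be vectorized with NumPy (a sketch I added, not part of the original post). The per-sample loop in `gradient` collapses into one `np.mean` over the array, and w converges to 2.0 just as above:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
w = 1.0
lr = 0.01  # same learning rate as the loop-based version

for epoch in range(100):
    grad = np.mean(2 * x * (x * w - y))  # whole-batch gradient in one expression
    w -= lr * grad

print(w)      # converges close to 2.0
print(4 * w)  # prediction for x = 4, close to 8.0
```

The vectorized form computes the identical gradient, so both versions trace the same loss curve.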

Loss curve

Reprinted from blog.csdn.net/qq_39715243/article/details/105443358