[Machine Learning Notes (II)] Getting started with TensorFlow via linear regression

Linear regression in TensorFlow

Environment: TensorFlow 2.0

Prerequisites

Tensor

TensorFlow uses tensors to represent data. The name sounds fancy, but a tensor can simply be understood as a multidimensional array.

import tensorflow as tf         # the first thing to do whenever using TensorFlow
A = tf.constant([1, 2, 3])      # constant means A is a constant tensor
B = tf.constant([[1,2],[3,4]])

Here we define two tensors: A is a one-dimensional tensor and B is a two-dimensional tensor. Let's print them:

>>> A           # shape (3,): a vector with 3 elements
<tf.Tensor: id=0, shape=(3,), dtype=int32, numpy=array([1, 2, 3])>
>>> B           # shape (2, 2): a 2*2 matrix
<tf.Tensor: id=1, shape=(2, 2), dtype=int32, numpy=
array([[1, 2],
       [3, 4]])>    # both are int32; their values can be retrieved with the numpy() method
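
For example, calling numpy() on A gives back an ordinary NumPy array:

>>> A.numpy()
array([1, 2, 3], dtype=int32)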

Tensor operations

tf.add(tensor_A, tensor_B)          # element-wise addition
tf.subtract(tensor_A, tensor_B)     # element-wise subtraction
tf.multiply(tensor_A, tensor_B)     # element-wise multiplication
tf.divide(tensor_A, tensor_B)       # element-wise division
tf.matmul(tensor_A, tensor_B)       # matrix multiplication
tf.pow(tensor_A, num)               # element-wise power

Apart from matrix multiplication, the element-wise operations above can also be written with the ordinary math operators +, -, *, / and ** instead.
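
For instance, with the 2*2 tensor B defined earlier, the following pairs are equivalent (@ is Python's matrix-multiplication operator, which TensorFlow tensors also support):

B + B       # same as tf.add(B, B)
B * B       # same as tf.multiply(B, B): element-wise
B ** 2      # same as tf.pow(B, 2)
B @ B       # same as tf.matmul(B, B): true matrix multiplication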

Tensor operations also come with a broadcasting mechanism; we will get a feel for it gradually later on.
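
As a tiny preview, here is a minimal sketch of broadcasting: when the shapes differ, the smaller operand is automatically stretched to match the larger one.

C = tf.constant([[1, 2], [3, 4]])
C + 10                     # the scalar is broadcast to every element: [[11, 12], [13, 14]]
C + tf.constant([1, 2])    # the (2,) vector is broadcast across rows: [[2, 4], [4, 6]]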

Automatic differentiation

As a machine (deep) learning framework, TensorFlow of course ships with an automatic differentiation mechanism. Let's look at the code:

x = tf.Variable(initial_value=1.)   # tf.Variable declares a variable; initial_value sets its initial value to 1
with tf.GradientTape() as tape:     # GradientTape records every operation run inside the with block
    y = x**2+7*x+1                  # put your function here; it can even be written in several steps
dy_dx = tape.gradient(y,x)          # gradient of y with respect to x over the recorded steps
print(dy_dx)                        # tf.Tensor(9.0, shape=(), dtype=float32)
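
This matches the derivative computed by hand: dy/dx = 2x + 7, which gives 2*1 + 7 = 9 at x = 1.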

Linear Regression

First we generate the data: the labels are obtained by adding Gaussian noise to the true function.

Then, in order to compute gradients, x and y need to be converted to TensorFlow format.

Finally, the weight w and the bias b are defined as variables, with their initial values set to 0.

import numpy as np              # needed for the data generation below

# construct the data from the equation y = 2x + 1
x_data = np.linspace(0, 1, 200).reshape((-1,1))
y_data = 2*x_data+1 + np.random.normal(0,0.02, x_data.shape)

X = tf.constant(x_data, dtype=tf.float32)
y = tf.constant(y_data,  dtype=tf.float32)

w = tf.Variable(0.)
b = tf.Variable(0.)

The gradients are computed automatically with GradientTape, and the optimizer then updates the model parameters w and b automatically:

epochs = 1000
# optimizer, with the learning rate set to 0.001
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
for _ in range(epochs):
    with tf.GradientTape() as tape:
        y_pred = X*w+b
        loss = 0.5*tf.reduce_sum((y_pred-y)**2)
    grads = tape.gradient(loss, [w,b])
    # apply_gradients minimizes the loss; it takes (gradient, variable) pairs
    optimizer.apply_gradients(grads_and_vars=zip(grads, [w, b]))
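
Under the hood, plain SGD simply moves each parameter against its gradient, scaled by the learning rate. A minimal manual sketch of what apply_gradients does in this case (using the same grads list; vanilla SGD without momentum):

lr = 0.001
w.assign_sub(lr * grads[0])   # w <- w - lr * d(loss)/dw
b.assign_sub(lr * grads[1])   # b <- b - lr * d(loss)/db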

Printing the parameters, we can see they are very close to the true values (w = 2, b = 1):

>>> print(w.numpy(), b.numpy())
1.9974449 1.003515

Finally

Our linear regression is complete. If you want a more intuitive view, you can plot the scattered data points together with the fitted line:

import matplotlib.pyplot as plt

y_pred = X*w+b                          # predictions from the fitted parameters
plt.figure()
plt.scatter(x_data, y_data)             # the noisy data points
plt.plot(x_data, y_pred, "r", lw=2)     # the fitted line, drawn in red
plt.show()

Source: www.cnblogs.com/Axi8/p/11695934.html