Machine learning notes - linear regression

This series walks through examples that already exist in some open source projects and shares reading notes that may be useful to other beginners. If you have any questions about the code shared here, you are welcome to ask and discuss.

 

The following code is excerpted from: 

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/docker/notebooks/2_getting_started.ipynb

 

#@test {"output": "ignore"}
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline

# Set up the data with a noisy linear relationship between X and Y.
num_examples = 50
X = np.array([np.linspace(-2, 4, num_examples), np.linspace(-6, 6, num_examples)])

# Generate a 2x50 matrix of random numbers with np.random.randn; the values are
# drawn from a standard normal distribution (mean 0, variance 1).
# Use these random numbers to add noise to X.
X += np.random.randn(2, num_examples)
x, y = X

# Prepend a bias term (a constant 1.0) to each x value
x_with_bias = np.array([(1., a) for a in x]).astype(np.float32)
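# (x_with_bias has shape (50, 2): each row is [1.0, x_i], so the model
#  y ≈ w0 * 1 + w1 * x can be computed with a single matrix multiply.)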


losses = []
training_steps = 50
learning_rate = 0.002

with tf.Session() as sess:
    # Set up all the tensors, variables, and operations.
    input = tf.constant(x_with_bias)
    target = tf.constant(np.transpose([y]).astype(np.float32))
    weights = tf.Variable(tf.random_normal([2, 1], 0, 0.1))
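    # (weights has shape [2, 1]: the first entry multiplies the bias column and acts as
    #  the intercept, the second multiplies x and acts as the slope; both are initialized
    #  from a normal distribution with mean 0 and standard deviation 0.1.)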

    tf.global_variables_initializer().run()

    yhat = tf.matmul(input, weights)
    yerror = tf.subtract(yhat, target)
    
    # Note: this "l2" is not the L2 regularization term the name usually refers to;
    # here it serves as the squared-error loss that we want to minimize.
    loss = tf.nn.l2_loss(yerror)
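    # (For reference: tf.nn.l2_loss(t) computes sum(t ** 2) / 2, so the loss here
    #  is half the sum of squared prediction errors.)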
  
    # How does the optimizer know that weights is the variable to adjust so that the
    # loss approaches its minimum? See:
    # https://stackoverflow.com/questions/34477889/holding-variables-constant-during-optimizer
    update_weights = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
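    # (By default, minimize() computes gradients for every trainable tf.Variable in the
    #  graph; weights is the only one here. Passing var_list explicitly, e.g.
    #  minimize(loss, var_list=[weights]), restricts the update to the listed variables.)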
  
    for _ in range(training_steps):
        # Repeatedly run the operations, updating the TensorFlow variable.
        update_weights.run()
        losses.append(loss.eval())

    # Training is done, get the final values for the graphs
    betas = weights.eval()
    # Compute the predicted y values using the final weights
    yhat = yhat.eval()

# Show the fit and the loss over time.
fig, (ax1, ax2) = plt.subplots(1, 2)
# Adjust the horizontal spacing between the two subplots
plt.subplots_adjust(wspace=.3)
# Set the size of the whole figure (in inches)
fig.set_size_inches(10, 4)

# Scatter plot of the sampled (x, y) pairs; because of the added noise they scatter around the underlying line
ax1.scatter(x, y, alpha=.7)

# Plot the predicted y values; since they are computed from the fitted weights, they fall exactly on the fitted line
# c="g" is the short color code for green (one of rgbcmyk)
ax1.scatter(x, np.transpose(yhat)[0], c="g", alpha=.6)

# Draw the fitted line (only the two endpoint values are computed; two points determine a line)
line_x_range = (-4, 6)
ax1.plot(line_x_range, [betas[0] + a * betas[1] for a in line_x_range], "g", alpha=0.6)

# Plot how the loss decreases over the training steps
ax2.plot(range(0, training_steps), losses)
ax2.set_ylabel("Loss")
ax2.set_xlabel("Training steps")
plt.show()
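
As a quick sanity check, the slope and intercept learned by gradient descent above can be compared with NumPy's closed-form least-squares solution. The snippet below is a minimal sketch, assuming x_with_bias, y, and betas from the script above are still in scope; after only 50 training steps the two results should be close, though not necessarily identical.

# Sanity check (sketch): compare the gradient-descent fit with the closed-form
# least-squares solution. Assumes x_with_bias, y, and betas are still defined.
closed_form, _, _, _ = np.linalg.lstsq(x_with_bias, y, rcond=None)
print("Gradient descent (bias, slope):", betas.ravel())
print("Closed form      (bias, slope):", closed_form)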

 
