[tensorflow] continuous input linear regression model training code

  Check out the three models in this series:

- [tensorflow] Continuous input linear regression model training code
- [tensorflow] Continuous input neural network model training code
- [tensorflow] Continuous input + discrete input neural network model training code

Full code - copy and run

import numpy as np
import tensorflow.compat.v1 as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# The script uses the TF1 graph/session API (tf.placeholder, tf.Session),
# so disable TF2 eager behavior when running under TensorFlow 2.x.
tf.disable_v2_behavior()

def get_data():
    # Fix the random seed so results are reproducible (optional)
    np.random.seed(0)

    # Generate random input data: 10000 samples with 10 continuous features
    data = np.random.rand(10000, 10)

    # Standardize the data (zero mean, unit variance per column)
    scaler = StandardScaler()
    data = scaler.fit_transform(data)

    # Generate random target scores
    target = np.random.rand(10000, 1)

    return train_test_split(data, target, test_size=0.1, random_state=42)

data_train, data_val, target_train, target_val = get_data()

# Number of training epochs (not used below; the loop is driven by STEPS)
train_epochs = 10
# Learning rate
learning_rate = 0.0001
# Batch size
batch_size = 200

# Define the model
with tf.name_scope("Model"):
    x = tf.placeholder(tf.float32, [None, 10])  # 10 feature columns
    y = tf.placeholder(tf.float32, [None, 1])   # 1 target column

    d = tf.Variable(tf.random_normal([10, 10], stddev=0.01))
    w = tf.Variable(tf.random_normal([10, 1], stddev=0.01))

    # Biases a and b are initialized to 1.0
    a = tf.Variable(1.0)
    b = tf.Variable(1.0)

    # Two stacked linear transforms (no activation in between)
    k = tf.matmul(x, d) + a
    pred = tf.matmul(k, w) + b

# Loss function: mean squared error
loss_function = tf.reduce_mean(tf.square(y - pred))
# Create the optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss_function)

# Create a session and train for STEPS rounds
with tf.Session() as sess:
    # Initialize the model's variables
    init = tf.global_variables_initializer()
    sess.run(init)

    # Train the model
    STEPS = 1000
    for i in range(STEPS):
        # Slide a mini-batch window over the training set
        start = (i * batch_size) % len(data_train)
        end = start + batch_size

        if i % 100 == 0:
            total_loss = sess.run(loss_function,
                                  feed_dict={x: data_train, y: target_train})
            print("After %d training step(s), loss_mse on all data is %g" % (i, total_loss))

        # Update parameters on the current training mini-batch
        sess.run(optimizer,
                 feed_dict={x: data_train[start:end], y: target_train[start:end]})

training output

  The output during model training is a log line every 100 steps of the form "After N training step(s), loss_mse on all data is <loss>".

[image: screenshot of the training log]

code introduction

  The get_data function generates random training and validation datasets. It first uses np.random.rand to create a dataset of shape (10000, 10), simulating a 10-dimensional continuous input, and standardizes it with StandardScaler. It then generates a (10000, 1) target, representing the final score to be fitted. Finally, train_test_split divides the dataset into a training set and a validation set.
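
As a quick sanity check (this snippet is my addition, not part of the original post), you can verify that StandardScaler leaves every column with roughly zero mean and unit variance:

import numpy as np
from sklearn.preprocessing import StandardScaler

raw = np.random.rand(10000, 10)
scaled = StandardScaler().fit_transform(raw)
print(scaled.mean(axis=0).round(6))  # approximately 0 for every column
print(scaled.std(axis=0).round(6))   # approximately 1 for every column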

  Since the target is a floating-point score, the task is regression.

  When defining the model, tf.placeholder creates placeholders for the input data and the target data. The variables d, w, a, and b are the model's weights and biases. tf.matmul performs the matrix multiplications, and adding the biases yields the prediction pred.
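
Note that with no activation between the two tf.matmul calls, the stacked layers collapse into a single affine map. A small NumPy sketch (my illustration, using the same shapes as the model above) makes this concrete:

import numpy as np

x = np.random.rand(5, 10)                      # a mini-batch of 5 samples
D = np.random.randn(10, 10) * 0.01
W = np.random.randn(10, 1) * 0.01
a, b = 1.0, 1.0

pred = (x @ D + a) @ W + b                     # the model's two-step computation
pred_single = x @ (D @ W) + (a * W.sum() + b)  # one equivalent linear layer
print(np.allclose(pred, pred_single))          # True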

You can experiment with the model structure, for example by adding an activation function between the two layers, as sketched below.
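
For instance (a sketch of one possible change, not code from the original post), inserting tf.nn.relu after the first layer turns the model into a small one-hidden-layer network:

# Replace the two lines that compute k and pred with:
k = tf.nn.relu(tf.matmul(x, d) + a)  # hidden layer with a ReLU activation
pred = tf.matmul(k, w) + b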

  The loss function is defined as the mean squared error (MSE) between y and pred, and a gradient descent optimizer is used to update the parameters.
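
If plain gradient descent converges too slowly, one drop-in alternative (my suggestion, not from the original post) is the Adam optimizer from the same TF1 API:

optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss_function)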

  A session is created with tf.Session, and the model's variables are initialized via tf.global_variables_initializer. Training then runs for STEPS iterations: each step runs the optimizer via sess.run to update the parameters on a mini-batch of the training data, and every 100 steps the loss over the full training set is computed and printed.
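
Since the validation split is otherwise unused, a natural extension (my addition, not shown in the original post) is to report the validation loss inside the same session once training finishes:

    # Still inside the "with tf.Session() as sess:" block, after the loop:
    val_loss = sess.run(loss_function, feed_dict={x: data_val, y: target_val})
    print("Final loss_mse on the validation set: %g" % val_loss)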

Origin blog.csdn.net/qq_43592352/article/details/131258353