Train your own DNN with TensorFlow

    Some time ago I worked through Google's open-source TensorFlow tutorial, but I never finished all of it and never had a chance to apply it. Recently I was working on a robotic arm similar to a parallel mechanism and ran into a big problem with the forward kinematics. My brother built a model in MATLAB that computes the end-effector position from the lengths of the 3 rods and can sweep out its workspace. I recorded the results in two CSV files, traindata and testdata, and later used testdata as the validation set. My goal is to fit this forward-kinematics model by building a DNN.

    This is the code I wrote, which is essentially the code from Google's open-source TensorFlow tutorial.

import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format


def construct_feature_columns(input_features):
  """Construct the TensorFlow Feature Columns.
  Args:
    input_features: The names of the numerical input features to use.
  Returns:
    A set of feature columns
  """
  return set([tf.feature_column.numeric_column(my_feature)
              for my_feature in input_features])

def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
    """Trains a linear regression model of one feature.
  
    Args:
      features: pandas DataFrame of features
      targets: pandas DataFrame of targets
      batch_size: Size of batches to be passed to the model
      shuffle: True or False. Whether to shuffle the data.
      num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
    Returns:
      Tuple of (features, labels) for next data batch
    """
    
    # Convert pandas data into a dict of np arrays.
    features = {key:np.array(value) for key,value in dict(features).items()}                                           
 
    # Construct a dataset, and configure batching/repeating
    ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
    ds = ds.batch(batch_size).repeat(num_epochs)
    
    # Shuffle the data, if specified
    if shuffle:
      ds = ds.shuffle(10000)
    
    # Return the next batch of data
    features, labels = ds.make_one_shot_iterator().get_next()
    return features, labels

train_data=pd.read_csv('traindata.csv')
test_data=pd.read_csv('testdata.csv')
training_examples=pd.DataFrame()
training_targets=pd.DataFrame()
validation_examples=pd.DataFrame()
validation_targets=pd.DataFrame()

training_examples['L1'] =train_data["L1"]
training_examples['L2'] =train_data["L2"]
training_examples['L3'] =train_data["L3"]
training_targets['X'] =train_data["X"]
training_targets['Y'] =train_data["Y"]
training_targets['Z'] =train_data["Z"]
validation_examples['L1'] =test_data["L1"]
validation_examples['L2'] =test_data["L2"]
validation_examples['L3'] =test_data["L3"]
validation_targets['X'] =test_data["X"]
validation_targets['Y'] =test_data["Y"]
validation_targets['Z'] =test_data["Z"]

validation_targets["X"]
0      0.0
1     -0.2
2     -0.3
3     -0.5
4     -0.6
      ...
120    0.7
121    0.5
122    0.3
123    0.2
124    0.0
Name: X, Length: 125, dtype: float64

def train_nn_regression_model(
    my_optimizer,
    steps,
    batch_size,
    hidden_units,
    training_examples,
    training_targets,
    validation_examples,
    validation_targets):
  """
  Trains a neural network regression model.
  
  In addition to training, this function also prints training progress information,
  as well as a plot of the training and validation loss over time.
  
  Args:
    my_optimizer: An instance of `tf.train.Optimizer`, the optimizer to use.
    steps: A non-zero `int`, the total number of training steps. A training step
      consists of a forward and backward pass using a single batch.
    batch_size: A non-zero `int`, the batch size.
    hidden_units: A `list` of int values, specifying the number of neurons in each layer.
    training_examples: A `DataFrame` containing one or more columns from
      the training data (traindata.csv) to use as input features for training.
    training_targets: A `DataFrame` of target columns from the training data;
      only the "X" column is used as the training target in this version.
    validation_examples: A `DataFrame` containing one or more columns from
      the validation data (testdata.csv) to use as input features for validation.
    validation_targets: A `DataFrame` of target columns from the validation data;
      only the "X" column is used for validation in this version.
      
  Returns:
    A tuple `(estimator, training_losses, validation_losses)`:
      estimator: the trained `DNNRegressor` object.
      training_losses: a `list` containing the training loss values taken during training.
      validation_losses: a `list` containing the validation loss values taken during training.
  """
  # Expose the last set of predictions through globals so they can be
  # inspected after training.
  global training_predictions_now
  global validation_predictions_now
  periods = 20
  steps_per_period = steps / periods
  
  # Create a DNNRegressor object, applying gradient clipping to the optimizer.
  my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
  dnn_regressor = tf.estimator.DNNRegressor(
      feature_columns=construct_feature_columns(training_examples),
      hidden_units=hidden_units,
      optimizer=my_optimizer
  )
  
  # Create input functions
  training_input_fn = lambda: my_input_fn(training_examples,
                                          training_targets["X"],
                                          batch_size=batch_size)
  predict_training_input_fn = lambda: my_input_fn(training_examples,
                                                  training_targets["X"],
                                                  num_epochs=1,
                                                  shuffle=False)
  predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                    validation_targets["X"],
                                                    num_epochs=1,
                                                    shuffle=False)
  # Train the model, but do so inside a loop so that we can periodically assess
  # loss metrics.
  print ("Training model...")
  print ("RMSE (on training data):")
  training_rmse = []
  validation_rmse = []
  for period in range (0, periods):
    # Train the model, starting from the prior state.
    dnn_regressor.train(
        input_fn=training_input_fn,
        steps=steps_per_period
    )
    # Take a break and compute predictions.
    training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
    training_predictions = np.array([item['predictions'][0] for item in training_predictions])
    
    validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
    validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
    
    # Compute training and validation loss.
    training_root_mean_squared_error = math.sqrt(
        metrics.mean_squared_error(training_predictions, training_targets['X']))
    validation_root_mean_squared_error = math.sqrt(
        metrics.mean_squared_error(validation_predictions, validation_targets['X']))
    # Occasionally print the current loss.
    print ("  period %02d : %0.4f" % (period, training_root_mean_squared_error))
    # Add the loss metrics from this period to our list.
    training_rmse.append(training_root_mean_squared_error)
    validation_rmse.append(validation_root_mean_squared_error)
  print ("Model training finished.")
  # Output a graph of loss metrics over periods.
  plt.ylabel("RMSE")
  plt.xlabel("Periods")
  plt.title("Root Mean Squared Error vs. Periods")
  plt.tight_layout()
  plt.plot(training_rmse, label="training")
  plt.plot(validation_rmse, label="validation")
  plt.legend()
  print ("Final RMSE (on training data):   %0.4f" % training_root_mean_squared_error)
  print ("Final RMSE (on validation data): %0.4f" % validation_root_mean_squared_error)
  training_predictions_now=training_predictions
  validation_predictions_now=validation_predictions
  return dnn_regressor, training_rmse, validation_rmse

_ = train_nn_regression_model(
    my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.003),
    steps=2000,
    batch_size=50,
    hidden_units=[13, 13, 10, 10, 5, 5],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets)
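
Note that the underscore assignment above throws the trained estimator away. To query the model afterwards, keep the return value and feed new rod lengths through predict. The following is only a minimal sketch, assuming the call above is changed to assign its result to dnn_regressor; the rod lengths below are made up for illustration.

# Sketch: reuse the trained estimator on new rod lengths.
# Assumes the training call above was changed to
#   dnn_regressor, training_rmse, validation_rmse = train_nn_regression_model(...)
# The lengths below are placeholders, not real measurements.
new_lengths = pd.DataFrame({'L1': [1.0], 'L2': [1.0], 'L3': [1.0]})
dummy_targets = pd.Series([0.0])  # my_input_fn always expects a target column
predict_fn = lambda: my_input_fn(new_lengths, dummy_targets,
                                 num_epochs=1, shuffle=False)
predicted_x = [item['predictions'][0]
               for item in dnn_regressor.predict(input_fn=predict_fn)]
print(predicted_x)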

The parameters were all tuned blindly, and I have already forgotten what many of them mean. As for the results, the accuracy seems high at some positions and only average at others, which probably comes down to how the parameters were tuned.

Latest result:

Training model...
RMSE (on training data):
  period 00 : 0.1325
  period 01 : 0.0952
  period 02 : 0.0720
  period 03 : 0.0725
  period 04 : 0.0535
  period 05 : 0.0526
  period 06 : 0.0472
  period 07 : 0.0419
  period 08 : 0.0376
  period 09 : 0.0358
  period 10 : 0.0324
  period 11 : 0.0368
  period 12 : 0.0385
  period 13 : 0.0289
  period 14 : 0.0311
  period 15 : 0.0251
  period 16 : 0.0272
  period 17 : 0.0290
  period 18 : 0.0256
  period 19 : 0.0194
Model training finished.
Final RMSE (on training data):   0.0194
Final RMSE (on validation data): 0.0195

Questions:

1. How can I output X, Y and Z at the same time? Right now only X is predicted; when I tried to change the code I got errors, probably because I don't understand it deeply enough yet. (One possible approach is sketched after the questions.)

2. How should the parameters be tuned? (A hedged starting point is sketched below.)
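
For question 1, tf.estimator.DNNRegressor accepts a label_dimension argument, so one possible approach is to train a single network that predicts X, Y and Z together. The sketch below has not been run against this data; my_multi_input_fn is a hypothetical variant of my_input_fn that feeds all three target columns instead of only "X".

# Sketch only (not run on this data): a single DNNRegressor that predicts
# X, Y and Z together via label_dimension=3 (TensorFlow 1.x estimator API).
# my_multi_input_fn is a hypothetical variant of my_input_fn above.

def my_multi_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
    # Convert the pandas features into a dict of numpy arrays.
    features = {key: np.array(value) for key, value in dict(features).items()}
    # Stack all three target columns into an (N, 3) array.
    labels = np.array(targets[["X", "Y", "Z"]], dtype=np.float32)
    ds = Dataset.from_tensor_slices((features, labels))
    ds = ds.batch(batch_size).repeat(num_epochs)
    if shuffle:
        ds = ds.shuffle(10000)
    return ds.make_one_shot_iterator().get_next()

multi_regressor = tf.estimator.DNNRegressor(
    feature_columns=construct_feature_columns(training_examples),
    hidden_units=[13, 13, 10, 10, 5, 5],
    label_dimension=3,  # one output each for X, Y and Z
    optimizer=tf.contrib.estimator.clip_gradients_by_norm(
        tf.train.GradientDescentOptimizer(learning_rate=0.003), 5.0))

multi_regressor.train(
    input_fn=lambda: my_multi_input_fn(training_examples, training_targets,
                                       batch_size=50),
    steps=2000)

predictions = multi_regressor.predict(
    input_fn=lambda: my_multi_input_fn(validation_examples, validation_targets,
                                       num_epochs=1, shuffle=False))
predicted_xyz = np.array([item['predictions'] for item in predictions])  # shape (N, 3)

With label_dimension=3, each prediction item carries a 3-element array, so the RMSE would have to be computed per column (or over all three columns at once) instead of only against the "X" column.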
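
For question 2, there is no single right answer; the usual knobs are the learning rate, batch size, number of steps and the hidden_units list. An adaptive optimizer such as Adam usually needs less manual tuning of the learning rate than plain gradient descent. The values below are only an untested, illustrative starting point for this data, not tuned results.

# Sketch only: same training function, but with an Adam optimizer and a
# smaller network. These values are illustrative starting points, not tuned results.
_ = train_nn_regression_model(
    my_optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
    steps=2000,
    batch_size=50,
    hidden_units=[32, 32, 16],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets)

Another common step is to normalize the input rod lengths to a similar scale before training, which tends to make gradient descent less sensitive to the exact learning rate.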
