Training Your Own DNN with TensorFlow

A while ago I worked through Google's open-source TensorFlow tutorials. I more or less finished them, but never had a chance to put them to use. Recently I have been solving a robot arm that resembles a parallel mechanism, and the forward kinematics turned out to be a real problem. My senior labmate had already built the model in MATLAB: given the three rod lengths it returns the end-effector position. He computed the arm's workspace and saved it to CSV files, one testdata and one traindata (I later ended up using testdata as the validation set). I want to build a DNN to fit this model.

Here is my code; most of it is adapted from the code in the TensorFlow tutorials.

import math

from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset

tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format


def construct_feature_columns(input_features):
  """Construct the TensorFlow Feature Columns.
​
  Args:
    input_features: The names of the numerical input features to use.
  Returns:
    A set of feature columns
  """ 
  return set([tf.feature_column.numeric_column(my_feature)
              for my_feature in input_features])

def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
    """Trains a linear regression model of one feature.
  
    Args:
      features: pandas DataFrame of features
      targets: pandas DataFrame of targets
      batch_size: Size of batches to be passed to the model
      shuffle: True or False. Whether to shuffle the data.
      num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
    Returns:
      Tuple of (features, labels) for next data batch
    """
    
    # Convert pandas data into a dict of np arrays.
    features = {key:np.array(value) for key,value in dict(features).items()}                                           
 
    # Construct a dataset, and configure batching/repeating
    ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
    ds = ds.batch(batch_size).repeat(num_epochs)
    
    # Shuffle the data, if specified
    if shuffle:
      ds = ds.shuffle(10000)
    
    # Return the next batch of data
    features, labels = ds.make_one_shot_iterator().get_next()
    return features, labels

train_data=pd.read_csv('traindata.csv')
test_data=pd.read_csv('testdata.csv')
training_examples=pd.DataFrame()
training_targets=pd.DataFrame()
validation_examples=pd.DataFrame()
validation_targets=pd.DataFrame()

training_examples['L1'] = train_data["L1"]
training_examples['L2'] = train_data["L2"]
training_examples['L3'] = train_data["L3"]
training_targets['X'] = train_data["X"]
training_targets['Y'] = train_data["Y"]
training_targets['Z'] = train_data["Z"]
validation_examples['L1'] = test_data["L1"]
validation_examples['L2'] = test_data["L2"]
validation_examples['L3'] = test_data["L3"]
validation_targets['X'] = test_data["X"]
validation_targets['Y'] = test_data["Y"]
validation_targets['Z'] = test_data["Z"]

validation_targets["X"]
0      0.0
1     -0.2
2     -0.3
3     -0.5
4     -0.6
      ... 
120    0.7
121    0.5
122    0.3
123    0.2
124    0.0
Name: X, Length: 125, dtype: float64

def train_nn_regression_model(
    my_optimizer,
    steps,
    batch_size,
    hidden_units,
    training_examples,
    training_targets,
    validation_examples,
    validation_targets):
  """
  Trains a neural network regression model.
  
  In addition to training, this function also prints training progress information,
  as well as a plot of the training and validation loss over time.
  
  Args:
    my_optimizer: An instance of `tf.train.Optimizer`, the optimizer to use.
    steps: A non-zero `int`, the total number of training steps. A training step
      consists of a forward and backward pass using a single batch.
    batch_size: A non-zero `int`, the batch size.
    hidden_units: A `list` of int values, specifying the number of neurons in each layer.
    training_examples: A `DataFrame` containing the input feature columns
      (the rod lengths L1, L2, L3) to use for training.
    training_targets: A `DataFrame` containing the target columns; only the
      "X" column is used as the target for training.
    validation_examples: A `DataFrame` containing the input feature columns
      to use for validation.
    validation_targets: A `DataFrame` containing the target columns; only the
      "X" column is used as the target for validation.
      
  Returns:
    A tuple `(estimator, training_losses, validation_losses)`:
      estimator: the trained `DNNRegressor` object.
      training_losses: a `list` containing the training loss values taken during training.
      validation_losses: a `list` containing the validation loss values taken during training.
  """
  # Stash the final predictions in globals so they can be inspected after training.
  global training_predictions_now
  global validation_predictions_now
  periods = 20
  steps_per_period = steps / periods
  
  # Create a DNNRegressor object.
  my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
  dnn_regressor = tf.estimator.DNNRegressor(
      feature_columns=construct_feature_columns(training_examples),
      hidden_units=hidden_units,
      optimizer=my_optimizer
  )
  
  # Create input functions (only the X coordinate is used as the target here).
  training_input_fn = lambda: my_input_fn(training_examples, 
                                          training_targets["X"], 
                                          batch_size=batch_size)
  predict_training_input_fn = lambda: my_input_fn(training_examples, 
                                                  training_targets["X"], 
                                                  num_epochs=1, 
                                                  shuffle=False)
  predict_validation_input_fn = lambda: my_input_fn(validation_examples, 
                                                    validation_targets["X"], 
                                                    num_epochs=1, 
                                                     shuffle=False)

  # Train the model, but do so inside a loop so that we can periodically assess
  # loss metrics.
  print ("Training model...")
  print ("RMSE (on training data):")
  training_rmse = []
  validation_rmse = []
  for period in range (0, periods):
    # Train the model, starting from the prior state.
    dnn_regressor.train(
        input_fn=training_input_fn,
        steps=steps_per_period
    )
    # Take a break and compute predictions.
    training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
    training_predictions = np.array([item['predictions'][0] for item in training_predictions])
    
    validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
    validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
    
    # Compute training and validation loss.
    training_root_mean_squared_error = math.sqrt(
        metrics.mean_squared_error(training_predictions, training_targets['X']))
    validation_root_mean_squared_error = math.sqrt(
        metrics.mean_squared_error(validation_predictions, validation_targets['X']))
    # Occasionally print the current loss.
    print ("  period %02d : %0.4f" % (period, training_root_mean_squared_error))
    # Add the loss metrics from this period to our list.
    training_rmse.append(training_root_mean_squared_error)
    validation_rmse.append(validation_root_mean_squared_error)
  print ("Model training finished.")
​
  # Output a graph of loss metrics over periods.
  plt.ylabel("RMSE")
  plt.xlabel("Periods")
  plt.title("Root Mean Squared Error vs. Periods")
  plt.tight_layout()
  plt.plot(training_rmse, label="training")
  plt.plot(validation_rmse, label="validation")
  plt.legend()

  print ("Final RMSE (on training data):   %0.4f" % training_root_mean_squared_error)
  print ("Final RMSE (on validation data): %0.4f" % validation_root_mean_squared_error)
  training_predictions_now=training_predictions
  validation_predictions_now=validation_predictions
  return dnn_regressor, training_rmse, validation_rmse

_ = train_nn_regression_model(
    my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.003),
    steps=2000,
    batch_size=50,
    hidden_units=[13, 13, 10, 10, 5, 5],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets)

The hyperparameters were basically set by trial and error, and I have forgotten what many of them mean. As for the results, the accuracy is quite high at some positions and only so-so at others, which is probably a matter of tuning.
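Since train_nn_regression_model stashes the last validation predictions in validation_predictions_now, one quick way to see where the accuracy drops is to look at the per-sample error over the validation set. A minimal sketch, assuming the model above has just been trained; showing the five worst points is an arbitrary choice:

# Per-sample absolute error on the X coordinate over the validation set.
abs_error = np.abs(validation_predictions_now - validation_targets['X'].values)

# Show the five worst validation points together with their rod lengths.
worst = np.argsort(abs_error)[-5:]
print(validation_examples.iloc[worst])
print(abs_error[worst])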

Here is one run:

Training model...
RMSE (on training data):
  period 00 : 0.1325
  period 01 : 0.0952
  period 02 : 0.0720
  period 03 : 0.0725
  period 04 : 0.0535
  period 05 : 0.0526
  period 06 : 0.0472
  period 07 : 0.0419
  period 08 : 0.0376
  period 09 : 0.0358
  period 10 : 0.0324
  period 11 : 0.0368
  period 12 : 0.0385
  period 13 : 0.0289
  period 14 : 0.0311
  period 15 : 0.0251
  period 16 : 0.0272
  period 17 : 0.0290
  period 18 : 0.0256
  period 19 : 0.0194
Model training finished.
Final RMSE (on training data):   0.0194
Final RMSE (on validation data): 0.0195
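The trained DNNRegressor returned by train_nn_regression_model can also be reused to predict the X coordinate for new rod lengths (the call above throws it away by assigning to _, so keep the first return value instead). A minimal sketch that reuses my_input_fn; the rod lengths below are made up purely for illustration:

# Keep the estimator instead of discarding it, e.g.:
# dnn_regressor, training_rmse, validation_rmse = train_nn_regression_model(...)

# Hypothetical rod lengths, for illustration only.
new_examples = pd.DataFrame({'L1': [0.42], 'L2': [0.38], 'L3': [0.40]})

# my_input_fn expects a target column, but predict() ignores it, so pass a dummy.
dummy_targets = pd.Series([0.0])

predict_input_fn = lambda: my_input_fn(new_examples, dummy_targets,
                                       num_epochs=1, shuffle=False)
predictions = dnn_regressor.predict(input_fn=predict_input_fn)
predicted_x = np.array([item['predictions'][0] for item in predictions])
print(predicted_x)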

Questions:

1. How do I output X, Y and Z at the same time? Right now I only output X; when I tried to change the code it just errored out, so I clearly don't understand the code deeply enough yet.
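For question 1, one option (a sketch I have not run against this exact data) is to give DNNRegressor a label_dimension=3 and feed all three target columns at once; the input function then has to yield labels of shape [batch, 3], for example by converting the targets DataFrame to a NumPy array. The names my_multi_output_input_fn and multi_output_regressor below are new, not from the code above:

def my_multi_output_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
    """Like my_input_fn, but yields all three target columns (X, Y, Z) at once."""
    features = {key: np.array(value) for key, value in dict(features).items()}
    labels = np.array(targets[['X', 'Y', 'Z']], dtype=np.float32)  # shape [N, 3]

    ds = Dataset.from_tensor_slices((features, labels))
    ds = ds.batch(batch_size).repeat(num_epochs)
    if shuffle:
        ds = ds.shuffle(10000)
    return ds.make_one_shot_iterator().get_next()

multi_output_regressor = tf.estimator.DNNRegressor(
    feature_columns=construct_feature_columns(training_examples),
    hidden_units=[13, 13, 10, 10, 5, 5],
    label_dimension=3,  # predict X, Y and Z together
    optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.003))

multi_output_regressor.train(
    input_fn=lambda: my_multi_output_input_fn(training_examples, training_targets,
                                              batch_size=50),
    steps=2000)

# Each prediction is now a length-3 vector [X, Y, Z].
preds = multi_output_regressor.predict(
    input_fn=lambda: my_multi_output_input_fn(validation_examples, validation_targets,
                                              num_epochs=1, shuffle=False))
predicted_xyz = np.array([item['predictions'] for item in preds])

The RMSE can then be computed per coordinate by running metrics.mean_squared_error on each column of predicted_xyz against the matching target column.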

2. How should the hyperparameters be tuned?
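For question 2 there is no single recipe, but a simple starting point is to sweep a few learning rates and hidden-layer layouts with the existing train_nn_regression_model and compare the final validation RMSE. A rough sketch; the candidate values below are only examples:

# Coarse grid search; each call retrains from scratch, so this can take a while.
candidate_learning_rates = [0.001, 0.003, 0.01]
candidate_hidden_units = [[13, 13, 10, 10, 5, 5], [32, 32, 16], [64, 32]]

results = []
for lr in candidate_learning_rates:
    for hidden in candidate_hidden_units:
        _, _, validation_rmse = train_nn_regression_model(
            my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=lr),
            steps=2000,
            batch_size=50,
            hidden_units=hidden,
            training_examples=training_examples,
            training_targets=training_targets,
            validation_examples=validation_examples,
            validation_targets=validation_targets)
        results.append((lr, hidden, validation_rmse[-1]))

# Print the configurations ordered by final validation RMSE, best first.
for lr, hidden, rmse in sorted(results, key=lambda r: r[2]):
    print("lr=%g hidden=%s -> validation RMSE %.4f" % (lr, hidden, rmse))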


Reposted from blog.csdn.net/sinat_37939098/article/details/80084665