Keras model

About the Keras model

There are two main types of models in Keras: the Sequential model, and the Model class used with the functional API. These models share many common methods and properties:
• model.layers is a flattened list of the layers that make up the model.
• model.inputs is the list of the model's input tensors.
• model.outputs is the list of the model's output tensors.
• model.summary(): prints a summary representation of the model. It is a shortcut for utils.print_summary.
• model.get_config(): returns a dictionary containing the model's configuration. The model can be re-instantiated from this configuration:
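As a minimal sketch of this round-trip (assuming the `keras` package is installed; the same calls exist under `tensorflow.keras`, and the layer sizes are arbitrary):

```python
# Sketch: shared model properties and the get_config round-trip.
# Assumes the `keras` package; layer sizes are illustrative.
from keras.layers import Input, Dense
from keras.models import Sequential

model = Sequential([Input(shape=(8,)),
                    Dense(4, activation='relu'),
                    Dense(1)])

print(len(model.layers))    # flattened list of the model's layers
model.summary()             # shortcut for utils.print_summary

config = model.get_config()               # describes the architecture
rebuilt = Sequential.from_config(config)  # same architecture, fresh weights
print(len(rebuilt.layers) == len(model.layers))
```

Note that from_config restores only the architecture; weights are re-initialized (use get_weights()/set_weights() to carry those over).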

Model class methods

compile

compile(self, optimizer, loss, metrics=None, loss_weights=None,
        sample_weight_mode=None, weighted_metrics=None, target_tensors=None)

Parameters

• optimizer: string (name of an optimizer) or optimizer object. See optimizers for details.
• loss: string (name of an objective function) or objective function. See losses for details. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of loss functions. The loss value that the model will minimize is the sum of all individual losses.
• metrics: list of metrics to be evaluated by the model during training and testing. Typically you would use metrics=['accuracy']. To specify different metrics for different outputs of a multi-output model, you can also pass a dictionary, such as metrics={'output_a': 'accuracy'}.
• loss_weights: optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that the model will minimize is the weighted sum of all individual losses, weighted by the loss_weights coefficients. If it is a list, it should have a 1:1 mapping to the model's outputs. If it is a dictionary, it should map output names (strings) to scalar coefficients.
• sample_weight_mode: if you need to apply timestep-wise sample weighting (2D weights), set this to "temporal". None defaults to sample-wise weights (1D). If the model has multiple outputs, you can use a different sample_weight_mode on each output by passing a dictionary or a list of modes.
• weighted_metrics: list of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.
• target_tensors: by default, Keras will create placeholders for the model's targets, which will be fed with the target data during training. If instead you would like to use your own target tensors (in which case Keras will not expect external Numpy data for these targets at training time), you can specify them via the target_tensors argument. It can be a single tensor (for a single-output model), a list of tensors, or a dictionary mapping output names to target tensors.
• **kwargs: when using the Theano/CNTK backend, these arguments are passed to K.function. When using the TensorFlow backend, they are passed to tf.Session.run.
Exceptions
• ValueError: if optimizer, loss, metrics, or sample_weight_mode is invalid.
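A hedged sketch of a compile() call on a two-output functional model (assuming the `keras` package; the output names 'main' and 'aux' and all layer sizes are illustrative):

```python
# Sketch: compiling a two-output functional model with per-output losses.
# Assumes the `keras` package; names 'main'/'aux' are illustrative.
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(8,))
hidden = Dense(16, activation='relu')(inputs)
main = Dense(1, activation='sigmoid', name='main')(hidden)
aux = Dense(1, name='aux')(hidden)
model = Model(inputs, [main, aux])

# Strings (names) or objects are accepted; dictionaries map output
# names to per-output losses and loss weights.
model.compile(optimizer='rmsprop',
              loss={'main': 'binary_crossentropy', 'aux': 'mse'},
              loss_weights={'main': 1.0, 'aux': 0.2})
```

The total loss minimized here is 1.0 * binary_crossentropy('main') + 0.2 * mse('aux'), per the loss_weights semantics described above.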

fit

fit(self, x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None,
    validation_split=0.0, validation_data=None, shuffle=True,
    class_weight=None, sample_weight=None, initial_epoch=0,
    steps_per_epoch=None, validation_steps=None)

Trains the model for a fixed number of epochs (iterations over the dataset).

Arguments
• x: Numpy array of training data (if the model has a single input), or a list of Numpy arrays (if the model has multiple inputs). If the input layers of the model are named, you can also pass a dictionary mapping input names to Numpy arrays. x can be None (default) if the data is fed from framework-native tensors (e.g. TensorFlow data tensors).
• y: Numpy array of target (label) data (if the model has a single output), or a list of Numpy arrays (if the model has multiple outputs). If the output layers of the model are named, you can also pass a dictionary mapping output names to Numpy arrays. y can be None (default) if the data is fed from framework-native tensors (e.g. TensorFlow data tensors).
• batch_size: integer or None. Number of samples per gradient update. If unspecified, it defaults to 32.
• epochs: integer. Number of epochs to train the model. An epoch is one iteration over the entire x and y data. Note that, together with initial_epoch, epochs is to be understood as the "final epoch": the model is not trained for epochs iterations, but merely until the epoch of index epochs is reached.
• verbose: 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch.
• callbacks: list of keras.callbacks.Callback instances. Callbacks to apply during training. See callbacks for details.
• validation_split: float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling.
• validation_data: tuple (x_val, y_val) or tuple (x_val, y_val, val_sample_weights) on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. This parameter overrides validation_split.
• shuffle: boolean (whether to shuffle the training data before each epoch) or string ('batch'). 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles within batch-sized chunks. This parameter has no effect when steps_per_epoch is not None.
• class_weight: optional dictionary mapping class indices (integers) to weight (float) values, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
• sample_weight: optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or, in the case of temporal data, a 2D array of shape (samples, sequence_length) to apply a different weight to every timestep of every sample. In this case you should make sure to specify sample_weight_mode="temporal" in compile().
• initial_epoch: integer. Epoch at which to start training (useful for resuming a previous training run).
• steps_per_epoch: integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined.
• validation_steps: only relevant if steps_per_epoch is specified. Total number of steps (batches of samples) to validate before stopping.
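The parameters above can be sketched in a minimal training run (assuming the `keras` package; the data is random and the sizes are arbitrary):

```python
# Sketch: fit() with a validation split, on random data.
# Assumes the `keras` package; all sizes are illustrative.
import numpy as np
from keras.layers import Input, Dense
from keras.models import Sequential

rng = np.random.default_rng(0)
x = rng.random((100, 8)).astype('float32')
y = (rng.random((100, 1)) > 0.5).astype('float32')

model = Sequential([Input(shape=(8,)),
                    Dense(8, activation='relu'),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

# The last 20% of samples are held out and evaluated after each epoch.
history = model.fit(x, y, batch_size=16, epochs=2,
                    validation_split=0.2, verbose=0)
print(history.history['loss'])   # one loss value per epoch
```

fit returns a History object; history.history maps each loss/metric name (including the val_-prefixed validation entries) to one value per epoch.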

evaluate

evaluate(self, x=None, y=None, batch_size=None, verbose=1, sample_weight=None,
         steps=None)

Returns the loss value and metric values for the model in test mode.
Computation is done in batches.

Parameters

• x: Numpy array of test data (if the model has a single input), or a list of Numpy arrays (if the model has multiple inputs). If the input layers of the model are named, you can also pass a dictionary mapping input names to Numpy arrays. x can be None (default) if the data is fed from framework-native tensors (e.g. TensorFlow data tensors).
• y: Numpy array of target (label) data, or a list of Numpy arrays (if the model has multiple outputs). If the output layers of the model are named, you can also pass a dictionary mapping output names to Numpy arrays. y can be None (default) if the data is fed from framework-native tensors (e.g. TensorFlow data tensors).
• batch_size: integer or None. Number of samples per evaluation step. If unspecified, it defaults to 32.
• verbose: 0 or 1. Verbosity mode. 0 = silent, 1 = progress bar.
• sample_weight: optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or, in the case of temporal data, a 2D array of shape (samples, sequence_length) to apply a different weight to every timestep of every sample. In this case you should make sure to specify sample_weight_mode="temporal" in compile().
• steps: integer or None. Total number of steps (batches of samples) before declaring the evaluation finished. Ignored with the default value of None.
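A minimal evaluate() sketch (assuming the `keras` package; the test data is random and the sizes are arbitrary):

```python
# Sketch: evaluating loss and a metric on held-out data.
# Assumes the `keras` package; data is random and illustrative.
import numpy as np
from keras.layers import Input, Dense
from keras.models import Sequential

x_test = np.random.rand(64, 8).astype('float32')
y_test = np.random.randint(0, 2, size=(64, 1)).astype('float32')

model = Sequential([Input(shape=(8,)),
                    Dense(8, activation='relu'),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='sgd', loss='binary_crossentropy',
              metrics=['accuracy'])

# Returns one value per compiled loss/metric, computed in batches.
loss, acc = model.evaluate(x_test, y_test, batch_size=32, verbose=0)
```

With metrics compiled in, the return value is a list (here loss and accuracy); with only a loss, a single scalar is returned.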

predict
predict(self, x, batch_size=None, verbose=0, steps=None)
Generates output predictions for the input samples.
Computation is done in batches.

Parameters
• x: input data, as a Numpy array (or a list of Numpy arrays if the model has multiple inputs).
• batch_size: integer. If unspecified, it defaults to 32.
• verbose: verbosity mode, 0 or 1.
• steps: total number of steps (batches of samples) before declaring the prediction finished. Ignored with the default value of None.
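A minimal predict() sketch (assuming the `keras` package; the model is untrained and the input is random, so only the output shape is meaningful):

```python
# Sketch: batched prediction with an untrained model on random input.
# Assumes the `keras` package; sizes are illustrative.
import numpy as np
from keras.layers import Input, Dense
from keras.models import Sequential

x = np.random.rand(10, 8).astype('float32')
model = Sequential([Input(shape=(8,)),
                    Dense(4, activation='relu'),
                    Dense(1, activation='sigmoid')])

# No compile() needed for inference; batches of 4 are processed in turn.
preds = model.predict(x, batch_size=4, verbose=0)
print(preds.shape)   # (10, 1): one prediction row per input sample
```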

train_on_batch
train_on_batch(self, x, y, sample_weight=None, class_weight=None)

Runs a single gradient update on a single batch of samples.

Parameters
• x: Numpy array of training data (if the model has a single input), or a list of Numpy arrays (if the model has multiple inputs). If the input layers of the model are named, you can also pass a dictionary mapping input names to Numpy arrays.
• y: Numpy array of target (label) data, or a list of Numpy arrays (if the model has multiple outputs). If the output layers of the model are named, you can also pass a dictionary mapping output names to Numpy arrays.
• sample_weight: optional array of the same length as x, containing the weight to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array of shape (samples, sequence_length) to apply a different weight to every timestep of every sample. In this case you should make sure to specify sample_weight_mode="temporal" in compile().
• class_weight: optional dictionary mapping class indices (integers) to weight (float) values, used for weighting the model's loss function during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class.

test_on_batch
test_on_batch(self, x, y, sample_weight=None)
Test the model on a batch of samples.

Parameters
• x: Numpy array of test data (if the model has a single input), or a list of Numpy arrays (if the model has multiple inputs). If the input layers of the model are named, you can also pass a dictionary mapping input names to Numpy arrays.
• y: Numpy array of target (label) data, or a list of Numpy arrays (if the model has multiple outputs). If the output layers of the model are named, you can also pass a dictionary mapping output names to Numpy arrays.
• sample_weight: optional array of the same length as x, containing the weight to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array of shape (samples, sequence_length) to apply a different weight to every timestep of every sample.
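A sketch contrasting test_on_batch with train_on_batch (assuming the `keras` package; the batch data is random): evaluation computes the loss without updating the weights.

```python
# Sketch: test_on_batch computes the loss without a gradient update.
# Assumes the `keras` package; data is random and illustrative.
import numpy as np
from keras.layers import Input, Dense
from keras.models import Sequential

model = Sequential([Input(shape=(4,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='sgd', loss='binary_crossentropy')

xb = np.random.rand(16, 4).astype('float32')
yb = np.random.randint(0, 2, size=(16, 1)).astype('float32')

before = model.get_weights()
loss = float(model.test_on_batch(xb, yb))
after = model.get_weights()

same = all(np.array_equal(a, b) for a, b in zip(before, after))
print(same)   # True: evaluating a batch does not change the weights
```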


Origin blog.csdn.net/as1490047935/article/details/105059940