tf.keras.Model and tf.keras.Sequential

There are two ways to initialize the Model:

1. Using the functional API: start with an Input, specify the forward pass, and finally build the model from the inputs and outputs.

2. By subclassing Model (similar to PyTorch's nn.Module): define the layers in __init__, then implement the forward pass in the call method.
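Both construction styles can be sketched as follows (a minimal sketch; the layer sizes, activations, and class name are arbitrary choices, not from the original post):

```python
import tensorflow as tf

# 1. Functional API: start from an Input, describe the forward pass,
#    then build the Model from the inputs and outputs.
inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(8, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
func_model = tf.keras.Model(inputs=inputs, outputs=outputs)

# 2. Subclassing (similar to PyTorch's nn.Module): define the layers
#    in __init__ and the forward pass in call().
class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(8, activation="relu")
        self.dense2 = tf.keras.layers.Dense(2, activation="softmax")

    def call(self, inputs):
        return self.dense2(self.dense1(inputs))

sub_model = MyModel()
```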

 

 

Methods:

1. compile

Configures the model for training.

optimizer: string name of an optimizer, or an instance from tf.keras.optimizers.

loss: string name of a loss function, an objective function, or a tf.keras.losses.Loss instance. The objective function must have the signature scalar_loss = fn(y_true, y_pred). If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses.

metrics: the metrics evaluated during training and testing. Typically metrics = ['accuracy'] is enough. For multiple outputs, a dictionary can be passed: metrics = {'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list (its length must match the number of outputs): metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']].

loss_weights: optional list or dictionary weighting the different losses; the final loss will be the weighted sum of the individual losses.

sample_weight_mode: set to 'temporal' if you need timestep-wise sample weighting; the default is None (sample-wise weighting). If the model has multiple outputs, you can use a different sample_weight_mode on each output.

weighted_metrics: a list of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.

target_tensors: by default, Keras creates placeholders for the model's targets and feeds the target data into them during training. If you want to use your own target tensors instead, pass them here: a single tensor (for single-output models), a list of tensors, or a dictionary mapping output names to tensors.

distribute: not supported in TF 2.
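A minimal illustration of the common arguments, with the optimizer and loss given as string names and metrics as a list (the architecture is an arbitrary example):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

model.compile(
    optimizer="adam",                        # string optimizer name
    loss="sparse_categorical_crossentropy",  # string loss name
    metrics=["accuracy"],                    # metrics used in training/testing
)
```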

 

2. evaluate

x: input data. It can be a NumPy array (or a list of arrays for multiple inputs), a TF tensor (or a list of tensors), a dictionary, a tf.data dataset, a generator, or a keras.utils.Sequence instance.

y: target data. Similar to x.

batch_size: integer or None. If not specified, the default is 32. Do not specify this argument when the input is a symbolic tensor, a dataset, a generator, or a keras.utils.Sequence instance.

verbose: 0 or 1. Verbosity mode: 0 = silent, 1 = progress bar.

sample_weight: optional weights applied in the loss function.

steps: integer or None. The total number of steps (batches of samples) before declaring the evaluation finished.

callbacks: a list of keras.callbacks.Callback instances.

max_queue_size: integer, used only when the input is a generator or a keras.utils.Sequence; the maximum size of the generator queue. If not specified, the default is 10.

workers: integer, used only when the input is a generator or a keras.utils.Sequence. The maximum number of processes to spin up when using process-based threading. The default is 1.

use_multiprocessing: boolean, used only when the input is a generator or a keras.utils.Sequence. If True, use process-based threading.
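A minimal evaluate call on NumPy arrays might look like this (random data and an untrained model, so the returned numbers are meaningless; the architecture is an arbitrary example):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 3, size=(64,))

# Returns [loss, accuracy] because one metric was configured.
loss, acc = model.evaluate(x, y, batch_size=16, verbose=0)
```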

 

3. fit

Trains the model by iterating on the data.

The first few parameters are used the same way as above.

validation_split: float between 0 and 1. The fraction of the data set aside for validation; this portion is not used for training. It is taken from the last samples of x and y, before shuffling.

validation_data: data the model is validated on; it does not participate in training. When specified, it overrides validation_split. It can be a tuple (x_val, y_val) or (x_val, y_val, val_sample_weights), or a dataset.

class_weight: optional. A dictionary mapping class indices (integers) to weight (float) values, used to weight the loss function (during training only). This can help tell the model to "pay more attention" to samples from under-represented classes.

steps_per_epoch: integer or None. The number of iterations per epoch. When the training data is a tensor-like input, this defaults to data size / batch size. When x is a tf.data dataset and this argument is not specified, each epoch runs until the input data is exhausted.

validation_steps: only relevant when validation_data is provided.

validation_freq: only relevant when validation_data is provided. If an integer, it specifies how many training epochs to run before a new validation run is performed. If a list such as validation_freq=[1, 2, 10], validation runs after the 1st, 2nd, and 10th epochs.
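A short fit call using validation_split might look like this (random data; the architecture and sizes are arbitrary):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 3, size=(100,))

# The last 20% of the samples is held out for validation.
history = model.fit(x, y, epochs=2, batch_size=16,
                    validation_split=0.2, verbose=0)
```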

 

4. get_layer

Retrieves a layer by name or index. If both are provided, index takes precedence. Indices are based on the order of horizontal graph traversal (bottom-up).
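Retrieval by name and by index can be sketched as follows (the layer names "hidden" and "out" are arbitrary examples):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, name="hidden"),
    tf.keras.layers.Dense(2, name="out"),
])

by_name = model.get_layer(name="hidden")
by_index = model.get_layer(index=0)  # same layer, retrieved by index
```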

 

5. load_weights

Loads weights from a TF or HDF5 weight file. If by_name is False, weights are loaded based on the network topology, which means the architecture should be identical to the one the weights were saved from. Layers without weights are not counted in the topology, so adding or removing such layers makes no difference. If by_name is True, only layers with matching names have their weights loaded; this is useful for fine-tuning or transfer learning, where some layers of the model are changed. Only topological loading (by_name=False) is supported for weights in the TF format. Note that loading from the TF and HDF5 formats differs slightly: HDF5 is based on a flattened list of weights, while the TF format is based on the names of the layers defined in the model. skip_mismatch is a boolean, effective only when by_name=True: layers whose weights do not match in shape or count are skipped.
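A save/load round trip in the HDF5 format might look like this (a sketch; the file name and layer shapes are hypothetical, and the format is inferred from the file suffix):

```python
import os
import tempfile
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(4, name="dense_a"),
])

# HDF5 format, inferred from the file suffix.
path = os.path.join(tempfile.mkdtemp(), "w.weights.h5")
model.save_weights(path)

# A model with the same topology can load the weights with the default
# by_name=False (topological loading).
clone = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(4, name="dense_a"),
])
clone.load_weights(path)
```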

 

6. predict

Generates predictions for the input samples; the parameters are largely the same as above.

 

7.  predict_on_batch

Returns predictions for a single batch of samples. The requirements on x are the same as before.
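For example (a sketch with random data; unlike predict, this runs a single forward pass with no batching logic):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

batch = np.random.rand(8, 4).astype("float32")
preds = model.predict_on_batch(batch)  # predictions for this one batch
```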

 

8. reset_metrics and reset_states: reset the state of the metrics, and reset the model's states, respectively.

 

9. save

Saves the model to a TF or HDF5 file. The saved file contains the model's architecture, its weights, its training configuration (as passed to compile), and the state of the optimizer, so training can be resumed where it left off.

A saved model can be loaded again via keras.models.load_model. The model returned by load_model is a compiled model, ready to use (unless the saved model was never compiled). Models built with the Sequential and functional APIs can be saved in both the HDF5 and SavedModel formats; subclassed models can only be saved in the SavedModel format.

The save_format parameter can be 'tf' or 'h5'; it defaults to 'tf' in TF 2 and 'h5' in TF 1.

The signatures argument is available only for the 'tf' format.
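A save/load round trip in the HDF5 format might look like this (a sketch; the file name is hypothetical, the format is inferred from the .h5 suffix, and h5py must be installed):

```python
import os
import tempfile
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")

path = os.path.join(tempfile.mkdtemp(), "model.h5")
model.save(path)  # HDF5 format, inferred from the .h5 suffix

# load_model returns a compiled, ready-to-use model.
restored = tf.keras.models.load_model(path)

x = np.ones((1, 3), dtype="float32")
```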

 

 

10. save_weights

Saves the weights of all layers; save_format specifies whether to save in the HDF5 or TF format. When saved in HDF5 format, the weight file stores the layer names and, for each layer, its weight names and weight values.

When saving in TF format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any layer instances or optimizer instances assigned to object attributes. For networks built with tf.keras.Model(inputs, outputs), the layer instances used by the network are tracked/saved automatically. For user-defined classes that inherit from tf.keras.Model, layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

Although the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded with Model.load_weights; checkpoints saved with tf.train.Checkpoint.save should be restored with the corresponding tf.train.Checkpoint.restore.

The TensorFlow format matches objects and variables starting from a root object, greedily matching attribute names. For save_weights the root is the model itself (self); for Model.save it is the model, and for Checkpoint.save it is the checkpoint, even if the checkpoint has a model attached. This means that saving a tf.keras.Model with save_weights and loading it into a tf.train.Checkpoint with a model attached (or vice versa) will not match the model's variables. See the guide to training checkpoints for details on the TensorFlow format.

 

11. summary

Prints a summary of the network. positions: the relative or absolute positions of the log elements in each line. Defaults to [.33, .55, .67, 1.] if not provided.

 

12. test_on_batch

Tests the model on a single batch of samples. If reset_metrics=True, the metrics returned cover only this batch; if False, the metrics accumulate statefully across batches.

 

13. to_json和to_yaml

Return the network configuration in the corresponding format. The network can be reconstructed with keras.models.model_from_yaml(yaml_string, custom_objects={}) and keras.models.model_from_json(json_string, custom_objects={}), respectively.
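A JSON round trip can be sketched as follows (only the architecture is serialized, not the weights; note that to_yaml has been removed in recent TensorFlow releases, so the sketch uses JSON only; the layer name is an arbitrary example):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(2, name="dense_cfg"),
])

json_string = model.to_json()  # architecture only, no weights
rebuilt = tf.keras.models.model_from_json(json_string)
```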

 

14. train_on_batch

Runs a single gradient update on a single batch of data. Returns the training loss.
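For example (a sketch with random data; each call performs exactly one gradient update and returns the loss for that batch):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

x_batch = np.random.rand(16, 4).astype("float32")
y_batch = np.random.randint(0, 3, size=(16,))

loss = model.train_on_batch(x_batch, y_batch)  # one gradient update
```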

 

 

tf.keras.Sequential

Its methods are essentially the same as those of Model.

 


Origin www.cnblogs.com/king-lps/p/12743485.html