Differences between Keras 1.x and Keras 2.x

While learning Keras recently, I noticed that the code found online is a mixed bag: some of it is written for 1.x and some for 2.x, so I looked up the differences.

Official link: https://github.com/keras-team/keras/wiki/Keras-2.0-release-notes

The main differences are as follows:

Keras 2 release notes:

This document details changes, in particular API changes, occurring from Keras 1 to Keras 2.

Note: in the lists below, "->" indicates that a term has been renamed.

Training

  • The nb_epoch argument has been renamed epochs everywhere.
  • The methods fit_generator, evaluate_generator and predict_generator now work by drawing a number of batches from a generator (i.e. a number of training steps), rather than a number of samples (see the sketch after this list).
    • samples_per_epoch was changed to steps_per_epoch in fit_generator. It now refers to the number of batches to draw from the generator before an epoch is considered done.
    • nb_val_samples was renamed validation_steps in fit_generator.
    • val_samples was renamed steps in evaluate_generator and predict_generator.
  • It is now possible to manually add a loss to a model by calling model.add_loss(loss_tensor). The loss is added to the other losses of the model and minimized during training.
  • It is also possible to not apply any loss to a specific model output. If you pass None as the loss argument for an output (e.g. in compile, loss={'output_1': None, 'output_2': 'mse'}), the model will expect no Numpy arrays to be fed for this output when using fit, train_on_batch, or fit_generator. The output values are still returned as usual when using predict.
  • In TensorFlow, models can now be trained using fit if some of their inputs (or even all) are TensorFlow queues or variables, rather than placeholders. See the corresponding test in the Keras repository for specific examples.
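
To make the batch-based generator semantics concrete, here is a minimal porting sketch; the tiny model, generator and dataset size are hypothetical and exist only for illustration:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(1, input_dim=4))
    model.compile(optimizer='sgd', loss='mse')

    def batch_gen():
        # Hypothetical generator yielding one 32-sample batch per iteration.
        while True:
            x = np.random.random((32, 4))
            y = np.random.random((32, 1))
            yield x, y

    batch_size = 32
    num_train_samples = 6400  # hypothetical dataset size

    # Keras 1.x counted samples:
    # model.fit_generator(batch_gen(), samples_per_epoch=num_train_samples, nb_epoch=10)

    # Keras 2.x counts batches:
    model.fit_generator(batch_gen(),
                        steps_per_epoch=num_train_samples // batch_size,
                        epochs=10)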

Losses & metrics

  • The objectives module has been renamed losses.
  • Several legacy metric functions have been removed, namely matthews_correlation, precision, recall, fbeta_score and fmeasure.
  • Custom metric functions can no longer return a dict; they must return a single tensor (see the sketch after this list).
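
A minimal sketch of a Keras 2-style custom metric returning a single tensor; the metric name and the tiny model are made up for illustration:

    import keras.backend as K
    from keras.models import Sequential
    from keras.layers import Dense

    def mean_abs_error(y_true, y_pred):
        # Must return a single tensor, not a dict of values.
        return K.mean(K.abs(y_true - y_pred))

    model = Sequential()
    model.add(Dense(1, input_dim=4))
    model.compile(optimizer='sgd', loss='mse', metrics=[mean_abs_error])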

Models

  • Constructor arguments for Model have been renamed (see the sketch after this list):
    • input -> inputs
    • output -> outputs
  • The Sequential model no longer supports the set_input method.
  • For any model saved with Keras 2.0 or higher, weights trained with backend X will be converted to work with backend Y without any manual conversion step.
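
A minimal sketch of the renamed Model constructor arguments:

    from keras.models import Model
    from keras.layers import Input, Dense

    x = Input(shape=(8,))
    y = Dense(2)(x)

    # Keras 1.x: model = Model(input=x, output=y)
    model = Model(inputs=x, outputs=y)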

Layers

Removals

Deprecated layers MaxoutDense, Highway and TimeDistributedDense have been removed.

Call method

  • All layers that use the learning phase now support a training argument in call (a Python boolean or symbolic tensor), allowing you to specify the learning phase on a layer-by-layer basis. E.g. by calling a Dropout instance as dropout(inputs, training=True) you obtain a layer that will always apply dropout, regardless of the current global learning phase (see the sketch after this list). The training argument defaults to the global Keras learning phase everywhere.
  • The call method of layers can now take arbitrary keyword arguments, e.g. you can define a custom layer with a call signature like call(inputs, alpha=0.5), and then pass an alpha keyword argument when calling the layer (only with the functional API, naturally).
  • __call__ now makes use of TensorFlow name_scope, so that your TensorFlow graphs will look pretty and well-structured in TensorBoard.
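
A minimal sketch of the per-layer training argument:

    from keras.models import Model
    from keras.layers import Input, Dropout

    inputs = Input(shape=(8,))
    # training=True: dropout is applied even at inference time,
    # regardless of the global learning phase.
    dropped = Dropout(0.5)(inputs, training=True)
    model = Model(inputs=inputs, outputs=dropped)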

All layers taking a legacy dim_ordering argument

dim_ordering has been renamed data_format. It now takes two values: "channels_first" (formerly "th") and "channels_last" (formerly "tf").

Dense layer

Changed interface (see the sketch after this list):

  • output_dim -> units
  • init -> kernel_initializer
  • added bias_initializer argument
  • W_regularizer -> kernel_regularizer
  • b_regularizer -> bias_regularizer
  • b_constraint -> bias_constraint
  • bias -> use_bias

Dropout, SpatialDropout*D, GaussianDropout

Changed interface:

  • p -> rate
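
A minimal sketch of the Dense and Dropout renames:

    from keras.layers import Dense, Dropout
    from keras.regularizers import l2

    # Keras 1.x:
    # Dense(output_dim=64, init='glorot_uniform', W_regularizer=l2(0.01), bias=True)
    # Dropout(p=0.5)

    # Keras 2.x equivalents:
    dense = Dense(units=64,
                  kernel_initializer='glorot_uniform',
                  kernel_regularizer=l2(0.01),
                  use_bias=True)
    dropout = Dropout(rate=0.5)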

Embedding

Convolutional layers

Interface changes common to all convolutional layers (see the sketch after this list):

  • The AtrousConvolution1D and AtrousConvolution2D layers have been deprecated. Their functionality is instead supported via the dilation_rate argument in Convolution1D and Convolution2D layers.
  • Convolution* layers are renamed Conv*.
  • The Deconvolution2D layer is renamed Conv2DTranspose.
  • The Conv2DTranspose layer no longer requires an output_shape argument, making its use much easier.
  • nb_filter -> filters
  • The separate kernel dimension arguments become a single tuple argument, kernel_size. E.g. a legacy call Conv2D(10, 3, 3) becomes Conv2D(10, (3, 3)).
  • kernel_size can be set to an integer instead of a tuple, e.g. Conv2D(10, 3) is equivalent to Conv2D(10, (3, 3)).
  • subsample -> strides. Can also be set to an integer.
  • border_mode -> padding
  • init -> kernel_initializer
  • added bias_initializer argument
  • W_regularizer -> kernel_regularizer
  • b_regularizer -> bias_regularizer
  • b_constraint -> bias_constraint
  • bias -> use_bias
  • dim_ordering -> data_format
  • In the SeparableConv2D layers, init is split into depthwise_initializer and pointwise_initializer.
  • Added dilation_rate argument in Conv2D and Conv1D.
  • 1D convolution kernels are now saved as a 3D tensor (instead of 4D as before).
  • 2D and 3D convolution kernels are now saved in format spatial_dims + (input_depth, depth), even with data_format="channels_first".
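
A minimal sketch of porting a convolution layer:

    from keras.layers import Conv2D

    # Keras 1.x:
    # Convolution2D(10, 3, 3, border_mode='same', subsample=(2, 2), dim_ordering='tf')

    # Keras 2.x equivalent:
    conv = Conv2D(filters=10,
                  kernel_size=(3, 3),  # or simply 3
                  strides=(2, 2),
                  padding='same',
                  data_format='channels_last')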

Pooling1D

Changed interface:

  • pool_length -> pool_size
  • stride -> strides
  • border_mode -> padding

Pooling2D, 3D

Changed interface:

  • border_mode -> padding
  • dim_ordering -> data_format

ZeroPadding layers

The padding argument of the ZeroPadding2D and ZeroPadding3D layers must be a tuple of length 2 and 3 respectively. Entry i specifies by how much to pad spatial dimension i. If it is an integer, symmetric padding is applied; if it is a tuple of integers, asymmetric padding is applied (see the sketch below).

Upsampling1D

Changed interface:

  • length -> size
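
A minimal sketch of symmetric versus asymmetric padding in Keras 2:

    from keras.layers import ZeroPadding2D

    # Symmetric: pad 1 row on top/bottom and 2 columns on left/right.
    sym = ZeroPadding2D(padding=(1, 2))

    # Asymmetric: each entry is a (pad_before, pad_after) tuple per spatial dimension.
    asym = ZeroPadding2D(padding=((1, 0), (2, 0)))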

BatchNormalization

The mode argument of BatchNormalization has been removed; BatchNorm now only supports mode 0 (use batch metrics for feature-wise normalization during training, and use moving metrics for feature-wise normalization during testing).

Changed interface (see the sketch after this list):

  • beta_init -> beta_initializer
  • gamma_init -> gamma_initializer
  • added arguments center, scale (booleans, whether to use a beta and a gamma respectively)
  • added arguments moving_mean_initializer, moving_variance_initializer
  • added arguments beta_regularizer, gamma_regularizer
  • added arguments beta_constraint, gamma_constraint
  • attribute running_mean is renamed moving_mean
  • attribute running_std is renamed moving_variance (it is in fact a variance with the current implementation)
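
A minimal sketch of the new BatchNormalization arguments (note the Keras 2 initializer names 'zeros' and 'ones'):

    from keras.layers import BatchNormalization

    # Keras 1.x: BatchNormalization(mode=0, beta_init='zero', gamma_init='one')
    # Keras 2.x equivalent (mode is gone; only feature-wise batch norm remains):
    bn = BatchNormalization(beta_initializer='zeros',
                            gamma_initializer='ones',
                            center=True,   # learn a beta offset
                            scale=True)    # learn a gamma scale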

ConvLSTM2D

The same changes as for convolutional layers and recurrent layers apply.

PReLU

Changed interface:

  • init -> alpha_initializer

GaussianNoise

Changed interface:

  • sigma -> stddev

Recurrent layers

Changed interface (see the sketch after this list):

  • output_dim -> units
  • init -> kernel_initializer
  • inner_init -> recurrent_initializer
  • added argument bias_initializer
  • W_regularizer -> kernel_regularizer
  • b_regularizer -> bias_regularizer
  • added arguments kernel_constraint, recurrent_constraint, bias_constraint
  • dropout_W -> dropout
  • dropout_U -> recurrent_dropout
  • consume_less -> implementation. String values have been replaced with integers: implementation 0 (default), 1 or 2.
  • LSTM only: the argument forget_bias_init has been removed. Instead there is a boolean argument unit_forget_bias, defaulting to True.
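
A minimal sketch of porting an LSTM; mapping consume_less='gpu' to implementation=2 is my reading of the rough correspondence, not an official equivalence:

    from keras.layers import LSTM

    # Keras 1.x:
    # LSTM(output_dim=32, init='glorot_uniform', inner_init='orthogonal',
    #      dropout_W=0.2, dropout_U=0.2, consume_less='gpu')

    # Keras 2.x equivalent:
    lstm = LSTM(units=32,
                kernel_initializer='glorot_uniform',
                recurrent_initializer='orthogonal',
                dropout=0.2,
                recurrent_dropout=0.2,
                implementation=2)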

Lambda

The Lambda layer now supports a mask argument.

Utilities

Utilities should now be imported from keras.utils rather than from specific submodules (e.g. no more keras.utils.np_utils...).
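
For example, a minimal sketch of the new import path:

    # Keras 1.x: from keras.utils.np_utils import to_categorical
    from keras.utils import to_categorical

    labels = to_categorical([0, 1, 2], num_classes=3)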


Backend

random_normal and truncated_normal

Changed interface:

  • std -> stddev
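
A minimal sketch of the renamed argument:

    import keras.backend as K

    # Keras 1.x: K.random_normal(shape=(2, 2), mean=0.0, std=1.0)
    x = K.random_normal(shape=(2, 2), mean=0.0, stddev=1.0)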

Misc

  • In the backend, set_image_ordering and image_ordering are now set_data_format and data_format.
  • Any arguments (other than nb_epoch) prefixed with nb_ have been renamed to use the num_ prefix instead. This affects two datasets and one preprocessing utility.


Reposted from blog.csdn.net/qq_58611650/article/details/129041055