keras.layers

Dense (fully connected layer)

Syntax:

keras.layers.Dense(units, 
                   activation=None, 
                   use_bias=True,
                   kernel_initializer='glorot_uniform',
                   bias_initializer='zeros', 
                   kernel_regularizer=None, 
                   bias_regularizer=None, 
                   activity_regularizer=None, 
                   kernel_constraint=None, 
                   bias_constraint=None)

Dense implements the operation: output = activation(dot(input, kernel) + bias), where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).

Parameters:

  • units: positive integer, dimensionality of the output space.
  • activation: activation function to use (see Activations). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: initializer for the kernel weights matrix (see Initializers).
  • bias_initializer: initializer for the bias vector (see Initializers).
  • kernel_regularizer: regularizer function applied to the kernel weights matrix (see Regularizers).
  • bias_regularizer: regularizer function applied to the bias vector (see Regularizers).
  • activity_regularizer: regularizer function applied to the output of the layer (its "activation") (see Regularizers).
  • kernel_constraint: constraint function applied to the kernel weights matrix (see Constraints).
  • bias_constraint: constraint function applied to the bias vector (see Constraints).

Input shape

nD tensor with shape: (batch_size, ..., input_dim). The most common situation is a 2D input with shape (batch_size, input_dim).

Output shape

nD tensor with shape: (batch_size, ..., units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units).
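
To make the shapes above concrete, here is a minimal sketch (the layer sizes are arbitrary choices for illustration, not from the original text):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# 2D input of shape (batch_size, 16)
model.add(Dense(32, activation='relu', input_shape=(16,)))
# now: model.output_shape == (None, 32)
model.add(Dense(10, activation='softmax'))
# now: model.output_shape == (None, 10)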


Activation (activation function)

Syntax:

keras.layers.Activation(activation)

Parameters:

  • activation: name of the activation function to use (see: Activations), or alternatively, a Theano or TensorFlow operation.

Input shape

Arbitrary. When using this layer as the first layer in a model, use the keyword argument input_shape (a tuple of integers, not including the samples axis).

Output shape

Same as the input.
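
As a minimal sketch (layer sizes arbitrary), the two snippets below build equivalent models, one using a standalone Activation layer and one using the activation argument of Dense:

from keras.models import Sequential
from keras.layers import Dense, Activation

# activation as a standalone layer
model = Sequential()
model.add(Dense(64, input_shape=(16,)))
model.add(Activation('relu'))

# equivalent: activation passed directly to Dense
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(16,)))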


Dropout (regularization layer)

keras.layers.Dropout(rate, noise_shape=None, seed=None)

Applies Dropout to the input.

Dropout consists of randomly setting a fraction rate of the input units to 0 at each update during training, which helps prevent overfitting.

Parameters:

  • rate: float between 0 and 1. Fraction of the input units to drop.
  • noise_shape: 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features) and you want the dropout mask to be the same at every timestep, you can use noise_shape=(batch_size, 1, features).
  • seed: a Python integer to use as random seed.
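
A minimal usage sketch (sizes arbitrary); dropout is only active during training and is automatically disabled at inference time:

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(20,)))
# randomly zero out 50% of the incoming units at each training update
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))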



Flatten (flattens the input)

keras.layers.Flatten(data_format=None)

Flattens the input. Does not affect the batch size.

Parameters:

  • data_format: a string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. The purpose of this argument is to preserve weight ordering when switching a model from one data format to another. channels_last corresponds to inputs with shape (batch, ..., channels) while channels_first corresponds to inputs with shape (batch, channels, ...). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be channels_last.

Example

from keras.models import Sequential
from keras.layers import Conv2D, Flatten

model = Sequential()
# `channels_first` makes the shape comments below hold for input_shape=(3, 32, 32)
model.add(Conv2D(64, (3, 3),
                 input_shape=(3, 32, 32), padding='same',
                 data_format='channels_first'))
# now: model.output_shape == (None, 64, 32, 32)
model.add(Flatten())
# now: model.output_shape == (None, 65536)

Input (instantiates a Keras tensor)

keras.engine.input_layer.Input()

Input() is used to instantiate a Keras tensor.

A Keras tensor is a tensor object from the underlying backend (Theano, TensorFlow or CNTK), which we augment with certain attributes that allow us to build a Keras model just by knowing the inputs and outputs of the model.

For instance, if a, b and c are Keras tensors, it becomes possible to do: model = Model(input=[a, b], output=c)

The added Keras attributes are:

  • _keras_shape: integer shape tuple propagated via Keras-side shape inference.
  • _keras_history: the last layer applied to the tensor. The entire layer graph is retrievable from that layer, recursively.

Parameters:

  • shape: a shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors.
  • batch_shape: a shape tuple (integers), including the batch size. For instance, batch_shape=(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors. batch_shape=(None, 32) indicates batches of 32-dimensional vectors with an arbitrary batch size.
  • name: an optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if not provided.
  • dtype: the data type expected by the input, as a string (float32, float64, int32...).
  • sparse: a Boolean specifying whether the placeholder to be created is sparse.
  • tensor: optional existing tensor to wrap into the Input layer. If set, the layer will not create a placeholder tensor.

Returns

A tensor.

Example

from keras.layers import Input, Dense
from keras.models import Model

# this is a logistic regression in Keras
x = Input(shape=(32,))
y = Dense(16, activation='softmax')(x)
model = Model(x, y)

Reshape (reshapes the input)

keras.layers.Reshape(target_shape)

Reshapes an output to a certain shape.

Parameters:

  • target_shape: target shape. Tuple of integers. Does not include the batch axis.

Input shape

Arbitrary, although all dimensions in the input shape must be fixed. When using this layer as the first layer in a model, use the keyword argument input_shape (a tuple of integers, not including the samples axis).

Output shape

(batch_size,) + target_shape

Example

from keras.models import Sequential
from keras.layers import Reshape

# as the first layer in a Sequential model
model = Sequential()
model.add(Reshape((3, 4), input_shape=(12,)))
# now: model.output_shape == (None, 3, 4)
# note: `None` is the batch dimension

# as an intermediate layer in a Sequential model
model.add(Reshape((6, 2)))
# now: model.output_shape == (None, 6, 2)

# also supports shape inference using `-1` as a dimension
model.add(Reshape((-1, 2, 2)))
# now: model.output_shape == (None, 3, 2, 2)

Permute (permutes the dimensions of the input)

keras.layers.Permute(dims)

Permutes the dimensions of the input according to a given pattern.

Useful for e.g. connecting RNNs and convnets together.

Example

from keras.models import Sequential
from keras.layers import Permute

model = Sequential()
model.add(Permute((2, 1), input_shape=(10, 64)))
# now: model.output_shape == (None, 64, 10)
# note: `None` is the batch dimension

Parameters:

  • dims: tuple of integers. Permutation pattern, does not include the samples dimension. Indexing starts at 1. For instance, (2, 1) permutes the first and second dimensions of the input.

Input shape

Arbitrary. When using this layer as the first layer in a model, use the keyword argument input_shape (a tuple of integers, not including the samples axis).

Output shape

Same as the input shape, but with the dimensions re-ordered according to the specified pattern.


RepeatVector (repeats the input n times)

keras.layers.RepeatVector(n)

Example

from keras.models import Sequential
from keras.layers import Dense, RepeatVector

model = Sequential()
model.add(Dense(32, input_dim=32))
# now: model.output_shape == (None, 32)
# note: `None` is the batch dimension

model.add(RepeatVector(3))
# now: model.output_shape == (None, 3, 32)

Parameters:

  • n: integer, repetition factor.

Input shape

2D tensor with shape (num_samples, features).

Output shape

3D tensor with shape (num_samples, n, features).


Lambda (wraps an arbitrary expression as a Layer object)

keras.layers.Lambda(function, output_shape=None, mask=None, arguments=None)

Example

from keras import backend as K
from keras.layers import Lambda

# add a x -> x^2 layer
model.add(Lambda(lambda x: x ** 2))

# add a layer that returns the concatenation
# of the positive part of the input
# and the opposite of the negative part

def antirectifier(x):
    x -= K.mean(x, axis=1, keepdims=True)
    x = K.l2_normalize(x, axis=1)
    pos = K.relu(x)
    neg = K.relu(-x)
    return K.concatenate([pos, neg], axis=1)

def antirectifier_output_shape(input_shape):
    shape = list(input_shape)
    assert len(shape) == 2  # only valid for 2D tensors
    shape[-1] *= 2
    return tuple(shape)

model.add(Lambda(antirectifier,
                 output_shape=antirectifier_output_shape))

Parameters:

  • function: the function to be evaluated. Takes the input tensor as its first argument.
  • output_shape: expected output shape of the function. Only relevant when using Theano. Can be a tuple or a function. If a tuple, it only specifies the dimensions from the first one onward; the sample dimension is assumed to be either the same as the input: output_shape = (input_shape[0],) + output_shape, or, if the input is None, the sample dimension is also None: output_shape = (None,) + output_shape. If a function, it specifies the entire output shape as a function of the input shape: output_shape = f(input_shape).
  • arguments: optional dictionary of keyword arguments to be passed to the function (see the sketch after this list).
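
A small sketch of the arguments parameter (the scale function and its factor argument are hypothetical names used only for illustration):

from keras.models import Sequential
from keras.layers import Lambda

def scale(x, factor=1.0):
    # `factor` arrives via the `arguments` dict below
    return x * factor

model = Sequential()
model.add(Lambda(scale, arguments={'factor': 0.5}, input_shape=(16,)))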

Input shape

Arbitrary. When using this layer as the first layer in a model, use the keyword argument input_shape (a tuple of integers, not including the samples axis).

Output shape

Specified by the output_shape argument (or auto-inferred when using TensorFlow).


ActivityRegularization (applies an update to the cost function based on input activity)

keras.layers.ActivityRegularization(l1=0.0, l2=0.0)

Layer that applies an update to the cost function based on its input activity.

Parameters:

  • l1: L1 regularization factor (positive float).
  • l2: L2 regularization factor (positive float).

Input shape

Arbitrary. When using this layer as the first layer in a model, use the keyword argument input_shape (a tuple of integers, not including the samples axis).

Output shape

Same as the input.
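
A minimal placement sketch (sizes and factors arbitrary): the layer passes its input through unchanged, but adds an activity penalty to the model's loss:

from keras.models import Sequential
from keras.layers import Dense, ActivityRegularization

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(16,)))
# adds an L1 penalty on the previous layer's activations to the loss
model.add(ActivityRegularization(l1=0.001))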


Masking (masks a sequence by using a mask value to skip timesteps)

keras.layers.Masking(mask_value=0.0)

Masks a sequence by using a mask value to skip timesteps.

For each timestep in the input tensor (dimension #1 in the tensor), if all values in the input tensor at that timestep are equal to mask_value, then the timestep will be masked (skipped) in all downstream layers (as long as they support masking).

If any downstream layer does not support masking yet receives such an input mask, an exception will be raised.

Example

Consider a Numpy data array x, of shape (samples, timesteps, features), to be fed to an LSTM layer. You want to mask timestep #3 and timestep #5 because you lack data for these timesteps. You can:

  • Set x[:, 3, :] = 0. and x[:, 5, :] = 0.
  • Insert a Masking layer with mask_value=0. before the LSTM layer:
model = Sequential()
model.add(Masking(mask_value=0., input_shape=(timesteps, features)))
model.add(LSTM(32))

SpatialDropout1D (Spatial 1D version of Dropout)

keras.layers.SpatialDropout1D(rate)

This version performs the same function as Dropout, however it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated (as is normally the case in early convolution layers), then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout1D will help promote independence between feature maps and should be used instead of Dropout.

Parameters:

  • rate: float between 0 and 1. Fraction of the input units to drop.

Input shape

3D tensor with shape: (samples, timesteps, channels)

Output shape

Same as the input.
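
A minimal sketch of the typical placement, directly after a 1D convolution (sizes arbitrary):

from keras.models import Sequential
from keras.layers import Conv1D, SpatialDropout1D

model = Sequential()
# input: sequences of 100 timesteps with 16 features each
model.add(Conv1D(64, 3, activation='relu', input_shape=(100, 16)))
# drops entire feature maps (channels), not individual elements
model.add(SpatialDropout1D(0.3))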



SpatialDropout2D (Spatial 2D version of Dropout)

keras.layers.SpatialDropout2D(rate, data_format=None)

This version performs the same function as Dropout, however it drops entire 2D feature maps instead of individual elements. If adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers), then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout2D will help promote independence between feature maps and should be used instead of Dropout.

Parameters:

  • rate: float between 0 and 1. Fraction of the input units to drop.
  • data_format: channels_first or channels_last. In channels_first mode, the channels dimension (the depth) is at index 1; in channels_last mode it is at index 3. It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be channels_last.

Input shape

4D tensor with shape (samples, channels, rows, cols) if data_format='channels_first', or 4D tensor with shape (samples, rows, cols, channels) if data_format='channels_last'.

Output shape

Same as the input.
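
Likewise for the 2D case, a minimal sketch directly after a 2D convolution (sizes arbitrary, default channels_last format assumed):

from keras.models import Sequential
from keras.layers import Conv2D, SpatialDropout2D

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))
# drops entire 2D feature maps, one whole channel at a time
model.add(SpatialDropout2D(0.3))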



SpatialDropout3D (Spatial 3D version of Dropout)

keras.layers.SpatialDropout3D(rate, data_format=None)

This version performs the same function as Dropout, however it drops entire 3D feature maps instead of individual elements. If adjacent voxels within feature maps are strongly correlated (as is normally the case in early convolution layers), then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout3D will help promote independence between feature maps and should be used instead of Dropout.

Parameters:

  • rate: float between 0 and 1. Fraction of the input units to drop.
  • data_format: channels_first or channels_last. In channels_first mode, the channels dimension (the depth) is at index 1; in channels_last mode it is at index 4. It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be channels_last.

Input shape

5D tensor with shape (samples, channels, dim1, dim2, dim3) if data_format='channels_first', or 5D tensor with shape (samples, dim1, dim2, dim3, channels) if data_format='channels_last'.

Output shape

Same as the input.


Reference

[Official documentation] https://keras.io/zh/layers/core/
