python tensorflow tf.layers.dense() (functional interface for the densely-connected layer)

From tensorflow/python/layers/core.py:

@tf_export('layers.dense')
def dense(
    inputs, units,
    activation=None,
    use_bias=True,
    kernel_initializer=None,
    bias_initializer=init_ops.zeros_initializer(),
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    trainable=True,
    name=None,
    reuse=None):
  """Functional interface for the densely-connected layer.

  This layer implements the operation:
  `outputs = activation(inputs * kernel + bias)`
  where `activation` is the activation function passed as the `activation`
  argument (if not `None`), `kernel` is a weights matrix created by the layer,
  and `bias` is a bias vector created by the layer
  (only if `use_bias` is `True`).

  Arguments:
    inputs: Tensor input.
    units: Integer or Long, dimensionality of the output space.
    activation: Activation function (callable). Set it to None to maintain a
      linear activation.
    use_bias: Boolean, whether the layer uses a bias.
    kernel_initializer: Initializer function for the weight matrix.
      If `None` (default), weights are initialized using the default
      initializer used by `tf.get_variable`.
    bias_initializer: Initializer function for the bias.
    kernel_regularizer: Regularizer function for the weight matrix.
    bias_regularizer: Regularizer function for the bias.
    activity_regularizer: Regularizer function for the output.
    kernel_constraint: An optional projection function to be applied to the
        kernel after being updated by an `Optimizer` (e.g. used to implement
        norm constraints or value constraints for layer weights). The function
        must take as input the unprojected variable and must return the
        projected variable (which must have the same shape). Constraints are
        not safe to use when doing asynchronous distributed training.
    bias_constraint: An optional projection function to be applied to the
        bias after being updated by an `Optimizer`.
    trainable: Boolean, if `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    name: String, the name of the layer.
    reuse: Boolean, whether to reuse the weights of a previous layer
      by the same name.

  Returns:
    Output tensor the same shape as `inputs` except the last dimension is of
    size `units`.

  Raises:
    ValueError: if eager execution is enabled.
  """
  layer = Dense(units,
                activation=activation,
                use_bias=use_bias,
                kernel_initializer=kernel_initializer,
                bias_initializer=bias_initializer,
                kernel_regularizer=kernel_regularizer,
                bias_regularizer=bias_regularizer,
                activity_regularizer=activity_regularizer,
                kernel_constraint=kernel_constraint,
                bias_constraint=bias_constraint,
                trainable=trainable,
                name=name,
                _scope=name,
                _reuse=reuse)
  return layer.apply(inputs)
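The operation the docstring describes, `outputs = activation(inputs * kernel + bias)`, can be sketched in plain NumPy. This is only an illustrative sketch of the math, not TensorFlow's implementation; the shapes, the zero bias, and the ReLU activation are arbitrary examples chosen here:

```python
import numpy as np

# Sketch of what a dense layer computes: outputs = activation(inputs @ kernel + bias)
rng = np.random.default_rng(0)
inputs = rng.standard_normal((4, 3))   # a batch of 4 samples, input dim 3
kernel = rng.standard_normal((3, 5))   # weight matrix: (input_dim, units)
bias = np.zeros(5)                     # bias vector, one entry per unit
relu = lambda z: np.maximum(z, 0.0)    # example activation function

outputs = relu(inputs @ kernel + bias)
print(outputs.shape)  # (4, 5): same shape as inputs except the last dim is `units`
```

In TensorFlow 1.x graph mode, the equivalent call would be along the lines of `y = tf.layers.dense(x, units=5, activation=tf.nn.relu)`, which creates the `kernel` and `bias` variables internally and returns the output tensor.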

Reprinted from blog.csdn.net/Dontla/article/details/104242306