A First Look at Using TensorFlow

These are just notes I kept in a document while doing the programming assignments for Andrew Ng's deep learning course, writing down whatever I didn't know so it's easy to look up, since searching the official docs is inefficient for all sorts of reasons. This post is mostly about usage; the underlying principles will be filled in later.

tf.placeholder(
    dtype,
    shape=None,
    name=None
)
Hint: data must be fed in through feed_dict; you can think of it as the network input.
Usage:
x=tf.placeholder(name='X',shape=X.shape,dtype=tf.float32)
The shape may be partially specified, e.g.:
shape=(None,a,b,c)
tf.Variable(<initial-value>, name=<optional-name>)
# Creates a variable whose initial value is initial-value.

tf.get_variable(
    name,
    shape=None,
    dtype=None,
    initializer=None,
    regularizer=None,
    trainable=None,
    collections=None,
    caching_device=None,
    partitioner=None,
    validate_shape=True,
    use_resource=None,
    custom_getter=None,
    constraint=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.VariableAggregation.NONE
)

# Gets an existing variable, or creates a new one if it does not exist.
Recommended usage: tf.get_variable(..., initializer=tf.contrib.layers.xavier_initializer(seed=0))
init=tf.global_variables_initializer()
session.run(init)
Variables are not initialized in any specified order, so if a variable's initial value depends on another variable's value, errors are likely.
To initialize all trainable variables in one go before training starts, call tf.global_variables_initializer(). It returns an operation that initializes all variables in the tf.GraphKeys.GLOBAL_VARIABLES collection; running the operation initializes them all.
tf.nn.conv2d(
    input,
    filters,
    strides,
    padding,
    data_format='NHWC',
    dilations=None,
    name=None
)

Args:

  • input: A Tensor. Must be one of the following types: half, bfloat16, float32, float64. A 4-D tensor. The dimension order is interpreted according to the value of data_format, see below for details.
  • filters: A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]
  • strides: An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details.
  • padding: Either the string “SAME” or “VALID” indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is “NHWC”, this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding used and data_format is “NCHW”, this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]].
  • data_format: An optional string from: “NHWC”, “NCHW”. Defaults to “NHWC”. Specify the data format of the input and output data. With the default format “NHWC”, the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be “NCHW”, the data storage order of: [batch, channels, height, width].
  • dilations: An int or list of ints that has length 1, 2 or 4, defaults to 1. The dilation factor for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. If this is a 4-d tensor, the dilations in the batch and depth dimensions must be 1.
  • name: A name for the operation (optional).
tf.nn.max_pool(
    input,
    ksize,
    strides,
    padding,
    data_format=None,
    name=None
)
tf.nn.relu(Z)
tf.contrib.layers.flatten(
    inputs,
    outputs_collections=None,
    scope=None
)
#Flattens the input while maintaining the batch_size.
#return [batch_size,k]

While doing Andrew Ng's assignments I ran into a bug in Course 4, Week 1.

The answer:

The flatten implementation changed between TF 1.3 and TF 1.4+, so the discrepancy was not my mistake; no big deal.
TF 1.3 and earlier:

Z3 = [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064]
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]

TF 1.4+:

Z3 = [[ 1.44169843 -0.24909666 5.45049906 -0.26189619 -0.20669907 1.36546707]
[ 1.40708458 -0.02573211 5.08928013 -0.48669922 -0.40940708 1.26248586]]

tf.contrib.layers.fully_connected(
    inputs,
    num_outputs,
    activation_fn=tf.nn.relu,
    normalizer_fn=None,
    normalizer_params=None,
    weights_initializer=initializers.xavier_initializer(),
    weights_regularizer=None,
    biases_initializer=tf.zeros_initializer(),
    biases_regularizer=None,
    reuse=None,
    variables_collections=None,
    outputs_collections=None,
    trainable=True,
    scope=None
)
#Adds a fully connected layer.
tf.math.reduce_mean(
    input_tensor,
    axis=None,
    keepdims=False,
    name=None
)
#Computes the mean of elements across dimensions of a tensor
#Args:
#input_tensor: The tensor to reduce. Should have numeric type.
#axis: The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
#keepdims: If true, retains reduced dimensions with length 1.
#name: A name for the operation (optional).
tf.nn.softmax_cross_entropy_with_logits(
    labels,
    logits,
    axis=-1,
    name=None
)
#Computes softmax cross entropy between logits and labels.
#Args:
#labels: Each vector along the class dimension should hold a valid probability distribution e.g. for the case in which labels are of shape [batch_size, num_classes], each row of labels[i] must be a valid probability distribution.
#logits: Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
#axis: The class dimension. Defaulted to -1 which is the last dimension.
#name: A name for the operation (optional).
eval(
    feed_dict=None,
    session=None
)
#Evaluates this tensor in a Session.

#Calling this method will execute all preceding operations that produce the inputs needed for the operation that produces this tensor.

#N.B. Before invoking Tensor.eval(), its graph must have been launched in a session, and either a default session must be available, or session must be specified explicitly.

#Args:
#feed_dict: A dictionary that maps Tensor objects to feed values. See tf.Session.run for a description of the valid feed values.
#session: (Optional.) The Session to be used to evaluate this tensor. If none, the default session will be used.
#Returns:
#A numpy array corresponding to the value of this tensor.
tf.dtypes.cast(
    x,
    dtype,
    name=None
)
x = tf.constant([1.8, 2.2], dtype=tf.float32)
tf.dtypes.cast(x, tf.int32)  # [1, 2], dtype=tf.int32
tf.math.equal(
    x,
    y,
    name=None
)
x = tf.constant([2, 4])
y = tf.constant(2)
tf.math.equal(x, y) ==> array([True, False])

x = tf.constant([2, 4])
y = tf.constant([2, 4])
tf.math.equal(x, y) ==> array([True,  True])

Reposted from blog.csdn.net/IDrandom/article/details/101625299