Common methods in TensorFlow

1.tf.argmax(input,axis)

tf.argmax(input, axis) returns the index of the maximum value along each row or column of input, depending on the value of axis.

  • axis = 0: 

Compares the elements within each column, records the index of the largest element in each column, and finally outputs the array of those indices.

test[0] = array([1, 2, 3])
test[1] = array([2, 3, 4])
test[2] = array([5, 4, 3])
test[3] = array([8, 7, 2])
# output: [3, 3, 1]
  • axis = 1:

Records the index of the largest element in each row, and finally returns the array of those indices.

test[0] = array([1, 2, 3])  #2
test[1] = array([2, 3, 4])  #2
test[2] = array([5, 4, 3])  #0
test[3] = array([8, 7, 2])  #0
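
A minimal runnable sketch of the two cases above, assuming the TF1-style Session API used throughout this post:

import tensorflow as tf
import numpy as np

test = np.array([[1, 2, 3],
                 [2, 3, 4],
                 [5, 4, 3],
                 [8, 7, 2]])

with tf.Session() as sess:
    print(sess.run(tf.argmax(test, axis=0)))  # [3 3 1]   index of the max in each column
    print(sess.run(tf.argmax(test, axis=1)))  # [2 2 0 0] index of the max in each row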

2.tf.placeholder(dtype, shape=None, name=None)

Parameters:
dtype: the data type; commonly a numeric type such as tf.float32 or tf.float64
shape: the data shape; defaults to None (a one-dimensional value), but can also be multi-dimensional, e.g. [2, 3], or [None, 3], meaning 3 columns and an unspecified number of rows
name: a name for the operation

(Original link: https://blog.csdn.net/kdongyi/article/details/82343712)

The design concept of TensorFlow is a computational flow graph. When writing a program, you first build the graph of the entire system; the code does not take effect immediately, which is different from other numerical calculation libraries in Python (such as NumPy). The graph is static, similar to an image in Docker. Then, at actual run time, a session is started and the program really runs. The advantage of this is that repeatedly switching the context in which the underlying program runs is avoided, and TensorFlow can optimize the code of the whole system. So the placeholder() function acts as a placeholder in the model while the neural network graph is being built: at that point the input data has not been passed into the model, and only the necessary memory is allocated. After a session is established, data is fed to the placeholders through the feed_dict argument of Session.run() when running the model.
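
A minimal sketch of this workflow (the placeholder name x and the input array are made up for illustration):

import tensorflow as tf
import numpy as np

# Build the graph: a placeholder with 3 columns and an unspecified number of rows
x = tf.placeholder(tf.float32, shape=[None, 3], name='x')
y = x * 2

with tf.Session() as sess:
    # Data is supplied only at run time, through the feed_dict argument
    data = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32)
    print(sess.run(y, feed_dict={x: data}))  # [[ 2.  4.  6.] [ 8. 10. 12.]]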

3.tf.multiply(x, y, name=None) 

Parameters:
x: a tensor of type half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, or complex128
y: a tensor of the same type as x

Returns:
x * y element-wise.
  • The multiply function performs element-wise multiplication, i.e., the corresponding elements of the two tensors are multiplied individually; this is not matrix multiplication. Note the difference from tf.matmul (see the example below).
  • The two tensors being multiplied must have the same data type, otherwise an error will be raised.
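
A small sketch of element-wise multiplication (the matrices a and b are made-up examples):

import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[5., 6.], [7., 8.]])

with tf.Session() as sess:
    # Corresponding elements are multiplied: [[1*5, 2*6], [3*7, 4*8]]
    print(sess.run(tf.multiply(a, b)))  # [[ 5. 12.] [21. 32.]]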

4.tf.matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None)

Parameters:
a: a tensor of type float16, float32, float64, int32, complex64, or complex128, with rank > 1
b: a tensor of the same type as a
transpose_a: if True, a is transposed before the multiplication
transpose_b: if True, b is transposed before the multiplication
adjoint_a: if True, a is conjugated and transposed before the multiplication
adjoint_b: if True, b is conjugated and transposed before the multiplication
a_is_sparse: if True, a is treated as a sparse matrix
b_is_sparse: if True, b is treated as a sparse matrix
name: a name for the operation (optional)

Returns:
A tensor of the same type as a and b, in which each innermost matrix is the product of the corresponding matrices in a and b
  • The input must be a matrix (or a tensor of rank >2, representing a batch of matrices) with matching matrix dimensions after transposition
  • Both matrices must be of the same type. The supported types are as follows: float16, float32, float64, int32, complex64, complex128
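
For comparison with tf.multiply above, a sketch of matrix multiplication on the same made-up matrices:

import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[5., 6.], [7., 8.]])

with tf.Session() as sess:
    # Standard matrix product: each row of a dotted with each column of b
    print(sess.run(tf.matmul(a, b)))  # [[19. 22.] [43. 50.]]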

5.tf.cast(x,dtype,name=None)

Converts the data type of x to dtype. For example, if x originally has type bool, casting it to float turns it into a sequence of 0s and 1s; the reverse conversion is also possible.

import tensorflow as tf

a = tf.Variable([0, 0, 0, 1, 1])
b = tf.cast(a, dtype=tf.bool)

sess = tf.Session()
sess.run(tf.global_variables_initializer())  # initialize_all_variables() is deprecated
print(sess.run(b))

>>> [False False False  True  True]

Origin: https://blog.csdn.net/weixin_42149550/article/details/99564853