TensorFlow + NumPy deep learning related functions (continuously updated)

 

 

TensorFlow arithmetic operations

import tensorflow as tf
tf.add(a, b)         # addition
tf.subtract(a, b)    # subtraction
tf.multiply(x, y)    # element-wise multiplication
tf.div(x, y)         # division (legacy Python 2 semantics: floor division for integer inputs)
tf.truediv(x, y)     # floating-point (true) division
tf.mod(x, y)         # modulo (remainder)
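
For instance, a minimal sketch of these element-wise operations, assuming the TensorFlow 1.x session API used throughout this post (the tensors a and b are just illustrative):

import tensorflow as tf

a = tf.constant([10, 7])
b = tf.constant([3, 2])

with tf.Session() as sess:
    print(sess.run(tf.add(a, b)))       # [13  9]
    print(sess.run(tf.subtract(a, b)))  # [7 5]
    print(sess.run(tf.multiply(a, b)))  # [30 14]
    print(sess.run(tf.div(a, b)))       # [3 3]  integer inputs are floor-divided
    print(sess.run(tf.truediv(a, b)))   # [3.33333333 3.5]
    print(sess.run(tf.mod(a, b)))       # [1 1]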

 

tf.reduce_mean()

The tf.reduce_mean function computes the mean of a tensor along a specified axis (a particular dimension of the tensor). It is mainly used to reduce dimensionality or to compute the mean of a tensor (e.g., an image).

The signature is:

reduce_mean(input_tensor,
            axis=None,
            keep_dims=False,
            name=None,
            reduction_indices=None)
  • The first parameter input_tensor: the tensor to be reduced;
  • The second parameter axis: the axis to reduce along; if not specified, the mean of all elements is computed;
  • The third parameter keep_dims: whether to keep the reduced dimensions; if True, the output keeps the rank of the input tensor (reduced axes become size 1), if False, the reduced dimensions are dropped;
  • The fourth parameter name: the name of the operation;
  • The fifth parameter reduction_indices: used to specify the axis in older versions; now deprecated;

for example:

import tensorflow as tf
 
x = [[1,2,3],
      [1,2,3]]
 
xx = tf.cast(x,tf.float32)
 
mean_all = tf.reduce_mean(xx, keep_dims=False)
mean_0 = tf.reduce_mean(xx, axis=0, keep_dims=False)
mean_1 = tf.reduce_mean(xx, axis=1, keep_dims=False)
 
 
with tf.Session() as sess:
    m_a,m_0,m_1 = sess.run([mean_all, mean_0, mean_1])
 
print(m_a)    # output: 2.0
print(m_0)    # output: [ 1.  2.  3.]
print(m_1)    # output: [ 2.  2.]
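
To make keep_dims concrete (the example above only uses keep_dims=False), here is a small sketch, again assuming the TF 1.x API, showing that keep_dims=True keeps each reduced axis as a dimension of size 1:

import tensorflow as tf

x = [[1, 2, 3],
     [1, 2, 3]]
xx = tf.cast(x, tf.float32)

mean_0_kept = tf.reduce_mean(xx, axis=0, keep_dims=True)
mean_1_kept = tf.reduce_mean(xx, axis=1, keep_dims=True)

with tf.Session() as sess:
    m_0_kept, m_1_kept = sess.run([mean_0_kept, mean_1_kept])

print(m_0_kept)   # [[1. 2. 3.]]   shape (1, 3)
print(m_1_kept)   # [[2.]
                  #  [2.]]         shape (2, 1)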

Similar functions

  • tf.reduce_sum: computes the sum of the elements along the specified axis of the tensor;
  • tf.reduce_max: computes the maximum of the elements along the specified axis of the tensor;
  • tf.reduce_all: computes the logical AND of the elements along the specified axis of the tensor;
  • tf.reduce_any: computes the logical OR of the elements along the specified axis of the tensor;
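
A quick sketch of these functions on small constant tensors, again assuming the TF 1.x session API (the tensors v and b are just illustrative):

import tensorflow as tf

v = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
b = tf.constant([[True, False],
                 [True, True]])

with tf.Session() as sess:
    print(sess.run(tf.reduce_sum(v, axis=0)))   # [5 7 9]
    print(sess.run(tf.reduce_max(v, axis=1)))   # [3 6]
    print(sess.run(tf.reduce_all(b, axis=1)))   # [False  True]
    print(sess.run(tf.reduce_any(b, axis=0)))   # [ True  True]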

 

np.linalg.norm() (norm)

linalg = lin(ear) + alg(ebra), i.e. linear algebra; norm refers to the norm.

x_norm=np.linalg.norm(x, ord=None, axis=None, keepdims=False)

1. x: the input matrix (a 1-D vector is also accepted)

2. ord: the type of norm

The three common vector norms:

  • ord=1: the sum of the absolute values of the elements;
  • ord=2 (the default): the Euclidean norm, i.e. the square root of the sum of squared elements;
  • ord=np.inf: the maximum absolute value of the elements;

The three common matrix norms:

  • ord=1: the maximum column sum of absolute values;
  • ord=2: the square root of the largest eigenvalue of X^T X (the largest singular value);
  • ord=np.inf: the maximum row sum of absolute values;
  • with ord=None, the default for a matrix is the Frobenius norm (the square root of the sum of squared elements);

3. axis: how the input is processed (None: compute the norm of x as a whole; 0: compute a vector norm for each column; 1: compute a vector norm for each row)

4. keepdims: whether to keep the 2-D shape of the matrix

True keeps the 2-D shape of the result; False does not.
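
Since the example below only covers matrices, here is a minimal sketch of the three vector norms on a 1-D array (the vector v is just illustrative):

import numpy as np

v = np.array([3.0, -4.0])

print(np.linalg.norm(v, ord=1))       # 7.0  sum of absolute values
print(np.linalg.norm(v))              # 5.0  Euclidean 2-norm, the default
print(np.linalg.norm(v, ord=np.inf))  # 4.0  maximum absolute value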

example:


import numpy as np
x = np.array([
    [1, 2, 3],
    [2, 4, 6]])
print("Default (Frobenius norm, 2-D shape not kept)        :", np.linalg.norm(x))
print("Frobenius norm, keeping the 2-D shape               :", np.linalg.norm(x, keepdims=True))
print("Matrix 1-norm (maximum column sum)                  :", np.linalg.norm(x, ord=1, keepdims=True))
print("Matrix 2-norm (sqrt of largest eigenvalue of X^T X) :", np.linalg.norm(x, ord=2, keepdims=True))
print("Matrix inf-norm (maximum row sum)                   :", np.linalg.norm(x, ord=np.inf, keepdims=True))
print("2-norm of each row vector                           :", np.linalg.norm(x, axis=1, keepdims=True))
print("2-norm of each column vector                        :", np.linalg.norm(x, axis=0, keepdims=True))
print("1-norm of each row vector                           :", np.linalg.norm(x, ord=1, axis=1, keepdims=True))
print("1-norm of each column vector                        :", np.linalg.norm(x, ord=1, axis=0, keepdims=True))


The output is:

Default (Frobenius norm, 2-D shape not kept)        : 8.36660026534
Frobenius norm, keeping the 2-D shape               : [[8.36660027]]
Matrix 1-norm (maximum column sum)                  : [[9.]]
Matrix 2-norm (sqrt of largest eigenvalue of X^T X) : [[8.36660027]]
Matrix inf-norm (maximum row sum)                   : [[12.]]
2-norm of each row vector                           : [[3.74165739]
 [7.48331477]]
2-norm of each column vector                        : [[2.23606798 4.47213595 6.70820393]]
1-norm of each row vector                           : [[ 6.]
 [12.]]
1-norm of each column vector                        : [[3. 6. 9.]]
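
As a sanity check on the definitions above, the same results can be reproduced with basic NumPy operations (a sketch reusing the same x):

import numpy as np

x = np.array([[1, 2, 3],
              [2, 4, 6]])

print(np.sqrt((x ** 2).sum()))               # 8.3666...  Frobenius norm
print(np.abs(x).sum(axis=0).max())           # 9          1-norm: maximum column sum
print(np.abs(x).sum(axis=1).max())           # 12         inf-norm: maximum row sum
print(np.sqrt(np.linalg.eigvalsh(x.T.dot(x)).max()))  # 8.3666...  2-norm

Note that the 2-norm equals the Frobenius norm here only because the second row of x is twice the first row (x has rank 1); in general the two norms differ.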

 


