8. Statistics

  • tf.norm

  • tf.reduce_min/max

  • tf.argmax/argmin

  • tf.equal

  • tf.unique

1. norm: the norm of a vector (or tensor)

import tensorflow as tf

# norm of a vector: the default is the L2 norm (square, sum, then square root)
a = tf.ones([2, 2])
print(tf.norm(a))  # tf.Tensor(2.0, shape=(), dtype=float32)

# the same thing computed by hand
a1 = tf.sqrt(tf.reduce_sum(tf.square(a)))
print(a1)  # tf.Tensor(2.0, shape=(), dtype=float32)

b = tf.ones([4, 28, 28, 3])
b1 = tf.norm(b)
print(b1)  # tf.Tensor(96.99484, shape=(), dtype=float32)

# again, computed by hand
b2 = tf.sqrt(tf.reduce_sum(tf.square(b)))
print(b2)  # tf.Tensor(96.99484, shape=(), dtype=float32)
# L1 norm
b = tf.ones([2, 2])
n1 = tf.norm(b)
print(n1)  # tf.Tensor(2.0, shape=(), dtype=float32)

n2 = tf.norm(b, ord=2, axis=1)  # collapse the columns: L2 norm of each row
print(n2)  # tf.Tensor([1.4142135 1.4142135], shape=(2,), dtype=float32)

n3 = tf.norm(b, ord=1)  # L1 norm; by default taken over all values
print(n3)  # tf.Tensor(4.0, shape=(), dtype=float32)

n3 = tf.norm(b, ord=1, axis=0)  # collapse the rows: L1 norm of each column
print(n3)  # tf.Tensor([2. 2.], shape=(2,), dtype=float32)

n4 = tf.norm(b, ord=1, axis=1)  # collapse the columns: L1 norm of each row
print(n4)  # tf.Tensor([2. 2.], shape=(2,), dtype=float32)
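The other operations listed at the top of this section (tf.reduce_min/max, tf.argmax/argmin, tf.equal, tf.unique) are not demonstrated above. A minimal sketch of how they are typically used, with made-up example tensors rather than anything from the original post, is:

# reduce_min / reduce_max: reduce over all elements by default, or along one axis
a = tf.constant([[1., 5., 3.],
                 [4., 2., 6.]])
print(tf.reduce_min(a))          # tf.Tensor(1.0, shape=(), dtype=float32)
print(tf.reduce_max(a, axis=1))  # tf.Tensor([5. 6.], shape=(2,), dtype=float32)

# argmax / argmin: index of the largest / smallest entry along an axis (axis=0 by default)
print(tf.argmax(a, axis=1))      # tf.Tensor([1 2], shape=(2,), dtype=int64)
print(tf.argmin(a, axis=0))      # tf.Tensor([0 1 0], shape=(3,), dtype=int64)

# equal: element-wise comparison; casting the booleans is a common way to count matches
pred = tf.constant([0, 1, 1, 2])
y = tf.constant([0, 1, 2, 2])
correct = tf.equal(pred, y)
print(correct)  # tf.Tensor([ True  True False  True], shape=(4,), dtype=bool)
print(tf.reduce_mean(tf.cast(correct, tf.float32)))  # tf.Tensor(0.75, shape=(), dtype=float32)

# unique: the distinct values, plus each original element's index in that list
x = tf.constant([4, 2, 2, 4, 3])
vals, idx = tf.unique(x)
print(vals)  # tf.Tensor([4 2 3], shape=(3,), dtype=int32)
print(idx)   # tf.Tensor([0 1 1 0 2], shape=(5,), dtype=int32)

The tf.equal + tf.cast + tf.reduce_mean pattern is the usual way to compute classification accuracy from argmax predictions, and tf.gather(vals, idx) reconstructs the original tensor from the output of tf.unique.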
