TensorFlow 2 basics

I'm finally getting started with machine learning!

Of course, machine learning really wants TensorFlow + GPU (but I don't have one /(ㄒoㄒ)/~~)

Learning the basics with TensorFlow + CPU should be fine, though.

Before this, I watched Andrew Ng's videos directly. Even after understanding the principles of the algorithms, I found I couldn't implement anything at all... and I couldn't understand what the functions in tf actually do.

So this time I'm going to study it again, systematically.

I searched for a long time for machine-learning resources based on TF 2; some were too old and still used TF 1.0, others just read off PPT slides.

In the end, I felt this video was the most detailed:

https://www.bilibili.com/video/BV1ua4y1t7Ws?

He implements each practical example manually, step by step, and then simplifies it with the tf functions, which makes both the principles and the implementation easier to understand.

 

Now for the notes themselves:

In fact, tf and numpy have similar basic functions; tf 2 was also designed for better compatibility with numpy and greater convenience.

tf.zeros(shape, dtype=tf.float32) creates an all-zeros tensor

tf.zeros_like(x) creates a zero tensor directly from the shape of x <==> tf.zeros(x.shape)

tf.ones(shape, dtype=tf.float32) creates an all-ones tensor

tf.ones([]) creates a scalar (dim = 0)

tf.ones([1]) creates a vector of length 1

tf.ones([x]) creates a vector of shape [x]
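A minimal sketch of the creation functions above, assuming TF 2.x eager mode (shapes chosen only for illustration):

```python
import tensorflow as tf

# All-zeros and all-ones tensors of a given shape (dtype defaults to float32)
a = tf.zeros([2, 3])
b = tf.ones([2, 3])

# zeros_like builds a zero tensor with the same shape/dtype as its input,
# equivalent to tf.zeros(b.shape)
c = tf.zeros_like(b)

# tf.ones([]) gives a 0-dim scalar; tf.ones([4]) gives a length-4 vector
s = tf.ones([])
v = tf.ones([4])

print(a.shape, s.shape, v.shape)
```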

tf.math.log: logarithm with base e (the natural log)

tf.exp: exponential function

tf.random.uniform(shape, minval, maxval, dtype) creates a uniformly distributed random array

tf.random.normal(shape, mean, stddev) creates a normally distributed random array; mean defaults to 0, stddev (the standard deviation, not the variance) defaults to 1

tf.fill(dims, value) fills a tensor of shape dims with value
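A quick sketch of the random and fill functions, with a seed set so the run is reproducible (shapes are arbitrary):

```python
import tensorflow as tf

tf.random.set_seed(0)  # make the random draws reproducible

# Uniformly distributed values in [0, 10)
u = tf.random.uniform([3, 4], minval=0, maxval=10)

# Gaussian values; stddev is the standard deviation, not the variance
n = tf.random.normal([3, 4], mean=0.0, stddev=1.0)

# Every entry is 7
f = tf.fill([2, 2], 7)
```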

So far, none of the above differs from numpy.


tf.gather(a, idx, axis=axis) selects data from a along the specified axis, in the order given by the index list idx

For example, with a.shape = [4, 28, 28, 1] and idx = [3, 1, 2, 0], tf.gather(a, idx, axis=0) reorders the 4 entries along axis 0 according to idx; the result still has shape [4, 28, 28, 1].
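A small sketch of gather along the batch axis, assuming a toy batch of 4 "images" (the shapes are just for illustration):

```python
import tensorflow as tf

# Toy "batch of images": 4 images of shape 28x28x1
a = tf.random.normal([4, 28, 28, 1])

# Reorder along axis 0 according to an index list
idx = [3, 1, 2, 0]
g = tf.gather(a, idx, axis=0)       # g[0] is a[3], g[1] is a[1], ...

# gather can also select fewer entries than the axis holds
two = tf.gather(a, [0, 2], axis=0)  # shape (2, 28, 28, 1)
```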

tf.constant(value, shape=shape) creates a constant tensor by filling value into the given shape

x.ndim returns the number of dimensions

xxx = tf.convert_to_tensor(value, dtype) converts numpy data to a tensor

x.numpy() converts a tensor back to numpy

tf.is_tensor(x) determines whether x is a tensor

tf.random.shuffle(x) randomly shuffles x along its first dimension
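The conversion and inspection functions above can be sketched together like this (variable names are mine, not from the original):

```python
import numpy as np
import tensorflow as tf

x_np = np.arange(6, dtype=np.float32)  # plain numpy array

x_tf = tf.convert_to_tensor(x_np)      # numpy -> tensor
back = x_tf.numpy()                    # tensor -> numpy

print(tf.is_tensor(x_tf), tf.is_tensor(x_np))  # True for the tensor only

# Random permutation along the first dimension; same elements, new order
shuffled = tf.random.shuffle(x_tf)
```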

tf.reshape(x, shape) reshapes the tensor x into shape

tf.reshape(x, [-1, xxx]) where -1 means that dimension is inferred automatically from the total element count

For example, with mnist data and batch = 128, the last batch may not fill [128, 28, 28, 1].

In that case tf.reshape(x, [-1, 28, 28, 1]) still works without raising an error.
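A minimal sketch of the inferred dimension, using a pretend "last batch" of 100 samples instead of 128:

```python
import tensorflow as tf

# Pretend the last batch only has 100 flattened samples
x = tf.zeros([100 * 28 * 28])

# -1 tells reshape to infer that dimension from the total element count
imgs = tf.reshape(x, [-1, 28, 28, 1])
print(imgs.shape)  # (100, 28, 28, 1)
```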

 

tf.expand_dims(x, axis) inserts a new dimension into the tensor x at position axis (axis starts from 0)

tf.transpose(a, perm) permutes the axes of a according to the list perm (e.g. [x1, x2, x3, x4]), which is equivalent to rearranging the original image. For example, if a is [n, h, w, c] and perm = [0, 3, 2, 1], the result is [n, c, w, h]; the data is rearranged but its "essence" does not change.
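A short sketch of expand_dims and transpose together, assuming image-like shapes (the exact sizes are arbitrary):

```python
import tensorflow as tf

# Start from a single 28x28 "image" and give it batch and channel axes
x = tf.zeros([28, 28])
x4 = tf.expand_dims(x, axis=0)   # (1, 28, 28): new axis at position 0
x4 = tf.expand_dims(x4, axis=3)  # (1, 28, 28, 1)

# Permute [n, h, w, c] -> [n, c, w, h] with perm = [0, 3, 2, 1]
a = tf.zeros([2, 28, 32, 3])
t = tf.transpose(a, perm=[0, 3, 2, 1])
print(t.shape)  # (2, 3, 32, 28)
```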

tf.one_hot(x, depth) converts x into one-hot vectors of length depth
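A tiny sketch of one_hot on three class labels (labels chosen only for illustration):

```python
import tensorflow as tf

labels = tf.constant([0, 2, 1])
oh = tf.one_hot(labels, depth=3)
# Each row has a single 1 at the label's index:
# [[1, 0, 0],
#  [0, 0, 1],
#  [0, 1, 0]]
print(oh.numpy())
```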

tf.concat([x1, x2, ..., xn], axis) where each xi is a tensor and axis is the axis along which the tensors are concatenated

For example, axis = 0 stacks them as [[x1], [x2]]; axis = 1 gives [[x1, x2]]

tf.squeeze(x, axis) removes the given axis (which must have size 1)
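A quick sketch of concat along each axis, plus squeeze on a size-1 axis (shapes are arbitrary):

```python
import tensorflow as tf

x1 = tf.ones([2, 3])
x2 = tf.zeros([2, 3])

c0 = tf.concat([x1, x2], axis=0)  # (4, 3): stacked along rows
c1 = tf.concat([x1, x2], axis=1)  # (2, 6): stacked along columns

# squeeze drops a size-1 axis, e.g. a singleton batch dimension
y = tf.zeros([1, 28, 28, 1])
sq = tf.squeeze(y, axis=0)        # (28, 28, 1)
```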

tf.split(xx, list, axis) splits the tensor xx into several tensors along axis, with sizes given by list

For example, axis = 0, xx = [[x1], [x2], [x3], [x4]], list = [1, 2, 1]
output: [x1], [[x2], [x3]], [x4]
If list is a single number, xx is split into that many equal tensors.
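The split example above can be sketched like this, assuming a 4-row tensor:

```python
import tensorflow as tf

xx = tf.zeros([4, 5])

# Split into pieces of sizes [1, 2, 1] along axis 0
p1, p2, p3 = tf.split(xx, [1, 2, 1], axis=0)
print(p1.shape, p2.shape, p3.shape)  # (1, 5) (2, 5) (1, 5)

# An integer splits into that many equal parts
halves = tf.split(xx, 2, axis=0)     # two tensors of shape (2, 5)
```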

In general machine learning, dimensions all have 'meaning':

dim=3

Sentences: how many sentences there are, how many words each sentence has, and how many dimensions the vector representing each word has

[num, len, vector_dim]

dim=4

Photos: how many photos there are, the size of each photo (h, w), and how many channels

[num, height, width, channel]

dim = 5 | I don't quite understand this one...

That is, one overall task is divided into multiple tasks processed simultaneously, and the number processed each time becomes the first dimension

[batch,num,h,w,rgb]

broadcast:

This is an optimization mechanism that we do not need to call explicitly

Simply put, it automatically matches the dimensions of the two operands

For example [3,4,5,7] + [7]

<==> [3,4,5,7] + [1,1,1,7]: shapes are matched from right to left, and missing dimensions are padded with 1 on the left

<==> [3,4,5,7] + [3,4,5,7]: every dimension of size 1 is then stretched to match the larger shape

[3,4,5,6] + [1,4,5,1]

<==> [3,4,5,6] + [3,4,5,6], which is a lot more convenient
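Both broadcasting examples above can be checked directly (the values are all ones just to make the sums obvious):

```python
import tensorflow as tf

a = tf.ones([3, 4, 5, 7])
b = tf.ones([7])  # aligned from the right: [7] -> [1,1,1,7] -> [3,4,5,7]
c = a + b
print(c.shape)    # (3, 4, 5, 7); every entry is 1 + 1 = 2

d = tf.ones([3, 4, 5, 6]) + tf.ones([1, 4, 5, 1])
print(d.shape)    # (3, 4, 5, 6)
```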

 


Origin www.cnblogs.com/cherrypill/p/12740005.html