PyTorch tensor data basics: getting started

Understanding PyTorch tensor data types
1. The basic data type of the deep learning framework PyTorch is the tensor (Tensor). The Python types int, float, int array and float array correspond in PyTorch to an IntTensor of size [], a FloatTensor of size [], an IntTensor of size [d1, d2, ...] and a FloatTensor of size [d1, d2, ...] respectively.
2. PyTorch cannot represent a string data type. Classification results that would normally be expressed as strings are instead encoded as a vector [d1, d2, ..., dn], a method referred to as one-hot encoding, where n is the number of classes, i.e. the total length of the vector. For example, when classifying pictures of cats and dogs there are two possible classification results, so the categories can be encoded as [0, 1] for cat and [1, 0] for dog.
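As a minimal sketch (the integer labels dog = 0 and cat = 1 are an assumption made for illustration, chosen so the encodings match the example above), such one-hot vectors can be produced with torch.nn.functional.one_hot:

import torch
import torch.nn.functional as F

labels = torch.tensor([0, 1, 1, 0])         # assumed labels: 0 = dog, 1 = cat
one_hot = F.one_hot(labels, num_classes=2)  # each label becomes a length-2 vector
print(one_hot)                              # tensor([[1, 0], [0, 1], [0, 1], [1, 0]])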
3. When the data type is already determined, PyTorch offers three common ways to print and check it (a short sketch follows this list):
(1) print(a.type()): prints the detailed data type of a;
(2) print(type(a)): prints the basic type of a, which is not as detailed as (1);
(3) print(isinstance(a, torch.FloatTensor)): checks whether a is of the data type torch.FloatTensor, returning True or False.
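A minimal sketch of the three checks on a tensor created with random values:

import torch

a = torch.randn(2, 3)                        # a FloatTensor by default
print(a.type())                              # torch.FloatTensor (detailed type)
print(type(a))                               # <class 'torch.Tensor'> (basic type)
print(isinstance(a, torch.FloatTensor))      # True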
4. The tensor data type is not the same on different platforms. On the CPU a tensor has the normal Tensor data type; on the GPU the data type must be converted:
data = data.cuda(). The type of data then becomes torch.cuda.FloatTensor; converting from torch.FloatTensor to the CUDA type allows CUDA to accelerate the algorithm.
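A small sketch of the conversion; it assumes a CUDA-capable GPU may or may not be present, hence the availability check:

import torch

data = torch.randn(3, 3)
print(data.type())                           # torch.FloatTensor (CPU tensor)
if torch.cuda.is_available():                # only convert when a GPU is present
    data = data.cuda()
    print(data.type())                       # torch.cuda.FloatTensor (GPU tensor)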
5. A scalar a in PyTorch is typically defined as torch.tensor(a), and printing it returns tensor(a).
6. For a scalar data type, the printed shape is a.shape = torch.Size([]), the length of the shape is len(a.shape) = 0, and a.size() likewise equals torch.Size([]).
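A minimal sketch of the scalar case (the value 2.2 is arbitrary):

import torch

a = torch.tensor(2.2)      # a 0-dimensional (scalar) tensor
print(a)                   # tensor(2.2000)
print(a.shape)             # torch.Size([])
print(len(a.shape))        # 0
print(a.size())            # torch.Size([])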
7. For any tensor a of shape [d1, d2, d3, d4] in PyTorch, the three data attributes dim, size and shape are distinguished as follows (see the sketch after the dim examples below):
dim refers to the length of the tensor data, i.e. its number of dimensions, and equals len(a.shape); size and shape both refer to the shape of the tensor;
in addition, a.numel() refers to the total number of elements, d1 * d2 * d3 * d4.
(1) dim = 2:
a tensor of shape [4, 784] (e.g. a = torch.rand(4, 784)),
where 4 is the number of images and 784 is the feature dimension of each picture;
note that a = torch.tensor([1, 2, 3]), by contrast, creates a dim = 1 tensor whose data are the listed values;
dim = 2 tensors are typical for general machine learning data.
(2) dim = 3:
1) a.size() / a.shape = torch.Size([1, 2, 3])
2) a.size(0) = 1
3) a.shape[2] = 3
4) a[0].shape = torch.Size([2, 3])
dim = 3 tensors suit RNN data of the form [length, num, feature];
for example, in speech recognition an RNN input of shape [10, 20, 100] means: each sentence contains 10 words, 20 sentences are processed at a time, and each word is represented by a 100-dimensional feature vector.
(3) dim = 4:
generally used for image data in a convolutional neural network (CNN), with shape [b, c, h, w]:
for a tensor of shape [2, 3, 28, 28]:
1) 2 refers to the number of images per input batch
2) 3 refers to the number of feature channels of each image
3) 28 and 28 refer to the height and width of each image in pixels
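A minimal sketch tying dim, shape, size and numel together, using the CNN-style shape [2, 3, 28, 28] from the example above:

import torch

a = torch.rand(2, 3, 28, 28)   # [batch, channels, height, width]
print(a.dim())                 # 4
print(a.shape)                 # torch.Size([2, 3, 28, 28])
print(a.size(0))               # 2, the number of images in the batch
print(a[0].shape)              # torch.Size([3, 28, 28]), a single image
print(a.numel())               # 2 * 3 * 28 * 28 = 4704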
8. Tensor data can be created in the following ways:
(1) import from numpy:
a = np.array([1.1, 2.1])
b = torch.from_numpy(a)
a = np.ones([2, 3])   # defined as a matrix
b = torch.from_numpy(a)
Note: float data imported from numpy is in fact of type double.
(2) import from a list:
a = torch.tensor([[1.1, 2.1], [1.5, 1.2]]), where the lowercase tensor takes the list as the data itself;
b = torch.FloatTensor/Tensor(d1, d2, d3), where the uppercase Tensor takes the shape of the data, i.e. its dimensions (a short sketch follows).
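A small sketch of the lowercase/uppercase difference (the concrete numbers are arbitrary):

import torch

c = torch.tensor([2, 3])      # lowercase: the list is the data -> tensor([2, 3])
d = torch.FloatTensor(2, 3)   # uppercase: the arguments are the shape -> an uninitialized 2x3 tensor
print(c)
print(d.shape)                # torch.Size([2, 3])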
9. Generating uninitialized data:
(1) torch.empty()
(2) torch.FloatTensor(d1, d2, d3)
(3) torch.IntTensor(d1, d2, d3)
10. Random initialization of tensor data: rand / rand_like (uniform on [0, 1)), randint (integer data), randn (normally distributed data):
(1) torch.rand(): generates data between 0 and 1;
(2) torch.rand_like(a): a is a tensor; generates a random tensor with the same shape as a;
(3) torch.randint(min, max, [d1, d2, d3]): generates a tensor of shape [d1, d2, d3] whose values range from min up to (but not including) max;
(4) torch.randn: generates data with the standard normal distribution N(0, 1); for a custom normal distribution N(mean, std), the torch.normal() function is generally used, which typically takes two steps, for example:
a = torch.normal(mean=torch.full([10], 0.), std=torch.arange(1, 0, -0.1))
b = a.reshape(2, 5)
11. Generating data completely filled with the same value: torch.full([d1, d2, d3], a), where a is the fill value.
12. Incrementing/decrementing sequence APIs: arange / range
torch.arange(min, max, step): half-open interval [min, max), does not include the maximum;
torch.range(min, max, step): fully closed interval including the maximum value; deprecated, not recommended.
13. linspace / logspace: linear spacing
(1) torch.linspace(min, max, steps=number): returns equally spaced data including both endpoints, with steps values in total and a spacing of (max - min) / (steps - 1);
(2) torch.logspace(min, max, steps=number): returns 10 raised to the power of each value of the corresponding linear space.
14. APIs for generating all-zero, all-one and identity tensor data:
torch.zeros(3, 4)   # all-zero tensor
torch.ones(3, 4)    # all-one tensor
torch.eye(4, 5)     # identity tensor (ones on the diagonal)
15. randperm: mainly generates random index values:
torch.randperm(10): generates a random permutation of the 10 indices 0-9, i.e. in [0, 10).

Putting all of the above together, basic practice code for the tensor data type in Python is as follows:


import torch

a = torch.randn(1, 2, 3, 4)
print(a)
print(a.dim())                   # number of dimensions of the tensor, i.e. its length
print(a.dim() == len(a.shape))
print(a.numel())                 # total number of elements, i.e. how much memory the data occupies
print(a.shape)                   # shape of the data
print(a.size())
print(a.size(3))                 # a single element of size/shape
print(a.shape[3])
x = torch.empty(2, 2, 3)
print(x)
print(torch.IntTensor(2, 3))
print(torch.FloatTensor(1, 2, 3))
print(torch.Tensor(1, 2, 10))    # uninitialized tensor data, occupies a block of memory
print(x.type())                  # print the tensor data type
# torch.set_default_tensor_type(torch.DoubleTensor)  # set the default tensor type to DoubleTensor
x = torch.empty(2, 2, 3)
print(x)
print(x.type())                  # print the tensor data type again
# random initialization
a = torch.rand(3, 3)             # tensor of shape [3, 3] with values in [0, 1)
print(a)
b = torch.rand_like(a)           # tensor with the same shape as a, also in [0, 1); useful for large data
print(b)
c = torch.randint(0, 10, [3, 5])
print(c)

a = torch.randn(2, 5)            # standard normally distributed data
print(a)
a = torch.normal(mean=torch.full([10], 0.), std=torch.arange(1, 0, -0.1))  # custom normal distribution
b = a.reshape(2, 5)
print(a)
print(b)
# torch.full
a = torch.full([2, 3], 3)
print(a)
b = torch.full([], 1)            # generates a scalar
print(b)
c = torch.full([1], 1)           # generates a dim = 1 tensor with a single element
print(c)
print(a.type(), b.type(), c.type())
a = torch.arange(0, 10)          # half-open interval, does not include the right endpoint (maximum)
print(a)
b = torch.range(0, 10)           # closed interval, includes the right endpoint 10
print(b)

# linspace / logspace
# (1) torch.linspace(min, max, steps=number): equally spaced data including both endpoints,
#     steps values in total, spacing (max - min) / (steps - 1)
# (2) torch.logspace(min, max, steps=number): 10 raised to the power of each value of the linear space
a = torch.linspace(0, 10, steps=10)
print(a)
b = torch.linspace(0, 10, steps=11)
print(b)
c = torch.logspace(0, 10, steps=11)
print(c)
d = torch.logspace(-1, 0, steps=10)
print(d)
# ones / zeros / eye
print(torch.zeros(3, 4))         # all-zero tensor
print(torch.ones(3, 4))          # all-one tensor
print(torch.eye(4, 5))           # identity tensor
# randperm: randomly generate index values
a = torch.rand(2, 3)
b = torch.rand(2, 2)
print(a, b)
idx = torch.randperm(2)
print(idx)
a = a[idx]
b = b[idx]
print(a, b)

Running the code above prints the corresponding outputs.
