PyTorch: creating tensors

1. PyTorch and Python data types

* PyTorch supports only numeric data; it has no string type.

* To represent strings in PyTorch, convert them to a numeric encoding (a vector or matrix). Common methods: one-hot encoding, embeddings (Word2vec, GloVe)
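As a concrete illustration of the one-hot approach, here is a minimal sketch; the three-word vocabulary is a made-up example:

```python
import torch

# Map each word in a tiny (hypothetical) vocabulary to an index,
# then turn the indices into one-hot row vectors.
vocab = {"cat": 0, "dog": 1, "fish": 2}
indices = torch.tensor([vocab[w] for w in ["dog", "cat"]])
one_hot = torch.nn.functional.one_hot(indices, num_classes=len(vocab))
print(one_hot)
# tensor([[0, 1, 0],
#         [1, 0, 0]])
```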

2. Inspecting tensor data types

* a.type()  # returns the tensor's data type, e.g. 'torch.FloatTensor'

* type(a)  # only tells you that a is a Tensor, not its specific data type

* isinstance(a, torch.FloatTensor)  # checks whether a is a torch FloatTensor; returns True if so

isinstance(data, torch.cuda.DoubleTensor)  # if data still lives on the CPU, comparing against a cuda type returns False

data = data.cuda()  # after moving data from the CPU onto CUDA, the same comparison returns True
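The checks above can be run together; this sketch guards the CUDA step so it also runs on a CPU-only machine:

```python
import torch

a = torch.randn(2, 3)                         # float32 tensor on the CPU
print(a.type())                               # 'torch.FloatTensor' -- the data type
print(type(a))                                # <class 'torch.Tensor'> -- only the Python class
print(isinstance(a, torch.FloatTensor))       # True

print(isinstance(a, torch.cuda.FloatTensor))  # False -- still on the CPU
if torch.cuda.is_available():                 # guard so this also works without a GPU
    b = a.cuda()                              # isinstance(b, torch.cuda.FloatTensor) is now True
```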

3. Scalars

* Mathematical definition: a constant with a magnitude but no direction (a value only, unlike a vector); dimension = 0

* PyTorch definition: torch.tensor(1.), whose output is tensor(1.)

* Role: used when computing a loss; the error between the predicted value and the true value is a scalar

* Checking a scalar's size

Note: .shape is an attribute and takes no parentheses

Besides .shape and .size(), there is also a.dim()

The difference: dim() is the number of dimensions, a single value; size()/shape give the length of every dimension, e.g. the height and width of a matrix.
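A quick demonstration of a scalar's shape, size() and dim():

```python
import torch

loss = torch.tensor(1.)   # a dim-0 scalar, e.g. the value returned by a loss function
print(loss)               # tensor(1.)
print(loss.shape)         # torch.Size([]) -- attribute, no parentheses
print(loss.size())        # torch.Size([]) -- method, same information
print(loss.dim())         # 0 -- the number of dimensions, a single value
```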

 

4. Tensors

* Difference between a tensor and a scalar:

A tensor has direction, and defining one requires brackets []

* Converting between numpy and tensor

1) torch.from_numpy(data)  # convert a numpy array to a tensor

2) tensor_data.numpy()  # convert a tensor directly to numpy
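A small sketch of both conversions; note that torch.from_numpy shares memory with the source array:

```python
import numpy as np
import torch

arr = np.ones((2, 3))        # a numpy array (dtype float64)
t = torch.from_numpy(arr)    # numpy -> tensor; shares memory with arr
back = t.numpy()             # tensor -> numpy; also shares memory

arr[0, 0] = 7.0              # mutating the array changes the tensor too
print(t[0, 0])               # tensor(7., dtype=torch.float64)
```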

* Uses of multi-dimensional tensors

1) two-dimensional data: torch.randn(2, 3) generates a random tensor with 2 rows and 3 columns; used e.g. for batched input to a linear layer

2) three-dimensional data: torch.rand(1, 2, 3); three dimensions suit RNN input, e.g. [10, 20, 100] means 10 words per sentence, 20 sentences in the batch, each word represented by a 100-dimensional vector

3) four-dimensional data: torch.rand(2, 3, 28, 28): 2 photos, 3 channels, picture size 28 (h) * 28 (w)
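The three shapes above, side by side:

```python
import torch

x2 = torch.randn(2, 3)          # 2-D: a batch of 2 samples with 3 features each
x3 = torch.rand(10, 20, 100)    # 3-D: 10 words x 20 sentences x 100-dim word vectors
x4 = torch.rand(2, 3, 28, 28)   # 4-D: 2 images x 3 channels x 28x28 pixels

print(x2.dim(), x3.dim(), x4.dim())   # 2 3 4
print(x4.shape)                       # torch.Size([2, 3, 28, 28])
```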

* Checking a tensor's element count

a.numel()  # number of elements occupied in memory
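For example, on a 4-D image batch:

```python
import torch

a = torch.rand(2, 3, 28, 28)
print(a.numel())   # 4704 = 2 * 3 * 28 * 28
```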

* Defining tensors

* torch.tensor()  # lowercase

Accepts existing data: a list or a matrix [];

* torch.Tensor(), torch.FloatTensor()  # uppercase

Accepts dimensions (d1, d2, d3, ...) and fills them with randomly initialized data; the uppercase forms can also accept existing data, but the recommendation is to write data with the lowercase form and dimensions with the uppercase form, since mixing them makes dimensions and data easy to confuse.
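The lowercase/uppercase contrast in one sketch:

```python
import torch

a = torch.tensor([2., 3.])        # lowercase: the argument is DATA -> tensor([2., 3.])
b = torch.Tensor(2, 3)            # uppercase: the arguments are DIMENSIONS -> a 2x3 tensor
c = torch.FloatTensor([2., 3.])   # uppercase also accepts data -- the confusing case

print(a.shape)   # torch.Size([2])
print(b.shape)   # torch.Size([2, 3])
print(c.shape)   # torch.Size([2])
```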

 

* torch.empty()

Accepts dimensions. The data is uninitialized and can be very irregular (extremely large or extremely small values are possible), so overwrite it with real data as soon as possible.

* torch.FloatTensor(d1, d2, d3)

Generates a random tensor of the given shape; assign the real values afterwards.

* Recommended data types

Reinforcement learning usually uses double for its higher precision; most other cases use float.

* Setting the default tensor data type

torch.set_default_tensor_type(torch.DoubleTensor)  # Note: the data must be floating-point; if it is all integers, this setting has no effect
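A sketch of the default-type switch; newer PyTorch releases spell it torch.set_default_dtype, which this example uses in place of the call above:

```python
import torch

print(torch.tensor(1.2).type())   # 'torch.FloatTensor' -- float32 is the default

# Equivalent to the post's torch.set_default_tensor_type(torch.DoubleTensor):
torch.set_default_dtype(torch.float64)
d = torch.tensor(1.2)
i = torch.tensor(1)
print(d.type())   # 'torch.DoubleTensor'
print(i.type())   # 'torch.LongTensor' -- integer data is unaffected by the setting

torch.set_default_dtype(torch.float32)   # restore the default
```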

* torch.rand(2, 3)  # recommended

Generates 2 rows and 3 columns with values drawn uniformly at random from [0, 1). (For a standard normal N(0, 1), mean 0 and variance 1, use torch.randn instead.)

* torch.rand_like(a)

Generates a random tensor with the same shape as a.

* torch.randint(1, 10, [d1, d2, ...])

Random integers in the range [1, 10), with the given dimensions.
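The three random initializers above in one sketch:

```python
import torch

a = torch.rand(2, 3)               # uniform samples from [0, 1)
b = torch.rand_like(a)             # uniform samples, same shape as a
c = torch.randint(1, 10, [2, 3])   # integers in [1, 10)

print(b.shape)                            # torch.Size([2, 3])
print(bool(((c >= 1) & (c < 10)).all()))  # True
```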

* torch.normal(mean=torch.full([2, 3], 0), std=torch.arange(0.6, 0, -0.1))

Generates a normal tensor with 2 rows and 3 columns, mean 0 and standard deviations decreasing from 0.6 down to 0.1, so the magnitudes of the output values tend to shrink element by element.

* torch.full([2, 3], 7)

Fills a 2-row, 3-column tensor entirely with 7; torch.full([], 7) generates the scalar tensor(7); torch.full([1], 7) generates a vector of length 1.
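For example:

```python
import torch

a = torch.full([2, 3], 7.)   # 2x3 tensor, every entry 7
b = torch.full([], 7.)       # dim-0 scalar: tensor(7.)
c = torch.full([1], 7.)      # length-1 vector: tensor([7.])

print(a.shape, b.dim(), c.shape)   # torch.Size([2, 3]) 0 torch.Size([1])
```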

* torch.arange(0, 10)

Generates tensor([0, 1, 2, ..., 9]) over [0, 10); torch.arange(0, 10, 2) generates values over [0, 10) with a step of 2; torch.range() is deprecated and not recommended.
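For example:

```python
import torch

a = torch.arange(0, 10)      # end point excluded
b = torch.arange(0, 10, 2)   # step of 2
print(a)   # tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
print(b)   # tensor([0, 2, 4, 6, 8])
```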

* torch.linspace(0, 10, steps=4)

Cuts [0, 10] into 4 evenly spaced points, with both endpoints included.
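For example:

```python
import torch

a = torch.linspace(0, 10, steps=4)   # 4 evenly spaced points, endpoints included
print(a)   # tensor([ 0.0000,  3.3333,  6.6667, 10.0000])
```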

* torch.ones(3, 3)

Given a shape, generates a tensor of all 1s; torch.ones_like(a) generates an all-1s matrix with the shape of a.

* torch.zeros(3, 3)

Given a shape, generates a tensor of all 0s.

* torch.eye(3, 4)

Given a shape, generates a diagonal (identity-like) matrix; if the shape is not square, the extra entries are padded with 0.
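For example:

```python
import torch

a = torch.eye(3, 4)   # 1s on the diagonal, the extra column padded with 0
print(a)
# tensor([[1., 0., 0., 0.],
#         [0., 1., 0., 0.],
#         [0., 0., 1., 0.]])
```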

 

* torch.randperm(10)

Generates a one-dimensional tensor containing the integers of [0, 10) in scrambled order.

Typical use: two tensors of the same length hold corresponding data (e.g. samples and labels); indexing both with the same random index produced by this method shuffles them into the same random order while keeping the correspondence.
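A sketch of that shuffle trick, with hypothetical paired tensors x and y:

```python
import torch

# Hypothetical paired data: features x and labels y whose rows correspond.
x = torch.rand(4, 3)
y = torch.arange(4)

idx = torch.randperm(4)          # a scrambled version of [0, 1, 2, 3]
x_shuf, y_shuf = x[idx], y[idx]  # one random index shuffles both, keeping rows paired

print(idx)
```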


Origin www.cnblogs.com/jaysonteng/p/12585802.html