Tensor operations

Tensor concatenation (numpy.concatenate)

np.concatenate((a1, a2, a3, ...), axis=0)
Tensor concatenation uses the np.concatenate method, where a1, a2, a3, ... are the sub-tensors to concatenate and axis is the dimension along which to join them; axis=0 means concatenating along the first dimension.
For example, two two-dimensional tensors are concatenated into one two-dimensional tensor along the first dimension:

import numpy as np
a=np.array([[1,2,3]])
b=np.array([[4,5,6]])
c=np.concatenate((a,b),axis=0)
print(c)
d=np.concatenate((c,a),axis=0)
print(d)
e=np.concatenate((c,c),axis=1)
print(e)

result

[[1 2 3]
 [4 5 6]]
[[1 2 3]
 [4 5 6]
 [1 2 3]]
[[1 2 3 1 2 3]
 [4 5 6 4 5 6]]

Tensor concatenation (torch.cat)

Concatenation here works the same way as the numpy concatenation introduced above.

C = torch.cat( (A,B),0 )  # concatenate along dim 0 (vertically, stacking rows)
C = torch.cat( (A,B),1 )  # concatenate along dim 1 (horizontally, side by side)

Example:

import torch
A=torch.ones(2,3)     # 2x3 tensor (matrix)
B=2*torch.ones(4,3)   # 4x3 tensor (matrix)
C=torch.cat((A,B),0)  # concatenate along dim 0 (rows)
print(C)

result:

tensor([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 2.,  2.,  2.],
        [ 2.,  2.,  2.],
        [ 2.,  2.,  2.],
        [ 2.,  2.,  2.]])

Continuing from the example above:

D=2*torch.ones(2,4)  # 2x4 tensor (matrix)
C=torch.cat((A,D),1) # concatenate along dim 1 (columns)
print(C)

result:

tensor([[ 1.,  1.,  1.,  2.,  2.,  2.,  2.],
        [ 1.,  1.,  1.,  2.,  2.,  2.,  2.]])
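
Note that torch.cat only requires the tensors to match in the dimensions that are not being concatenated. A small sketch (reusing the shapes from the examples above) of what happens when they do not match:

import torch
A = torch.ones(2, 3)
B = 2 * torch.ones(4, 3)
# works: both tensors have 3 columns, so they can be stacked along dim 0
print(torch.cat((A, B), 0).size())   # torch.Size([6, 3])
# fails: the row counts differ (2 vs 4), so dim 1 concatenation is impossible
try:
    torch.cat((A, B), 1)
except RuntimeError as err:
    print(err)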

Tensor reconstruction (torch.view)

The view function in PyTorch reconstructs the dimensions of a tensor, similar to reshape() in numpy, although the usage is slightly different.
1. tensor.view(a, b, ...)
For example:

import torch
tt1=torch.tensor([-0.3623, -0.6115,  0.7283,  0.4699,  2.3261,  0.1599])
result=tt1.view(3,2)
print(result)

result

tensor([[-0.3623, -0.6115],
        [ 0.7283,  0.4699],
        [ 2.3261,  0.1599]])

In the above example, a = 3 and b = 2, so the one-dimensional tt1 is reconstructed into a 3x2 tensor.

2. Sometimes you will also see tensor.view(-1) or tensor.view(a, -1).
example:

import torch
tt2=torch.tensor([[-0.3623, -0.6115],
         [ 0.7283,  0.4699],
         [ 2.3261,  0.1599]])
result=tt2.view(-1)
print(result)

result:

tensor([-0.3623, -0.6115,  0.7283,  0.4699,  2.3261,  0.1599])

As can be seen from the above case, view(-1) flattens the original tensor into a one-dimensional structure.

Example:

import torch
tt3=torch.tensor([[-0.3623, -0.6115],
         [ 0.7283,  0.4699],
         [ 2.3261,  0.1599]])
result=tt3.view(2,-1)
print(result)

result:

tensor([[-0.3623, -0.6115,  0.7283],
        [ 0.4699,  2.3261,  0.1599]])

As can be seen from the above case, view(a, -1) means that when a is known and b is unknown, the second dimension is filled in automatically. In this example a = 2 and tt3 has 6 elements in total, so b = 6 / 2 = 3.
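
Only one dimension may be given as -1, and the sizes that are specified must divide the total number of elements evenly, otherwise view raises an error. A brief sketch reusing tt3 from above:

import torch
tt3 = torch.tensor([[-0.3623, -0.6115],
                    [ 0.7283,  0.4699],
                    [ 2.3261,  0.1599]])
print(tt3.view(3, -1))   # 6 elements / 3 rows -> the 2 columns are inferred
try:
    tt3.view(4, -1)      # 6 is not divisible by 4, so this fails
except RuntimeError as err:
    print(err)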

Example:

import torch
inputs = torch.randn(1,3)
print(inputs)
print(inputs.view(1, 1, -1))

result:

tensor([[-0.5525,  0.6355, -0.3968]])
tensor([[[-0.5525,  0.6355, -0.3968]]])

This changes the 2D tensor into a 3D one: a = 1, b = 1, c = 3 / (1 * 1) = 3.

Tensor shape (torch.size)

import torch
inputs = torch.randn(1,3)
print(inputs.size())

result:

torch.Size([1, 3])
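
The returned torch.Size behaves like a tuple. The same information is also available through the .shape attribute, and size(i) gives the length of a single dimension; a short sketch:

import torch
inputs = torch.randn(1, 3)
print(inputs.shape)      # same as inputs.size(): torch.Size([1, 3])
print(inputs.size(0))    # length of dimension 0 -> 1
print(inputs.size(1))    # length of dimension 1 -> 3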