Fancy PyTorch Tensor Operations

I. Tensor dimension operations

1. squeeze & unsqueeze

x = torch.rand(5, 1, 2, 1)
x = torch.squeeze(x)       # remove all dimensions of size 1, x.shape = (5, 2)
x = torch.unsqueeze(x, 2)  # the inverse of squeeze: insert a size-1 dimension at position 2, x.shape = (5, 2, 1)
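A minimal runnable sketch of the two calls (shapes chosen for illustration):

```python
import torch

# start with two singleton dimensions
x = torch.rand(5, 1, 2, 1)

# squeeze with no argument removes every dimension of size 1
y = torch.squeeze(x)
print(y.shape)  # torch.Size([5, 2])

# squeeze(dim) removes only that dimension, and only if it has size 1
z = torch.squeeze(x, 1)
print(z.shape)  # torch.Size([5, 2, 1])

# unsqueeze(dim) inserts a new size-1 dimension at the given position
w = torch.unsqueeze(y, 2)
print(w.shape)  # torch.Size([5, 2, 1])
```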

2. expand (tensor broadcasting): expands size-1 dimensions of the original tensor to the specified size. For example, if a dimension of x has size 1 and the target size is [3, 4], the single row of 4 elements is replicated into three rows of 4.

x = x.expand(*size)
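A short sketch of the replication described above (values chosen for illustration):

```python
import torch

# a single row; only the size-1 dimension can be expanded
x = torch.tensor([[1, 2, 3, 4]])   # shape (1, 4)
y = x.expand(3, 4)                 # replicate the row 3 times, shape (3, 4)
print(y)
# tensor([[1, 2, 3, 4],
#         [1, 2, 3, 4],
#         [1, 2, 3, 4]])
```

Note that expand does not copy data: all three rows are views of the same underlying storage.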

3. transpose & permute: torch.transpose swaps exactly two dimensions, while permute can reorder any number of dimensions at once.

x = torch.transpose(x, 1, 2)  # swap dimensions 1 and 2
x = x.permute(1, 2, 3, 0)     # arbitrary reordering of all dimensions
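The shape effect of the two calls, as a runnable sketch:

```python
import torch

x = torch.randn(2, 3, 4, 5)

# transpose swaps exactly two dimensions
y = torch.transpose(x, 1, 2)
print(y.shape)  # torch.Size([2, 4, 3, 5])

# permute reorders all dimensions at once
z = x.permute(1, 2, 3, 0)
print(z.shape)  # torch.Size([3, 4, 5, 2])
```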

4. view & reshape: both change the shape and usually behave the same. The difference: if the tensor is not stored contiguously in memory (e.g., after a transpose), view raises an error, while reshape copies the data when necessary.

x = x.view(1, 2, -1)     # flattens the underlying data in row-major order (hence the contiguity requirement), then regroups it into the requested shape
x = x.reshape(1, 2, -1)
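The contiguity difference can be seen directly: a transpose returns a non-contiguous view, on which view fails but reshape succeeds. A minimal sketch:

```python
import torch

x = torch.randn(3, 4)
t = x.t()                    # transposed view, not contiguous in memory
print(t.is_contiguous())     # False

try:
    t.view(12)               # view requires contiguous storage
except RuntimeError as e:
    print("view failed:", e)

y = t.reshape(12)            # reshape copies when necessary
print(y.shape)               # torch.Size([12])

# calling .contiguous() first also makes view legal
z = t.contiguous().view(12)
print(z.shape)               # torch.Size([12])
```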

5. Tensor concatenation: cat & stack

torch.cat(a_tuple, dim)    # a_tuple is a tuple (or list) of tensors; concatenates them along the given existing dimension
torch.stack(a_tuple, dim)  # unlike cat, stack creates a new dimension and arranges the tensors along it in order
# e.g., given two 4x4 tensors, cat can only produce an 8x4 or a 4x8 tensor, while stack can produce a 2x4x4 tensor.
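The 4x4 example above, run directly:

```python
import torch

a = torch.ones(4, 4)
b = torch.zeros(4, 4)

print(torch.cat((a, b), dim=0).shape)    # torch.Size([8, 4])
print(torch.cat((a, b), dim=1).shape)    # torch.Size([4, 8])
print(torch.stack((a, b), dim=0).shape)  # torch.Size([2, 4, 4])
```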

6. Tensor splitting: chunk & split

torch.chunk(a, chunk_num, dim)   # splits a into chunk_num chunks of equal size along dim, returning a tuple; if a does not divide evenly, the last chunk is whatever remains
torch.split(a, chunk_size, dim)  # similar to chunk, but the second argument is the chunk size (it may also be a list of sizes)
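A runnable sketch showing the uneven-division behavior of both calls:

```python
import torch

a = torch.arange(10)

# chunk: ask for 3 chunks; sizes come out as 4, 4, 2 (the last chunk is smaller)
print([c.shape[0] for c in torch.chunk(a, 3, dim=0)])  # [4, 4, 2]

# split: ask for chunks of size 3; sizes come out as 3, 3, 3, 1
print([s.shape[0] for s in torch.split(a, 3, dim=0)])  # [3, 3, 3, 1]

# split also accepts an explicit list of sizes
print([s.shape[0] for s in torch.split(a, [2, 3, 5], dim=0)])  # [2, 3, 5]
```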

II. Tensor multiplication operations

1. * & torch.mul: both are used the same way, performing element-wise multiplication with broadcasting.

# scalar * tensor: every element of the tensor is multiplied by the scalar k
# (equivalent to broadcasting k to the tensor's shape)
a = torch.ones(3, 4)
a * 2
'''
tensor([[2., 2., 2., 2.],
        [2., 2., 2., 2.],
        [2., 2., 2., 2.]])
'''
# matrix * row vector: each row of the matrix is multiplied element-wise by the vector
# (the row vector is broadcast across the rows; its length must equal the number of columns).
# A column vector works analogously.
b = torch.Tensor([1, 2, 3, 4])
a * b
'''
tensor([[1., 2., 3., 4.],
        [1., 2., 3., 4.],
        [1., 2., 3., 4.]])
'''
# vector * vector: element-wise product

2. torch.matmul & torch.mm: used the same way for 2-D matrices; matmul additionally supports broadcasting and batched inputs, while mm accepts only 2-D tensors.

torch.matmul(input, other, out=None) → Tensor
# Matrix product of two tensors. The behavior depends on their dimensionality:
# 1. If both tensors are 1-D, the dot product (a scalar) is returned.
# vector x vector
tensor1 = torch.randn(3)
tensor2 = torch.randn(3)
torch.matmul(tensor1, tensor2).size()  # torch.Size([])
# 2. If both arguments are 2-D, the matrix product is returned.
# matrix x matrix
tensor1 = torch.randn(3, 4)
tensor2 = torch.randn(4, 5)
torch.matmul(tensor1, tensor2).size()  # torch.Size([3, 5])
# 3. If the first argument is 1-D and the second is 2-D, a 1 is prepended to its shape
# for the matrix multiplication and removed afterwards.
# i.e. the (3,) vector is treated as a 1x3 matrix, 1x3 @ 3x4 gives 1x4, then the leading 1 is dropped
tensor1 = torch.randn(3, 4)
tensor2 = torch.randn(3)
torch.matmul(tensor2, tensor1).size()  # torch.Size([4])
# 4. If the first argument is 2-D and the second is 1-D, the matrix-vector product is returned.
# matrix x vector
tensor1 = torch.randn(3, 4)
tensor2 = torch.randn(4)
torch.matmul(tensor1, tensor2).size()  # torch.Size([3])
# 5. If both arguments are at least 1-D and at least one is N-D with N > 2,
# a batched matrix multiplication is returned.
# If the first argument is 1-D, a 1 is prepended to its shape for the batched multiply and removed afterwards.
# If the second argument is 1-D, a 1 is appended to its shape for the batched multiply and removed afterwards.
# The non-matrix (i.e. batch) dimensions are broadcast (and therefore must be broadcastable).
# For example, if input is a (j x 1 x n x m) tensor and other is a (k x m x p) tensor,
# out will be a (j x k x n x p) tensor; the last two dimensions must still satisfy
# the usual matrix-multiplication shape rule.
# batched matrix x broadcasted vector
tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(4)
torch.matmul(tensor1, tensor2).size()  # torch.Size([10, 3])
# batched matrix x batched matrix
tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(10, 4, 5)
torch.matmul(tensor1, tensor2).size()  # torch.Size([10, 3, 5])
# batched matrix x broadcasted matrix
tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(4, 5)
torch.matmul(tensor1, tensor2).size()  # torch.Size([10, 3, 5])
tensor1 = torch.randn(10, 1, 3, 4)
tensor2 = torch.randn(2, 4, 5)
torch.matmul(tensor1, tensor2).size()  # torch.Size([10, 2, 3, 5])

3. General multiplication: torch.tensordot

# Can express arbitrary multi-dimensional multiplication: any combination of contracted dimensions.
# Given a = torch.tensor([1, 2, 3, 4]) and b = torch.tensor([2, 3, 4, 5]):
# to compute the inner product, simply set dims=1
# with dims=0, no dimensions are contracted and the outer product is returned
# dims can also be a pair of dimension lists (dims_a, dims_b), specifying which dimensions of the two tensors to contract
c = torch.tensordot(a, b, dims)
# a: (B, N, F), b: (P, F)
c = torch.tensordot(a, b, dims=([-1], [-1]))  # c: (B, N, P)
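A runnable sketch of the three `dims` forms (shapes are illustrative; the last example uses positive dimension indices to pick the contracted axes explicitly):

```python
import torch

a = torch.tensor([1., 2., 3., 4.])
b = torch.tensor([2., 3., 4., 5.])

# dims=1 contracts the last dim of a with the first dim of b: the inner product
print(torch.tensordot(a, b, dims=1))        # tensor(40.)

# dims=0 contracts nothing: the outer product, shape (4, 4)
print(torch.tensordot(a, b, dims=0).shape)  # torch.Size([4, 4])

# dims as a pair of lists picks the contracted dimensions explicitly
x = torch.randn(2, 3, 5)   # (B, N, F)
y = torch.randn(4, 5)      # (P, F)
c = torch.tensordot(x, y, dims=([2], [1]))
print(c.shape)             # torch.Size([2, 3, 4])
```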

4. einsum

# Einstein summation notation computes multilinear expressions (products and sums) and can express
# many tensor computations in a uniform way: inner product, outer product, transpose,
# matrix product, trace, and other custom operations.
# a: (i, k), b: (j, k)
c = torch.einsum('ik,jk->ij', a, b)  # c[i][j] = sum over k of a[i][k] * b[j][k]
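The formula above, checked against the equivalent matrix expression, plus two more operations in the same notation:

```python
import torch

a = torch.randn(2, 5)  # (i, k)
b = torch.randn(3, 5)  # (j, k)

# 'ik,jk->ij': multiply along k and sum it out; equivalent to a @ b.T
c = torch.einsum('ik,jk->ij', a, b)
print(c.shape)                       # torch.Size([2, 3])
print(torch.allclose(c, a @ b.t()))  # True

# other operations in the same notation:
print(torch.einsum('ii->', torch.randn(4, 4)).shape)  # torch.Size([]) -- the trace, a scalar
print(torch.einsum('ij->ji', a).shape)                # torch.Size([5, 2]) -- transpose
```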

Further reference: https://blog.csdn.net/a2806005024/article/details/96462827

 


Source: www.cnblogs.com/yutingmoran/p/11882816.html