permute(), transpose(), and view() functions in PyTorch

1. permute() function

  effect:

  • permute() rearranges (transposes) the dimensions of a tensor.

Suppose we randomly generate a four-dimensional tensor of shape 1×2×3×4. The arguments of permute() are the indices of the original dimensions, listed in the order they should appear in the result. In the original shape (1, 2, 3, 4), the dimension of size 1 has index 0, the dimension of size 2 has index 1, the dimension of size 3 has index 2, and the dimension of size 4 has index 3. In x.permute(2, 1, 0, 3), the 2 means the dimension of size 3 (originally at index 2) moves to the first position (that is, N), the 1 means the dimension of size 2 (originally at index 1) stays in the second position (that is, C), the 0 means the dimension of size 1 (originally at index 0) moves to the third position (that is, H), and the 3 means the dimension of size 4 (originally at index 3) stays in the fourth position (that is, W). The code is as follows:

import torch
import torch.nn as nn

x = torch.randn(1, 2, 3, 4)
print(x.size())      
print(x.permute(2, 1, 0, 3).size())

========================================
torch.Size([1, 2, 3, 4])   # original tensor
torch.Size([3, 2, 1, 4])   # transposed tensor

2. transpose() function

  effect:

  • transpose() also transposes a tensor.

However, torch.transpose() can only perform a two-dimensional transposition. This does not mean that torch.transpose() works only on two-dimensional tensors; it means that each call swaps exactly two dimensions, so transposing more than two dimensions requires calling transpose() multiple times. For example, to rearrange the tensor of shape [1, 2, 3, 4] above into shape [3, 4, 1, 2], chain two transpose() calls:

x.transpose(0,2).transpose(1,3)
====================================
torch.Size([3, 4, 1, 2])   # transposed tensor
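As a runnable sketch (not from the original post), the chained transpose() calls above can be checked against a single permute() that performs the same reordering in one step:

```python
import torch

x = torch.randn(1, 2, 3, 4)

# chain two pairwise swaps: first dims (0, 2), then dims (1, 3)
a = x.transpose(0, 2).transpose(1, 3)

# a single permute produces the same reordering in one call
b = x.permute(2, 3, 0, 1)

print(a.size())           # torch.Size([3, 4, 1, 2])
print(torch.equal(a, b))  # True: same data, same layout of values
```

In practice, one permute() call is usually clearer than a chain of transpose() calls when more than two dimensions move.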

3. view() function

  effect:

  • view() is similar to reshape() and resize(): it readjusts the shape of a Tensor without changing its data.

1. After a Tensor passes through dimension-transformation functions such as tensor.transpose() or tensor.permute(), its memory is no longer contiguous. tensor.view() requires the Tensor's memory to be contiguous, so call tensor.contiguous() before tensor.view() to prevent errors.

2. Dimension-transformation functions are shallow-copy operations: only the tensor metadata is copied, and the old and new tensors still share the same underlying memory. On this reading, a view() on the transformed tensor would deform the original variable along with it, which is not legal, so an error is reported. This explanation is only partly accurate; the key point is that contiguous() returns a deep copy of the tensor with its data laid out contiguously.
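The points above can be demonstrated directly (a minimal sketch, not from the original post): a transposed tensor is non-contiguous, view() on it raises a RuntimeError, and contiguous() fixes this by copying the data:

```python
import torch

x = torch.arange(6).view(2, 3)
y = x.transpose(0, 1)        # shallow copy: shares memory with x

print(x.is_contiguous())     # True
print(y.is_contiguous())     # False

try:
    y.view(6)                # fails: memory is not contiguous
except RuntimeError as e:
    print("view() failed:", e)

z = y.contiguous().view(6)   # contiguous() deep-copies the data first
print(z)                     # tensor([0, 3, 1, 4, 2, 5])
```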

import torch
import torch.nn as nn
import numpy as np

y = np.array([[[1, 2, 3], [4, 5, 6]]]) # 1X2X3
y_tensor = torch.tensor(y)
y_tensor_trans = y_tensor.permute(2, 0, 1) # 3X1X2
print(y_tensor.size())
print(y_tensor_trans.size())

print(y_tensor)
print(y_tensor_trans)
print(y_tensor.view(1, 3, 2)) 
==================================================
torch.Size([1, 2, 3])
torch.Size([3, 1, 2])
tensor([[[1, 2, 3],
         [4, 5, 6]]])
tensor([[[1, 4]],

        [[2, 5]],

        [[3, 6]]])
tensor([[[1, 2],
         [3, 4],
         [5, 6]]])

Special usage:

Setting one parameter of view() to -1 tells PyTorch to infer the size of that dimension automatically, so that the total number of elements remains unchanged.
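For example (a short sketch, not from the original post), with a tensor of 24 elements the -1 dimension is inferred from the sizes given:

```python
import torch

x = torch.arange(24)            # 24 elements in total

print(x.view(2, -1).size())     # torch.Size([2, 12]): 24 / 2 = 12 inferred
print(x.view(-1, 6).size())     # torch.Size([4, 6]):  24 / 6 = 4 inferred
print(x.view(2, 3, -1).size())  # torch.Size([2, 3, 4])
```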

Origin blog.csdn.net/m0_62278731/article/details/131934376