Using tensors in PyTorch for dimensionality reduction and channel-count conversion

        First import the torch package, then use the torch.narrow() function to change the number of channels in the data. A concrete example follows.

        Use torch.rand(5, 6) to randomly generate a 5x6 two-dimensional matrix, then use torch.narrow(x, dim, start, length) to change the number of channels. The first parameter of narrow() is the raw data to be sliced, which must be a tensor; the second parameter, dim, is the dimension to slice along; the third parameter, start, is the index of the channel in that dimension where the slice begins; the fourth parameter, length, is the number of channels to keep.
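As a minimal sketch of the call described above (assuming only that torch is installed), the following creates a random 5x6 tensor and narrows its first dimension:

    import torch

    # Randomly generate a 5x6 two-dimensional tensor
    x = torch.rand(5, 6)

    # torch.narrow(input, dim, start, length):
    #   input  - the tensor to slice (must be a Tensor)
    #   dim    - the dimension to slice along
    #   start  - the 0-based index where the slice begins
    #   length - how many channels (entries) to keep along that dimension
    y = torch.narrow(x, 0, 2, 3)

    print(x.shape)  # torch.Size([5, 6])
    print(y.shape)  # torch.Size([3, 6])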
   

      In the example above, torch.narrow(x, 0, 2, 3): x is a 5x6 two-dimensional tensor, so it has two dimensions, [0, 1], where 0 is the first dimension (rows) and 1 is the second dimension (columns); the first dimension has 5 channels and the second has 6. In the narrow() call, dim=0 selects the first dimension; start=2 means the slice begins at index 2 of that dimension (Python counts from 0, so this is the third row); length=3 means 3 channels are kept, i.e. rows 2, 3 and 4 are retained.
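The indexing behaviour can be checked directly; this short sketch (the seed value is arbitrary, used only for reproducibility) confirms that torch.narrow(x, 0, 2, 3) keeps rows 2 to 4 and returns a view that shares storage with x:

    import torch

    torch.manual_seed(0)          # arbitrary seed, just for reproducibility
    x = torch.rand(5, 6)
    y = torch.narrow(x, 0, 2, 3)

    # start=2, length=3 selects the rows at indices 2, 3 and 4,
    # so the result equals basic slicing with x[2:5]
    print(torch.equal(y, x[2:5]))            # True

    # narrow() returns a view rather than a copy: it shares the
    # underlying storage with the original tensor
    print(y.data_ptr() == x[2].data_ptr())   # True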

 This successfully reduces the number of channels in the first dimension from 5 to 3.

In deep-learning image processing you often encounter four-dimensional tensors, e.g. x.shape = [9, 3, 256, 256], where 3 is the number of channels of an RGB colour image. To reduce the dimensionality, you can first use narrow() to shrink the channel count of the second dimension from 3 to 1, and then call x = torch.squeeze(x, dim=1) to remove that dimension; after squeezing, the shape becomes [9, 256, 256], as sketched below.
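A minimal sketch of that two-step reduction, assuming the channel to keep is channel 0 (any of the three would work the same way):

    import torch

    # A batch of 9 RGB images: [batch, channels, height, width]
    x = torch.rand(9, 3, 256, 256)

    # Step 1: narrow the channel dimension (dim=1) from 3 channels to 1
    x = torch.narrow(x, 1, 0, 1)
    print(x.shape)   # torch.Size([9, 1, 256, 256])

    # Step 2: squeeze out the now-singleton channel dimension
    x = torch.squeeze(x, dim=1)
    print(x.shape)   # torch.Size([9, 256, 256])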


Origin blog.csdn.net/yangfaner2021/article/details/127891478