Implementing mini-batch input in PyTorch with x.unsqueeze(0)

When visualizing feature maps, I found that the dimensions of the input image tensor had changed after a certain function was run, so the tensor could no longer be converted directly to numpy data for image display. That function is **.unsqueeze(0)**. The change in the tensor's dimensions before and after running it is shown below.
Before running:
(Screenshot: the tensor's shape is torch.Size([3, 224, 224]))
The tensor has dimensions (3, 224, 224), i.e. a 3-channel image of size 224*224 has been converted into a tensor. After running the following statement:

image_info = image_info.unsqueeze(0)

The result is:
(Screenshot: the tensor's shape is now torch.Size([1, 3, 224, 224]))
The input tensor now has one more dimension than before. You can see that the tensor is wrapped in one more layer of brackets [], meaning the original tensor has become the 0th element along a new outer dimension. As more images are added for batch processing, the 1st, 2nd, ... elements along this dimension are filled in, and together they form a single tensor that is fed into the network; this is how batch processing is realized. At this point the shape is torch.Size([batch_size, in_channels, height, width]).
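
As a minimal sketch of this shape change (using a random dummy tensor in place of a real loaded image), the effect of unsqueeze(0), and of stacking several such images into a batch, looks like this:

```python
import torch

# A single 3-channel 224x224 image tensor (e.g. the result of transforms.ToTensor()).
image_info = torch.rand(3, 224, 224)
print(image_info.shape)   # torch.Size([3, 224, 224])

# unsqueeze(0) inserts a new dimension at index 0: the batch dimension.
image_info = image_info.unsqueeze(0)
print(image_info.shape)   # torch.Size([1, 3, 224, 224])

# Several such 4-D tensors can be concatenated along dim 0 to form a larger batch.
batch = torch.cat([image_info, image_info, image_info], dim=0)
print(batch.shape)        # torch.Size([3, 3, 224, 224]) -> [batch_size, in_channels, height, width]
```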
Of course, **.squeeze(0)** is the reverse operation. When you select the tensor of one of the pictures and apply it, the tensor is reduced from 4 dimensions back to 3, so that operations such as displaying the image can be performed directly. Since an image generally has at most 3 dimensions, it cannot be processed directly without this dimensionality reduction.
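
A sketch of the reverse direction, assuming the image is displayed with matplotlib (a tool not mentioned in the original post): squeeze(0) drops the batch dimension, and permute reorders the channels-first layout to channels-last before converting to numpy for display.

```python
import torch
import matplotlib.pyplot as plt

# One picture selected from a batch, still carrying the batch dimension: (1, 3, 224, 224).
batched_image = torch.rand(1, 3, 224, 224)

# squeeze(0) removes the batch dimension, reducing the tensor from 4-D to 3-D.
image = batched_image.squeeze(0)           # (3, 224, 224)

# Reorder CHW -> HWC and convert to numpy so it can be displayed as an image.
image_np = image.permute(1, 2, 0).numpy()  # (224, 224, 3)

plt.imshow(image_np)
plt.axis("off")
plt.show()
```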


Origin blog.csdn.net/qq_44442727/article/details/112972167