[PyTorch] Visualizing feature maps

In computer vision projects, especially object classification and keypoint detection experiments, we often need to visualize the intermediate feature maps to judge whether the model is extracting the features we want and to help us adjust the model parameters.

Visualization Code:

from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torchvision.transforms as transforms

def visualize_feature(x, model, layers=[0, 1]):
    # build a sub-network from the chosen range of the model's children
    # and forward the input through it to get the intermediate feature map
    net = nn.Sequential(*list(model.children())[layers[0]:layers[1]])
    with torch.no_grad():
        img = net(x)
    transform1 = transforms.ToPILImage(mode='L')
    img = img.cpu().clone()
    for i in range(img.size(0)):
        # show the first channel of each sample as a grayscale image
        # (values are cast to uint8 directly; scale to [0, 255] beforehand if needed)
        image = img[i, 0:1]  # 1 x H x W
        image = transform1(np.uint8(image.numpy().transpose(1, 2, 0)))
        image.show()
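For reference, a minimal usage sketch, assuming a torchvision ResNet and an arbitrary test image (the model choice, the image file name, and the layer range here are placeholders, not part of the original post):

import torchvision.models as models

model = models.resnet18(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# 'test.jpg' is a placeholder image path
x = preprocess(Image.open('test.jpg').convert('RGB')).unsqueeze(0)  # 1 x 3 x 224 x 224

# show the feature maps produced by the first two children of the model
visualize_feature(x, model, layers=[0, 2])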

About the ToPILImage transform:

transforms.ToPILImage converts a Tensor or an ndarray into a PIL Image, and places specific requirements on the data type of each:

  1. ndarray: dtype must be uint8, values in the range [0, 255], and shape H x W x C
  2. Tensor: shape C x H x W, and it must be a FloatTensor; DoubleTensor or other types are not accepted

numpy ndarray to PIL Image:

# initialize the random seed
np.random.seed(0)
 
data = np.random.randint(0, 255, 300)
print(data.dtype)
n_out = data.reshape(10,10,3)
 
# force the conversion to uint8
n_out = n_out.astype(np.uint8)
print(n_out.dtype)
 
img2 = transforms.ToPILImage()(n_out)
img2.show()

Tensor to PIL Image:

t_out = torch.randn(3,10,10)
img1 = transforms.ToPILImage()(t_out)
img1.show()
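Intermediate feature maps are usually not in the [0, 1] range that ToPILImage expects for float tensors, so here is a small sketch (on a synthetic tensor) of min-max normalizing a channel before converting it:

# synthetic single-channel "feature map" with an arbitrary value range
fmap = torch.randn(1, 10, 10) * 7 + 3

# min-max normalize to [0, 1] so ToPILImage maps it onto [0, 255] sensibly
fmap = (fmap - fmap.min()) / (fmap.max() - fmap.min() + 1e-8)
img3 = transforms.ToPILImage()(fmap)
img3.show()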

Calling the visualization function during training (uncomment the visualize_feature line to show feature maps for each batch):

def train(epoch):
    cnn.train()
    # iterate over every batch of the training set
    for data in tqdm(train_loader, desc='Train: epoch {}'.format(epoch), leave=False, total=len(train_loader)):
        img, label = data
        if cuda_available:
            img = img.cuda()
            label = label.cuda()
        #visualize_feature(img, cnn)

        out = cnn(img)  # forward the batch through the network
        #out = torch.nn.functional.softmax(out, dim=1)
        #print(out.size())
        #print(label.size())
        loss = loss_function(out, label)  # compute the loss

        optimizer.zero_grad()  # zero the accumulated gradients
        loss.backward()        # backpropagate to compute the gradients; parameters are not updated yet
        optimizer.step()       # update the parameters

Loading a pre-trained model directly and outputting the feature maps

model = Residual_Model()
model.load_state_dict(torch.load('./model.pkl'))

output = get_features(model, x)  # model is the trained model; Residual_Model was imported earlier
print('output.shape:', output.shape)
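get_features is not defined in the original post; a minimal sketch, assuming it simply forwards the input through the first few children of the model (the num_layers cut-off below is an arbitrary placeholder):

def get_features(model, x, num_layers=4):
    # run the input through the first num_layers children of the model
    # and return the resulting intermediate feature map
    extractor = nn.Sequential(*list(model.children())[:num_layers])
    with torch.no_grad():
        return extractor(x)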

Origin blog.csdn.net/weixin_43844219/article/details/104482843