Training problem 1 with the PyTorch version of UNet

Problem 1. Error message: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4

The PyTorch version of UNet is quite simple and easy to understand. The official code has a small problem, and its handling of input and output is not very convenient. For my own convenience, and because I plan to change the network model later, I created a fork of the official example.
The official repository is
https://github.com/milesial/Pytorch-UNet.
My fork is
https://github.com/phker/Pytorch-MyUNet.
The code has not been pushed yet. I ran into the following problem during training.

Traceback (most recent call last):
  File "f:/project/AI/123/AILabelSystem/Server/Pytorch-MyUNet/mytrain.py", line 216, in <module>
    train_net(net=net,
  File "f:/project/AI/123/AILabelSystem/Server/Pytorch-MyUNet/mytrain.py", line 89, in train_net
    loss = criterion(masks_pred, true_masks) # compute the loss
  File "D:\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Anaconda3\lib\site-packages\torch\nn\modules\loss.py", line 961, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "D:\Anaconda3\lib\site-packages\torch\nn\functional.py", line 2468, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "D:\Anaconda3\lib\site-packages\torch\nn\functional.py", line 2266, in nll_loss
    ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4

The original solution is here: https://github.com/milesial/Pytorch-UNet/issues/123

In train.py, change

loss = criterion(masks_pred, true_masks)  # around line 80

to the following:

if net.n_classes > 1:
    loss = criterion(masks_pred, true_masks.squeeze(1))  # compute the loss  # patch for issue #123
else:
    loss = criterion(masks_pred, true_masks)  # compute the loss
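The reason the `squeeze(1)` is needed: for multi-class segmentation, `nn.CrossEntropyLoss` expects logits of shape `(N, C, H, W)` and an integer target of shape `(N, H, W)`, but the dataloader yields masks with an extra channel dimension, `(N, 1, H, W)`. A minimal sketch (tensor shapes are illustrative, not taken from the repo):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
masks_pred = torch.randn(2, 3, 4, 4)             # (N, C, H, W) logits, 3 classes
true_masks = torch.randint(0, 3, (2, 1, 4, 4))   # (N, 1, H, W) — extra channel dim

# Passing the 4D target raises a RuntimeError (the exact message
# varies by PyTorch version; in the version above it is the
# "only batches of spatial targets supported" error).
try:
    criterion(masks_pred, true_masks)
except RuntimeError as e:
    print("RuntimeError:", e)

# squeeze(1) drops the channel dimension, giving the (N, H, W)
# class-index tensor that CrossEntropyLoss expects.
loss = criterion(masks_pred, true_masks.squeeze(1))
print(loss)
```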

At the same time, modify eval.py as follows:

if net.n_classes > 1:
    # tot += F.cross_entropy(mask_pred, true_masks).item()  # comment out this line and replace it with the one below  # patch for issue #123
    tot += F.cross_entropy(mask_pred.unsqueeze(dim=0), true_masks.unsqueeze(dim=0).squeeze(1)).item()  # patch for issue #123
else:
    pred = torch.sigmoid(mask_pred)
    pred = (pred > 0.5).float()
    tot += dice_coeff(pred, true_masks).item()
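In the binary branch above, the prediction is squashed through a sigmoid, thresholded at 0.5, and scored with the Dice coefficient. The repo defines `dice_coeff` in its own dice_loss.py; the stand-in below is only an illustrative sketch of the standard Dice formula, not the repo's implementation:

```python
import torch

def dice_coeff_sketch(pred: torch.Tensor, target: torch.Tensor,
                      eps: float = 1e-6) -> torch.Tensor:
    """Dice = 2*|A∩B| / (|A|+|B|); eps avoids division by zero
    when both masks are empty. Hypothetical stand-in for the
    repo's dice_coeff."""
    inter = (pred * target).sum()
    union = pred.sum() + target.sum()
    return (2 * inter + eps) / (union + eps)

# Mirror the eval.py binary branch on a dummy 4x4 mask.
mask_pred = torch.randn(1, 1, 4, 4)
true_mask = torch.ones(1, 1, 4, 4)
pred = (torch.sigmoid(mask_pred) > 0.5).float()
print(dice_coeff_sketch(pred, true_mask).item())
```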


Origin: blog.csdn.net/phker/article/details/112600793