PyTorch - Bug Records and Solutions

Disclaimer: This is an original article by the blogger, released under the CC 4.0 BY-SA license. When reposting, please include the original source link and this statement.
Original link: https://blog.csdn.net/qq_31347869/article/details/97390350

BUG 1

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/THCGeneral.cpp line=844 error=11 : invalid argument

BUG 2

ValueError: Expected more than 1 value per channel when training, got input size [1, 512, 1, 1]

This happens because the batch size cannot be set to 1 during training. With only one value per channel, BatchNorm's computation y = (x - mean(x)) / (std(x) + eps) gives x - mean(x) = 0 everywhere, so the output would be all zeros and there is nothing meaningful to normalize. Note that the same situation can occur when the feature map is 1×1, since each channel then again contains only a single value per sample.
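The situation is easy to reproduce. The snippet below is a minimal sketch (not from the original post) that feeds a [1, 512, 1, 1] tensor to an nn.BatchNorm2d layer in training mode and triggers the same ValueError:

```python
import torch
import torch.nn as nn

# A [1, 512, 1, 1] input gives BatchNorm only one value per channel,
# which is why the ValueError above is raised in training mode.
bn = nn.BatchNorm2d(512)
bn.train()  # the check is only active in training mode

x = torch.randn(1, 512, 1, 1)
try:
    bn(x)
except ValueError as e:
    print(e)  # Expected more than 1 value per channel when training, ...
```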

Most likely you have a nn.BatchNorm layer somewhere in your model, which expects more than 1 value to calculate the running mean and std of the current batch.
In case you want to validate your data, call model.eval() before feeding the data, as this will change the behavior of the BatchNorm layer to use the running estimates instead of calculating them for the current batch.
If you want to train your model and can’t use a bigger batch size, you could switch e.g. to InstanceNorm.
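As a concrete illustration of both suggestions, here is a hedged sketch (the model and input sizes are made up for the example): put the model in eval mode for validation so BatchNorm falls back to its running estimates, or swap BatchNorm for InstanceNorm if you must train with batch size 1.

```python
import torch
import torch.nn as nn

# Hypothetical model, just for illustration.
model = nn.Sequential(nn.Conv2d(3, 512, kernel_size=3),
                      nn.BatchNorm2d(512),
                      nn.ReLU())

# Validation with batch size 1: switch to eval mode so BatchNorm uses its
# running estimates instead of the (undefined) single-value batch statistics.
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 3, 3))   # works even though the feature map is 1x1

# Training with batch size 1: replace BatchNorm with InstanceNorm, which
# normalizes each sample over its own spatial dimensions.
model_in = nn.Sequential(nn.Conv2d(3, 512, kernel_size=3),
                         nn.InstanceNorm2d(512),
                         nn.ReLU())
model_in.train()
out = model_in(torch.randn(1, 3, 8, 8))    # spatial size must still be larger than 1x1
```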


Reference: https://blog.csdn.net/u011276025/article/details/73826562
