PyTorch: Bug Log and Solutions

Copyright notice: This is an original article by the author, licensed under CC 4.0 BY-SA. Please include a link to the original source and this notice when reposting.
Original link: https://blog.csdn.net/qq_31347869/article/details/97390350

BUG 1

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/THCGeneral.cpp line=844 error=11 : invalid argument

BUG 2

ValueError: Expected more than 1 value per channel when training, got input size [1, 512, 1, 1]

This error means you cannot set the batch size to 1 when using BatchNorm. With a single sample, in the normalization y = (x - mean(x)) / (std(x) + eps), the numerator x - mean(x) is 0, so the output would be 0. Note that x = mean(x) can only happen here because the feature map is 1×1: with batch size 1 and a 1×1 spatial size, each channel contains exactly one value, so x equals mean(x).
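The degenerate case can be checked with a tiny sketch (a single value per channel, so mean(x) equals x and std(x) is 0; `eps` value chosen here for illustration):

```python
import torch

x = torch.tensor([3.0])  # one value per channel: batch 1, 1x1 feature map
eps = 1e-5

# x - mean(x) is exactly 0, so the normalized output collapses to 0.
y = (x - x.mean()) / (x.std(unbiased=False) + eps)
print(y)  # tensor([0.])
```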

Most likely you have a nn.BatchNorm layer somewhere in your model, which expects more than 1 value to calculate the running mean and std of the current batch.
In case you want to validate your data, call model.eval() before feeding the data, as this will change the behavior of the BatchNorm layer to use the running estimates instead of calculating them for the current batch.
If you want to train your model and can’t use a bigger batch size, you could switch e.g. to InstanceNorm.
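A minimal sketch reproducing the error and the two workarounds above (assuming a recent PyTorch; 512 channels chosen to match the error message in the traceback):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(512)
x = torch.randn(1, 512, 1, 1)  # batch size 1, 1x1 feature map

# In training mode each channel sees a single value, so batch
# statistics cannot be computed and PyTorch raises ValueError.
try:
    bn(x)
except ValueError as e:
    print("train mode failed:", e)

# Workaround 1 (validation): model.eval() switches BatchNorm to
# its running estimates instead of per-batch statistics.
bn.eval()
print("eval mode output:", tuple(bn(x).shape))

# Workaround 2 (training with batch size 1): InstanceNorm
# normalizes over spatial dimensions, so it works as long as
# the feature map is larger than 1x1.
inorm = nn.InstanceNorm2d(512)
y = torch.randn(1, 512, 8, 8)
print("InstanceNorm output:", tuple(inorm(y).shape))
```

Note that InstanceNorm only helps when the spatial size is larger than 1×1; with a 1×1 feature map it hits the same degenerate statistics in training mode.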


Reference: https://blog.csdn.net/u011276025/article/details/73826562
