RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Traceback (most recent call last):
  File "train.py", line 182, in <module>
    train(config)
  File "train.py", line 120, in train
    loss.backward()
  File "/home/zqzhu/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/zqzhu/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

First, the cause of this error in my case: a new term was added when computing the loss, and it was accumulated in place:

loss += new_loss

As a result, the error above occurred. Error analysis: after the upgrade from PyTorch 0.3 to PyTorch 0.4, some usages changed. Most importantly, PyTorch 0.4 merged Tensor and Variable into a single class, and autograd now checks whether a tensor that is still needed for gradient computation has been modified in place. An in-place operation such as += overwrites the tensor's storage, so when backward() later needs the original value, it raises the RuntimeError above.
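Here is a minimal sketch that reproduces the same error, independent of the original training script (the tensor names are mine, chosen for illustration): torch.exp saves its output for the backward pass, so modifying that output in place invalidates the graph.

import torch

x = torch.randn(3, requires_grad=True)
y = torch.exp(x)    # exp saves its output y for use in backward
y += 1              # in-place op bumps y's version counter
y.sum().backward()  # RuntimeError: ... modified by an inplace operation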
Solutions:
1. Change every inplace=True argument to inplace=False (for example in nn.ReLU).
2. Rewrite every in-place accumulation such as loss += new_loss as an out-of-place one: loss = loss + new_loss (see the sketch after this list).
3. Roll PyTorch back to version 0.3, or create a separate conda environment with PyTorch 0.3.
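To illustrate fixes 1 and 2, here is a hedged sketch of the reproduction above with the in-place operations removed (again, the variable names are illustrative, not from the original code):

import torch
import torch.nn as nn

x = torch.randn(3, requires_grad=True)
y = torch.exp(x)   # exp still saves its output for backward
y = y + 1          # out-of-place add: a new tensor is allocated, the saved output stays intact
y.sum().backward() # runs without error

# Fix 1 applies the same idea to activation layers:
relu = nn.ReLU(inplace=False)  # the default; inplace=True would overwrite the layer's input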



Origin: blog.csdn.net/zzq060143/article/details/88914075