"RuntimeError: Found dtype Double but expected Float" at loss.backward()

Error message

The dtypes are inconsistent: the tensors passed to the loss function do not all have the same floating-point type (one is Double, i.e. torch.float64, while Float, i.e. torch.float32, was expected).

Solution

Check the argument types of the loss-calculation line, e.g. loss = F.mse_loss(out, label) (where F is torch.nn.functional), and make sure out and label are both torch.float32. Use label.dtype to inspect a tensor's type.
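A minimal sketch of the check. The names out and label follow the text; the shapes and values are made up, and the float64 target stands in for the common case of a label built from a numpy array:

```python
import torch
import torch.nn.functional as F  # the "F" in F.mse_loss

# Hypothetical stand-ins for the model output and the target;
# a target converted from a numpy float array is typically float64 (Double).
out = torch.randn(4, 1, requires_grad=True)     # torch.float32
label = torch.zeros(4, 1, dtype=torch.float64)  # torch.float64

print(out.dtype, label.dtype)  # inspect the types as suggested above

loss = F.mse_loss(out, label)  # forward succeeds (result is promoted to Double) ...
# loss.backward()              # ... but backward raises "Found dtype Double but expected Float"
```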

Specific process

The traceback points at the loss.backward() line.
Thinking it might be a problem with the loss tensor's type, I added

loss = loss.to(torch.float32)

But the same error was still reported, so I began to suspect the argument types in the loss-calculation line above.

I added two casts so that out and label are both torch.float32, ran again: success.
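Casting the loss after it has been computed does not help: the computation graph was already built with a Double tensor, so the mismatch resurfaces during backward. A sketch of the failing attempt (tensor names and shapes are my own stand-ins):

```python
import torch
import torch.nn.functional as F

out = torch.randn(4, 1, requires_grad=True)     # float32 model output (hypothetical)
label = torch.zeros(4, 1, dtype=torch.float64)  # float64 target

loss = F.mse_loss(out, label)  # the loss itself is already torch.float64
loss = loss.to(torch.float32)  # casting afterwards only appends a cast node ...
try:
    loss.backward()            # ... the Double/Float mismatch is still inside the graph
except RuntimeError as err:
    caught = err
    print(caught)
```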
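The fix boils down to making both loss arguments torch.float32 before the loss is computed. The exact two lines from the screenshot are not shown, so this is my reconstruction based on the Solution section:

```python
import torch
import torch.nn.functional as F

out = torch.randn(4, 1, requires_grad=True)     # hypothetical float32 model output
label = torch.zeros(4, 1, dtype=torch.float64)  # float64 target, e.g. from numpy

# The two added casts: make both arguments torch.float32.
out = out.to(torch.float32)
label = label.to(torch.float32)

loss = F.mse_loss(out, label)
loss.backward()  # runs without the dtype error
```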

Origin: blog.csdn.net/qq_44391957/article/details/127109170