Using torch.nn.functional.smooth_l1_loss (imported as F.smooth_l1_loss) in PyTorch

import torch.nn.functional as F

1.

output = model(imgL,imgR)
output = torch.squeeze(output,1)

loss = F.smooth_l1_loss(output[mask], disp_true[mask], reduction='mean')  # size_average=True is deprecated; reduction='mean' is equivalent

loss.backward()
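A minimal, self-contained version of the pattern above. The model, stereo images, and disparity tensors are stand-in random data here (the original network is not shown in the snippet), but the squeeze/mask/loss/backward flow is the same:

```python
import torch
import torch.nn.functional as F

# Stand-in for the network's predicted disparity map: (batch, 1, H, W)
output = torch.randn(2, 1, 4, 4, requires_grad=True)
disp_true = torch.rand(2, 4, 4) * 192   # stand-in ground-truth disparity
mask = disp_true > 0                    # supervise only valid pixels

pred = torch.squeeze(output, 1)         # (batch, H, W), matches disp_true
# reduction='mean' replaces the deprecated size_average=True
loss = F.smooth_l1_loss(pred[mask], disp_true[mask], reduction='mean')
loss.backward()                         # gradients flow back into `output`
print(output.grad.shape)                # torch.Size([2, 1, 4, 4])
```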

2.

output1, output2, output3 = model(imgL, imgR)
output1 = torch.squeeze(output1, 1)
output2 = torch.squeeze(output2, 1)
output3 = torch.squeeze(output3, 1)
# size_average=True is deprecated; reduction='mean' is equivalent
loss = (0.5 * F.smooth_l1_loss(output1[mask], disp_true[mask], reduction='mean')
        + 0.7 * F.smooth_l1_loss(output2[mask], disp_true[mask], reduction='mean')
        + F.smooth_l1_loss(output3[mask], disp_true[mask], reduction='mean'))

loss.backward()
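The weighted three-output loss can be sketched the same way with the 0.5 / 0.7 / 1.0 weights from the snippet. The three predictions are random stand-ins for the network's intermediate disparity outputs:

```python
import torch
import torch.nn.functional as F

disp_true = torch.rand(2, 4, 4) * 192   # stand-in ground-truth disparity
mask = disp_true > 0

# Stand-ins for the three intermediate disparity predictions
outs = [torch.randn(2, 1, 4, 4, requires_grad=True) for _ in range(3)]
o1, o2, o3 = (torch.squeeze(o, 1) for o in outs)

# Weighted sum: a single backward() propagates through all three branches
loss = (0.5 * F.smooth_l1_loss(o1[mask], disp_true[mask], reduction='mean')
        + 0.7 * F.smooth_l1_loss(o2[mask], disp_true[mask], reduction='mean')
        + F.smooth_l1_loss(o3[mask], disp_true[mask], reduction='mean'))
loss.backward()
```

Because the loss is a plain weighted sum, each branch simply receives its gradient scaled by its weight.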

loss.data[0] was the old (pre-0.4) way to read the scalar loss value; in current PyTorch, indexing a 0-dim tensor raises an error, so use loss.item() instead.

backward() is a function that appears frequently in PyTorch. It is typically called on the loss during training, e.g. loss.backward(). Calling backward on the loss runs automatic differentiation from the output back through to the inputs, computing the gradients needed for the update step.
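A tiny illustration of what backward() does, and of reading the scalar with .item() (the tensors here are made up for the example):

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()    # y = x1^2 + x2^2 + x3^2

y.backward()          # autograd computes dy/dx = 2x
print(x.grad)         # tensor([2., 4., 6.])
print(y.item())       # 14.0 -- replaces the old y.data[0]
```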


Reposted from blog.csdn.net/lzglzj20100700/article/details/84874273