How to backpropagate the loss of only some samples

import torch
# Suppose the predictions are a
a = torch.tensor([1, 2, 3, 4], dtype=torch.float)
a.requires_grad_(True)
# Suppose the ground truth is [2, 2, 2, 2]
y = torch.tensor([2, 2, 2, 2], dtype=torch.float)
# Use the L2 loss, computed per sample
loss = (y - a) * (y - a) / 2
print(loss)
# loss is a vector, so backward() needs a gradient argument of the same shape
gradients = torch.tensor([1, 1, 1, 1], dtype=torch.float)
# loss.backward(gradients)
# print(a.grad)
# would give [-1., 0., 1., 2.]
# Zero out the losses below the threshold; those samples then contribute no gradient
loss[loss < 1] = 0
print(loss)
loss.backward(gradients)
print(a.grad)

Result:

tensor([0.5000, 0.0000, 0.5000, 2.0000], grad_fn=<DivBackward0>)
tensor([0., 0., 0., 2.], grad_fn=<PutBackward>)
tensor([-0., -0.,  0.,  2.])
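The in-place assignment `loss[loss < 1] = 0` works, but an equivalent sketch (not from the original post) avoids in-place writes entirely: multiply the per-sample loss by a detached 0/1 mask, so samples below the threshold contribute neither loss nor gradient. The threshold of 1 here matches the example above.

import torch

# Sketch of a mask-based alternative (assumed, not the original author's code)
a = torch.tensor([1, 2, 3, 4], dtype=torch.float, requires_grad=True)
y = torch.tensor([2, 2, 2, 2], dtype=torch.float)
loss = (y - a) ** 2 / 2                 # per-sample L2 loss
mask = (loss >= 1).float().detach()     # 1 where the sample should count
(loss * mask).sum().backward()          # summing replaces the gradients vector
print(a.grad)                           # tensor([0., 0., 0., 2.])

Because the mask is detached, autograd treats it as a constant, and the result matches the in-place version above.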

Reprinted from www.cnblogs.com/leebxo/p/11296485.html