How to implement a differentiable Hamming loss in PyTorch?

Oleg Dats :

How to implement a differentiable loss function that counts the number of wrong predictions?

import numpy as np

output = np.array([1, 0, 4, 10])
target = np.array([1, 2, 4, 15])
loss = np.count_nonzero(output != target) / len(output)  # [0, 1, 0, 1] -> 2 / 4 -> 0.5


I have tried a few implementations, but they are not differentiable: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

import torch

def hamming_loss(output, target):
    # Earlier attempts, also non-differentiable:
    #   loss = torch.tensor(torch.nonzero(output != target).size(0)).double() / target.size(0)
    #   loss = torch.sum((output != target), dim=0).double() / target.size(0)
    # The comparison (output != target) produces a boolean tensor with no grad_fn,
    # so the mean below carries no gradient either.
    loss = torch.mean((output != target).double())
    return loss

Maybe there is a similar but differentiable loss function?

Shai :

Why don't you replace your discrete predictions (e.g., [1, 0, 4, 10]) with "soft" predictions, i.e., a probability for each label (so the output becomes a 4 x (num labels) matrix of probability vectors)?
Once you have "soft" predictions, you can compute the cross-entropy loss between the predicted output probabilities and the desired targets.
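A minimal sketch of this idea, assuming the model produces one row of logits per predicted position (the shapes, the num_labels value, and the variable names here are illustrative, not taken from the question):

import torch
import torch.nn as nn

num_labels = 16                                           # assumed label space (must cover values like 10 and 15)
logits = torch.randn(4, num_labels, requires_grad=True)   # "soft" predictions: one row of scores per position
target = torch.tensor([1, 2, 4, 15])                      # desired labels

criterion = nn.CrossEntropyLoss()   # differentiable surrogate for the 0/1 mismatch count
loss = criterion(logits, target)
loss.backward()                     # gradients flow, unlike with the hard comparison

The hard Hamming loss can still be reported as a metric by comparing logits.argmax(dim=1) against target, while the cross-entropy above is what is actually backpropagated.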
