Copyright notice: this is an original article by the blog author; reproduction without permission is prohibited. https://blog.csdn.net/zjucor/article/details/86629087
For a multi-label F1 loss over several classes, you should compute F1 for each class separately and then average the per-class scores (macro F1).

You must not compute F1 per sample and then average over the batch.

For example, the following implementation is wrong:
```python
import torch

def f1_loss(logits, labels):
    small_value = 1e-6
    beta = 1
    batch_size = logits.size()[0]
    p = torch.sigmoid(logits)
    l = labels
    # Bug: summing over dim 1 aggregates over classes, so each F1
    # score here is computed per *sample*, not per class.
    num_pos = torch.sum(p, 1) + small_value
    num_pos_hat = torch.sum(l, 1) + small_value
    tp = torch.sum(l * p, 1)
    precise = tp / num_pos
    recall = tp / num_pos_hat
    fs = (1 + beta * beta) * precise * recall / (beta * beta * precise + recall + small_value)
    # Averaging these per-sample F1 scores over the batch is not macro F1.
    loss = fs.sum() / batch_size
    return 1 - fs.sum() / batch_size
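To see that the two reductions really differ, here is a small hand-picked check (not from the original post) with a toy batch of hard 0/1 predictions. It uses the algebraic identity F1 = 2·tp / (pred + true), which is equivalent to 2PR/(P+R):

```python
import torch

labels = torch.tensor([[1., 0.], [1., 0.], [1., 1.]])
probs  = torch.tensor([[1., 0.], [1., 1.], [0., 1.]])
eps = 1e-8

# Per-sample F1 averaged over the batch (the wrong reduction above):
tp_s = (labels * probs).sum(dim=1)
f1_s = 2 * tp_s / (probs.sum(dim=1) + labels.sum(dim=1) + eps)
per_sample = f1_s.mean()   # 7/9 ≈ 0.778

# Per-class F1 averaged over classes (macro F1):
tp_c = (labels * probs).sum(dim=0)
f1_c = 2 * tp_c / (probs.sum(dim=0) + labels.sum(dim=0) + eps)
macro = f1_c.mean()        # 11/15 ≈ 0.733
```

The two averages disagree even on this tiny batch, so optimizing one is not the same as optimizing the other.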
The correct approach is:
```python
import torch

def f1_loss(predict, target):
    predict = torch.sigmoid(predict)
    # Floor the predicted probability of negative classes at 0.01 so the
    # gradient does not vanish when a negative logit saturates.
    predict = torch.clamp(predict * (1 - target), min=0.01) + predict * target
    # Summing over dim 0 aggregates over the batch: one F1 score per class.
    tp = (predict * target).sum(dim=0)
    precision = tp / (predict.sum(dim=0) + 1e-8)
    recall = tp / (target.sum(dim=0) + 1e-8)
    f1 = 2 * (precision * recall / (precision + recall + 1e-8))
    # Averaging over classes gives macro F1; return it as a loss.
    return 1 - f1.mean()
```
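As a quick sanity check (not from the original post; the definition is repeated so the snippet runs standalone), confidently correct logits should drive the loss toward 0 and confidently wrong logits toward 1:

```python
import torch

def f1_loss(predict, target):
    predict = torch.sigmoid(predict)
    predict = torch.clamp(predict * (1 - target), min=0.01) + predict * target
    tp = (predict * target).sum(dim=0)
    precision = tp / (predict.sum(dim=0) + 1e-8)
    recall = tp / (target.sum(dim=0) + 1e-8)
    f1 = 2 * (precision * recall / (precision + recall + 1e-8))
    return 1 - f1.mean()

target = torch.tensor([[1., 0.], [0., 1.]])
good_logits = (target * 2 - 1) * 10   # +10 on positives, -10 on negatives
bad_logits = -good_logits             # confidently wrong everywhere

loss_good = f1_loss(good_logits, target)  # close to 0
loss_bad = f1_loss(bad_logits, target)    # close to 1
```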