[pytorch] Computing KL divergence

You can compute it by calling torch.nn.functional.kl_div directly, or through its module wrapper torch.nn.KLDivLoss, whose signature is:

torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean', log_target=False)

Meaning of the arguments:

  • reduction (str, optional) – specifies how the pointwise losses are reduced. Default: "mean", which averages over all elements. "batchmean" divides the summed loss by the batch size (sum / batchsize), which matches the mathematical definition of KL divergence. "sum" simply adds up all pointwise losses, and "none" applies no reduction. See the sketch after this list.
  • log_target (bool, optional) – specifies whether the target is in log space. Default: False. If True, the target you pass in should be in log space.
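
A minimal sketch (my own toy 3x5 example, not from the original post) showing how the three reduction modes relate to each other when computed with torch.nn.functional.kl_div:

>>> import torch
>>> import torch.nn.functional as F
>>> logp = F.log_softmax(torch.randn(3, 5), dim=1)   # input: log-probabilities
>>> q = F.softmax(torch.randn(3, 5), dim=1)          # target: probabilities
>>> s = F.kl_div(logp, q, reduction="sum")           # sum over all 3*5 elements
>>> torch.allclose(F.kl_div(logp, q, reduction="batchmean"), s / 3)   # sum / batch size
True
>>> torch.allclose(F.kl_div(logp, q, reduction="mean"), s / 15)       # sum / number of elements
True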

Examples:

Target in probability (softmax) space
>>> import torch
>>> import torch.nn as nn
>>> import torch.nn.functional as F
>>> kl_loss = nn.KLDivLoss(reduction="batchmean")
>>> # input should be a distribution in the log space
>>> input = F.log_softmax(torch.randn(3, 5, requires_grad=True), dim=1)
>>> # Sample a batch of distributions. Usually this would come from the dataset
>>> target = F.softmax(torch.rand(3, 5), dim=1)
>>> output = kl_loss(input, target)
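
As a sanity check (a sketch of my own, continuing the session above), the batchmean result should equal the hand-written definition: target * (log(target) - input) summed over all elements and divided by the batch size:

>>> manual = (target * (target.log() - input)).sum() / input.shape[0]
>>> torch.allclose(output, manual)
True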

Target in log space
>>> kl_loss = nn.KLDivLoss(reduction="batchmean", log_target=True)
>>> log_target = F.log_softmax(torch.rand(3, 5), dim=1)
>>> output = kl_loss(input, log_target)
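
The functional form gives the same value; a hedged sketch (again continuing the session above) comparing the two, and checking against the pointwise term exp(log_target) * (log_target - input) used when log_target=True:

>>> functional_output = F.kl_div(input, log_target, reduction="batchmean", log_target=True)
>>> manual = (log_target.exp() * (log_target - input)).sum() / input.shape[0]
>>> torch.allclose(output, functional_output), torch.allclose(output, manual)
(True, True)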

Reposted from blog.csdn.net/condom10010/article/details/129524759