Computing KL divergence in PyTorch with F.kl_div()

First, the description from the official documentation: https://pytorch.org/docs/stable/nn.functional.html

torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean')

Parameters

  • input – Tensor of arbitrary shape

  • target – Tensor of the same shape as input

  • size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

  • reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

  • reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied. 'batchmean': the sum of the output will be divided by the batch size. 'sum': the output will be summed. 'mean': the output will be divided by the number of elements in the output. Default: 'mean'
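A quick sanity check of the reduction modes (a minimal sketch; the shapes and seed are arbitrary). 'mean' divides the summed pointwise terms by the total number of elements, while 'batchmean' divides by the batch size only; 'batchmean' is the one that matches the mathematical definition of KL divergence per sample.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits_p = torch.randn(4, 5)  # batch of 4 samples, 5 classes
logits_q = torch.randn(4, 5)

log_q = F.log_softmax(logits_q, dim=-1)  # input: log-probabilities
p = F.softmax(logits_p, dim=-1)          # target: probabilities

kl_sum = F.kl_div(log_q, p, reduction='sum')
kl_mean = F.kl_div(log_q, p, reduction='mean')            # kl_sum / 20 elements
kl_batchmean = F.kl_div(log_q, p, reduction='batchmean')  # kl_sum / 4 rows

assert torch.allclose(kl_mean * 20, kl_sum, atol=1e-6)
assert torch.allclose(kl_batchmean * 4, kl_sum, atol=1e-6)
```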

Now for usage: the first argument must be a matrix of log-probabilities, and the second a matrix of probabilities. This matters; if the arguments are not in these forms, the computed KL divergence may come out negative.

Suppose we have two matrices X and Y. Because KL divergence is asymmetric, there is a "guiding" distribution and a "guided" one, so the order in which the two matrices are passed must be fixed.

For example, if we want Y to guide X, pass X as the first argument and Y as the second; the guided one goes first. Then just compute the corresponding probabilities and log-probabilities.

import torch
import torch.nn.functional as F

# define two matrices
x = torch.randn((4, 5))
y = torch.randn((4, 5))

# since y guides x, take the log-probabilities of x and the probabilities of y
logp_x = F.log_softmax(x, dim=-1)
p_y = F.softmax(y, dim=-1)


kl_sum = F.kl_div(logp_x, p_y, reduction='sum')
kl_mean = F.kl_div(logp_x, p_y, reduction='mean')

print(kl_sum, kl_mean)


>>> tensor(3.4165) tensor(0.1708)
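To confirm the argument order, the 'sum' result can be reproduced directly from the definition KL(P‖Q) = Σ p · (log p − log q). A minimal sketch reusing the x/y naming from the snippet above (with a fixed seed for reproducibility, so the values differ from the output shown):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(4, 5)
y = torch.randn(4, 5)

logp_x = F.log_softmax(x, dim=-1)
p_y = F.softmax(y, dim=-1)

# F.kl_div(input, target) computes target * (log(target) - input) pointwise
manual = (p_y * (p_y.log() - logp_x)).sum()
builtin = F.kl_div(logp_x, p_y, reduction='sum')

assert torch.allclose(manual, builtin)
```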



Origin blog.csdn.net/Answer3664/article/details/106265132