Winter Vacation PyTorch Tools, Day 7

Course Notes

From weight initialization to the various loss functions


Course Code


Homework

2. A loss function's reduction argument has three modes; what does each of them do?

When inputs, target, and weight take the values below, how is the loss computed under reduction='mean'? (A worked sketch follows the parameters.)

inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)

target = torch.tensor([0, 1, 1], dtype=torch.long)

weights = torch.tensor([1, 2], dtype=torch.float)
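
A sketch of the answer, based on the documented behavior of nn.CrossEntropyLoss (numbers rounded): reduction='none' returns the per-sample losses as a vector, reduction='sum' adds them into a scalar, and reduction='mean' returns a weighted average, dividing the weighted sum by the sum of the selected per-sample class weights rather than by the batch size. With the parameters above, each per-sample loss is -weight[target_i] * log_softmax(inputs_i)[target_i]:

sample 1 (target 0, weight 1): 1 * (-log(e^1 / (e^1 + e^2))) ≈ 1.3133
sample 2 (target 1, weight 2): 2 * (-log(e^3 / (e^1 + e^3))) ≈ 0.2539
sample 3 (target 1, weight 2): same as sample 2, ≈ 0.2539

reduction='mean': (1.3133 + 0.2539 + 0.2539) / (1 + 2 + 2) ≈ 0.3642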

Weighted Cross-Entropy Loss

import torch
import torch.nn as nn


inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
target = torch.tensor([0, 1, 1], dtype=torch.long)
# per-class weights for the loss function (class 1 weighted 200x heavier than class 0)
weights = torch.tensor([1, 200], dtype=torch.float)

loss_f_none_w = nn.CrossEntropyLoss(weight=weights, reduction='none')
loss_f_sum = nn.CrossEntropyLoss(weight=weights, reduction='sum')
loss_f_mean = nn.CrossEntropyLoss(weight=weights, reduction='mean')

# forward
loss_none_w = loss_f_none_w(inputs, target)
loss_sum = loss_f_sum(inputs, target)
loss_mean = loss_f_mean(inputs, target)

# view
print("\nweights: ", weights)
print(loss_none_w, loss_sum, loss_mean)
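
For comparison, here is a minimal manual check of the three values printed above, assuming only the standard torch API (same inputs, target, and weights as in the snippet); it reproduces the 'none', 'sum', and 'mean' results by hand:

import torch

inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
target = torch.tensor([0, 1, 1], dtype=torch.long)
weights = torch.tensor([1, 200], dtype=torch.float)

log_probs = torch.log_softmax(inputs, dim=1)               # log(softmax) for each sample
picked = log_probs[torch.arange(len(target)), target]      # log-probability of the target class
per_sample = -weights[target] * picked                     # weighted per-sample loss, i.e. reduction='none'

print(per_sample)                                          # ≈ [1.3133, 25.3856, 25.3856]
print(per_sample.sum())                                    # reduction='sum'  ≈ 52.0845
print(per_sample.sum() / weights[target].sum())            # reduction='mean' ≈ 52.0845 / 401 ≈ 0.1299

Note that the 'mean' denominator is the sum of the selected class weights (1 + 200 + 200 = 401), not the number of samples.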


Reposted from blog.csdn.net/u013625492/article/details/114236856