PyTorch study notes 1 - loss, max

1. Loss functions

https://pytorch.org/docs/stable/nn.html#loss-functions

torch.nn.CrossEntropyLoss(weight=None, size_average=True, ignore_index=-100, reduce=True)

Parameters:
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C.
size_average (bool, optional) – False: sum the losses over the mini-batch; True: average the losses over the mini-batch.
ignore_index (int, optional) – specifies a target value that is ignored and does not contribute to the gradient; when size_average=True, the loss is averaged only over the non-ignored targets.
reduce (bool, optional) – False: return a loss per sample; True: reduce the loss according to the size_average setting.
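A minimal sketch of how these parameters behave (the logits and targets below are made up for illustration; note that in recent PyTorch versions size_average and reduce are deprecated in favor of a single reduction argument, but ignore_index works the same way):

```python
import torch
import torch.nn as nn

# 3 samples, 4 classes; rows are raw (unnormalized) logits
logits = torch.tensor([[2.0, 0.5, 0.1, 0.1],
                       [0.1, 3.0, 0.2, 0.2],
                       [0.3, 0.2, 0.1, 2.5]])
targets = torch.tensor([0, 1, 3])  # one class index per sample

# Default behavior: average the per-sample losses over the mini-batch
criterion = nn.CrossEntropyLoss()
loss = criterion(logits, targets)          # scalar tensor

# ignore_index: samples whose target equals the given index are skipped,
# and the average is taken only over the remaining samples
criterion_ign = nn.CrossEntropyLoss(ignore_index=3)
loss_ign = criterion_ign(logits, targets)  # third sample (target 3) ignored
```

Here loss_ign equals the plain cross-entropy averaged over only the first two samples, since the third sample's target matches ignore_index.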

2. max

torch.max(input, dim, keepdim=False, out=None) → (Tensor, LongTensor)

Finds the maximum along a dimension. Returns two tensors: the first holds the maximum values, the second holds the indices of those maxima.
When the second argument (dim) is 0, the maximum is taken over each column; when it is 1, over each row.

>>> print(a)
tensor([[ 0.7179,  0.6337,  0.9852,  0.4298],
        [ 0.2666,  0.9482,  0.6927,  0.6029],
        [ 0.1988,  0.1044,  0.6534,  0.1656]])
# max of each row (dim=1)
>>> b = torch.max(a,1)
>>> print(b)
(tensor([ 0.9852,  0.9482,  0.6534]), tensor([ 2,  1,  2]))

# max of each column (dim=0)
>>> b,c = torch.max(a,0)
>>> print(b)
tensor([ 0.7179,  0.9482,  0.9852,  0.6029])
>>> print(c)
tensor([ 0,  1,  0,  1])
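Besides the dim-reduction form above, torch.max also has an element-wise form, torch.max(input, other), which compares two tensors entry by entry and returns a single tensor of the larger values (no indices). A small sketch:

```python
import torch

x = torch.tensor([1.0, 4.0, 2.0])
y = torch.tensor([3.0, 0.0, 5.0])

# Element-wise maximum of two tensors: one result tensor, no indices
z = torch.max(x, y)
print(z)  # tensor([3., 4., 5.])
```

This is the form that matches the two-tensor signature torch.max(input, other, out=None); passing an integer as the second argument instead selects the dim-reduction form used in the examples above.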


Reposted from blog.csdn.net/weixin_41043240/article/details/80257402