PyTorch: loss functions

Loss functions are used by calling their implementations in the torch.nn package.

Basic usage:

criterion = LossCriterion()  # the constructor takes its own parameters
loss = criterion(x, y)       # the call then takes the standard parameters
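For example, a minimal end-to-end sketch using nn.MSELoss (the tensor shapes are chosen only for illustration):

import torch
import torch.nn as nn

criterion = nn.MSELoss()                   # constructor parameters, e.g. reduction
x = torch.randn(3, 5, requires_grad=True)  # stands in for a model output
y = torch.randn(3, 5)                      # target
loss = criterion(x, y)
loss.backward()                            # gradients flow back into x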

 

L1 norm loss L1Loss

Computes the absolute value of the difference between the output and the target.

torch.nn.L1Loss(reduction='mean')
# reduction - one of three values. none: no reduction is applied; mean: return the mean of the loss; sum: return the sum of the loss. Default: 'mean'.
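A minimal sketch of the three reduction modes (values chosen only for illustration):

import torch
import torch.nn as nn

x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([1.5, 2.0, 5.0])
print(nn.L1Loss(reduction='none')(x, y))  # tensor([0.5000, 0.0000, 2.0000])
print(nn.L1Loss(reduction='mean')(x, y))  # tensor(0.8333)
print(nn.L1Loss(reduction='sum')(x, y))   # tensor(2.5000)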

 

Mean square error loss MSELoss

Computes the mean squared error between the output and the target.

torch.nn.MSELoss(reduction='mean')
# reduction - one of three values. none: no reduction is applied; mean: return the mean of the loss; sum: return the sum of the loss. Default: 'mean'.

 

Cross entropy loss CrossEntropyLoss

Very effective when training a classification task with C classes. The optional argument weight must be a 1-D Tensor assigning a weight to each class, which is particularly useful when the training set is unbalanced.
In multi-class classification, the softmax activation function is often paired with the cross-entropy loss: cross entropy describes the difference between two probability distributions, while the output of a neural network is a raw vector, not a probability distribution. Softmax is therefore used to "normalize" the vector into a probability distribution, on which the cross-entropy loss is then computed. Note that torch.nn.CrossEntropyLoss applies log-softmax internally, so it should be given raw, unnormalized scores.

The per-sample loss is:
$\text{loss}(x, \text{class}) = -\log\left(\frac{\exp(x[\text{class}])}{\sum_j \exp(x[j])}\right) = -x[\text{class}] + \log\left(\sum_j \exp(x[j])\right)$

torch.nn.CrossEntropyLoss(weight=None, ignore_index=-100, reduction='mean')
# weight (Tensor, optional) - a manual rescaling weight for each class; if given, it must be a 1-D Tensor of length C.
# ignore_index (int, optional) - specifies a target value that is ignored, so that it does not contribute to the input gradient.
# reduction - one of three values. none: no reduction is applied; mean: return the mean of the loss; sum: return the sum of the loss. Default: 'mean'.
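A minimal sketch (batch size and class count are illustrative); note that the input is raw logits and the target holds class indices:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 3, requires_grad=True)  # raw scores for 4 samples, 3 classes
target = torch.tensor([0, 2, 1, 0])             # class indices, not one-hot vectors
loss = criterion(logits, target)
loss.backward()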

 

KL divergence loss KLDivLoss

Computes the KL divergence between the input and the target. KL divergence is useful for measuring the distance between two continuous distributions, and is effective when performing direct regression over the space of a (discretely sampled) continuous output distribution.

torch.nn.KLDivLoss(reduction='mean')

# reduction - one of three values. none: no reduction is applied; mean: return the mean of the loss; sum: return the sum of the loss. Default: 'mean'.
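A minimal sketch (shapes illustrative); note that KLDivLoss expects the input to contain log-probabilities and the target to contain probabilities:

import torch
import torch.nn as nn
import torch.nn.functional as F

criterion = nn.KLDivLoss(reduction='mean')
log_p = F.log_softmax(torch.randn(2, 5), dim=1)  # input: log-probabilities
q = F.softmax(torch.randn(2, 5), dim=1)          # target: probabilities
loss = criterion(log_p, q)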

 

Binary cross entropy loss BCELoss

Computes the cross entropy for a binary classification task. Useful for measuring reconstruction error, for example in autoencoders. Note that each target value t[i] must lie in the range between 0 and 1.

torch.nn.BCELoss(weight=None, reduction='mean')
# weight (Tensor, optional) - a manual rescaling weight for each element in the batch; if given, it must be a Tensor of length "nbatch". (The pos_weight argument belongs to BCEWithLogitsLoss, below, not to BCELoss.)
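A minimal sketch; the input must already be probabilities, e.g. the output of a sigmoid:

import torch
import torch.nn as nn

criterion = nn.BCELoss()
x = torch.randn(4, requires_grad=True)
p = torch.sigmoid(x)                # probabilities in (0, 1)
t = torch.tensor([1., 0., 1., 0.])  # target values in [0, 1]
loss = criterion(p, t)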

 

BCEWithLogitsLoss

BCEWithLogitsLoss combines a Sigmoid layer and BCELoss in a single class. It is more numerically stable than a plain Sigmoid followed by BCELoss, because merging the two operations into one layer allows the log-sum-exp trick to be used for numerical stability.

torch.nn.BCEWithLogitsLoss(weight=None, reduction='mean', pos_weight=None)
# weight (Tensor, optional) - a manual rescaling weight for each element in the batch; if given, it must be a Tensor of length "nbatch".
# pos_weight (Tensor, optional) - a weight for the positive examples; if given, it must be a Tensor of length equal to the number of classes.
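A minimal sketch; unlike BCELoss, the input is raw logits and the sigmoid is applied internally:

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
logits = torch.randn(4, requires_grad=True)  # raw scores, no explicit sigmoid
t = torch.tensor([1., 0., 1., 0.])
loss = criterion(logits, t)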

 

MarginRankingLoss

torch.nn.MarginRankingLoss(margin=0.0, reduction='mean')

For each sample in the mini-batch, the loss is:
$\text{loss}(x_1, x_2, y) = \max(0,\ -y \cdot (x_1 - x_2) + \text{margin})$
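A minimal sketch (shapes illustrative); y = 1 means x1 should be ranked higher than x2, y = -1 the opposite:

import torch
import torch.nn as nn

criterion = nn.MarginRankingLoss(margin=0.5)
x1 = torch.randn(3, requires_grad=True)
x2 = torch.randn(3, requires_grad=True)
y = torch.tensor([1., -1., 1.])  # desired ordering per pair
loss = criterion(x1, x2, y)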

 

HingeEmbeddingLoss

torch.nn.HingeEmbeddingLoss(margin=1.0, reduction='mean')
# margin: default 1.0

For each sample in the mini-batch, the loss is:
$l_n = \begin{cases} x_n, & \text{if } y_n = 1 \\ \max\{0,\ \Delta - x_n\}, & \text{if } y_n = -1 \end{cases}$
where $\Delta$ is the margin.
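A minimal sketch, treating the input as distances between pairs and the target as similar (1) / dissimilar (-1) labels:

import torch
import torch.nn as nn

criterion = nn.HingeEmbeddingLoss(margin=1.0)
d = torch.randn(4).abs()              # e.g. pairwise distances
y = torch.tensor([1., -1., 1., -1.])  # 1: similar pair, -1: dissimilar pair
loss = criterion(d, y)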

 

Multi-label classification loss MultiLabelMarginLoss

torch.nn.MultiLabelMarginLoss(reduction='mean')

For each sample in the mini-batch, the loss is computed as:
$\text{loss}(x, y) = \sum_{ij} \frac{\max(0,\ 1 - (x[y[j]] - x[i]))}{x.\text{size}(0)}$
where $j$ ranges over the valid (non-negative) target indices and $i$ over all classes with $i \neq y[j]$.
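A minimal sketch; the target lists the positive class indices for each sample, padded with -1 after the last valid label:

import torch
import torch.nn as nn

criterion = nn.MultiLabelMarginLoss()
x = torch.randn(1, 4)               # scores for 4 classes
y = torch.tensor([[3, 0, -1, -1]])  # classes 3 and 0 are the labels; -1 pads
loss = criterion(x, y)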

 

Smoothed L1 loss SmoothL1Loss

Also known as the Huber loss function.

torch.nn.SmoothL1Loss(reduction='mean')

$\text{loss}(x, y) = \frac{1}{n} \sum_i z_i$
where
$z_i = \begin{cases} 0.5\,(x_i - y_i)^2, & \text{if } |x_i - y_i| < 1 \\ |x_i - y_i| - 0.5, & \text{otherwise} \end{cases}$
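A minimal sketch with one small and one large residual, showing the quadratic/linear behavior:

import torch
import torch.nn as nn

criterion = nn.SmoothL1Loss()
x = torch.tensor([0.0, 3.0])
y = torch.tensor([0.2, 0.0])
loss = criterion(x, y)  # (0.5 * 0.2**2 + (3.0 - 0.5)) / 2 = 1.26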

 

Two-class logistic loss SoftMarginLoss

torch.nn.SoftMarginLoss(reduction='mean')

**Parameters:**
reduction - one of three values. none: no reduction is applied; mean: return the mean of the loss; sum: return the sum of the loss. Default: 'mean'.
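For reference, the documented per-element form is $\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i] \cdot x[i]))}{x.\text{nelement}()}$. A minimal sketch with targets in {1, -1}:

import torch
import torch.nn as nn

criterion = nn.SoftMarginLoss()
x = torch.randn(4, requires_grad=True)
y = torch.tensor([1., -1., 1., -1.])  # targets must be 1 or -1
loss = criterion(x, y)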

 

Multi-label one-versus-all loss MultiLabelSoftMarginLoss

torch.nn.MultiLabelSoftMarginLoss(weight=None, reduction='mean')
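A minimal sketch; the target is a multi-hot matrix of shape (N, C):

import torch
import torch.nn as nn

criterion = nn.MultiLabelSoftMarginLoss()
x = torch.randn(2, 3, requires_grad=True)       # scores for 3 classes
y = torch.tensor([[1., 0., 1.], [0., 1., 1.]])  # multi-hot targets, shape (N, C)
loss = criterion(x, y)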

 

 

Cosine loss CosineEmbeddingLoss

torch.nn.CosineEmbeddingLoss(margin=0.0, reduction='mean')

$\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max(0,\ \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1 \end{cases}$
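A minimal sketch (embedding size illustrative); y = 1 pulls the two embeddings together, y = -1 pushes them apart:

import torch
import torch.nn as nn

criterion = nn.CosineEmbeddingLoss(margin=0.0)
x1 = torch.randn(3, 8, requires_grad=True)
x2 = torch.randn(3, 8, requires_grad=True)
y = torch.tensor([1., -1., 1.])  # similar / dissimilar label per pair
loss = criterion(x1, x2, y)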

 

Multi-class classification hinge loss MultiMarginLoss

torch.nn.MultiMarginLoss(p=1, margin=1.0, weight=None, reduction='mean')

 

$\text{loss}(x, y) = \frac{\sum_{i \neq y} \max(0,\ \text{margin} - x[y] + x[i])^p}{x.\text{size}(0)}$

p = 1 or 2. Default: 1
margin: default 1.0
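A minimal sketch (class count illustrative); the target holds the correct class index per sample:

import torch
import torch.nn as nn

criterion = nn.MultiMarginLoss(p=1, margin=1.0)
x = torch.randn(2, 4, requires_grad=True)  # scores for 4 classes
y = torch.tensor([3, 0])                   # correct class per sample
loss = criterion(x, y)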

 

Triplet loss TripletMarginLoss

torch.nn.TripletMarginLoss(margin=1.0, p=2.0, eps=1e-06, swap=False, reduction='mean')

$L(a, p, n) = \max\{d(a_i, p_i) - d(a_i, n_i) + \text{margin},\ 0\}$
where:
$d(x_i, y_i) = \lVert \mathbf{x}_i - \mathbf{y}_i \rVert_p$
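A minimal sketch with random anchor/positive/negative embeddings (dimensions illustrative):

import torch
import torch.nn as nn

criterion = nn.TripletMarginLoss(margin=1.0, p=2.0)
anchor = torch.randn(5, 16, requires_grad=True)
positive = torch.randn(5, 16, requires_grad=True)  # should end up close to anchor
negative = torch.randn(5, 16, requires_grad=True)  # should end up far from anchor
loss = criterion(anchor, positive, negative)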

 

Connectionist temporal classification loss CTCLoss

CTC (Connectionist Temporal Classification) loss automatically aligns data that is not pre-aligned; it is mainly used to train on sequence data for which no prior alignment is available, such as speech recognition and OCR.

torch.nn.CTCLoss(blank=0, reduction='mean')

reduction - one of three values. none: no reduction is applied; mean: return the mean of the loss; sum: return the sum of the loss. Default: 'mean'.
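A minimal sketch following the documented input layout (all sizes illustrative): log_probs has shape (T, N, C), and per-sequence lengths are passed explicitly:

import torch
import torch.nn as nn

T, N, C = 50, 2, 20  # input length, batch size, classes (index 0 is the blank)
criterion = nn.CTCLoss(blank=0)
log_probs = torch.randn(T, N, C).log_softmax(2).requires_grad_()
targets = torch.randint(1, C, (N, 30), dtype=torch.long)  # label sequences, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(10, 30, (N,), dtype=torch.long)
loss = criterion(log_probs, targets, input_lengths, target_lengths)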

 

Negative log-likelihood loss NLLLoss

Negative log-likelihood loss, used to train a classification task with C classes.

torch.nn.NLLLoss(weight=None, ignore_index=-100, reduction='mean')
weight (Tensor, optional) - a manual rescaling weight for each class; if given, it must be a 1-D Tensor of length C.
ignore_index (int, optional) - specifies a target value that is ignored, so that it does not contribute to the input gradient.
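A minimal sketch; the input must be log-probabilities, so NLLLoss after log_softmax is equivalent to CrossEntropyLoss on the raw scores:

import torch
import torch.nn as nn
import torch.nn.functional as F

criterion = nn.NLLLoss()
log_probs = F.log_softmax(torch.randn(4, 3, requires_grad=True), dim=1)
target = torch.tensor([0, 2, 1, 0])  # class indices
loss = criterion(log_probs, target)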

 

NLLLoss2d

Negative log likelihood loss for image inputs: it computes the negative log likelihood loss for each pixel of the input image.

torch.nn.NLLLoss2d(weight=None, ignore_index=-100, reduction='mean')

weight (Tensor, optional) - a manual rescaling weight for each class; if given, it must be a 1-D Tensor of length C.
reduction - one of three values. none: no reduction is applied; mean: return the mean of the loss; sum: return the sum of the loss. Default: 'mean'.
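In recent PyTorch versions NLLLoss2d is deprecated in favor of NLLLoss itself, which accepts (N, C, H, W) input; a minimal per-pixel sketch under that assumption:

import torch
import torch.nn as nn
import torch.nn.functional as F

criterion = nn.NLLLoss()
x = F.log_softmax(torch.randn(1, 3, 8, 8, requires_grad=True), dim=1)  # per-pixel log-probs
y = torch.randint(0, 3, (1, 8, 8))  # per-pixel class indices
loss = criterion(x, y)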

 

PoissonNLLLoss

Negative log-likelihood loss for targets that follow a Poisson distribution.

torch.nn.PoissonNLLLoss(log_input=True, full=False, eps=1e-08, reduction='mean')

log_input (bool, optional) - if True, the loss is computed as exp(input) - target * input; if False, it is computed as input - target * log(input + eps).
full (bool, optional) - whether to compute the full loss, i.e. to add the Stirling approximation term target * log(target) - target + 0.5 * log(2 * π * target).
eps (float, optional) - default: 1e-8.
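A minimal sketch with the default log_input=True, where the input is interpreted as the log of the Poisson rate:

import torch
import torch.nn as nn

criterion = nn.PoissonNLLLoss()                # log_input=True by default
log_rate = torch.randn(5, requires_grad=True)  # log of the predicted rate
target = torch.poisson(torch.rand(5) * 5)      # nonnegative count targets
loss = criterion(log_rate, target)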

 

----------------
Disclaimer: this article is an original article by CSDN blogger "mingo_敏", published under the CC 4.0 BY-SA copyright agreement; when reproducing it, please attach a link to the original source and this statement.
Original link: https://blog.csdn.net/shanglianlm/article/details/85019768
