L1Loss and MSELoss in torch.nn

Open the PyTorch official website and find the loss functions under torch.nn, as described in the official documentation.


L1Loss

Let's first look at how the L1Loss loss function is used, following the description given on the official website.

L1Loss supports two reduction modes: one sums all the element-wise errors to get the total loss, and the other sums them and then takes the average.
For example, given the input input = [1, 2, 3] and the desired target target = [1, 2, 5]: if L1Loss sums the errors, the total loss is L = |1-1| + |2-2| + |5-3| = 2, as shown in Example 2.
If L1Loss sums the errors and then averages them, the total loss is L = (|1-1| + |2-2| + |5-3|) / 3 ≈ 0.6667, as shown in Example 1.

We now implement both modes of the L1Loss function in code.

Example 1: L1Loss sums the errors and then averages them (the default behavior, reduction='mean').

import torch
from torch.nn import L1Loss
inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))

loss = L1Loss()
result = loss(inputs, targets)
print(result) # tensor(0.6667)

Example 2: L1Loss sums the errors. Here the reduction parameter must be set to 'sum' (it defaults to 'mean').

import torch
from torch.nn import L1Loss
inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))


loss = L1Loss(reduction='sum')
result = loss(inputs, targets)
print(result) # tensor(2.)
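Besides 'mean' and 'sum', L1Loss also accepts reduction='none', which skips the reduction entirely and returns one loss value per element. This variant is not covered in the original examples, but it makes the two reductions above easy to verify by eye:

```python
import torch
from torch.nn import L1Loss

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

# reduction='none' keeps one loss per element: |1-1|, |2-2|, |5-3|
loss = L1Loss(reduction='none')
result = loss(inputs, targets)
print(result)  # tensor([0., 0., 2.])
```

Summing these per-element values gives the 'sum' result (2), and averaging them gives the 'mean' result (0.6667).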

MSELoss

Next, let's look at how the MSELoss loss function is used, again following the description given on the official website.

The only difference between MSELoss and L1Loss is that MSELoss squares each element-wise error. Using the same example as above:
Given the input input = [1, 2, 3] and the desired target target = [1, 2, 5]: if MSELoss sums the squared errors, the total loss is L = (1-1)^2 + (2-2)^2 + (5-3)^2 = 4, as shown in Example 3.
If MSELoss sums the squared errors and then averages them, the total loss is L = ((1-1)^2 + (2-2)^2 + (5-3)^2) / 3 = 4/3 ≈ 1.3333, as shown in Example 4.

Example 3: MSELoss with reduction='sum'.

import torch
from torch.nn import MSELoss
inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))


loss = MSELoss(reduction='sum')
result = loss(inputs, targets)
print(result) # tensor(4.)

Example 4: MSELoss with the default reduction='mean'.

import torch
from torch.nn import MSELoss
inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))


loss = MSELoss()
result = loss(inputs, targets)
print(result) # tensor(1.3333)
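As a sanity check (not part of the original post), the same numbers can be recomputed by hand with elementwise tensor operations, confirming both MSELoss reductions:

```python
import torch

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

# Element-wise squared errors: (1-1)^2, (2-2)^2, (5-3)^2
squared_errors = (inputs - targets) ** 2   # tensor([0., 0., 4.])

print(squared_errors.sum())   # tensor(4.)  -> matches MSELoss(reduction='sum')
print(squared_errors.mean())  # tensor(1.3333)  -> matches the default 'mean'
```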

Origin blog.csdn.net/m0_48241022/article/details/132639400