Deep Neural Networks - Common Loss Functions

Classification tasks

The most commonly used loss function for classification is cross-entropy.

Multi-class classification

For a sample with one-hot label y and predicted class probabilities ŷ (typically from a softmax output), the categorical cross-entropy is

    L = -Σᵢ yᵢ · log(ŷᵢ)

summed over the classes and averaged over the batch.

import tensorflow as tf

# One-hot labels and predicted probability distributions
y_true = [[0, 1, 0], [0, 0, 1]]
y_pre = [[0.05, 0.9, 0.05], [0.3, 0.2, 0.5]]
cce = tf.keras.losses.CategoricalCrossentropy()
cce(y_true, y_pre)

<tf.Tensor: shape=(), dtype=float32, numpy=0.39925388>
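To see where this number comes from, the same value can be recomputed by hand in plain Python (no TensorFlow needed): for each sample, take -log of the probability assigned to the true class, then average over the samples.

```python
import math

# Same data as the Keras example above
y_true = [[0, 1, 0], [0, 0, 1]]
y_pred = [[0.05, 0.9, 0.05], [0.3, 0.2, 0.5]]

# Per-sample cross-entropy: -sum(y_i * log(p_i)) over the classes
losses = [
    -sum(t * math.log(p) for t, p in zip(yt, yp))
    for yt, yp in zip(y_true, y_pred)
]
loss = sum(losses) / len(losses)
print(round(loss, 5))  # 0.39925
```

This matches the tensor value 0.39925388 returned by `CategoricalCrossentropy` above.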

Binary classification

For a label y ∈ {0, 1} and a predicted probability ŷ, the binary cross-entropy is

    L = -( y · log(ŷ) + (1 - y) · log(1 - ŷ) )

averaged over the batch.

# Binary labels and predicted probabilities for the positive class
y_true = [[0], [1]]
y_pre = [[0.4], [0.6]]
bce = tf.keras.losses.BinaryCrossentropy()
bce(y_true, y_pre)

<tf.Tensor: shape=(), dtype=float32, numpy=0.5108254>
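The same check works here: applying the binary cross-entropy formula per sample and averaging reproduces the Keras result.

```python
import math

# Same data as the Keras example above (flattened)
y_true = [0, 1]
y_pred = [0.4, 0.6]

# Per-sample binary cross-entropy
losses = [
    -(t * math.log(p) + (1 - t) * math.log(1 - p))
    for t, p in zip(y_true, y_pred)
]
loss = sum(losses) / len(losses)
print(round(loss, 5))  # 0.51083
```

Both samples contribute -log(0.6) ≈ 0.5108, matching the tensor value 0.5108254 above.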

Regression tasks

MAE loss (L1 Loss)

MAE averages the absolute differences between targets and predictions:

    L = (1/n) · Σᵢ |yᵢ - ŷᵢ|

y_true = [[0.], [1.]]
y_pre = [[1.], [0.]]
mae = tf.keras.losses.MeanAbsoluteError()
mae(y_true, y_pre)

<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
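This one is easy to verify by hand: both samples have an absolute error of 1, so the mean is 1.0.

```python
# Same data as the Keras example above (flattened)
y_true = [0., 1.]
y_pred = [1., 0.]

# Mean absolute error: average of |y - y_hat|
loss = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(loss)  # 1.0
```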

MSE loss (L2 Loss)

MSE corresponds to the squared Euclidean distance between the prediction and the target, averaged over the samples:

    L = (1/n) · Σᵢ (yᵢ - ŷᵢ)²

y_true = [[0.], [1.]]
y_pre = [[1.], [1.]]
mse = tf.keras.losses.MeanSquaredError()
mse(y_true, y_pre)

<tf.Tensor: shape=(), dtype=float32, numpy=0.5>
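Recomputing manually: the squared errors are 1 and 0, so the mean is 0.5, as returned above.

```python
# Same data as the Keras example above (flattened)
y_true = [0., 1.]
y_pred = [1., 1.]

# Mean squared error: average of (y - y_hat)^2
loss = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
print(loss)  # 0.5
```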

Smooth L1 loss (Huber loss)

Smooth L1 is quadratic for small errors and linear for large ones, making it less sensitive to outliers than MSE. With e = y - ŷ:

    L(e) = 0.5 · e²              if |e| ≤ δ
    L(e) = δ · |e| - 0.5 · δ²    otherwise

Keras implements this as the Huber loss, with δ = 1.0 by default.

y_true = [[0.], [1.]]
y_pre = [[0.2], [0.6]]
smooth = tf.keras.losses.Huber()
smooth(y_true, y_pre)

<tf.Tensor: shape=(), dtype=float32, numpy=0.049999997>
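Applying the piecewise formula by hand confirms the result: both errors (0.2 and 0.4) fall in the quadratic region, giving (0.02 + 0.08) / 2 = 0.05.

```python
def huber(error, delta=1.0):
    # Quadratic for small errors, linear for large ones (Keras default delta=1.0)
    if abs(error) <= delta:
        return 0.5 * error ** 2
    return delta * abs(error) - 0.5 * delta ** 2

# Same data as the Keras example above (flattened)
y_true = [0., 1.]
y_pred = [0.2, 0.6]

loss = sum(huber(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(round(loss, 5))  # 0.05
```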


Source: blog.csdn.net/qq_40527560/article/details/131493351