What loss functions are commonly used in model.compile() when training a model?

http://www.ifunvr.cn/180.html

https://www.cnblogs.com/smuxiaolei/p/8662177.html

The objective function, also called the loss function, measures the network's performance and is one of the two parameters required to compile a model (the other being the optimizer). Since there are many kinds of loss functions, the list below follows the official Keras documentation as an example.
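
As a minimal sketch of where the loss function goes (the layer sizes, optimizer, and input shape below are placeholders, not from the original post):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder regression model for illustration only.
model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.Dense(1),
])

# loss is one of the two required arguments to compile(); it can be given
# as a string name or as a loss object such as keras.losses.MeanSquaredError().
model.compile(optimizer="adam", loss="mean_squared_error")
```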

The official keras.io documentation lists the following:

  • mean_squared_error or mse

  • mean_absolute_error or mae

  • mean_absolute_percentage_error or mape

  • mean_squared_logarithmic_error or msle

  • squared_hinge

  • hinge

  • binary_crossentropy (also known as log loss, logloss)

  • categorical_crossentropy: also known as multi-class log loss. Note that when using this objective function, the labels need to be converted into binary (one-hot) arrays of shape (nb_samples, nb_classes); see the sketch after this list.

  • sparse_categorical_crossentropy: as above, but accepts sparse (integer) labels. Note that when using this function, the labels still need to have the same number of dimensions as the output; you may need to add a dimension to the label data with np.expand_dims(y, -1).

  • kullback_leibler_divergence: the information gain from the predicted probability distribution Q to the true probability distribution P, used to measure the difference between the two distributions.

  • cosine_proximity: the negative of the mean cosine similarity between the predicted values and the true labels.
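
To illustrate the label-shape difference between categorical_crossentropy and sparse_categorical_crossentropy mentioned above, here is a small sketch; the 10-class model and random data are placeholders, not from the original post:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 10
model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(8,)),
    layers.Dense(num_classes, activation="softmax"),
])

x = np.random.rand(100, 8).astype("float32")
y_int = np.random.randint(0, num_classes, size=(100,))  # integer labels, shape (nb_samples,)

# categorical_crossentropy expects one-hot labels of shape (nb_samples, nb_classes)
y_onehot = keras.utils.to_categorical(y_int, num_classes)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(x, y_onehot, epochs=1, verbose=0)

# sparse_categorical_crossentropy accepts the integer class indices directly
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y_int, epochs=1, verbose=0)
```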

Origin blog.csdn.net/ch206265/article/details/107935790