Loss functions in the Keras deep learning framework

I. Using loss functions

  The loss function [also known as the optimization objective function or scoring function] is one of the two parameters required to compile a model.

  model.compile(loss='mean_squared_error', optimizer='sgd')

  or

  from keras import losses

  model.compile(loss=losses.mean_squared_error, optimizer='sgd')

  You can pass the name of an existing loss function, or a TensorFlow/Theano symbolic function. The function returns a scalar for each data point and takes the following two parameters:

  1.y_true

    The true labels, a TensorFlow/Theano tensor.

  2.y_pred

    The predicted values, a TensorFlow/Theano tensor with the same shape as y_true.

  The actual optimized objective is the mean of the output array across all data points.
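
  For example, a custom loss with this signature can be defined and passed to compile() just like a built-in one. A minimal sketch (the name my_loss is only illustrative):

  from keras import backend as K

  def my_loss(y_true, y_pred):
      # return one scalar per data point; Keras averages these over the batch
      return K.mean(K.square(y_pred - y_true), axis=-1)

  model.compile(loss=my_loss, optimizer='sgd')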

II. Available loss functions

  1.mean_squared_error(y_true, y_pred) [MSE, mean squared error]

    The formula is:

    loss = mean((y_pred - y_true)^2), with the mean taken over the last axis of the output

    Source:
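
    A sketch of the Keras 2.x implementation in losses.py (K is the Keras backend):

    from keras import backend as K

    def mean_squared_error(y_true, y_pred):
        # squared difference, averaged over the last axis
        return K.mean(K.square(y_pred - y_true), axis=-1)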

    

  2.mean_absolute_error(y_true, y_pred) [MAE, mean absolute error]

    MAE cannot be mentioned without mentioning salient object detection. A salient object is whatever draws the eye first: when we look at a picture, we focus first on the colorful, eye-catching content. For example, in a Transformers picture we would look at Optimus Prime first, who is absolutely the center of attention, so we define Optimus Prime as the salient object.

    Mean absolute error is commonly used as an evaluation metric for salient object detection algorithms. It is calculated as follows:

    loss = mean(|y_pred - y_true|), with the mean taken over the last axis of the output

    Source:
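
    A sketch of the Keras 2.x implementation in losses.py (K is the Keras backend):

    from keras import backend as K

    def mean_absolute_error(y_true, y_pred):
        # absolute difference, averaged over the last axis
        return K.mean(K.abs(y_pred - y_true), axis=-1)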

    

  3.mean_absolute_percentage_error [MAPE, mean absolute percentage error]

    Similar to mean absolute error, mean absolute percentage error measures the deviation between the predicted result and the true value as a ratio of the true value. It is calculated as follows:

    loss = 100 * mean(|y_true - y_pred| / clip(|y_true|, epsilon, None)), with the mean taken over the last axis

    Source:
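
    A sketch of the Keras 2.x implementation; the clip and epsilon operations are explained in the remarks below (K is the Keras backend):

    from keras import backend as K

    def mean_absolute_percentage_error(y_true, y_pred):
        # relative error, with the denominator clipped away from zero
        diff = K.abs((y_true - y_pred) / K.clip(K.abs(y_true), K.epsilon(), None))
        return 100. * K.mean(diff, axis=-1)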

    

    Remarks:

    1.clip

      Clips the values element by element, forcing any value outside the specified range back to the nearest boundary.

    2.epsilon

      A fixed fuzz parameter; the default value is 1e-7.

  4.mean_squared_logarithmic_error [MSLE, mean squared logarithmic error]

    The data are first transformed logarithmically, and then the mean squared error is computed on the transformed values.

    The formula is:

    loss = mean((log(y_pred + 1) - log(y_true + 1))^2), with the mean taken over the last axis

     Source:
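
    A sketch of the Keras 2.x implementation (K is the Keras backend):

    from keras import backend as K

    def mean_squared_logarithmic_error(y_true, y_pred):
        # log-transform both tensors (clipped to stay positive), then take the MSE
        first_log = K.log(K.clip(y_pred, K.epsilon(), None) + 1.)
        second_log = K.log(K.clip(y_true, K.epsilon(), None) + 1.)
        return K.mean(K.square(first_log - second_log), axis=-1)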

    

  5.squared_hinge [not used]

    The formula is:

    loss = mean(max(1 - y_true * y_pred, 0)^2), with the mean taken over the last axis

    Source:
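
    A sketch of the Keras 2.x implementation (K is the Keras backend):

    from keras import backend as K

    def squared_hinge(y_true, y_pred):
        # hinge loss squared, averaged over the last axis
        return K.mean(K.square(K.maximum(1. - y_true * y_pred, 0.)), axis=-1)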

    

  6.hinge [not used]

    The formula is:

    loss = mean(max(1 - y_true * y_pred, 0)), with the mean taken over the last axis

    Source:
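
    A sketch of the Keras 2.x implementation (K is the Keras backend):

    from keras import backend as K

    def hinge(y_true, y_pred):
        # margin-based loss; y_true is expected to be -1 or 1
        return K.mean(K.maximum(1. - y_true * y_pred, 0.), axis=-1)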

    

  7.categorical_hinge [not used]

    Source:
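
    The multi-class form of the hinge loss. A sketch of the Keras 2.x implementation (K is the Keras backend):

    from keras import backend as K

    def categorical_hinge(y_true, y_pred):
        # score of the true class vs. the best-scoring wrong class, with a margin of 1
        pos = K.sum(y_true * y_pred, axis=-1)
        neg = K.max((1. - y_true) * y_pred, axis=-1)
        return K.maximum(0., neg - pos + 1.)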

    

  8.logcosh [not used]

    The logarithm of the hyperbolic cosine of the prediction error. The result is roughly equal to the mean squared error, but it is not strongly affected by the occasional wildly incorrect prediction.

    Source:
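
    A sketch of the Keras 2.x implementation, which computes log(cosh(x)) in a numerically stable way (K is the Keras backend):

    from keras import backend as K

    def logcosh(y_true, y_pred):
        def _logcosh(x):
            # log(cosh(x)) rewritten with softplus to avoid overflow for large x
            return x + K.softplus(-2. * x) - K.log(2.)
        return K.mean(_logcosh(y_pred - y_true), axis=-1)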

    

  9.categorical_crossentropy [not used]

    When using the categorical_crossentropy loss, the targets should be in categorical format [i.e., if there are 10 classes, the target value of each sample should be a 10-dimensional vector that is all zeros except for a 1 at the index representing the class]. To convert integer targets to categorical targets, you can use the Keras utility function to_categorical.

    from keras.utils.np_utils import to_categorical

    categorical_labels = to_categorical(int_labels, num_classes=None)

    Source:
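
    A sketch of the Keras 2.x implementation, which simply delegates to the backend (K is the Keras backend):

    from keras import backend as K

    def categorical_crossentropy(y_true, y_pred):
        # cross-entropy between one-hot targets and predicted class probabilities
        return K.categorical_crossentropy(y_true, y_pred)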

    

   10.sparse_categorical_crossentropy [not used]

    Source:
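
    Unlike categorical_crossentropy, this loss takes integer class labels directly, so to_categorical is not needed. A sketch of the Keras 2.x implementation (K is the Keras backend):

    from keras import backend as K

    def sparse_categorical_crossentropy(y_true, y_pred):
        # cross-entropy between integer targets and predicted class probabilities
        return K.sparse_categorical_crossentropy(y_true, y_pred)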

    

   11.binary_crossentropy [not used]

    Source:
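
    A sketch of the Keras 2.x implementation (K is the Keras backend):

    from keras import backend as K

    def binary_crossentropy(y_true, y_pred):
        # element-wise binary cross-entropy, averaged over the last axis
        return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)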

    

  12.kullback_leibler_divergence [not used]

    Source:
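
    A sketch of the Keras 2.x implementation (K is the Keras backend):

    from keras import backend as K

    def kullback_leibler_divergence(y_true, y_pred):
        # both distributions are clipped away from zero before taking the log
        y_true = K.clip(y_true, K.epsilon(), 1)
        y_pred = K.clip(y_pred, K.epsilon(), 1)
        return K.sum(y_true * K.log(y_true / y_pred), axis=-1)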

    

  13.poisson [not used]

    The formula is:

    loss = mean(y_pred - y_true * log(y_pred + epsilon)), with the mean taken over the last axis

     Source:
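
    A sketch of the Keras 2.x implementation (K is the Keras backend):

    from keras import backend as K

    def poisson(y_true, y_pred):
        # negative log-likelihood of a Poisson model, up to a constant
        return K.mean(y_pred - y_true * K.log(y_pred + K.epsilon()), axis=-1)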

    

  14.cosine_proximity [not used]

    The formula is:

    loss = -sum(l2_normalize(y_true) * l2_normalize(y_pred)), i.e. the negative of the cosine similarity between the true and predicted vectors

    Source:
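
    A sketch of the Keras 2.x implementation (K is the Keras backend):

    from keras import backend as K

    def cosine_proximity(y_true, y_pred):
        # negative cosine similarity of the L2-normalized vectors
        y_true = K.l2_normalize(y_true, axis=-1)
        y_pred = K.l2_normalize(y_pred, axis=-1)
        return -K.sum(y_true * y_pred, axis=-1)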

    

III. Other types of loss functions

  1.ctc_batch_cost [high performance]

    Runs the CTC (Connectionist Temporal Classification) loss algorithm on each batch element.

    Parameters:

    1.y_true

      A tensor containing the true labels. Shape (samples, max_string_length).

    2.y_pred

      A tensor containing the predictions or the softmax output. Shape (samples, time_steps, num_categories).

    3.input_length

      A tensor of shape (samples, 1) containing the sequence length of each batch item in y_pred.

    4.label_length

      A tensor of shape (samples, 1) containing the sequence length of each batch item in y_true.

    Returns a tensor of shape (samples, 1) containing the CTC loss of each element.
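
    A minimal usage sketch: ctc_batch_cost is typically wrapped in a Lambda layer whose output is then trained against a dummy loss. The shapes and layer names below are illustrative only:

    from keras import backend as K
    from keras.layers import Input, Lambda

    # in a real model, y_pred would be the softmax output of the network
    y_pred = Input(name='y_pred', shape=(32, 64))            # (time_steps, num_categories)
    labels = Input(name='labels', shape=(10,))                # (max_string_length,)
    input_length = Input(name='input_length', shape=(1,))
    label_length = Input(name='label_length', shape=(1,))

    def ctc_lambda(args):
        y_true, y_pred, input_length, label_length = args
        return K.ctc_batch_cost(y_true, y_pred, input_length, label_length)

    # output shape (samples, 1): the CTC loss of each batch element
    loss_out = Lambda(ctc_lambda, output_shape=(1,), name='ctc')(
        [labels, y_pred, input_length, label_length])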


Origin www.cnblogs.com/yszd/p/12362461.html