Example of using KLDivergence in TensorFlow

loss = y_true * log(y_true / y_pred), summed over the last axis

Case 1: Comparing two one-dimensional arrays

Reference: tf.keras.losses.KLDivergence

import tensorflow as tf

k = tf.keras.losses.KLDivergence()
loss = k([.4, .9, .2], [.5, .8, .12])
print('Loss: ', loss.numpy())  # Loss: 0.11891246

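The value above can be checked by hand. This is a pure-Python sketch of the formula (not TensorFlow's actual implementation, which also clips the inputs by a small epsilon):

```python
from math import log

# KL loss = sum(y_true * log(y_true / y_pred)) over the elements
y_true = [.4, .9, .2]
y_pred = [.5, .8, .12]
loss = sum(t * log(t / p) for t, p in zip(y_true, y_pred))
print(loss)  # ≈ 0.11891, matching the TensorFlow result
```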

Case 2: Calculating over a batch

Reference: TensorFlow Core v2.2.0 Python API docs (tf.keras.losses.KLDivergence)

import tensorflow as tf
y_true = [[0.3, 0.7], [0.2, 0.8]]
y_pred = [[0.4, 0.6], [0.1, 0.9]]
kl = tf.keras.losses.KLDivergence()
print(kl(y_true, y_pred).numpy())  # ≈ 0.033


The actual operation is:

sample 1: 0.3 * ln(0.3/0.4) + 0.7 * ln(0.7/0.6) ≈ 0.0216
sample 2: 0.2 * ln(0.2/0.1) + 0.8 * ln(0.8/0.9) ≈ 0.0444
batch mean: (0.0216 + 0.0444) / 2 ≈ 0.033

Because batch_size = 2 here, the per-sample sums are averaged over the batch dimension.
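The batch reduction described above can be sketched in plain Python: sum the elementwise terms over the last axis of each sample, then average over the batch dimension.

```python
from math import log

y_true = [[0.3, 0.7], [0.2, 0.8]]
y_pred = [[0.4, 0.6], [0.1, 0.9]]

# Per-sample loss: sum of y_true * log(y_true / y_pred) over the last axis
per_sample = [sum(t * log(t / p) for t, p in zip(row_t, row_p))
              for row_t, row_p in zip(y_true, y_pred)]

# Average over the batch dimension (batch_size = 2)
loss = sum(per_sample) / len(per_sample)
print(loss)  # ≈ 0.033
```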

Full example:
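The original screenshot is lost; a possible reconstruction simply combines the two cases above in one script using tf.keras.losses.KLDivergence:

```python
import tensorflow as tf

kl = tf.keras.losses.KLDivergence()

# Case 1: two one-dimensional arrays
loss1 = kl([.4, .9, .2], [.5, .8, .12])
print('Loss 1:', loss1.numpy())  # ≈ 0.11891

# Case 2: a batch of two samples; per-sample sums are averaged over the batch
y_true = [[0.3, 0.7], [0.2, 0.8]]
y_pred = [[0.4, 0.6], [0.1, 0.9]]
loss2 = kl(y_true, y_pred)
print('Loss 2:', loss2.numpy())  # ≈ 0.033
```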


Origin: blog.csdn.net/aa2962985/article/details/124236598