Cross-Entropy with Sparse Labels: TensorFlow


Entropy:

  • Discrete form:
    $$H(X) \equiv -\sum_{k=1}^K P(X=k)\log P(X=k)$$
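As a quick sanity check of the discrete form, here is a minimal NumPy sketch (the example distribution is my own, not from the original post):

```python
import numpy as np

# H(X) = -sum_k P(X=k) * log P(X=k), natural log as in the formula above
P = np.array([0.5, 0.25, 0.25])   # an assumed example distribution
H = -np.sum(P * np.log(P))
print(H)                          # ~1.0397 nats
```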

## KL Divergence

$$KL(P\|Q) = \sum_{k=1}^K P_k \log\frac{P_k}{Q_k} = \sum_{k=1}^K P_k \log P_k - \sum_{k=1}^K P_k \log Q_k = -H(P) + H(P,Q)$$

that is, negative entropy plus cross-entropy.

Here $Q$ is the model distribution [the model of $P$] and $P$ is the data distribution [the distribution of the data]. We use $Q$ to approximate $P$, i.e., we fit the model distribution to the data distribution.
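The decomposition $KL = -H(P) + H(P,Q)$ is easy to verify numerically; a minimal sketch (both distributions are assumed for illustration):

```python
import numpy as np

P = np.array([0.5, 0.25, 0.25])   # "data" distribution (assumed)
Q = np.array([0.4, 0.4, 0.2])     # "model" distribution (assumed)

kl_direct = np.sum(P * np.log(P / Q))   # sum_k P_k * log(P_k / Q_k)
H_P  = -np.sum(P * np.log(P))           # entropy H(P)
H_PQ = -np.sum(P * np.log(Q))           # cross-entropy H(P, Q)
print(kl_direct, -H_P + H_PQ)           # identical: KL(P||Q) = -H(P) + H(P,Q)
```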

## Cross-Entropy

$$H(P,Q) = -\sum_{k=1}^K P_k \log Q_k$$
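When $P$ is a one-hot (sparse) label, every term but one vanishes and the cross-entropy collapses to $-\log Q_k$ for the true class $k$; this is exactly the property the TensorFlow code below exploits. A minimal sketch (the softmax values are assumed for illustration):

```python
import numpy as np

P = np.array([0.0, 0.0, 1.0])        # one-hot label, true class k = 2
Q = np.array([0.09, 0.245, 0.665])   # assumed model softmax output
print(-np.sum(P * np.log(Q)))        # full sum H(P, Q)
print(-np.log(Q[2]))                 # identical: only the true class contributes
```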

## JS Divergence

$$JS(P_1,P_2) = 0.5\,KL(P_1\|Q) + 0.5\,KL(P_2\|Q)$$

where $Q = 0.5P_1 + 0.5P_2$.
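A minimal sketch of the JS computation (the two distributions are assumed for illustration):

```python
import numpy as np

def kl(p, q):
    # KL(p || q) for discrete distributions with full support
    return np.sum(p * np.log(p / q))

P1 = np.array([0.5, 0.25, 0.25])   # assumed example distributions
P2 = np.array([0.2, 0.3, 0.5])
Q  = 0.5 * P1 + 0.5 * P2           # the mixture midpoint
print(0.5 * kl(P1, Q) + 0.5 * kl(P2, Q))  # symmetric, unlike KL
```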

TensorFlow code (TF 1.x):


import tensorflow as tf

# our NN's output
logits = tf.constant([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])

# step 1: softmax
y = tf.nn.softmax(logits)

# true labels
# Note: the labels must be floats; otherwise tf.multiply below fails
# because the type does not match tf_log's float32 dtype.
y_ = tf.constant([[0, 0, 1.0], [0, 1.0, 0], [0, 0, 1.0]])  # the sparse (one-hot) labels

# step 2: log
tf_log = tf.log(y)

# step 3: element-wise multiply
pixel_wise_mult = tf.multiply(y_, tf_log)

# step 4: cross-entropy
cross_entropy = -tf.reduce_sum(pixel_wise_mult)

# do cross-entropy in just two steps

# convert the one-hot labels to dense class indices
# (see the printed result below to understand what this does)
dense_y = tf.arg_max(y_, 1)

cross_entropy2_step1 = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=dense_y, logits=logits)
cross_entropy2_step2 = tf.reduce_sum(cross_entropy2_step1)  # don't forget tf.reduce_sum()!

with tf.Session() as sess:    
    dense_y_value = sess.run(dense_y)
    y_value,tf_log_value,pixel_wise_mult_value,cross_entropy_value=sess.run([y,tf_log,pixel_wise_mult,cross_entropy])

    sparse_cross_entropy2_step1_value,sparse_cross_entropy2_step2_value=sess.run([cross_entropy2_step1,cross_entropy2_step2])
    print("dense_y_value = \n%s\n"% (dense_y_value))
    print("step1:softmax result=\n%s\n"%(y_value))

    print("step2:tf_log_result result=\n%s\n"%(tf_log_value))

    print("step3:pixel_mult=\n%s\n"%(pixel_wise_mult_value))

    print("step4:cross_entropy result=\n%s\n"%(cross_entropy_value))
    
    print("Function(softmax_cross_entropy_with_logits) result=\n%s\n"%(sparse_cross_entropy2_step1_value))  

    print("Function(tf.reduce_sum) result=\n%s\n"%(sparse_cross_entropy2_step2_value))  

WARNING:tensorflow:From <ipython-input-122-bf4ee12f6b66>:35: arg_max (from tensorflow.python.ops.gen_math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `argmax` instead
dense_y_value = 
[2 1 2]

step1:softmax result=
[[0.09003057 0.24472848 0.66524094]
 [0.09003057 0.24472848 0.66524094]
 [0.09003057 0.24472848 0.66524094]]

step2:tf_log_result result=
[[-2.407606   -1.4076059  -0.407606  ]
 [-2.407606   -1.4076059  -0.407606  ]
 [-2.407606   -1.4076059  -0.40760598]]

step3:pixel_mult=
[[-0.         -0.         -0.407606  ]
 [-0.         -1.4076059  -0.        ]
 [-0.         -0.         -0.40760598]]

step4:cross_entropy result=
2.222818

Function(sparse_softmax_cross_entropy_with_logits) result=
[0.40760595 1.4076059  0.40760595]

Function(tf.reduce_sum) result=
2.2228177
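(Check: the per-example losses 0.40760595 + 1.4076059 + 0.40760595 sum to 2.2228177, matching the four-step result up to float32 rounding.)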

Note in particular the line

dense_y = tf.arg_max(y_, 1)

which converts the one-hot labels into the dense class indices that sparse_softmax_cross_entropy_with_logits expects. As the warning in the output shows, tf.arg_max is deprecated in favor of tf.argmax.
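For reference, here is a non-deprecated equivalent; this is a sketch against the TF 1.x API, and `tf.nn.softmax_cross_entropy_with_logits_v2` is my suggested alternative rather than something from the original post:

```python
# tf.argmax replaces the deprecated tf.arg_max
dense_y = tf.argmax(y_, axis=1)

# Alternatively, skip the argmax conversion entirely:
# softmax_cross_entropy_with_logits_v2 consumes the one-hot labels
# directly and yields the same per-example losses.
cross_entropy3 = tf.reduce_sum(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))
```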
