[Introduction to Deep Learning with TensorFlow] Hands-On 4: Classifying Iris Flowers with Logistic Regression (Comparing Mean Squared Error and Softmax Cross-Entropy Loss)

  • Problem description
    Dataset
    Iris dataset download link
    The Iris dataset contains four features and one label. The four features describe the following botanical characteristics of a single iris flower:
    1. Sepal length
    2. Sepal width
    3. Petal length
    4. Petal width

The label identifies the iris species, which must be one of the following:

Iris setosa (0)
Iris versicolor (1)
Iris virginica (2)
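
Before training, it helps to confirm the file layout. Below is a minimal inspection sketch, assuming a local iris.csv whose first column is a row index, followed by the four measurements and the species label (the layout that the slicing np_iris[:,1:5] and np_iris[:,-1] in the scripts below relies on):

import pandas as pd

# assumption: iris.csv = index column + 4 measurement columns + species column
df = pd.read_csv("iris.csv", sep=",", header="infer")
print(df.shape)                       # expected: (150, 6)
print(df.head())                      # first rows: index, 4 features, species
print(df.iloc[:, -1].value_counts())  # 50 samples per species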

  • Code
    Using mean squared error (MSE): the network's raw outputs are regressed directly onto the one-hot labels.
import numpy as np
import pandas as pd
import tensorflow as tf


file_path = "iris.csv"
df_iris = pd.read_csv(file_path, sep=",", header="infer")
np_iris = df_iris.values
np.random.shuffle(np_iris)  # shuffle rows so the train/test split is random

def normalize(temp):
    # center each feature on its mean, then scale by half the feature range,
    # so values land roughly in [-1, 1]
    temp = 2*(temp - np.mean(temp, axis=0))/(np.max(temp, axis=0) - np.min(temp, axis=0))
    return temp

def convert2onehot(data):
    # convert the species labels to a one-hot representation
    return pd.get_dummies(data)

xs = normalize(np_iris[:, 1:5]).astype(np.double)  # columns 1-4: the four features
ys = convert2onehot(np_iris[:, -1]).values         # last column: the species label

x = tf.placeholder(tf.float32, [None, 4])
y_ = tf.placeholder(tf.float32, [None, 3])

# one hidden layer of 64 ELU units, then a linear output layer of 3 units
w1 = tf.get_variable("w1", initializer=tf.random_normal([4, 64]))
w2 = tf.get_variable("w2", initializer=tf.random_normal([64, 3]))
b1 = tf.get_variable("b1", initializer=tf.zeros([1, 64]))
b2 = tf.get_variable("b2", initializer=tf.zeros([1, 3]))
l1 = tf.nn.elu(tf.matmul(x, w1) + b1)

y = tf.matmul(l1, w2) + b2

# mean squared error between the raw outputs and the one-hot targets
loss = tf.reduce_mean(tf.square(y - y_))
opt = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

with tf.Session() as sess:
    srun = sess.run
    init = tf.global_variables_initializer()
    srun(init)

    # train on the first 90 samples, hold out the remaining 60 for testing
    for e in range(6001):
        loss_val, _ = srun([loss, opt], {x: xs[:90, :], y_: ys[:90, :]})
        if e % 400 == 0:
            print("%d steps loss is %f" % (e, loss_val))
    ys_pre = srun(y, {x: xs[90:, :]})
    result = (np.argmax(ys_pre, axis=1) == np.argmax(ys[90:, :], axis=1))
    print(np.sum(result)/60)  # test accuracy on the 60 held-out samples
  • Results
    log:
0 steps loss is 62.941807
400 steps loss is 0.056762
800 steps loss is 0.039173
1200 steps loss is 0.032764
1600 steps loss is 0.029213
2000 steps loss is 0.026903
2400 steps loss is 0.025220
2800 steps loss is 0.023925
3200 steps loss is 0.022888
3600 steps loss is 0.022027
4000 steps loss is 0.021291
4400 steps loss is 0.020648
4800 steps loss is 0.020077
5200 steps loss is 0.019560
5600 steps loss is 0.019088
6000 steps loss is 0.018654
0.9933333333333333
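
MSE treats the three outputs as regression targets, so a confidently wrong prediction costs only quadratically more than a correct one; cross-entropy penalizes the same mistake logarithmically and far more sharply. A minimal numpy sketch with made-up probabilities illustrates the gap:

import numpy as np

target = np.array([1.0, 0.0, 0.0])         # one-hot label for class 0

for probs in (np.array([0.8, 0.1, 0.1]),   # confident and correct
              np.array([0.1, 0.8, 0.1])):  # confident and wrong
    mse = np.mean((probs - target) ** 2)
    xent = -np.sum(target * np.log(probs))
    print("probs=%s  MSE=%.3f  cross-entropy=%.3f" % (probs, mse, xent))

# MSE moves only from 0.020 to 0.487 on the confident mistake,
# while cross-entropy jumps from 0.223 to 2.303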
  • Using softmax cross-entropy
import numpy as np
import pandas as pd
import tensorflow as tf


file_path = "./0DNN/iris.csv"
df_iris = pd.read_csv(file_path, sep=",", header="infer")
np_iris = df_iris.values
np.random.shuffle(np_iris)

def normalize(temp):
    # center each feature on its mean, then scale by half the feature range,
    # so values land roughly in [-1, 1]
    temp = 2*(temp - np.mean(temp, axis=0))/(np.max(temp, axis=0) - np.min(temp, axis=0))
    return temp

def convert2onehot(data):
    # convert the species labels to a one-hot representation
    return pd.get_dummies(data)

xs = normalize(np_iris[:, 1:5]).astype(np.double)
ys = convert2onehot(np_iris[:, -1]).values

x = tf.placeholder(tf.float32, [None, 4])
y_ = tf.placeholder(tf.float32, [None, 3])

w1 = tf.get_variable("w1", initializer=tf.random_normal([4, 64], stddev=1))
w2 = tf.get_variable("w2", initializer=tf.random_normal([64, 3], stddev=1))
b1 = tf.get_variable("b1", initializer=tf.zeros([1, 64]) + 0.01)
b2 = tf.get_variable("b2", initializer=tf.zeros([1, 3]) + 0.01)
l1 = tf.nn.elu(tf.matmul(x, w1) + b1)

# the MSE loss from the previous script is replaced here by softmax cross-entropy:
# softmax turns the logits into a probability distribution over the 3 classes
y = tf.nn.softmax(tf.matmul(l1, w2) + b2)

# clip to keep tf.log away from log(0); softmax output never exceeds 1,
# so only the lower bound matters
y = tf.clip_by_value(y, 1e-4, 1.0)
cross_entropy = -tf.reduce_mean(tf.reduce_sum(y_ * tf.log(y), axis=1))
loss = cross_entropy
opt = tf.train.GradientDescentOptimizer(0.05).minimize(loss)


with tf.Session() as sess:
    srun = sess.run
    init = tf.global_variables_initializer()
    srun(init)

    # note: unlike the MSE version, this script trains on all 150 samples
    # and evaluates on the same data, so the figure below is training accuracy
    for e in range(6001):
        loss_val, _ = srun([loss, opt], {x: xs[:, :], y_: ys[:, :]})
        if e % 400 == 0:
            print("%d steps loss is %f" % (e, loss_val))
    ys_pre = srun(y, {x: xs[:, :]})
    result = (np.argmax(ys_pre, axis=1) == np.argmax(ys[:, :], axis=1))
    print(np.sum(result)/150)  # accuracy over all 150 samples


  • Output
    log:
0 steps loss is 3.956946
400 steps loss is 0.049743
800 steps loss is 0.043666
1200 steps loss is 0.041287
1600 steps loss is 0.039875
2000 steps loss is 0.038858
2400 steps loss is 0.038027
2800 steps loss is 0.037303
3200 steps loss is 0.036683
3600 steps loss is 0.036195
4000 steps loss is 0.035810
4400 steps loss is 0.035505
4800 steps loss is 0.035253
5200 steps loss is 0.035039
5600 steps loss is 0.034851
6000 steps loss is 0.034681
0.9866666666666667
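
Note that the two accuracies are not directly comparable: the MSE script reports 99.3% on a held-out split of 60 samples, while the softmax script reports 98.7% on the same 150 samples it was trained on. Separately, the manual softmax + clip + log combination above can be replaced by TensorFlow 1.x's fused op, which handles numerical stability internally. A sketch of just the loss portion, assuming the l1, w2, b2, and y_ tensors defined in the script above:

# fused, numerically stable alternative to softmax + clip_by_value + log
logits = tf.matmul(l1, w2) + b2
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))
opt = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
y = tf.nn.softmax(logits)  # apply softmax separately for predictions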

Reposted from blog.csdn.net/xiaosongshine/article/details/84595403