TensorFlow Logistic Regression

This post builds a logistic regression model in TensorFlow and uses a softmax classifier to recognize the MNIST handwritten digits. Each sample is a 28*28 grayscale image, stored as a flattened 784-dimensional vector.

1. Import the dataset

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import input_data  # input_data.py must sit in the working directory
                   # (TF 1.x also ships it as tensorflow.examples.tutorials.mnist.input_data)

# Load the MNIST handwritten-digit dataset; TensorFlow downloads it on first use
mnist = input_data.read_data_sets('data/', one_hot=True)

trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg    = mnist.test.images
testlabel  = mnist.test.labels

# Take a quick look at the shapes of the samples
print(trainimg.shape)
print(trainlabel.shape)
print(testimg.shape)
print(testlabel.shape)
print(trainlabel[0])

#### Output:
Extracting data\train-images-idx3-ubyte.gz
Extracting data\train-labels-idx1-ubyte.gz
Extracting data\t10k-images-idx3-ubyte.gz
Extracting data\t10k-labels-idx1-ubyte.gz
(55000, 784)
(55000, 10)
(10000, 784)
(10000, 10)
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
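
The labels are one-hot encoded: the vector printed above has its 1 at index 7, so the first training sample is the digit 7, which np.argmax recovers directly:

print(np.argmax(trainlabel[0]))  # 7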

2. What the MNIST samples look like

nsample = 5
# Draw 5 random indices from the training set
randidx = np.random.randint(trainimg.shape[0], size=nsample)

for i in randidx:
    # Each sample is a 1*784 row vector; reshape it into a 28*28 image
    img   = np.reshape(trainimg[i, :], (28, 28))
    label = np.argmax(trainlabel[i, :])
    plt.matshow(img, cmap=plt.get_cmap('gray'))
    print(str(i) + " label is " + str(label))
    plt.show()

Output: [figure: five randomly chosen training digits rendered as 28*28 grayscale images, one window per digit]

3. Build the logistic regression model (softmax classifier)
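
In equations, the model maps each flattened image $x$ to class probabilities through one affine layer followed by softmax, and training minimizes the average cross-entropy against the one-hot label $y$:

$$p = \mathrm{softmax}(xW + b), \qquad \mathrm{loss} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{k=0}^{9} y_{nk}\,\log p_{nk}$$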

x = tf.placeholder("float", [None, 784])
y = tf.placeholder("float", [None, 10])  # None leaves the batch size unspecified
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([1, 10]))

# Predicted class probabilities (soft_value), via softmax
soft_value = tf.nn.softmax(tf.matmul(x, w) + b)

# Cross-entropy loss for the softmax classifier
loss = tf.reduce_mean(-tf.reduce_sum(y * tf.log(soft_value), reduction_indices=1))

# Update the parameters with gradient descent
optimizer = tf.train.GradientDescentOptimizer(0.01)

# Training op that minimizes the loss
train = optimizer.minimize(loss)
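
Taking tf.log of softmax outputs can hit log(0) once a probability saturates. A more numerically stable variant (a sketch, not part of the original post) computes the cross-entropy directly from the logits with TensorFlow's fused op:

logits = tf.matmul(x, w) + b
# Fused softmax + cross-entropy, numerically stable (TF 1.x API)
stable_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))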

4. Model prediction

# Prediction correctness: tf.equal returns True where its two arguments match.
# tf.argmax with axis 1 takes the index of the largest entry in each row
pred = tf.equal(tf.argmax(soft_value, 1), tf.argmax(y, 1))

# Accuracy: tf.cast turns the booleans in pred into floats (True -> 1.0, False -> 0.0),
# so their mean is the fraction of correct predictions
accuracy = tf.reduce_mean(tf.cast(pred, 'float'))
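
To see concretely what these two ops compute, here is the same calculation done by hand in NumPy on a made-up two-sample batch (toy numbers, not actual model outputs):

probs  = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1]])   # predicted probabilities, one row per sample
labels = np.array([[0, 1, 0],
                   [0, 0, 1]])         # one-hot ground truth
correct = np.argmax(probs, 1) == np.argmax(labels, 1)  # [True, False]
print(correct.astype('float32').mean())                # 0.5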

5. Train the model

# Open a session
sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Number of passes over the training set (epochs)
iteration = 50
# Mini-batch size: how many samples per training step
batch_size = 100
# Print progress every display_step passes
display_step = 5

for i in range(iteration):
    act_loss = 0
    # How many mini-batches make up the training set
    num_batch = int(mnist.train.num_examples / batch_size)

    for j in range(num_batch):
        # Fetch the training samples one mini-batch at a time
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        sess.run(train, feed_dict={x: batch_x, y: batch_y})
        act_loss += sess.run(loss, feed_dict={x: batch_x, y: batch_y}) / num_batch
    if i % display_step == 0:
        # Note: train_acc is measured on the last mini-batch only, hence its fluctuation
        train_acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
        test_acc = sess.run(accuracy, feed_dict={x: testimg, y: testlabel})
        print("iteration:%d loss:%3f train_acc:%3f test_acc:%3f" % (i, act_loss, train_acc, test_acc))

#### Output:
iteration:0 loss:1.176432 train_acc:0.830000 test_acc:0.850200
iteration:5 loss:0.440939 train_acc:0.930000 test_acc:0.894900
iteration:10 loss:0.383324 train_acc:0.910000 test_acc:0.905300
iteration:15 loss:0.357305 train_acc:0.940000 test_acc:0.908800
iteration:20 loss:0.341447 train_acc:0.900000 test_acc:0.912400
iteration:25 loss:0.330520 train_acc:0.950000 test_acc:0.914200
iteration:30 loss:0.322364 train_acc:0.950000 test_acc:0.915500
iteration:35 loss:0.315945 train_acc:0.900000 test_acc:0.916800
iteration:40 loss:0.310734 train_acc:0.890000 test_acc:0.918000
iteration:45 loss:0.306360 train_acc:0.880000 test_acc:0.918200
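
Once training finishes, the same session can run inference. A minimal sketch (reusing the x placeholder and the soft_value node defined above) that predicts the first five test images:

predicted = sess.run(tf.argmax(soft_value, 1), feed_dict={x: testimg[:5]})
print(predicted)                          # predicted digits
print(np.argmax(testlabel[:5], axis=1))   # ground-truth digits for comparison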

Reposted from blog.csdn.net/qq_24946843/article/details/81979021