Training a classifier on your own images with TensorFlow

Here I mainly use TensorFlow for binary classification.
Let's go straight to the code.
1. Image preparation and preprocessing:
Under a folder named G, create one subfolder per class, named after that class. For example, to classify cats, dogs, and pigs, create three subfolders under G with those names. The name G is arbitrary; just change the path in data_path = pathlib.Path('c:/users/hb/.keras/datasets/G') below to match your own.
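The expected layout can be sketched (and sanity-checked) with pathlib. The class names below are the example ones from the text, created in a temporary folder rather than the real dataset path:

```python
import pathlib
import tempfile

# Build the example layout in a temporary folder; real data would live
# under c:/users/hb/.keras/datasets/G with image files inside each subfolder.
root = pathlib.Path(tempfile.mkdtemp()) / "G"
for class_name in ["cat", "dog", "pig"]:
    (root / class_name).mkdir(parents=True)

class_dirs = sorted(p.name for p in root.iterdir() if p.is_dir())
print(class_dirs)  # -> ['cat', 'dog', 'pig']
```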

import tensorflow as tf
import random
import pathlib
import numpy as np
data_path = pathlib.Path('c:/users/hb/.keras/datasets/G')
all_image_paths = list(data_path.glob('*/*'))
all_image_paths = [str(path) for path in all_image_paths]  # list of every image path
random.shuffle(all_image_paths)  # shuffle the order

image_count = len(all_image_paths)

label_names = sorted(item.name for item in data_path.glob('*/') if item.is_dir())
label_to_index = dict((name, index) for index, name in enumerate(label_names))
all_image_labels = [label_to_index[pathlib.Path(path).parent.name] for path in all_image_paths]
ds = tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels))

def load_and_preprocess_from_path_label(path, label):
    image = tf.io.read_file(path)  # read the raw image file
    image = tf.image.decode_jpeg(image, channels=3)  # for PNG inputs, tf.io.decode_image is safer
    image = tf.image.resize(image, [60, 60])  # resize every image to (60, 60, 3)
    image /= 255.0  # normalize pixel values to the [0, 1] range
    return image, label

image_label_ds = ds.map(load_and_preprocess_from_path_label)

train_image = []
train_label = []
for image, label in zip(all_image_paths, all_image_labels):
    r_image, r_label = load_and_preprocess_from_path_label(image, label)
    train_image.append(r_image)
    train_label.append(r_label)
    
train_images = np.array(train_image)
train_labels = np.array(train_label)
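The label-building lines above can be checked in isolation; the paths below are hypothetical stand-ins for files discovered under G:

```python
import pathlib

# Hypothetical image paths standing in for the files found under G/.
all_image_paths = ["G/dog/001.jpg", "G/cat/002.jpg", "G/pig/003.jpg", "G/cat/004.jpg"]

# Same recipe as above: sorted class names -> integer indices -> per-image labels.
label_names = sorted({pathlib.Path(p).parent.name for p in all_image_paths})
label_to_index = dict((name, index) for index, name in enumerate(label_names))
all_image_labels = [label_to_index[pathlib.Path(p).parent.name] for p in all_image_paths]

print(label_to_index)    # -> {'cat': 0, 'dog': 1, 'pig': 2}
print(all_image_labels)  # -> [1, 0, 2, 0]
```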

2. Building, compiling, and training the model:
Keras is a fairly high-level API, so you only need to understand how to set the parameters of each layer, and how to define the optimizer, the loss function, and the metrics. Once those are in place, train on the data prepared above: model.fit(train_images, train_labels, epochs=<number of epochs>).

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Flatten(input_shape=[60, 60, 3]),
    layers.Dense(128, activation='relu'),
    # One blog claims that for binary classification you should use a single
    # neuron with sigmoid rather than softmax. With the
    # sparse_categorical_crossentropy loss used below, 2 units (one per class)
    # would also suffice; 10 works here but wastes 8 outputs.
    layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=500)

I have skipped one step here: validation on a test set. In a proper workflow this step is essential, because the test set tells you whether the model is overfitting (or underfitting). Since I only wanted to get a model running end to end, I left it out. The step simply repeats the training-set preprocessing on the test set, then checks the results the same way.
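For reference, the held-out split itself is just array slicing on the data from step 1. The arrays below are zero-filled stand-ins, and the evaluate call is sketched in a comment because it needs the trained model:

```python
import numpy as np

# Stand-ins for the train_images / train_labels arrays built in step 1.
images = np.zeros((100, 60, 60, 3), dtype=np.float32)
labels = np.arange(100) % 2

# Hold out the last 20% as a test set (the paths were already shuffled above).
split = int(0.8 * len(images))
x_train, x_test = images[:split], images[split:]
y_train, y_test = labels[:split], labels[split:]

print(x_train.shape, x_test.shape)  # -> (80, 60, 60, 3) (20, 60, 60, 3)
# After model.fit(x_train, y_train, ...):
#     test_loss, test_acc = model.evaluate(x_test, y_test)
```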

3. Using the model for prediction:

a1, b1 = load_and_preprocess_from_path_label("c:/users/hb/desktop/test1/nbbbb/nbya/50.png", 1)
itt = a1.numpy().reshape(1, 60, 60, 3)  # add the batch dimension that predict expects
model.predict(itt)

The returned result looks like this:

array([[1.6504961e-01, 8.3495039e-01, 1.6727029e-12, 1.8568482e-14,
    8.2999219e-11, 4.0040343e-20, 9.6357375e-15, 3.9275240e-17,
    3.2707468e-13, 5.0083925e-19]], dtype=float32)
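To turn that probability row into a class, take the argmax and look it up in label_names. A sketch using the row printed above:

```python
import numpy as np

# The prediction row returned above: one probability per output unit.
pred = np.array([[1.6504961e-01, 8.3495039e-01, 1.6727029e-12, 1.8568482e-14,
                  8.2999219e-11, 4.0040343e-20, 9.6357375e-15, 3.9275240e-17,
                  3.2707468e-13, 5.0083925e-19]], dtype=np.float32)

class_index = int(np.argmax(pred, axis=1)[0])
print(class_index)  # -> 1, i.e. label_names[1] is the predicted class
```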

Be happy every day!


Reposted from blog.csdn.net/Black_Friend/article/details/104529859