Win7 + SSD-Tensorflow: Training on Your Own Dataset

This was my first contact with SSD, for my graduation project; I am a complete beginner and am recording the process here. If you spot mistakes, corrections are appreciated.

Preliminaries

System and environment

A Windows 7 system and my old CPU
Anaconda3 with a TensorFlow environment
Jupyter
The OpenCV library
PyCharm

Code

SSD-Tensorflow (source code)
Extract it into an SSD-Tensorflow-master folder.
Extract the two archives under the checkpoints subfolder directly into that same subfolder.
Testing the code
1. Method 1
Open a terminal in the folder and run:
jupyter notebook notebooks/ssd_notebook.ipynb
2. Method 2
Create a Python script of your own named SSD_detect.py, save it in SSD-Tensorflow-master, with the following code:

import os
import math
import random
 
import numpy as np
import tensorflow as tf
import cv2
 
slim = tf.contrib.slim
#%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys
#sys.path.append('../')
from nets import ssd_vgg_300, ssd_common, np_methods
from preprocessing import ssd_vgg_preprocessing
from notebooks import visualization
# TensorFlow session: grow memory when needed. TF, DO NOT USE ALL MY GPU MEMORY!!!
gpu_options = tf.GPUOptions(allow_growth=True)
config = tf.ConfigProto(log_device_placement=False, gpu_options=gpu_options)
isess = tf.InteractiveSession(config=config)
# Input placeholder.
net_shape = (300, 300)
data_format = 'NHWC'
img_input = tf.placeholder(tf.uint8, shape=(None, None, 3))
# Evaluation pre-processing: resize to SSD net shape.
image_pre, labels_pre, bboxes_pre, bbox_img = ssd_vgg_preprocessing.preprocess_for_eval(
    img_input, None, None, net_shape, data_format, resize=ssd_vgg_preprocessing.Resize.WARP_RESIZE)
image_4d = tf.expand_dims(image_pre, 0)
 
# Define the SSD model.
reuse = True if 'ssd_net' in locals() else None
ssd_net = ssd_vgg_300.SSDNet()
with slim.arg_scope(ssd_net.arg_scope(data_format=data_format)):
    predictions, localisations, _, _ = ssd_net.net(image_4d, is_training=False, reuse=reuse)
 
# Restore SSD model.
ckpt_filename = 'checkpoints/ssd_300_vgg.ckpt'
# ckpt_filename = '../checkpoints/VGG_VOC0712_SSD_300x300_ft_iter_120000.ckpt'
isess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
saver.restore(isess, ckpt_filename)
 
# SSD default anchor boxes.
ssd_anchors = ssd_net.anchors(net_shape)
 
# Main image processing routine.
def process_image(img, select_threshold=0.5, nms_threshold=.45, net_shape=(300, 300)):
    # Run SSD network.
    rimg, rpredictions, rlocalisations, rbbox_img = isess.run([image_4d, predictions, localisations, bbox_img],
                                                              feed_dict={img_input: img})
    
    # Get classes and bboxes from the net outputs.
    rclasses, rscores, rbboxes = np_methods.ssd_bboxes_select(
            rpredictions, rlocalisations, ssd_anchors,
            select_threshold=select_threshold, img_shape=net_shape, num_classes=21, decode=True)
    
    rbboxes = np_methods.bboxes_clip(rbbox_img, rbboxes)
    rclasses, rscores, rbboxes = np_methods.bboxes_sort(rclasses, rscores, rbboxes, top_k=400)
    rclasses, rscores, rbboxes = np_methods.bboxes_nms(rclasses, rscores, rbboxes, nms_threshold=nms_threshold)
    # Resize bboxes to original image shape. Note: useless for Resize.WARP!
    rbboxes = np_methods.bboxes_resize(rbbox_img, rbboxes)
    return rclasses, rscores, rbboxes
 
# Test on some demo image and visualize output.
path = 'demo/'
image_names = sorted(os.listdir(path))
 
img = mpimg.imread(path + image_names[-5])
rclasses, rscores, rbboxes =  process_image(img)
 
# visualization.bboxes_draw_on_img(img, rclasses, rscores, rbboxes, visualization.colors_plasma)
visualization.plt_bboxes(img, rclasses, rscores, rbboxes)

Then simply run python SSD_detect.py from a terminal.

Dataset preparation

Create a VOC2007 folder; for convenience, keep it inside SSD-Tensorflow-master.
Inside VOC2007, create three subfolders named Annotations, ImageSets, and JPEGImages.
1. JPEGImages
Holds the image files (.jpg or .jpeg).
Note that the images must be named in the form 000001.jpg, 000002.jpg, and so on.
There are many ways to batch-rename; a fairly simple one is described in "Making a VOC2007 dataset with a home-made tool".
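The renaming can also be done with a few lines of the standard library. A minimal sketch, assuming all images sit in one flat folder (`rename_to_voc` is a made-up helper name; back the folder up before running, since it renames in place):

```python
import os

def rename_to_voc(folder, ext='.jpg'):
    """Rename every image in `folder` to 000001.jpg, 000002.jpg, ... in sorted order."""
    names = sorted(f for f in os.listdir(folder) if f.lower().endswith(ext))
    for i, old in enumerate(names, start=1):
        new = '%06d%s' % (i, ext)  # zero-padded six-digit VOC-style name
        os.rename(os.path.join(folder, old), os.path.join(folder, new))
```

Because the files are processed in sorted order, the zero-padded targets never collide with a not-yet-renamed source.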
2.Annotations
Holds the training label files (.xml).
How to make them: annotate with labelImg.
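Each labelImg annotation is a small XML file in the usual VOC layout (`object` / `name` / `bndbox` elements). As a sanity check you can parse one with the standard library; a sketch, with `read_voc_objects` a made-up helper and the sample annotation invented for illustration:

```python
import xml.etree.ElementTree as ET

def read_voc_objects(xml_text):
    """Return (class name, (xmin, ymin, xmax, ymax)) for every object in a VOC annotation."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.findall('object'):
        name = obj.find('name').text
        box = obj.find('bndbox')
        coords = tuple(int(box.find(k).text) for k in ('xmin', 'ymin', 'xmax', 'ymax'))
        objects.append((name, coords))
    return objects

sample = """
<annotation>
  <filename>000001.jpg</filename>
  <object>
    <name>car</name>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>
"""
print(read_voc_objects(sample))  # [('car', (48, 240, 195, 371))]
```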
3.ImageSets
Create a Main subfolder inside it to hold train.txt, trainval.txt, test.txt, and val.txt.
Script to generate them:

import os
import random

xmlfilepath = r'path to your Annotations folder'
saveBasePath = r'path to your ImageSets folder'

trainval_percent = 0.7
train_percent = 0.7
total_xml = os.listdir(xmlfilepath)
num = len(total_xml)
indices = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(indices, tv)
train = random.sample(trainval, tr)

print("train and val size", tv)
print("train size", tr)
ftrainval = open(os.path.join(saveBasePath, 'Main/trainval.txt'), 'w')
ftest = open(os.path.join(saveBasePath, 'Main/test.txt'), 'w')
ftrain = open(os.path.join(saveBasePath, 'Main/train.txt'), 'w')
fval = open(os.path.join(saveBasePath, 'Main/val.txt'), 'w')

for i in indices:
    name = total_xml[i][:-4] + '\n'
    if i in trainval:
        ftrainval.write(name)
        if i in train:
            ftrain.write(name)
        else:
            fval.write(name)
    else:
        ftest.write(name)

ftrainval.close()
ftrain.close()
fval.close()
ftest.close()

After generating them, create another folder, VOCtest, outside VOC2007 and put test.txt there.
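With trainval_percent=0.7 and train_percent=0.7, the script above splits the data roughly 49% train / 21% val / 30% test. For example, with 100 annotation files:

```python
num = 100                          # total annotation files
trainval_percent = 0.7
train_percent = 0.7
tv = int(num * trainval_percent)   # trainval: 70 files
tr = int(tv * train_percent)       # train: 49 files
val = tv - tr                      # val: 21 files
test = num - tv                    # test: 30 files
print(tv, tr, val, test)           # 70 49 21 30
```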

Network training

Code modifications

Everything below is done in PyCharm, with the SSD-Tensorflow-master folder opened as the project.
1. In SSD-Tensorflow-master/datasets/pascalvoc_common.py, lines 24-46: leave the first class, none, alone and replace the others with the classes of your own dataset.
VOC has 20 classes by default, plus one background class, for 21 in total. Set this according to your own data when training.
My modified version:

VOC_LABELS = {
    'none': (0, 'Background'),
    'car': (1, 'Car'),
    'aeroplane': (2, 'Vehicle'),
    'bicycle': (3, 'Vehicle'),
     
#    'aeroplane': (1, 'Vehicle'),
#    'bicycle': (2, 'Vehicle'),
#    'bird': (3, 'Animal'),
#    'boat': (4, 'Vehicle'),
#    'bottle': (5, 'Indoor'),
#    'bus': (6, 'Vehicle'),
#    'car': (7, 'Vehicle'),
#    'cat': (8, 'Animal'),
#    'chair': (9, 'Indoor'),
#    'cow': (10, 'Animal'),
#    'diningtable': (11, 'Indoor'),
#    'dog': (12, 'Animal'),
#    'horse': (13, 'Animal'),
#    'motorbike': (14, 'Vehicle'),
#    'person': (15, 'Person'),
#    'pottedplant': (16, 'Indoor'),
#    'sheep': (17, 'Animal'),
#    'sofa': (18, 'Indoor'),
#    'train': (19, 'Vehicle'),
#    'tvmonitor': (20, 'Indoor'),
}

2. In SSD-Tensorflow-master/datasets/pascalvoc_to_tfrecords.py, line 82: change the extension to .jpg or .jpeg to match your images (no change needed if they are already .jpg); on line 83 change the file mode 'r' to 'rb'.
On line 67, adjust the SAMPLES_PER_FILES parameter, which sets how many images go into each tfrecord file.
I set it to 1.

3. In SSD-Tensorflow-master/nets/ssd_vgg_300.py, lines 96-97: change num_classes and no_annotation_label to your number of classes + 1.

4. In SSD-Tensorflow-master/eval_ssd_network.py, line 66: change num_classes to your number of classes + 1.

5. In SSD-Tensorflow-master/datasets/pascalvoc_2007.py: on lines 31 and 55, leave the none class alone and replace the other classes with your own. In each tuple, the first number is the image count and the second is the object count (i.e. the number of bounding boxes). The total entries on lines 52 and 76 are the sums over all classes. Change the numbers on lines 79-80 to the sizes of your training and test sets, and set NUM_CLASSES on line 86 to your number of classes (without adding one).
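For reference, the statistics being edited look roughly like the sketch below. This is a hedged example: the class names and all the counts are placeholders to be replaced with your own numbers, and the exact variable names should be checked against your copy of pascalvoc_2007.py:

```python
# Hypothetical example values -- fill in the numbers printed by collect_class.py.
TRAIN_STATISTICS = {
    'none': (0, 0),
    'car': (192, 234),        # (number of images, number of bounding boxes)
    'aeroplane': (175, 210),
    'bicycle': (133, 159),
    'total': (500, 603),      # sums over all classes
}
SPLITS_TO_SIZES = {
    'train': 350,             # size of your training set
    'test': 150,              # size of your test set
}
NUM_CLASSES = 3               # your class count, without the background class
```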
A counting script, saved as collect_class.py:

import os
import xml.etree.ElementTree as ET

class1 = 'your class 1'
class2 = 'your class 2'
class3 = 'your class 3'

annotation_folder = 'path to your annotation folder'

def file_name(file_dir):
	L = []
	for root, dirs, files in os.walk(file_dir):
		for file in files:
			if os.path.splitext(file)[1] == '.xml':
				L.append(os.path.join(root, file))
	return L


total_number1 = 0
total_number2 = 0
total_number3 = 0

pic_num1 = 0
pic_num2 = 0
pic_num3 = 0

flag1 = 0
flag2 = 0
flag3 = 0


xml_dirs = file_name(annotation_folder)
total_pic =0
total =0
for i in range(0, len(xml_dirs)):
	print(xml_dirs[i])
	#path = os.path.join(annotation_folder,list[i])
	#print(path)

	annotation_file = open(xml_dirs[i],encoding='UTF-8').read()

	root = ET.fromstring(annotation_file)
	#tree = ET.parse(annotation_file)
	#root = tree.getroot()

	total_pic = total_pic + 1
	for obj in root.findall('object'):
		label = obj.find('name').text
		if label == class1:
			total_number1=total_number1+1
			flag1=1
			total = total + 1
			#print("bounding box number:", total_number1)
		if label == class2:
			total_number2=total_number2+1
			flag2=1
			total = total + 1
		if label == class3:
			total_number3=total_number3+1
			flag3=1
			total = total + 1

	if flag1==1:
		pic_num1=pic_num1+1
		#print("pic number:", pic_num1)
		flag1=0
	if flag2==1:
		pic_num2=pic_num2+1
		flag2=0
	if flag3==1:
		pic_num3=pic_num3+1
		flag3=0

print(class1,pic_num1,total_number1)
print(class2,pic_num2,total_number2)
print(class3,pic_num3, total_number3)
print("total", total_pic, total)

Fill in pascalvoc_2007.py according to the printed values.

6. In SSD-Tensorflow-master/train_ssd_network.py, line 135: change num_classes to your number of classes + 1.
On line 154, change None to a maximum number of training steps (e.g. 50000); left at None, training runs indefinitely and must be stopped by hand. You can also adjust the batch size, learning rate, and so on in this file.
Alternatively, leave them alone and set them when you finally run the training.

Generating the tfrecord files

In SSD-Tensorflow-master/tf_convert_data.py, set 'dataset_dir', 'output_name', and 'output_dir' to your VOC2007 folder path, the output dataset name, and the output path, respectively. For example:

tf.app.flags.DEFINE_string(
    'dataset_name', 'pascalvoc',
    'The name of the dataset to convert.')
tf.app.flags.DEFINE_string(
    'dataset_dir', r'F:/cjc/SSD-Tensorflow-master/VOC2007/',
    'Directory where the original dataset is stored.')
tf.app.flags.DEFINE_string(
    'output_name', 'voc_2007_train',
    'Basename used for TFRecords output files.')
tf.app.flags.DEFINE_string(
    'output_dir', r'F:/cjc/SSD-Tensorflow-master/tfrecords/',
    'Output directory where to store TFRecords files.')
    

Save and run it.

Training

Before running SSD-Tensorflow-master/train_ssd_network.py, open Run - Edit Configurations - Parameters in PyCharm and set:

--save_interval_secs=600 \
#save the model every 600 seconds
--weight_decay=0.0005 \
#weight-decay coefficient for regularization
--optimizer=adam \
#which optimizer to use
--learning_rate=0.00001 \
#learning rate
--learning_rate_decay_factor=0.94 \
#decay factor for the learning rate
--batch_size=4 \
--gpu_memory_fraction=0.4

Save and run.

Notes
1. SSD supports either CPU or GPU computation. But be careful: without a GPU, do not set batch_size too large, or the machine will freeze. To be safe, use 4.
2. If a CPU BiasOp error appears, change DATA_FORMAT on line 27 to 'NHWC'.
3. If global_step does not increase, then after line 367

        config = tf.ConfigProto(log_device_placement=False,
                                gpu_options=gpu_options)

add one line of code:

config.gpu_options.per_process_gpu_memory_fraction = ***
# (set *** to a value of your choosing)

4. If an error like ${DATASET_DIR} : ϵͳ\udcd5Ҳ\udcbb\udcb5 (a garbled "the system cannot find the path") appears, change absolute file paths to relative ones: './DATASET_DIR'

5. If reading a file raises UnicodeDecodeError: 'gbk' codec can't decode, open it in text mode with an explicit encoding: change open('…/report.html', mode='rb') to open('…/report.html', mode='r', encoding='UTF-8')

6. Mind the differences between Python 2 and Python 3. Write file paths with forward slashes, not backslashes.
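Note 6 exists because, in an ordinary Python string literal, a backslash starts an escape sequence and can silently corrupt a Windows path:

```python
bad = 'F:\test\new'    # \t becomes a tab and \n a newline, not path separators
good = 'F:/test/new'   # forward slashes work fine on Windows
raw = r'F:\test\new'   # a raw string is another safe spelling

assert '\t' in bad and '\n' in bad   # the "path" no longer contains t-e-s-t
assert raw == 'F:\\test\\new'
```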

Reposted from blog.csdn.net/weixin_41285413/article/details/90695207