Compiling Caffe on Ubuntu and Training on a Custom Dataset


Compiling Caffe

This part mainly follows https://www.linuxidc.com/Linux/2019-05/158422.htm
I am recording it here for convenience. Since online configurations vary with the system environment, here is mine:
Ubuntu 18.04 + CUDA 10.1 + cuDNN 7.5 + Python 3.6 + OpenCV 4.0

  1. Install the dependencies
apt install -y libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
apt install -y --no-install-recommends libboost-all-dev
apt install -y libatlas-base-dev
apt install -y libgflags-dev libgoogle-glog-dev liblmdb-dev

2. Download the Caffe source from https://github.com/BVLC/caffe, copy Makefile.config.example to Makefile.config, and edit the configuration file.

  • Changes needed for OpenCV 4 — replacement commands:
cd caffe
sed -i 's/CV_LOAD_IMAGE_COLOR/cv::IMREAD_COLOR/g' src/caffe/layers/window_data_layer.cpp
sed -i 's/CV_LOAD_IMAGE_COLOR/cv::IMREAD_COLOR/g' src/caffe/util/io.cpp
sed -i 's/CV_LOAD_IMAGE_GRAYSCALE/cv::ImreadModes::IMREAD_GRAYSCALE/g' src/caffe/util/io.cpp
  • The edits to Makefile.config, in order:
USE_CUDNN := 1
OPENCV_VERSION := 3
CUDA_DIR := /usr/local/cuda-10.1
CUDA_ARCH := \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61
		
PYTHON_LIBRARIES := boost_python3 python3.6m
PYTHON_INCLUDE := /usr/include/python3.6m \
                /usr/lib/python3.6/dist-packages/numpy/core/include
WITH_PYTHON_LAYER := 1
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial /usr/local/cuda/include /usr/local/include/opencv4
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
  • Build with make
make -j6
make pycaffe

When compilation finishes you should see output like the following, along with the .build_release directory:

CXX/LD -o .build_release/tools/upgrade_solver_proto_text.bin
CXX/LD -o .build_release/tools/extract_features.bin
CXX/LD -o .build_release/tools/caffe.bin
CXX/LD -o .build_release/tools/upgrade_net_proto_binary.bin
CXX/LD -o .build_release/tools/convert_imageset.bin
CXX/LD -o .build_release/tools/upgrade_net_proto_text.bin
CXX/LD -o .build_release/tools/compute_image_mean.bin
CXX/LD -o .build_release/examples/mnist/convert_mnist_data.bin
CXX/LD -o .build_release/examples/cifar10/convert_cifar_data.bin
CXX/LD -o .build_release/examples/cpp_classification/classification.bin
CXX/LD -o .build_release/examples/siamese/convert_mnist_siamese_data.bin
CXX/LD -o python/caffe/_caffe.so python/caffe/_caffe.cpp
touch python/caffe/proto/__init__.py
PROTOC (python) src/caffe/proto/caffe.proto

Generating labels and LMDB

  • label
    Split all the data into a training set and a validation set at a fixed ratio (8:2 here), and generate a txt file listing each file name together with its class label.

import os
import random

root_path = "path/to/images"
mode = 'train'
label_path = ".."
with open('{}/label/{}.txt'.format(label_path, mode), 'w') as file1:
    with open('{}/label/val.txt'.format(label_path), 'w') as file2:
        for idx in range(10):
            for i in os.listdir('{}/{}_{}'.format(root_path, mode, idx)):
                # roughly 20% of the samples go to the validation set
                if random.random() < 0.2:
                    file2.write('{}_{}/{} {}\n'.format(mode, idx, i, idx))
                else:
                    file1.write('{}_{}/{} {}\n'.format(mode, idx, i, idx))

The resulting label file looks like:
train_0/0_10000.png 0
train_0/0_10001.png 0
train_0/0_10002.png 0
train_0/0_10003.png 0
train_0/0_10004.png 0
...
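For downstream checks it can be handy to parse these lines back into (relative path, class id) pairs. A minimal sketch — the helper name parse_label_line is mine, not from the original script:

```python
def parse_label_line(line):
    """Split one label-file line into (relative image path, integer class id)."""
    path, label = line.rsplit(' ', 1)  # split on the last space only
    return path, int(label)

path, label = parse_label_line('train_0/0_10000.png 0')
```

Splitting on the last space keeps the parser safe even if an image file name itself contains spaces.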
  • lmdb
    Use Caffe's convert_imageset tool to generate the LMDB files needed for training. Here it is invoked from a .py file via Python 3's subprocess.run():
import subprocess

subprocess.run([
        convert_imageset_path,  # path to the convert_imageset binary
        imgs_path,              # image root directory
        label_train_path,       # path to the label .txt file
        lmdb_train_path,        # output LMDB directory
        "--backend=lmdb",
        "--gray=true",
        "--shuffle=true",
        "--check_size=true",
        "--resize_width=40",
        "--resize_height=60"], stdout=subprocess.PIPE)
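subprocess.run() returns a CompletedProcess, so it is worth checking returncode (or passing check=True) to make sure a failed conversion does not go unnoticed. A sketch, with a stand-in command in place of the real convert_imageset call:

```python
import subprocess

# stand-in command in place of the real convert_imageset invocation
result = subprocess.run(['echo', 'Processed 1000 files.'],
                        stdout=subprocess.PIPE)
if result.returncode != 0:
    raise RuntimeError('convert_imageset failed')
log = result.stdout.decode()
```

The captured log can then be scanned for the "Processed N files." progress lines shown below.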

The image root directory and the image name from the label file should combine into the absolute path of each image, so that the tool can read it:

imgs_path + / + train_0/0_10000.png 
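In Python terms, the path the tool reads is simply the image root joined with the relative name from the label file; os.path.join handles the separator (the paths here are illustrative placeholders):

```python
import os

imgs_path = 'path/to/images'      # illustrative image root
rel_name = 'train_0/0_10000.png'  # first column of the label file
full_path = os.path.join(imgs_path, rel_name)
# full_path == 'path/to/images/train_0/0_10000.png'
```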

After processing you get the LMDB data, but an occasional out-of-range crash occurs, especially when the dataset is very large. I have not found the exact cause yet.

I0715 17:22:51.333714  2833 convert_imageset.cpp:147] Processed 85000 files.
I0715 17:22:51.592039  2833 convert_imageset.cpp:147] Processed 86000 files.
I0715 17:22:51.850489  2833 convert_imageset.cpp:147] Processed 87000 files.
terminate called after throwing an instance of 'std::out_of_range'
  what():  basic_string::substr: __pos (which is 140) > this->size() (which is 0)
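One plausible (but unconfirmed — the original issue was never resolved) culprit for this kind of substr error is a blank or malformed line in the label file. A quick sanity check like the following can at least rule that out; validate_labels is my own helper, not part of Caffe:

```python
def validate_labels(lines):
    """Return the indices of lines that are not 'relative/path label' pairs."""
    bad = []
    for n, line in enumerate(lines):
        parts = line.strip().rsplit(' ', 1)
        # a valid line has a path, one space, and an integer class id
        if len(parts) != 2 or not parts[1].isdigit():
            bad.append(n)
    return bad

bad = validate_labels(['train_0/0_10000.png 0', '', 'no_label.png'])
```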

In addition, although computing the image mean and subtracting it during training gives better results, the mean file is inconvenient to ship in an actual deployment, so mean subtraction is likewise skipped at training time.
When regenerating LMDB data, delete the existing directory first, otherwise an error about directory validation is raised.
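Deleting the old LMDB directory before regenerating can be automated; lmdb_train_path is the same placeholder used above:

```python
import os
import shutil

lmdb_train_path = 'data/train_lmdb'  # illustrative LMDB output directory
if os.path.isdir(lmdb_train_path):
    shutil.rmtree(lmdb_train_path)   # convert_imageset will recreate it
```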

Training

Training invokes the compiled caffe/build/tools/caffe binary with net_solver.prototxt and net_train_test.prototxt:

import subprocess

caffe_path = 'caffe/build/tools/caffe'
subprocess.run([caffe_path, "train",
                "--solver=mynet_solver.prototxt"], stdout=subprocess.PIPE)

The solver.prototxt file holds the training hyperparameters:

net: "mynet_train_test.prototxt"
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.001
momentum: 0.9
weight_decay: 0.0002
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 15000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "mynet"
# solver mode: CPU or GPU
solver_mode: GPU
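With lr_policy: "inv", Caffe computes the learning rate at iteration i as base_lr * (1 + gamma * i) ^ (-power), so the values above give a slow, smooth decay:

```python
base_lr, gamma, power = 0.001, 0.0001, 0.75

def inv_lr(i):
    """Learning rate under Caffe's "inv" policy at iteration i."""
    return base_lr * (1 + gamma * i) ** (-power)

# lr decays smoothly from base_lr toward zero over the 15000 iterations
start, end = inv_lr(0), inv_lr(15000)
```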

Training completed:

I0715 19:21:44.468984  4051 solver.cpp:414]     Test net output #0: accuracy = 0.993125
I0715 19:21:44.469009  4051 solver.cpp:414]     Test net output #1: loss = 0.039863 (* 1 = 0.039863 loss)
I0715 19:21:44.469014  4051 solver.cpp:332] Optimization Done.
I0715 19:21:44.469017  4051 caffe.cpp:250] Optimization Done.


Reposted from blog.csdn.net/qq_35078996/article/details/95974172