Object Detection: Faster R-CNN Installation, Training, and Testing

Copyright notice: this is an original post by the author, released under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/lilai619/article/details/53071155

Note: this post was written when Faster R-CNN had just been released; the latest code may have changed. If you run into problems, please check the official repository.

Some detection results first (the result images are omitted here):

(1) Face detection + facial landmarks on still images

(2) Real-time face detection + landmarks from a webcam

*************************************************************************

                                     Installation
*************************************************************************

###1 

Unzip

Download py-faster-rcnn-master.zip and unzip it to py-faster-rcnn;

download caffe-faster-rcnn.zip and unzip it to caffe-faster-rcnn.

Replace:

Replace the py-faster-rcnn/caffe-fast-rcnn directory with the unzipped caffe-faster-rcnn.

###2 

Modify py-faster-rcnn/caffe-fast-rcnn/Makefile.config (a reference copy can be downloaded):

# USE_CUDNN := 1   (I left cuDNN disabled by default)
set MATLAB_DIR, PYTHON_INCLUDE, and the CUDA compute capability and paths for your machine
WITH_PYTHON_LAYER := 1

###3 

Check and install the dependencies

pip install cython
sudo apt-get install python-opencv
pip install easydict

###4 

Build the Cython modules

cd py-faster-rcnn/lib
make

###5 

Build Caffe and pycaffe

cd py-faster-rcnn/caffe-fast-rcnn
make -j8 && make pycaffe

###6 

Download the pre-trained models and extract them into py-faster-rcnn/data

cd py-faster-rcnn/
./data/scripts/fetch_faster_rcnn_models.sh
This will populate the `py-faster-rcnn/data` folder with `faster_rcnn_models`. 
These models were trained on VOC 2007 trainval.



*************************************************************************
                                     Training
*************************************************************************

###1 

Prepare the dataset directory layout
Delete:
(1) everything under data/VOCdevkit2007/VOC2007

Create:

Under ./data/VOCdevkit2007/VOC2007, create Annotations, ImageSets/Main, and JPEGImages

Explanation:

Annotations:     the XML files converted from the label txt files
JPEGImages:      the image files
ImageSets/Main:  lists of file names (without extensions); a small script for generating them is sketched below
training set:            train.txt
training+validation set: trainval.txt
test set:                test.txt
validation set:          val.txt
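
As mentioned above, the four list files are plain text, one file name per line with no extension. Below is a minimal sketch for generating them; the script name, the split ratios, and the assumption that every image is a .jpg under JPEGImages/ are my own choices, not from the original post:

# make_imagesets.py -- hypothetical helper; run it from the py-faster-rcnn root.
import os
import random

VOC_ROOT = 'data/VOCdevkit2007/VOC2007'
jpeg_dir = os.path.join(VOC_ROOT, 'JPEGImages')
main_dir = os.path.join(VOC_ROOT, 'ImageSets', 'Main')

# collect file names (without extension) and shuffle them reproducibly
names = sorted(os.path.splitext(f)[0] for f in os.listdir(jpeg_dir)
               if f.lower().endswith('.jpg'))
random.seed(0)
random.shuffle(names)

n_test = int(0.2 * len(names))        # 20% held out as the test set
test, trainval = names[:n_test], names[n_test:]
n_val = int(0.25 * len(trainval))     # 25% of trainval used for validation
val, train = trainval[:n_val], trainval[n_val:]

for split, items in [('train', train), ('val', val),
                     ('trainval', trainval), ('test', test)]:
    with open(os.path.join(main_dir, split + '.txt'), 'w') as f:
        f.write('\n'.join(sorted(items)) + '\n')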
 
#Annotations example:
data/VOCdevkit2007/VOC2007/Annotations/0_1_5.xml
Content format:

<annotation>
    <folder>VOC2007</folder>
    <filename>0_1_5.jpg</filename>
    <source>
        <database>My Database</database>
        <annotation>VOC2007</annotation>
        <image>flickr</image>
        <flickrid>NULL</flickrid>
    </source>
    <owner>
        <flickrid>NULL</flickrid>
        <name>deeplearning</name>
    </owner>
    <size>
        <width>160</width>
        <height>216</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>1</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>48</xmin>
            <ymin>48</ymin>
            <xmax>107</xmax>
            <ymax>107</ymax>
        </bndbox>
    </object>
</annotation>
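
If the labels start out as plain txt files, a small script can emit XML in the format above. This is a minimal sketch using the standard library's xml.etree.ElementTree; the function name and the input tuple format are my own, and the <source>/<owner> blocks are omitted since, as far as I can tell, the training code does not read them:

# write_voc_xml.py -- hypothetical helper for producing VOC-style annotations.
import xml.etree.ElementTree as ET

def write_voc_xml(out_path, filename, width, height, boxes):
    """boxes is a list of (label, xmin, ymin, xmax, ymax) tuples."""
    ann = ET.Element('annotation')
    ET.SubElement(ann, 'folder').text = 'VOC2007'
    ET.SubElement(ann, 'filename').text = filename
    size = ET.SubElement(ann, 'size')
    ET.SubElement(size, 'width').text = str(width)
    ET.SubElement(size, 'height').text = str(height)
    ET.SubElement(size, 'depth').text = '3'
    ET.SubElement(ann, 'segmented').text = '0'
    for label, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(ann, 'object')
        ET.SubElement(obj, 'name').text = str(label)
        ET.SubElement(obj, 'pose').text = 'Unspecified'
        ET.SubElement(obj, 'truncated').text = '0'
        ET.SubElement(obj, 'difficult').text = '0'
        bb = ET.SubElement(obj, 'bndbox')
        for tag, value in zip(('xmin', 'ymin', 'xmax', 'ymax'),
                              (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(value)
    ET.ElementTree(ann).write(out_path)

# Example: reproduce the 0_1_5.xml annotation above (the target folder must exist).
write_voc_xml('data/VOCdevkit2007/VOC2007/Annotations/0_1_5.xml',
              '0_1_5.jpg', 160, 216, [('1', 48, 48, 107, 107)])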

###2 

Modify the interfaces

#(1) Modify the prototxt configuration files

The 5 files under models/pascal_voc/ZF/faster_rcnn_alt_opt, namely
stage1_rpn_train.pt, stage1_fast_rcnn_train.pt,
stage2_rpn_train.pt, stage2_fast_rcnn_train.pt, and faster_rcnn_test.pt

① stage1_fast_rcnn_train.pt and stage2_fast_rcnn_train.pt

Change 3 parameters:

num_classes: 2 (1 object class + 1 background class)
num_output in cls_score: 2
num_output in bbox_pred: 8 (4 box coordinates per class × 2 classes)

② stage1_rpn_train.pt and stage2_rpn_train.pt

Change 1 parameter:

num_classes: 2 (1 object class + 1 background class)

③ faster_rcnn_test.pt

Change 2 parameters:

num_output in cls_score: 2
num_output in bbox_pred: 8
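
These edits are easy to miss, so a quick sanity check helps. The script below is my own sketch, not part of the repo; run from the py-faster-rcnn root, it prints every num_classes / num_output line in the alt-opt prototxt files so you can confirm the values above:

# check_num_output.py -- hypothetical helper for verifying the prototxt edits.
import glob
import os
import re

PT_DIR = 'models/pascal_voc/ZF/faster_rcnn_alt_opt'

for path in sorted(glob.glob(os.path.join(PT_DIR, '*.pt'))):
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            # report each num_classes / num_output setting for manual review
            if re.search(r'num_classes|num_output', line):
                print('%s:%d: %s' % (os.path.basename(path), lineno, line.strip()))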


#(2) Modify lib/datasets/pascal_voc.py

self._classes = ('__background__', # always index 0
                 'people')         # the only foreground class


#(3) Modify lib/datasets/imdb.py
In the append_flipped_images(self) function of that file:

widths = [PIL.Image.open(self.image_path_at(i)).size[0]
          for i in xrange(num_images)]

Below the line boxes[:, 2] = widths[i] - oldx1 - 1, add the following (it clamps any flipped box whose x-coordinates come out reversed):

for b in range(len(boxes)):
    if boxes[b][2] < boxes[b][0]:
        boxes[b][0] = 0

#(4) After modifying pascal_voc.py and imdb.py, go into lib/datasets, delete the old pascal_voc.pyc and imdb.pyc files, and regenerate them; these compiled files are what Python actually loads.

In a terminal, cd into lib/datasets and run:
python (the Python version banner should appear)

>>> import py_compile
>>> py_compile.compile(r'imdb.py')
>>> py_compile.compile(r'pascal_voc.py')


#(5) Delete the cached files
① Delete output/
② Delete the files in py-faster-rcnn/data/cache and in
py-faster-rcnn/data/VOCdevkit2007/annotations_cache (a small cleanup sketch follows).
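
The same cleanup can be scripted. This is my own convenience sketch, not from the original post; run it from the py-faster-rcnn root, and note that it removes the listed directories outright:

# clear_cache.py -- hypothetical helper; deletes stale training caches so the
# new dataset is re-read on the next run.
import os
import shutil

for d in ['output',
          'data/cache',
          'data/VOCdevkit2007/annotations_cache']:
    if os.path.isdir(d):
        shutil.rmtree(d)
        print('removed ' + d)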

#(6) Tuning
① Learning rate and related settings

Set these in the solver files under py-faster-rcnn/models/pascal_voc/ZF/faster_rcnn_alt_opt.

② Number of iterations
Change the iteration counts in py-faster-rcnn/tools/train_faster_rcnn_alt_opt.py (see the sketch below), and also adjust the corresponding solver files (there are 4) under py-faster-rcnn/models/pascal_voc/ZF/faster_rcnn_alt_opt, keeping each stepsize smaller than the iteration count set above.
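
In the copy of tools/train_faster_rcnn_alt_opt.py I used, the iteration counts live in a four-element list, one entry per training stage; the exact line and values may differ in your version. A hedged sketch of the edit:

# In tools/train_faster_rcnn_alt_opt.py -- the four entries are the iterations
# for stage1 RPN, stage1 Fast R-CNN, stage2 RPN and stage2 Fast R-CNN.
max_iters = [80000, 40000, 80000, 40000]   # defaults in the version I used
# For a small single-class dataset you might shrink them, for example:
# max_iters = [20000, 10000, 20000, 10000]
# and keep each solver's stepsize below the matching value here.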

#(7) Training
./experiments/scripts/faster_rcnn_alt_opt.sh 0 ZF pascal_voc

*************************************************************************
                                     Testing
*************************************************************************
#(1) After training finishes, copy the final model ZF_faster_rcnn_final.caffemodel from output/faster_rcnn_alt_opt/voc_2007_trainval into data/faster_rcnn_models.

#(2) Modify tools/demo.py:

① CLASSES = ('__background__',
             'people')

② NETS = {'vgg16': ('VGG16',
                    'VGG16_faster_rcnn_final.caffemodel'),
          'zf': ('ZF',
                 'ZF_faster_rcnn_final.caffemodel')}

#(3) Pick one image from the training set, put it into the py-faster-rcnn/data/demo folder, and name it 000001.jpg.

im_names = ['000001.jpg']
for im_name in im_names:
    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'
    print 'Demo for data/demo/{}'.format(im_name)
    demo(net, im_name)

#(4) Run the demo: from the py-faster-rcnn folder, run

./tools/demo.py --net zf

#(5) Or make zf the default model:

parser.add_argument('--net', dest='demo_net', help='Network to use [vgg16]',
                    choices=NETS.keys(), default='vgg16')

Change to:
    default='zf'
Then run:
    ./tools/demo.py

*************************************************************************
                                     Troubleshooting
*************************************************************************

error 1: assert (boxes[:, 2] >= boxes[:, 0]).all()

Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "./tools/train_faster_rcnn_alt_opt.py", line 123, in train_rpn
roidb, imdb = get_roidb(imdb_name)
File "./tools/train_faster_rcnn_alt_opt.py", line 68, in get_roidb
roidb = get_training_roidb(imdb)
File "/home/microway/test/pytest/py-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 121, in get_training_roidb
imdb.append_flipped_images()
File "/home/microway/test/pytest/py-faster-rcnn/tools/../lib/datasets/imdb.py", line 108, in append_flipped_images
assert (boxes[:, 2] >= boxes[:, 0]).all()
AssertionError

Solution to error 1:

Change the corresponding code in py-faster-rcnn/lib/datasets/imdb.py to the following:

    def append_flipped_images(self):
        num_images = self.num_images
        widths = [PIL.Image.open(self.image_path_at(i)).size[0]
                  for i in xrange(num_images)]
        for i in xrange(num_images):
            boxes = self.roidb[i]['boxes'].copy()
            oldx1 = boxes[:, 0].copy()
            oldx2 = boxes[:, 2].copy()
            boxes[:, 0] = widths[i] - oldx2 - 1
            boxes[:, 2] = widths[i] - oldx1 - 1

            # clamp boxes whose flipped x-coordinates come out reversed
            for b in range(len(boxes)):
                if boxes[b][2] < boxes[b][0]:
                    boxes[b][0] = 0

            assert (boxes[:, 2] >= boxes[:, 0]).all()

error 2: IndexError: list index out of range

File "./tools/train_net.py", line 85, in 
roidb = get_training_roidb(imdb)
File "/usr/local/fast-rcnn/tools/../lib/fast_rcnn/train.py", line 111, in get_training_roidb
rdl_roidb.prepare_roidb(imdb)
File "/usr/local/fast-rcnn/tools/../lib/roi_data_layer/roidb.py", line 23, in prepare_roidb
roidb[i]['image'] = imdb.image_path_at(i)
IndexError: list index out of range

Solution to error 2:

Delete the .pkl files under fast-rcnn-master/data/cache/ (or rename them as a backup) and retrain.

References:
https://github.com/rbgirshick/py-faster-rcnn/issues/34 
https://github.com/rbgirshick/fast-rcnn/issues/79
