Caffe Official Tutorial Translation (10): Editing Model Parameters

Preface

I have recently been working through the official Caffe tutorials again, translating the official documentation as I go. I have also added some annotations of my own, all marked in italics, along with the occasional problem I ran into or output I got. Comments and corrections are welcome; trolls are not!
Link to the original official tutorial:
http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/net_surgery.ipynb

Net Surgery

Caffe networks can be transformed to your particular needs by editing the model parameters. The data, diffs, and parameters of a model are all exposed in pycaffe.
Let's get to it!

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Make sure that caffe is on the python path:
# caffe_root = '../'  # this file is expected to be in {caffe_root}/examples
# change this to the path of your own caffe directory
caffe_root = '/home/xhb/caffe/caffe/'
import os
os.chdir(os.path.join(caffe_root, 'examples'))
import sys
sys.path.insert(0, caffe_root + 'python')

import caffe

# configure plotting
plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

Designer Filters

To show how to load, manipulate, and save parameters, we'll design our own filters into a simple network that is only a single convolution layer. This net has two blobs: data for the input and conv for the convolution output. It has one parameter, conv, holding the convolution weights and biases.
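
For reference, net_surgery/conv.prototxt defines this one-layer net along roughly the following lines. This is a minimal sketch, not a verbatim copy, so treat the input shape and filler settings as assumptions and check the file in your own checkout:

name: "convolution"
input: "data"
input_shape { dim: 1 dim: 1 dim: 100 dim: 100 }
layer {
  name: "conv"
  type: "Convolution"
  bottom: "data"
  top: "conv"
  convolution_param {
    num_output: 3
    kernel_size: 5
    stride: 1
    # the fillers give the random initialization discussed below
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}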

# Load the net, list its data and params, and filter an example image.
caffe.set_mode_cpu()
net = caffe.Net('net_surgery/conv.prototxt', caffe.TEST)
print("blobs {}\nparams {}".format(net.blobs.keys(), net.params.keys()))

# load image and prepare as a single input batch for Caffe
im = np.array(caffe.io.load_image('images/cat_gray.jpg', color=False)).squeeze()
plt.title('original image')
plt.imshow(im)
plt.axis('off')

im_input = im[np.newaxis, np.newaxis, :, :]
net.blobs['data'].reshape(*im_input.shape)
net.blobs['data'].data[...] = im_input
blobs ['data', 'conv']
params ['conv']

[figure: the original cat image]

The convolution weights are initialized from Gaussian noise, while the biases are initialized to zero. These random filters give output somewhat like edge detection.
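
We can confirm this by peeking at the parameter blobs directly (an illustrative check of my own; the weight shape assumes the 3-filter, 5-by-5 kernel definition sketched above):

# weights: (num_output, channels, height, width); biases: (num_output,)
print net.params['conv'][0].data.shape  # e.g. (3, 1, 5, 5)
print net.params['conv'][1].data        # all zeros at initialization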

# helper to show the filter outputs
def show_filters(net):
    net.forward()
    plt.figure()
    filt_min, filt_max = net.blobs['conv'].data.min(), net.blobs['conv'].data.max()
    for i in range(3):
        plt.subplot(1,4,i+2)
        plt.title("filter #{} output".format(i))
        plt.imshow(net.blobs['conv'].data[0, i], vmin=filt_min, vmax=filt_max)
        plt.tight_layout()
        plt.axis('off')

# filter the image with the initial random weights
show_filters(net)

[figure: initial random filter outputs]

Raising the bias of a filter will correspondingly raise its output:

# pick first filter output
conv0 = net.blobs['conv'].data[0, 0]
print("pre-surgery output mean {:.2f}".format(conv0.mean()))
net.params['conv'][1].data[0] = 1.
net.forward()
print("post-surgery output mean {:.2f}".format(conv0.mean()))
pre-surgery output mean -0.02
post-surgery output mean 0.98

Altering the filter weights is more exciting since we can assign any kernel, such as Gaussian blur, the Sobel operator for edge detection, and so on. The following surgery turns the 0th filter into a Gaussian blur and the 1st and 2nd filters into the horizontal and vertical parts of the Sobel operator.
See how the 0th output is blurred, the 1st picks up horizontal edges, and the 2nd picks up vertical edges.

ksize = net.params['conv'][0].data.shape[2:]
# make Gaussian blur
sigma = 1.
y, x = np.mgrid[-ksize[0]//2 + 1 : ksize[0]//2 + 1, -ksize[1]//2 + 1 : ksize[1]//2 + 1]
g = np.exp(-((x**2 + y**2) / (2.0 * sigma**2)))
gaussian = (g / g.sum()).astype(np.float32)
net.params['conv'][0].data[0] = gaussian
# make Sobel operator for edge detection
net.params['conv'][0].data[1:] = 0.
sobel = np.array((-1, -2, -1, 0, 0, 0, 1, 2, 1), dtype=np.float32).reshape(3,3)
net.params['conv'][0].data[1, 0, 1:-1, 1:-1] = sobel # horizontal
net.params['conv'][0].data[2, 0, 1:-1, 1:-1] = sobel.T # vertical
show_filters(net)

[figure: Gaussian blur, horizontal Sobel, and vertical Sobel outputs]

With net surgery, parameters can be transplanted across nets, regularized by custom per-parameter operations, and transformed according to your own schemes.
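
As an illustrative sketch of such a per-parameter operation (my own addition, not part of the notebook), one could rescale every filter of our toy net to unit L2 norm in place:

# renormalize each conv filter to unit L2 norm; iterating yields views,
# so the in-place division edits the underlying parameter blob
for f in net.params['conv'][0].data:
    f /= np.linalg.norm(f) + 1e-8  # epsilon guards against an all-zero filter
net.forward()  # outputs now reflect the renormalized filters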

Casting a Classifier into a Fully Convolutional Network

Let's take the standard Caffe Reference ImageNet model, "CaffeNet", and transform it into a fully convolutional net for efficient, dense inference on large inputs. This model generates a classification map that covers a given input size instead of a single classification. In particular, an 8 × 8 classification map on a 451 × 451 input gives 64× the output in only 3× the time. The computation exploits a natural efficiency of convolutional network structure by amortizing the computation of overlapping receptive fields.
To do so, we translate the InnerProduct matrix multiplication layers of CaffeNet into Convolutional layers. This is the only change: the other layer types are agnostic to spatial size. Convolution is translation-invariant, activations are elementwise operations, and so on. The fc6 inner product layer (a fully connected layer), when carried out as convolution by fc6-conv, turns into a 6 × 6 filter with stride 1 on pool5. Back in image space this gives a classification for each 227 × 227 box with stride 32 in pixels. Remember the equation for output map / receptive field size, output = (input - kernel_size) / stride + 1, and work out the indexing below with the help of the comments.
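
To check this arithmetic, here is a small sketch of my own that runs the output-size formula through the spatial layers of CaffeNet, assuming the standard (kernel, stride, pad) hyperparameters. (Caffe's pooling layers actually round up rather than down, but that happens not to matter for these sizes.)

# walk an input through CaffeNet's spatial layers with
# output = (input + 2*pad - kernel_size) / stride + 1
def out_size(in_size, k, s=1, p=0):
    return (in_size + 2 * p - k) // s + 1

layers = [('conv1', 11, 4, 0), ('pool1', 3, 2, 0), ('conv2', 5, 1, 2),
          ('pool2', 3, 2, 0), ('conv3', 3, 1, 1), ('conv4', 3, 1, 1),
          ('conv5', 3, 1, 1), ('pool5', 3, 2, 0), ('fc6-conv', 6, 1, 0)]
size = 451
for name, k, s, p in layers:
    size = out_size(size, k, s, p)
    print name, size
# ends at 8, hence the 8 x 8 classification map;
# a 227 x 227 input ends at 1 by the same arithmetic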

# Note: diff the two prototxts and check that the differences match the description above
!diff net_surgery/bvlc_caffenet_full_conv.prototxt ../models/bvlc_reference_caffenet/deploy.prototxt
1,2c1
< # Fully convolutional network version of CaffeNet.
< name: "CaffeNetConv"
---
> name: "CaffeNet"
7,11c6
<   input_param {
<     # initial shape for a fully convolutional network:
<     # the shape can be set for each input by reshape.
<     shape: { dim: 1 dim: 3 dim: 451 dim: 451 }
<   }
---
>   input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }
157,158c152,153
<   name: "fc6-conv"
<   type: "Convolution"
---
>   name: "fc6"
>   type: "InnerProduct"
160,161c155,156
<   top: "fc6-conv"
<   convolution_param {
---
>   top: "fc6"
>   inner_product_param {
163d157
<     kernel_size: 6
169,170c163,164
<   bottom: "fc6-conv"
<   top: "fc6-conv"
---
>   bottom: "fc6"
>   top: "fc6"
175,176c169,170
<   bottom: "fc6-conv"
<   top: "fc6-conv"
---
>   bottom: "fc6"
>   top: "fc6"
182,186c176,180
<   name: "fc7-conv"
<   type: "Convolution"
<   bottom: "fc6-conv"
<   top: "fc7-conv"
<   convolution_param {
---
>   name: "fc7"
>   type: "InnerProduct"
>   bottom: "fc6"
>   top: "fc7"
>   inner_product_param {
188d181
<     kernel_size: 1
194,195c187,188
<   bottom: "fc7-conv"
<   top: "fc7-conv"
---
>   bottom: "fc7"
>   top: "fc7"
200,201c193,194
<   bottom: "fc7-conv"
<   top: "fc7-conv"
---
>   bottom: "fc7"
>   top: "fc7"
207,211c200,204
<   name: "fc8-conv"
<   type: "Convolution"
<   bottom: "fc7-conv"
<   top: "fc8-conv"
<   convolution_param {
---
>   name: "fc8"
>   type: "InnerProduct"
>   bottom: "fc7"
>   top: "fc8"
>   inner_product_param {
213d205
<     kernel_size: 1
219c211
<   bottom: "fc8-conv"
---
>   bottom: "fc8"

The only differences needed in the architecture are to change the fully connected classifier inner product layers into convolutional layers with the right filter size (6 × 6, since the reference model classifiers take the 36 spatial elements of pool5 as input) and stride 1 for dense classification. Note that the layers are renamed so that Caffe does not blindly load the old parameters when it maps layer names to the pretrained model.

# Load the original network and extract the fully connected layers' parameters.
net = caffe.Net('../models/bvlc_reference_caffenet/deploy.prototxt', 
                '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel', 
                caffe.TEST)
params = ['fc6', 'fc7', 'fc8']
# fc_params = {name: (weights, biases)}
fc_params = {pr: (net.params[pr][0].data, net.params[pr][1].data) for pr in params}

for fc in params:
    print '{} weights are {} dimensional and biases are {} dimensional'.format(fc, fc_params[fc][0].shape, fc_params[fc][1].shape)
fc6 weights are (4096, 9216) dimensional and biases are (4096,) dimensional
fc7 weights are (4096, 4096) dimensional and biases are (4096,) dimensional
fc8 weights are (1000, 4096) dimensional and biases are (1000,) dimensional

Consider the shapes of the inner product parameters: the weight dimensions are output size × input size, while the bias dimension is the output size.

# Load the fully convolutional network to transplant the parameters.
net_full_conv = caffe.Net('net_surgery/bvlc_caffenet_full_conv.prototxt', 
                          '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
                          caffe.TEST)
params_full_conv = ['fc6-conv', 'fc7-conv', 'fc8-conv']
# conv_params = {name: (weights, biases)}
conv_params = {pr: (net_full_conv.params[pr][0].data, net_full_conv.params[pr][1].data) for pr in params_full_conv}

for conv in params_full_conv:
    print '{} weights are {} dimensional and biases are {} dimensional'.format(conv, conv_params[conv][0].shape, conv_params[conv][1].shape)
fc6-conv weights are (4096, 256, 6, 6) dimensional and biases are (4096,) dimensional
fc7-conv weights are (4096, 4096, 1, 1) dimensional and biases are (4096,) dimensional
fc8-conv weights are (1000, 4096, 1, 1) dimensional and biases are (1000,) dimensional

The convolution weights are arranged in output × input × height × width dimensions. To map the inner product weights to convolution filters, we could roll out the flat inner product vectors into channel × height × width filter matrices, but actually these are identical in memory (as row-major arrays), so we can assign them directly.
The biases are identical to those of the inner product.
Let's transplant!

for pr, pr_conv in zip(params, params_full_conv):
    conv_params[pr_conv][0].flat = fc_params[pr][0].flat # flat unrolls the arrays
    conv_params[pr_conv][1][...] = fc_params[pr][1]
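
A quick sanity check of my own: since the memory layouts coincide, reshaping the transplanted convolution weights back to the inner product shape should reproduce the originals exactly.

# verify the transplant: reshape conv weights to the flat fc shape and compare
for pr, pr_conv in zip(params, params_full_conv):
    assert np.allclose(conv_params[pr_conv][0].reshape(fc_params[pr][0].shape),
                       fc_params[pr][0])
    assert np.allclose(conv_params[pr_conv][1], fc_params[pr][1])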

Next, save the new model weights.

net_full_conv.save('net_surgery/bvlc_caffenet_full_conv.caffemodel')

To conclude, let's make a classification map from the example cat image used earlier and visualize the confidence of the "tiger cat" class as a probability heatmap. This gives an 8-by-8 prediction on overlapping regions of the 451 × 451 input.

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# load input and configure preprocessing
im = caffe.io.load_image('images/cat.jpg')
transformer = caffe.io.Transformer({'data': net_full_conv.blobs['data'].data.shape})
transformer.set_mean('data', np.load('../python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1))
transformer.set_transpose('data', (2,0,1))
transformer.set_channel_swap('data', (2,1,0))
transformer.set_raw_scale('data', 255.0)

# make classification map by forward and print prediction indices at each location
out = net_full_conv.forward_all(data=np.asarray([transformer.preprocess('data', im)]))
print out['prob'][0].argmax(axis=0)
# show net input and confidence map (probability of the top prediction at each location)
plt.subplot(1, 2, 1)
plt.imshow(transformer.deprocess('data', net_full_conv.blobs['data'].data[0]))
plt.subplot(1, 2, 2)
plt.imshow(out['prob'][0,281])
[[282 282 281 281 281 281 277 282]
 [281 283 283 281 281 281 281 282]
 [283 283 283 283 283 283 287 282]
 [283 283 283 281 283 283 283 259]
 [283 283 283 283 283 283 283 259]
 [283 283 283 283 283 283 259 259]
 [283 283 283 283 259 259 259 277]
 [335 335 283 259 263 263 263 277]]
<matplotlib.image.AxesImage at 0x7f163dc96d10>

[figure: net input and "tiger cat" confidence heatmap]

The classifications include various cats: 282 = tiger cat, 281 = tabby, 283 = persian, and so on, plus some other mammals.
In this way the fully connected layers can be extracted as dense features across an image (see net_full_conv.blobs['fc6'].data for instance), which may be more useful than the classification map itself; a short extraction sketch follows below.
Note that this model is not totally appropriate for sliding-window detection, since it was trained for whole-image classification. Nevertheless, it can work just fine. Sliding-window training and finetuning can be done by defining a sliding-window ground truth and loss, such that a loss map is made for every location, and solving as usual. (This is left as an exercise for the reader.)
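
For instance, the dense features can be pulled out of the blob directly (an illustrative sketch of my own; note that in this fully convolutional prototxt the blob is actually named fc6-conv):

# dense fc6 features over the whole image, one 4096-vector per location
dense_feats = net_full_conv.blobs['fc6-conv'].data
print dense_feats.shape  # expect (1, 4096, 8, 8) for the 451 x 451 input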

Reposted from blog.csdn.net/hongbin_xu/article/details/79734882