Adding a custom layer in Caffe

Copyright notice: this is the author's original article; do not repost without permission. https://blog.csdn.net/u013498583/article/details/79651332

There are two ways to do this: a Python layer, which is relatively simple, and a C++ layer.

1. Python layer

Reference: http://chrischoy.github.io/research/caffe-python-layer/

layer {
  type: 'Python'
  name: 'loss'
  top: 'loss'
  bottom: 'ipx'
  bottom: 'ipy'
  python_param {
    # the module name -- usually the filename -- that needs to be in $PYTHONPATH
    module: 'pyloss'
    # the layer name -- the class name in the module
    layer: 'EuclideanLossLayer'
  }
  # set loss weight so Caffe knows this is a loss layer
  loss_weight: 1
}

The module name is the filename of the Python file that defines the custom layer; for the example above the file is pyloss.py, and it must be on $PYTHONPATH. The layer name is the class name defined in that file, here EuclideanLossLayer. The class generally must implement four methods: setup, reshape, forward, and backward.

The example below shows how a Python layer is written. Create pyloss.py and define EuclideanLossLayer:

import caffe
import numpy as np

# EuclideanLossLayer has no weights, so backward() never updates any
# parameters. If you need a layer that learns its own weights, C++ is
# the better choice.
class EuclideanLossLayer(caffe.Layer):
    def setup(self, bottom, top):
        # Called once before the net runs; validate the layer's configuration.
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        # Called before each forward(); size intermediate buffers and the
        # top blob according to the bottom blobs.
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        top[0].reshape(1)

    def forward(self, bottom, top):
        # Forward pass: loss = sum((x - y)^2) / N / 2
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff ** 2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        # Backward pass: the gradient is +diff/N w.r.t. the first bottom
        # and -diff/N w.r.t. the second.
        for i in range(2):
            if not propagate_down[i]:
                continue
            sign = 1 if i == 0 else -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num
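The forward/backward arithmetic above can be sanity-checked with plain NumPy, without Caffe. The shapes and values below are arbitrary stand-ins; the point is that the analytic gradient used in backward() matches a finite-difference estimate of the loss in forward():

```python
import numpy as np

# Stand-in data: a batch of 4 vectors of length 3 (shapes are arbitrary).
np.random.seed(0)
x = np.random.randn(4, 3).astype(np.float32)
y = np.random.randn(4, 3).astype(np.float32)
num = x.shape[0]

# Forward: same arithmetic as forward() above.
diff = x - y
loss = np.sum(diff ** 2) / num / 2.

# Backward: same gradients as backward() above.
grad_x = diff / num    # sign = +1 for bottom[0]
grad_y = -diff / num   # sign = -1 for bottom[1]

# Compare the analytic gradient of one element against a numerical estimate.
eps = 1e-3
x_pert = x.copy()
x_pert[0, 0] += eps
loss_pert = np.sum((x_pert - y) ** 2) / num / 2.
numeric = (loss_pert - loss) / eps
print(abs(numeric - grad_x[0, 0]))  # small: the gradients agree
```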

2. C++

References:

https://github.com/BVLC/caffe/issues/684 

https://chrischoy.github.io/research/making-caffe-layer/


Here's roughly the process I follow.

  1. Add a class declaration for your layer to the appropriate one of common_layers.hpp, data_layers.hpp, loss_layers.hpp, neuron_layers.hpp, or vision_layers.hpp. Include an inline implementation of type and the *Blobs() methods to specify blob number requirements. Omit the *_gpu declarations if you'll only be implementing CPU code.
  2. Implement your layer in layers/your_layer.cpp.
    • SetUp for initialization: reading parameters, allocating buffers, etc.
    • Forward_cpu for the function your layer computes
    • Backward_cpu for its gradient
  3. (Optional) Implement the GPU versions Forward_gpu and Backward_gpu in layers/your_layer.cu.
  4. Add your layer to proto/caffe.proto, updating the next available ID. Also declare parameters, if needed, in this file.
  5. Make your layer createable by adding it to layer_factory.cpp.
  6. Write tests in test/test_your_layer.cpp. Use test/test_gradient_check_util.hpp to check that your Forward and Backward implementations are in numerical agreement.
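The gradient check in step 6 rests on the same idea Caffe's test_gradient_check_util.hpp implements: compare the analytic gradient against a centered finite difference of the loss. A minimal Python sketch of that idea (the quadratic test function here is an arbitrary stand-in, not Caffe code):

```python
import numpy as np

def numeric_grad(f, x, eps=1e-4):
    """Centered finite-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        orig = x[idx]
        x[idx] = orig + eps
        fp = f(x)
        x[idx] = orig - eps
        fm = f(x)
        x[idx] = orig                 # restore the perturbed entry
        g[idx] = (fp - fm) / (2 * eps)
    return g

# Stand-in "layer": Euclidean loss against a fixed target.
target = np.array([1.0, -2.0, 0.5])
f = lambda v: np.sum((v - target) ** 2) / 2.0
analytic = lambda v: v - target       # hand-derived gradient

x = np.array([0.3, 0.7, -1.2])
print(np.allclose(numeric_grad(f, x.copy()), analytic(x), atol=1e-6))  # → True
```

If the analytic gradient of your Backward_cpu disagrees with this kind of numerical estimate beyond a small tolerance, the layer implementation is wrong.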
