## Caffe | layer

1 Activation / Neuron Layers

- ReLU / Rectified-Linear and Leaky-ReLU Layer
- PReLU
- Sigmoid

2 Utility Layers

- Flatten
- Reshape
- Slice
- Concat
- Eltwise

# 1 Activation / Neuron Layers

## ReLU / Rectified-Linear and Leaky-ReLU Layer

The ReLU layer computes its output with the following formula (a nonzero `negative_slope` gives the Leaky-ReLU variant):

$y=\left\{\begin{matrix} x, & x \ge 0 \\ \text{negative\_slope} \times x, & x < 0 \end{matrix}\right.$

With the default `negative_slope = 0`, this reduces to the standard ReLU:

$y_i = \max(0, x_i)$
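As a minimal NumPy sketch (illustrative only, not actual Caffe code), the ReLU / Leaky-ReLU formula above can be written as:

```python
import numpy as np

def relu(x, negative_slope=0.0):
    """ReLU when negative_slope=0; Leaky-ReLU when negative_slope>0."""
    return np.where(x >= 0, x, negative_slope * x)

relu(np.array([-2.0, 3.0]))                       # standard ReLU
relu(np.array([-2.0, 3.0]), negative_slope=0.1)   # Leaky-ReLU
```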

## PReLU

$y_i = \max(0, x_i) + a_i \min(0, x_i)$

Differences between PReLU and ReLU:

• the negative slope $a_i$ is learnable through backpropagation;
• the negative slope can vary across channels.

The input blob must have at least 2 axes, and axis 1 is treated as the channel axis.
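A minimal NumPy sketch of the PReLU formula with a per-channel slope (illustrative only, not Caffe code; the channel axis is assumed to be axis 1 of an N x C x H x W blob):

```python
import numpy as np

def prelu(x, a):
    """PReLU with per-channel learnable slopes.

    x: input blob of shape (N, C, H, W)
    a: per-channel negative slopes of shape (C,)
    """
    a = a.reshape(1, -1, 1, 1)  # broadcast slopes over N, H, W
    return np.maximum(0, x) + a * np.minimum(0, x)
```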

## Sigmoid

The Sigmoid layer computes the sigmoid non-linearity $y = 1 / (1 + e^{-x})$ element-wise.

```protobuf
message SigmoidParameter {
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 1 [default = DEFAULT];
}
```
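The sigmoid non-linearity itself is a one-liner in NumPy (a sketch, not the Caffe implementation):

```python
import numpy as np

def sigmoid(x):
    """Element-wise sigmoid: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

sigmoid(np.array([0.0]))  # sigmoid(0) = 0.5
```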

# 2 Utility Layers

## Flatten

Reshapes the input Blob into flat vectors.

The Flatten layer is a utility layer that flattens an input of shape n * c * h * w to a simple vector output of shape n * (c*h*w).

```protobuf
/// Message that stores parameters used by FlattenLayer
message FlattenParameter {
  // The first axis to flatten: all preceding axes are retained in the output.
  // May be negative to index from the end (e.g., -1 for the last axis).
  optional int32 axis = 1 [default = 1];

  // The last axis to flatten: all following axes are retained in the output.
  // May be negative to index from the end (e.g., the default -1 for the last
  // axis).
  optional int32 end_axis = 2 [default = -1];
}
```
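The `axis` / `end_axis` semantics can be mimicked in NumPy; the helper below is a hypothetical illustration, not Caffe's implementation:

```python
import numpy as np

def flatten(blob, axis=1, end_axis=-1):
    """Collapse axes [axis, end_axis] of blob into a single axis."""
    shape = list(blob.shape)
    if axis < 0:
        axis += blob.ndim
    if end_axis < 0:
        end_axis += blob.ndim
    flat = int(np.prod(shape[axis:end_axis + 1]))
    return blob.reshape(shape[:axis] + [flat] + shape[end_axis + 1:])

# Default axis=1, end_axis=-1: (n, c, h, w) -> (n, c*h*w)
flatten(np.zeros((2, 3, 4, 5))).shape  # (2, 60)
```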

## Reshape

The Reshape layer changes the dimensions of its input without changing its data.

• Input: a single blob with arbitrary dimensions
• Output: the same blob, with modified dimensions, as specified by `reshape_param`
```
layer {
  name: "reshape"
  type: "Reshape"
  bottom: "input"
  top: "output"
  reshape_param {
    shape {
      dim: 0  # copy the dimension from below
      dim: 2
      dim: 3
      dim: -1 # infer it from the other dimensions
    }
  }
}
```
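Caffe's special `dim` values (0 = copy the input dimension, -1 = infer from the remaining dimensions) can be sketched in NumPy; the helper below is a hypothetical illustration, not Caffe API:

```python
import numpy as np

def caffe_reshape(blob, dims):
    """Apply Caffe Reshape dim semantics: 0 copies, -1 infers."""
    out = []
    for i, d in enumerate(dims):
        if d == 0:
            out.append(blob.shape[i])  # 0: copy the dimension from the input
        else:
            out.append(d)              # -1 is left for NumPy to infer
    return blob.reshape(out)

# dims [0, 2, 3, -1] on a (2, 3, 8) blob: keep 2, fix 2 and 3, infer 4
caffe_reshape(np.arange(48).reshape(2, 3, 8), [0, 2, 3, -1]).shape  # (2, 2, 3, 4)
```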

## Slice

The Slice layer is a utility layer that slices an input layer into multiple output layers along a given dimension (currently num or channel only), at the given slice indices.

```
layer {
  name: "slicer_label"
  type: "Slice"
  bottom: "label"
  # Example of a label with shape N x 3 x 1 x 1
  top: "label1"
  top: "label2"
  top: "label3"
  slice_param {
    axis: 1
    slice_point: 1
    slice_point: 2
  }
}
```
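The example above is equivalent to splitting an N x 3 x 1 x 1 blob at channel indices 1 and 2; a NumPy sketch (illustrative only, not Caffe code):

```python
import numpy as np

label = np.arange(12).reshape(4, 3, 1, 1)  # N=4, C=3, H=1, W=1

# slice_point: 1 and 2 on axis 1 -> three N x 1 x 1 x 1 blobs
label1, label2, label3 = np.split(label, [1, 2], axis=1)
```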

## Concat

• Input
  • n_i * c_i * h * w for each input blob i from 1 to K.
• Output
  • if axis = 0: (n_1 + n_2 + ... + n_K) * c_1 * h * w, and all input c_i should be the same.
  • if axis = 1: n_1 * (c_1 + c_2 + ... + c_K) * h * w, and all input n_i should be the same.
• Sample

```
layer {
  name: "concat"
  bottom: "in1"
  bottom: "in2"
  top: "out"
  type: "Concat"
  concat_param {
    axis: 1
  }
}
```

The Concat layer is a utility layer that concatenates its multiple input blobs into one single output blob. It takes at least two Blobs and concatenates them along either the num or channel dimension, outputting the result.
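The output shapes listed above can be checked with a small NumPy sketch (illustrative only, not Caffe code):

```python
import numpy as np

in1 = np.ones((2, 3, 4, 4))   # n=2, c=3
in2 = np.zeros((2, 5, 4, 4))  # n=2, c=5; n, h, w must match for axis=1

# axis = 1: channels are summed, other dimensions unchanged
out = np.concatenate([in1, in2], axis=1)  # shape (2, 3+5, 4, 4)
```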

## Eltwise

Computes element-wise operations, such as product, sum, and max, across multiple input Blobs.

```protobuf
message EltwiseParameter {
  enum EltwiseOp {
    PROD = 0;
    SUM = 1;
    MAX = 2;
  }
  optional EltwiseOp operation = 1 [default = SUM]; // element-wise operation
  repeated float coeff = 2; // blob-wise coefficient for SUM operation

  // Whether to use an asymptotically slower (for >2 inputs) but stabler method
  // of computing the gradient for the PROD operation. (No effect for SUM op.)
  optional bool stable_prod_grad = 3 [default = true];
}
```
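The three operations (and the SUM-only `coeff`) can be sketched in NumPy as follows; this is a hypothetical forward-pass illustration, not the Caffe implementation:

```python
import numpy as np

def eltwise(blobs, operation="SUM", coeff=None):
    """Element-wise combine a list of same-shaped blobs."""
    if operation == "SUM":
        if coeff is None:
            coeff = [1.0] * len(blobs)  # default coefficient is 1 per blob
        return sum(c * b for c, b in zip(coeff, blobs))
    if operation == "PROD":
        out = blobs[0].copy()
        for b in blobs[1:]:
            out = out * b
        return out
    if operation == "MAX":
        return np.maximum.reduce(blobs)
    raise ValueError("unknown operation: %s" % operation)
```

For example, `coeff: [1, -1]` with SUM computes an element-wise difference of two blobs.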