Getting Started with Caffe: Blobs, Layers, and Nets

Reading notes on the official tutorial: Blobs, Layers, and Nets.

Blob

Caffe stores and communicates data using blobs. Blobs provide a unified memory interface holding data; e.g., batches of images, model parameters, and derivatives for optimization.

Caffe stores and passes data through blobs; a blob provides one unified interface for holding data, whether image batches, model parameters, or gradients for optimization.

The same blob can live on the CPU or on the GPU, and Caffe keeps the two copies in sync behind the scenes. The flip side is that computations over blobs are usually implemented twice, once for the CPU and once for the GPU.
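As a small illustration, here is a minimal sketch of reading blob data from either device, assuming a GPU-enabled Caffe build (the Blob constructor and the cpu_data/gpu_data accessors are from Caffe's C++ API; the shape numbers are just examples):

```cpp
// Minimal sketch: one blob, two coherent views of the same memory.
// Assumes a GPU-enabled Caffe build; under CPU_ONLY, drop the gpu_data() call.
#include <caffe/blob.hpp>

int main() {
  // N x K x H x W: a batch of 32 three-channel 227 x 227 images (example numbers).
  caffe::Blob<float> blob(32, 3, 227, 227);

  // Read-only CPU view; copies device -> host only if the freshest
  // copy currently lives on the GPU.
  const float* cpu_view = blob.cpu_data();

  // Writable CPU view; marks the CPU copy as the up-to-date one.
  blob.mutable_cpu_data()[0] = 1.0f;

  // Read-only GPU view; synchronizes host -> device on demand.
  const float* gpu_view = blob.gpu_data();

  (void)cpu_view;
  (void)gpu_view;
  return 0;
}
```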

The conventional blob dimensions for batches of image data are number N x channel K x height H x width W. Blob memory is row-major in layout, so the last / rightmost dimension changes fastest. For example, in a 4D blob, the value at index (n, k, h, w) is physically located at index ((n * K + k) * H + h) * W + w.

A blob for a batch of images is 4-D, conventionally (N, K, H, W): batch size, number of channels (or filters), height, and width. Non-image applications can use 2-D blobs (N, D), e.g. as the input and output of fully-connected layers.
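To make the row-major rule concrete, here is a tiny self-contained C++ example (no Caffe needed) that evaluates the offset formula for a few indices; the K, H, W values are arbitrary:

```cpp
#include <cstdio>

// Physical index of element (n, k, h, w) in a row-major N x K x H x W blob:
// the rightmost dimension (w) varies fastest.
int offset(int K, int H, int W, int n, int k, int h, int w) {
  return ((n * K + k) * H + h) * W + w;
}

int main() {
  const int K = 3, H = 11, W = 11;                   // example dimensions
  std::printf("%d\n", offset(K, H, W, 0, 0, 0, 1));  // 1: w moves fastest
  std::printf("%d\n", offset(K, H, W, 0, 0, 1, 0));  // 11: one full row of W
  std::printf("%d\n", offset(K, H, W, 1, 0, 0, 0));  // 363 = K*H*W: one full image
  return 0;
}
```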

Parameter blob dimensions vary according to the type and configuration of the layer. For a convolution layer with 96 filters of 11 x 11 spatial dimension and 3 inputs the blob is 96 x 3 x 11 x 11. For an inner product / fully-connected layer with 1000 output channels and 1024 input channels the parameter blob is 1000 x 1024.

Parameters are also represented as blobs. A convolution filter (a.k.a. kernel) has a parameter blob of shape (#c, #c_pre, f, f), i.e. (output channels, input channels, filter height, filter width); a fully-connected layer's parameter is its weight matrix W.
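A quick sketch of the two parameter blobs from the example above, assuming a local Caffe build (the 4-D constructor, the vector-shape constructor, and count() are part of Caffe's Blob API):

```cpp
#include <caffe/blob.hpp>

#include <cstdio>
#include <vector>

int main() {
  // Convolution weights: 96 filters x 3 input channels x 11 x 11 kernel.
  caffe::Blob<float> conv_w(96, 3, 11, 11);

  // Fully-connected weights are naturally 2-D: 1000 outputs x 1024 inputs.
  caffe::Blob<float> fc_w(std::vector<int>{1000, 1024});

  // count() multiplies out all dimensions: 96*3*11*11 = 34848 and 1000*1024 = 1024000.
  std::printf("conv: %d weights, fc: %d weights\n", conv_w.count(), fc_w.count());
  return 0;
}
```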


Layer

The layer is the essence of a model and the fundamental unit of computation. Layers convolve filters, pool, take inner products, apply nonlinearities like rectified-linear and sigmoid and other elementwise transformations, normalize, load data, and compute losses like softmax and hinge. 

Layers cover a wide range of operations: convolution, pooling, inner products, activation functions and other elementwise transforms, normalization, data loading, and loss computation.


A layer takes input through bottom connections and makes output through top connections.

In other words, layers chain together like a pipeline.

Each layer type defines three critical computations: setup, forward, and backward.

Every layer has to define three key computations: setup, forward, and backward (forward and backward each need a CPU and a GPU version; if a layer has no GPU implementation it falls back to the CPU one, which means data has to be copied off the GPU and back again). A minimal sketch of this contract follows the list below.

  • Setup: initialize the layer and its connections once at model initialization.
  • Forward: given input from bottom compute the output and send to the top.
  • Backward: given the gradient w.r.t. the top output, compute the gradient w.r.t. the input and send to the bottom. A layer with parameters computes the gradient w.r.t. its parameters and stores it internally.
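As promised, here is a compact sketch of that contract. It is a simplified toy modeled on the shape of Caffe's Layer interface, not the real class (the real one, in include/caffe/layer.hpp, additionally carries parameter blobs, loss weights, a propagate_down mask, and separate *_cpu/*_gpu implementations):

```cpp
#include <vector>

// Toy stand-in for caffe::Blob: just enough to show the data flow
// (data holds activations, diff holds gradients).
struct Blob {
  std::vector<float> data, diff;
};

// Simplified mirror of the Layer contract: set up once, then
// forward bottom -> top and backward top -> bottom.
class Layer {
 public:
  virtual ~Layer() {}

  // Setup: run once at model initialization; a real layer would check
  // bottom shapes, allocate its parameter blobs, and reshape top.
  virtual void SetUp(const std::vector<Blob*>& bottom,
                     const std::vector<Blob*>& top) = 0;

  // Forward: read bottom data, write top data.
  virtual void Forward(const std::vector<Blob*>& bottom,
                       const std::vector<Blob*>& top) = 0;

  // Backward: read top diffs, write bottom diffs; a layer with
  // parameters would also accumulate gradients into its own blobs.
  virtual void Backward(const std::vector<Blob*>& top,
                        const std::vector<Blob*>& bottom) = 0;
};

// Example: a ReLU written against the toy interface.
class ReLULayer : public Layer {
 public:
  void SetUp(const std::vector<Blob*>& bottom,
             const std::vector<Blob*>& top) override {
    top[0]->data.resize(bottom[0]->data.size());
    top[0]->diff.resize(bottom[0]->data.size());
  }
  void Forward(const std::vector<Blob*>& bottom,
               const std::vector<Blob*>& top) override {
    for (size_t i = 0; i < bottom[0]->data.size(); ++i)
      top[0]->data[i] = bottom[0]->data[i] > 0 ? bottom[0]->data[i] : 0;
  }
  void Backward(const std::vector<Blob*>& top,
                const std::vector<Blob*>& bottom) override {
    bottom[0]->diff.resize(bottom[0]->data.size());
    for (size_t i = 0; i < top[0]->diff.size(); ++i)
      bottom[0]->diff[i] = bottom[0]->data[i] > 0 ? top[0]->diff[i] : 0;
  }
};

int main() {
  Blob in{{-1.f, 2.f}, {}}, out;
  ReLULayer relu;
  std::vector<Blob*> bottom{&in}, topv{&out};
  relu.SetUp(bottom, topv);
  relu.Forward(bottom, topv);   // out.data = {0, 2}
  out.diff = {1.f, 1.f};
  relu.Backward(topv, bottom);  // in.diff = {0, 1}
  return 0;
}
```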

For the detailed layer categories, see the layer catalogue in the official docs.

Net

(To be written up once I've read that far.)

Reposted from blog.csdn.net/nemoyy/article/details/79559211