Caffe's Python interface, part 5: generating the deploy file

If you want to use a trained model to classify new images, you need a deploy.prototxt file. This file is largely the same as test.prototxt; only the head and tail differ. The deploy file has no data input layer at the top and no Accuracy layer at the bottom; instead it ends with a Softmax layer that outputs class probabilities.
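Concretely, for the LeNet/MNIST example the two files differ roughly like this (a sketch based on the standard Caffe LeNet definition; layer names such as `ip2` come from that example, not from this post):

```protobuf
# Head of test.prototxt: a Data layer that reads images and labels from LMDB
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  ...
}

# Head of deploy.prototxt: a bare input declaration instead
# (batch size 1; 1 channel, since MNIST images are grayscale; 28x28 pixels)
input: "data"
input_dim: 1
input_dim: 1
input_dim: 28
input_dim: 28

# Tail of test.prototxt: loss and accuracy layers, which need labels
layer { name: "accuracy" type: "Accuracy"        bottom: "ip2" bottom: "label" ... }
layer { name: "loss"     type: "SoftmaxWithLoss" bottom: "ip2" bottom: "label" ... }

# Tail of deploy.prototxt: a plain Softmax producing class probabilities
layer { name: "prob" type: "Softmax" bottom: "ip2" top: "prob" }
```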

Here we use code to automatically generate the file, taking mnist as an example.

deploy.py

```python
# -*- coding: utf-8 -*-

from caffe import layers as L, params as P, to_proto

root = '/home/xxx/'
deploy = root + 'mnist/deploy.prototxt'   # path where the file will be saved

def create_deploy():
    # The first (data) layer is omitted in a deploy file
    conv1 = L.Convolution(bottom='data', kernel_size=5, stride=1, num_output=20,
                          pad=0, weight_filler=dict(type='xavier'))
    pool1 = L.Pooling(conv1, pool=P.Pooling.MAX, kernel_size=2, stride=2)
    conv2 = L.Convolution(pool1, kernel_size=5, stride=1, num_output=50,
                          pad=0, weight_filler=dict(type='xavier'))
    pool2 = L.Pooling(conv2, pool=P.Pooling.MAX, kernel_size=2, stride=2)
    fc3   = L.InnerProduct(pool2, num_output=500, weight_filler=dict(type='xavier'))
    relu3 = L.ReLU(fc3, in_place=True)
    fc4   = L.InnerProduct(relu3, num_output=10, weight_filler=dict(type='xavier'))
    # No Accuracy layer at the end, but a Softmax layer instead
    prob = L.Softmax(fc4)
    return to_proto(prob)

def write_deploy():
    with open(deploy, 'w') as f:
        f.write('name:"Lenet"\n')
        # Input declaration that replaces the data layer:
        # batch size 1, 1 channel (MNIST images are grayscale), 28x28 pixels
        f.write('input:"data"\n')
        f.write('input_dim:1\n')
        f.write('input_dim:1\n')
        f.write('input_dim:28\n')
        f.write('input_dim:28\n')
        f.write(str(create_deploy()))

if __name__ == '__main__':
    write_deploy()
```

After running this script, a deploy.prototxt file is generated in the mnist directory.
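You can sanity-check the generated file without Caffe at all. The helper below, `check_deploy`, is a hypothetical function (not part of Caffe) that verifies the head and tail described above: a bare input declaration instead of a Data layer, and a Softmax layer at the end.

```python
def check_deploy(text):
    """Return True if the prototxt text looks like a deploy file."""
    flat = text.replace(' ', '')            # tolerate 'input: "data"' vs 'input:"data"'
    has_input = 'input:"data"' in flat      # bare input declaration at the head
    has_softmax = '"Softmax"' in flat       # Softmax layer at the tail
    has_data_layer = 'type:"Data"' in flat  # a Data layer means it is a train/test file
    return has_input and has_softmax and not has_data_layer

# Example on a minimal inline snippet; in practice, read deploy.prototxt instead:
sample = 'name:"Lenet"\ninput:"data"\nlayer { type: "Softmax" }'
print(check_deploy(sample))   # True
```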

In fact, you do not have to generate this file with code; doing so is somewhat troublesome. Once you are familiar with the format, it is more convenient to make a copy of test.prototxt and edit the head and tail by hand.
