Compiling the ncnn environment and using its models

Although Tencent documents the ncnn compilation process very clearly on GitHub, I tried on two computers and still failed to get the environment to compile.

So I started exploring on my own.

Note: I am compiling on Windows 10, because I want to write the code on Windows 10, so I need a VS2019 environment.

First compile protobuf. I downloaded the source directly from the zip link. Creating a new folder named build failed because it complained that a file named build already exists in the source tree, so I created a folder named tmp instead; it serves exactly the same purpose.

You just need to adjust the path accordingly when compiling ncnn later.

The protobuf compilation process is as follows:

cd protobuf-3.4.0
mkdir tmp
cd tmp
cmake -G"NMake Makefiles" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=%cd%/install -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_MSVC_STATIC_RUNTIME=OFF ../cmake
nmake
nmake install

This compilation is straightforward and generally completes without errors.

Next, compile ncnn. Keep in mind that the ncnn source depends on other git submodules; a plain git clone is incomplete and errors will occur during compilation.

git clone https://github.com.cnpmjs.org/Tencent/ncnn
cd ncnn
git submodule update --init

As you can see above, the clone URL has also been changed to the cnpmjs domestic mirror to improve the download speed.

The compilation itself can follow the official guide, but I ran into a problem: cmake did not finish configuring and got stuck at one point.

I saw someone on GitHub say that you can ignore this for now and add a parameter to skip the affected part:

-DNCNN_BUILD_TOOLS=OFF

cmake -G"NMake Makefiles" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=%cd%/install -DProtobuf_INCLUDE_DIR=C:/protobuf-3.4.0/protobuf-3.4.0/tmp/install/install/include -DProtobuf_LIBRARIES=C:/protobuf-3.4.0/protobuf-3.4.0/tmp/install/lib/libprotobuf.lib -DProtobuf_PROTOC_EXECUTABLE=C:/protobuf-3.4.0/protobuf-3.4.0/tmp/install/bin/protoc.exe -DNCNN_VULKAN=OFF -DNCNN_BUILD_TOOLS=OFF  ..

After adding it, cmake completes successfully.

The rest can follow the build guide provided by ncnn.
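For reference, with the NMake generator used above, the remaining steps are normally just:

nmake
nmake install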

At this point, compiling ncnn under VS2019 has succeeded.


Now you can open VS2019 and configure the project environment. The four settings to fill in are listed below, with example values after the list.

(1) Include Directories

(2) Library Directories

(3) Additional Dependencies

(4) Additional Library Directories
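For reference, assuming ncnn was built under C:\ncnn\build and installed with nmake install, and OpenCV is installed separately, the settings look roughly like this (all paths are examples; adjust them to your own machine):

Include Directories: C:\ncnn\build\install\include; plus your OpenCV include directory
Library Directories: C:\ncnn\build\install\lib; plus your OpenCV lib directory
Additional Dependencies: ncnn.lib; plus your OpenCV import library (for example opencv_worldXXX.lib)
Additional Library Directories: the same directories as in (2)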

Now you can use ncnn directly

Model files must be prepared before use

The model file is still the one from the previous article, and the ncnn files were converted successfully with ncnn in a Linux environment.

There, the ONNX model file is converted into an ncnn model file.

An ncnn model consists of two files, similar to Caffe: a .param file describing the network structure and a .bin file holding the weights.
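For intuition, the .param file is a plain-text network description. It looks roughly like the illustrative sketch below (not the actual FashionMNIST network): a magic number, the layer and blob counts, then one line per layer with its type, name, input/output blobs, and parameters.

7767517
3 3
Input            input   0 1 input
InnerProduct     fc1     1 1 input fc1 0=128 1=1 2=100352
Softmax          output  1 1 fc1 output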

The conversion command is as follows:

./tools/onnx/onnx2ncnn FashionMNIST.onnx FashionMNIST.param FashionMNIST.bin

Then use ncnnoptimize to optimize the file

tools/ncnnoptimize FashionMNIST.param FashionMNIST.bin FashionMNIST_OPT.param FashionMNIST_OPT.bin 1

Comparing the resulting files, the optimized model is clearly smaller, mainly because the weights are stored as float16. The help output of ncnnoptimize is as follows:

usage: tools/ncnnoptimize [inparam] [inbin] [outparam] [outbin] [flag]

Here flag can be 0 or 1: 0 keeps the weights as float32, and 1 converts them to float16.

We use float16 here.

With the model files ready, I can load the model directly in VS2019.

However, the model is loaded in an obfuscated form: ncnn provides the ncnn2mem tool, which converts the model files into header files so the model can be loaded from memory.

The conversion code is as follows:

ncnn2mem.exe FashionMNIST.param FashionMNIST.bin FashionMNIST.id.h FashionMNIST_de.mem.h

Calling it is very simple: just include the two .h files directly.
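The generated headers look roughly like the sketch below (the values are illustrative; the actual blob ids and byte arrays are produced by ncnn2mem from your model):

// FashionMNIST.id.h -- integer ids for every layer and blob
namespace FashionMNIST_param_id {
const int BLOB_input = 0;   // id of the input blob
const int BLOB_output = 5;  // id of the output blob (actual value depends on the network)
}

// FashionMNIST_de.mem.h -- the network description and weights as byte arrays
static const unsigned char FashionMNIST_param_bin[] = { 0x00 /* ... generated bytes ... */ };
static const unsigned char FashionMNIST_bin[]       = { 0x00 /* ... generated bytes ... */ };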

The complete calling code is as follows:

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <ncnn/net.h>
#include "FashionMNIST.id.h"
#include "FashionMNIST_de.mem.h"

int main() {
	// load the network structure and weights from the in-memory arrays
	ncnn::Net net;
	net.load_param(FashionMNIST_param_bin);
	net.load_model(FashionMNIST_bin);

	// read the test image as single-channel grayscale
	cv::Mat m = cv::imread("C:\\123.jpg", cv::IMREAD_GRAYSCALE);
	ncnn::Mat in = ncnn::Mat::from_pixels(m.data, ncnn::Mat::PIXEL_GRAY, m.cols, m.rows);

	// run inference
	ncnn::Mat out;
	ncnn::Extractor ex = net.create_extractor();
	ex.set_light_mode(true);
	ex.input(FashionMNIST_param_id::BLOB_input, in);
	ex.extract(FashionMNIST_param_id::BLOB_output, out);

	// print the score for each class
	for (int j = 0; j < out.w; j++)
	{
		std::cout << out[j] << std::endl;
	}
	return 0;
}
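One caveat: this code feeds the image at its original size. If your test image is not already 28x28 (the input size FashionMNIST models usually expect), or if your training pipeline normalized the pixel values, you would resize and normalize before the input step, roughly like the sketch below (adjust to match your own preprocessing):

// Resize to the 28x28 input the network was trained on,
// then scale pixel values from [0, 255] to [0, 1].
ncnn::Mat in = ncnn::Mat::from_pixels_resize(m.data, ncnn::Mat::PIXEL_GRAY,
                                             m.cols, m.rows, 28, 28);
const float norm_vals[1] = { 1.0f / 255.0f };
in.substract_mean_normalize(0, norm_vals);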

The result of running it:

In the same way, the optimized model is also converted to header files for comparison.

ncnn2mem.exe FashionMNIST_OPT.param FashionMNIST_OPT.bin FashionMNIST_OPT.id.h FashionMNIST_OPT.mem.h

Calling code:

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <ncnn/net.h>
#include "FashionMNIST_OPT.id.h"
#include "FashionMNIST_OPT.mem.h"

int main() {
	// load the optimized network structure and weights from memory
	ncnn::Net net;
	net.load_param(FashionMNIST_OPT_param_bin);
	net.load_model(FashionMNIST_OPT_bin);

	// read the test image as single-channel grayscale
	cv::Mat m = cv::imread("C:\\123.jpg", cv::IMREAD_GRAYSCALE);
	ncnn::Mat in = ncnn::Mat::from_pixels(m.data, ncnn::Mat::PIXEL_GRAY, m.cols, m.rows);

	// run inference
	ncnn::Mat out;
	ncnn::Extractor ex = net.create_extractor();
	ex.set_light_mode(true);
	ex.input(FashionMNIST_OPT_param_id::BLOB_input, in);
	ex.extract(FashionMNIST_OPT_param_id::BLOB_output, out);

	// print the score for each class
	for (int j = 0; j < out.w; j++)
	{
		std::cout << out[j] << std::endl;
	}
	return 0;
}

The results of the operation are as follows: 

As you can see, the results are almost identical; the remaining differences come from the reduced precision of float16.


Origin blog.csdn.net/zhou_438/article/details/112436476