MegEngine usage tips: How to evaluate MegCC model performance

MegCC is a deep learning model compiler with the following features:

  • Extremely lightweight runtime: only the required compute kernels are kept in the binary; for example, the runtime for MobileNet v1 is only 81KB
  • High performance: every operation is carefully optimized by experts
  • Portable: generates only computation code, so it is easy to compile and use on Linux, Android, TEE and BareMetal
  • Low memory usage and instant startup: model optimization and memory planning are done at compile time, achieving state-of-the-art memory usage with no extra CPU cost during inference

MegCC provides a basic Benchmark module that tests the inference performance of various models, collects per-kernel performance data during inference, and helps analyze model performance bottlenecks.

How to use the MegCC benchmark

Introduction

MegCC Benchmark is a simple tool for getting benchmark results of different models in MegCC. Its file structure is as follows:

├── clean.sh
├── CMakeLists.txt
├── main.cpp
├── model
│   ├── model_arm.json
│   ├── model_riscv.json
│   ├── model_x86.json
│   └── request.txt
├── python
│   ├── example.py
│   ├── format.sh
│   └── src
│       ├── benchmark.py
│       └── models.py
├── README.md
├── src
│   ├── benchmark.h
│   ├── build_config.h.in
│   ├── CCbenchmark.cpp
│   ├── CCbenchmark.h
│   ├── MGEbenchmark.cpp
│   └── MGEbenchmark.h
└── tools
    ├── cc_analysis.py
    └── inference_visual.py

The src directory contains a C++ application for running benchmarks on different platforms. The python directory contains model conversion, other related preparation steps, and benchmarking examples, and the tools directory provides scripts that can be used to analyze the benchmark results.

Supported models

mobilenetv2, resnet18, efficientnetb0, shufflenetv2, vgg16

Requirements

mgeconvert > v1.0.2
onnx==1.11.0
torch==1.10.0
cmake >=3.15.2
clang
ninja
torchvision==0.11.1

mgeconvert can be installed with the following command:


git clone https://github.com/MegEngine/mgeconvert.git
cd mgeconvert
git checkout master
python3 -m pip install . --user --install-option="--targets=onnx"

Get the model and run the benchmark example

cd megcc/benchmark
export MEGCC_MGB_TO_TINYNN_PATH=<your_mgb_to_tinynn_path>
python3 python/example.py

example.py downloads the corresponding models from torchvision and converts them to ONNX; the ONNX models are then converted to MegCC models through mgeconvert and mgb-to-tinynn.
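
For reference, the torchvision-to-ONNX step looks roughly like the sketch below; the model choice, input shape, output file name and opset version are illustrative assumptions, and the subsequent ONNX-to-MegCC conversion is performed by mgeconvert and mgb-to-tinynn as described above:

import torch
import torchvision.models as models

# Load a pretrained model from torchvision (torchvision==0.11.1 API).
model = models.mobilenet_v2(pretrained=True)
model.eval()

# Export to ONNX; input shape and opset version are illustrative choices.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "mobilenetv2.onnx",
    input_names=["data"],
    output_names=["output"],
    opset_version=11,
)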

If you want to run on other platforms, refer to the example and add your own run_platform_xxx function in BenchmarkRunner; the example provides an SSH remote-device test template, and a rough sketch of such a runner is shown below.
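
As an illustration only, a new SSH-based runner might look like the following sketch; the host, paths, function name and the benchmark binary's command-line argument order are all assumptions rather than MegCC's actual API, so adapt it to the real template in example.py:

import subprocess

REMOTE = "user@my-device"                       # assumption: device reachable over SSH
REMOTE_DIR = "/data/local/tmp/megcc_benchmark"  # assumption: writable directory on the device

def run_platform_my_device(benchmark_bin, model_file, log_level=0):
    """Push the benchmark binary and model to the device, run them over SSH,
    and return the raw log text. The binary's argument order is an assumption."""
    subprocess.run(["ssh", REMOTE, "mkdir -p " + REMOTE_DIR], check=True)
    subprocess.run(["scp", benchmark_bin, model_file, REMOTE + ":" + REMOTE_DIR], check=True)
    bin_name = benchmark_bin.rsplit("/", 1)[-1]
    model_name = model_file.rsplit("/", 1)[-1]
    result = subprocess.run(
        ["ssh", REMOTE, "cd %s && ./%s %s %d" % (REMOTE_DIR, bin_name, model_name, log_level)],
        check=True, capture_output=True, text=True)
    return result.stdout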

Analyzing MegCC logs

After example.py runs, an output directory is generated under the benchmark directory, containing each model's inference log and profile log. These logs can be visualized with the analysis scripts below for further analysis.

The generated log example is as follows:

output/
├── megcc-x86-efficientnetb0-0-log-local.txt
├── megcc-x86-efficientnetb0-3-log-local.txt
├── megcc-x86-mobilenetv2-0-log-local.txt
├── megcc-x86-mobilenetv2-3-log-local.txt
├── megcc-x86-resnet18-0-log-local.txt
├── megcc-x86-resnet18-3-log-local.txt
├── megcc-x86-resnet50-0-log-local.txt
├── megcc-x86-resnet50-3-log-local.txt
├── megcc-x86-shufflenetv2-0-log-local.txt
├── megcc-x86-shufflenetv2-3-log-local.txt
├── megcc-x86-vgg11-0-log-local.txt
├── megcc-x86-vgg11-3-log-local.txt
├── megcc-x86-vgg16-0-log-local.txt
└── megcc-x86-vgg16-3-log-local.txt

The 0 in the file name indicates a speed-measurement-only log, while 3 indicates a profiling log
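
A small helper (not part of MegCC) can split these files by log type; the naming pattern megcc-<arch>-<model>-<level>-log-<device>.txt is inferred from the example listing above:

import re
from pathlib import Path

# File names follow megcc-<arch>-<model>-<level>-log-<device>.txt.
PATTERN = re.compile(r"megcc-(?P<arch>[^-]+)-(?P<model>.+)-(?P<level>\d)-log-(?P<device>[^.]+)\.txt")

def classify_logs(output_dir):
    """Split the generated logs into speed-test (level 0) and profile (level 3) lists."""
    speed, profile = [], []
    for path in sorted(Path(output_dir).glob("*.txt")):
        m = PATTERN.fullmatch(path.name)
        if m is None:
            continue
        (speed if m.group("level") == "0" else profile).append(path.name)
    return speed, profile

print(classify_logs("output"))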

Note: matplotlib needs to be installed to run the analysis scripts below

Visualize the inference results of different models

The tools/inference_visual.py tool under benchmark can be used to analyze the speed-measurement logs and obtain a performance comparison of the models' inference. Usage:

python3 tools/inference_visual.py output -o figure_dir 

After running, the following performance comparison chart will be generated in the figure_dir directory:

[Figure: inference performance comparison of the tested models]

Visualize analysis results of different kernels in different models

The tools/cc_analysis.py tool under benchmark can be used to analyze the profiling logs and produce pie charts of the time share of the 10 most time-consuming kernels in each model's inference. Usage:

python3 tools/cc_analysis.py output -o figure_dir

After running, the related pie chart will also be generated in the figure_dir directory, as shown in the following example:

[Figure: pie chart of per-kernel time share for one model]
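
To give a sense of what cc_analysis.py produces, the following standalone sketch draws a similar pie chart from made-up kernel timings; the kernel names and numbers are placeholders, not real profile data:

import matplotlib.pyplot as plt

# Placeholder kernel timings in ms; real values come from the profile logs.
kernel_times = {
    "conv2d_3x3": 12.4, "conv2d_1x1": 9.8, "depthwise_conv": 7.1,
    "matmul": 5.5, "pooling": 2.3, "elemwise": 1.9,
}
top = sorted(kernel_times.items(), key=lambda kv: kv[1], reverse=True)[:10]
labels = [name for name, _ in top]
times = [t for _, t in top]
plt.pie(times, labels=labels, autopct="%1.1f%%")
plt.title("Top kernels by inference time (placeholder data)")
plt.savefig("kernel_pie_example.png")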

Appendix

For more information about MegEngine, you can view the documentation, visit the official website of the deep learning framework MegEngine and its GitHub project, or join the MegEngine user QQ group: 1029741705

