Summary of problems encountered when compiling and installing mxnet under Python (2)

Last time I talked about compiling and installing mxnet; this time I will talk about compiling and installing the build optimized for Intel CPU processors (mxnet-mkl), which is also needed for my work (in my case, performance improved by almost 10 times).

First of all, download the source code. I downloaded the latest release version here, mxnet-1.6.0. One small detail worth noting: mxnet-1.6.0 is also the last version to support Python 2; later versions no longer support it.

wget  https://github.com/apache/incubator-mxnet/archive/1.6.0.tar.gz

The first step is environment preparation.
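As a rough sketch of what this step typically involves (assuming an Ubuntu-style system, the 1.6.0 tarball downloaded above, and an Intel MKL installation that the build can find; package names and build flags are illustrative, not taken from the original setup):

# Build prerequisites (Ubuntu package names assumed)
sudo apt-get update
sudo apt-get install -y build-essential cmake git libopencv-dev python-dev python-pip

# Unpack the release downloaded above and build libmxnet with MKL-DNN enabled;
# USE_BLAS=mkl assumes Intel MKL is already installed and discoverable,
# and the source tree must contain the bundled 3rdparty sources (e.g. mkl-dnn)
tar -xzf 1.6.0.tar.gz
cd incubator-mxnet-1.6.0
make -j"$(nproc)" USE_OPENCV=1 USE_BLAS=mkl USE_MKLDNN=1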

The second step is to switch to the Python package directory in the source tree and install it:

cd python

python setup.py install
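A quick sanity check after installation is simply to import the package and print its version (for the 1.6.0 source above, this should print 1.6.0):

python -c "import mxnet as mx; print(mx.__version__)"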

 

The third step is to verify that the installation works correctly:

import mxnet as mx
import numpy as np

shape_x = (1, 10, 8)
shape_w = (1, 12, 8)

x_npy = np.random.normal(0, 1, shape_x)
w_npy = np.random.normal(0, 1, shape_w)

x = mx.sym.Variable('x')
w = mx.sym.Variable('w')
# On an MKL build, batch_dot is executed through an MKL SGEMM call,
# which is what shows up in the verbose log below
y = mx.sym.batch_dot(x, w, transpose_b=True)
exe = y.simple_bind(mx.cpu(), x=x_npy.shape, w=w_npy.shape)

exe.forward(is_train=False)
o = exe.outputs[0]
t = o.asnumpy()
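As an additional quick check (a small sketch using the runtime feature API available in MXNet 1.5+), you can confirm that the installed binary was built with MKL-DNN support:

# Check whether the installed libmxnet was compiled with MKL-DNN support
from mxnet.runtime import Features

features = Features()
print(features.is_enabled('MKLDNN'))  # expect True for an MKL-DNN build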

For more detailed verification results, enable MKL's verbose mode:

# You can enable the MKL_VERBOSE flag by setting an environment variable:
export MKL_VERBOSE=1

Example output:

Numpy + Intel(R) MKL: THREADING LAYER: (null)
Numpy + Intel(R) MKL: setting Intel(R) MKL to use INTEL OpenMP runtime
Numpy + Intel(R) MKL: preloading libiomp5.so runtime
MKL_VERBOSE Intel(R) MKL 2019.0 Update 3 Product build 20190125 for Intel(R) 64 architecture Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) enabled processors, Lnx 2.40GHz lp64 intel_thread NMICDev:0
MKL_VERBOSE SGEMM(T,N,12,10,8,0x7f7f927b1378,0x1bc2140,8,0x1ba8040,8,0x7f7f927b1380,0x7f7f7400a280,12) 8.93ms CNR:OFF Dyn:1 FastMM:1 TID:0  NThr:40 WDiv:HOST:+0.000

The fourth step is to enable MKL-DNN graph optimization for inference:

export MXNET_SUBGRAPH_BACKEND=MKLDNN
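As a minimal sketch of how this takes effect (the network and input shape here are illustrative, not from the original workload), a hybridized Gluon model run on the CPU with MXNET_SUBGRAPH_BACKEND=MKLDNN exported beforehand lets the MKL-DNN subgraph backend fuse eligible operators such as Convolution + Activation:

import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Conv2D(channels=16, kernel_size=3, activation='relu'),
        nn.Conv2D(channels=32, kernel_size=3, activation='relu'),
        nn.GlobalAvgPool2D(),
        nn.Dense(10))
net.initialize(ctx=mx.cpu())
# Hybridize to build a static graph; the subgraph backend rewrites that graph
net.hybridize(static_alloc=True, static_shape=True)

x = mx.nd.random.uniform(shape=(1, 3, 224, 224), ctx=mx.cpu())
out = net(x)
mx.nd.waitall()   # wait for the asynchronous forward pass to finish
print(out.shape)  # (1, 10)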

For more on MKL-DNN computational-graph optimization, please refer to https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN

Finally, for the list of operators supported by MKL-DNN, please see https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md

 

Origin blog.csdn.net/jinhao_2008/article/details/104718114