After installing CUDA and cuDNN, you can compile MXNet from source.
1. Download the source code
git clone --recursive https://github.com/apache/incubator-mxnet mxnet
Or, through a proxy:
git clone --recursive https://g.0x6.xyz/https://github.com/apache/incubator-mxnet mxnet
If the submodule download is slow, modify the .gitmodules file so that the submodules are also fetched through the proxy:
[submodule "3rdparty/dmlc-core"]
path = 3rdparty/dmlc-core
url = https://g.0x6.xyz/https://github.com/dmlc/dmlc-core.git
[submodule "3rdparty/ps-lite"]
path = 3rdparty/ps-lite
url = https://g.0x6.xyz/https://github.com/dmlc/ps-lite
[submodule "3rdparty/dlpack"]
path = 3rdparty/dlpack
url = https://g.0x6.xyz/https://github.com/dmlc/dlpack
[submodule "3rdparty/openmp"]
path = 3rdparty/openmp
url = https://g.0x6.xyz/https://github.com/llvm-mirror/openmp
[submodule "3rdparty/googletest"]
path = 3rdparty/googletest
url = https://g.0x6.xyz/https://github.com/google/googletest.git
[submodule "3rdparty/mkldnn"]
path = 3rdparty/mkldnn
url = https://g.0x6.xyz/https://github.com/oneapi-src/oneDNN.git
[submodule "3rdparty/tvm"]
path = 3rdparty/tvm
url = https://g.0x6.xyz/https://github.com/apache/incubator-tvm.git
[submodule "3rdparty/onnx-tensorrt"]
path = 3rdparty/onnx-tensorrt
url = https://g.0x6.xyz/https://github.com/onnx/onnx-tensorrt.git
[submodule "3rdparty/nvidia_cub"]
path = 3rdparty/nvidia_cub
url = https://g.0x6.xyz/https://github.com/NVlabs/cub.git
[submodule "3rdparty/libzip"]
path = 3rdparty/libzip
url = https://g.0x6.xyz/https://github.com/nih-at/libzip.git
[submodule "3rdparty/intgemm"]
path = 3rdparty/intgemm
url = https://g.0x6.xyz/https://github.com/kpu/intgemm
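Instead of editing every URL by hand, the rewrite above can be applied in one pass. A sketch using sed (available from Git Bash on Windows); g.0x6.xyz is the proxy used throughout this post:

```shell
# Prefix every submodule URL with the proxy, then re-sync the
# submodule config and fetch. Run from the root of the mxnet checkout.
sed -i 's#url = https://github.com/#url = https://g.0x6.xyz/https://github.com/#' .gitmodules
git submodule sync --recursive
git submodule update --init --recursive
```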
2. Install dependencies
(1) Install the build tools with Chocolatey:
choco install python git 7zip cmake ninja opencv
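After the install finishes, open a new shell (so PATH is refreshed) and sanity-check that the tools are available:

```shell
# Each command should print a version string; a failure means the
# corresponding package is not yet on PATH.
python --version
git --version
cmake --version
ninja --version
```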
(2) Compile OpenBLAS:
Download the source code (if using a proxy, prefix the URL as above): git clone --recursive https://github.com/xianyi/OpenBLAS.git
Enter the source directory: cd OpenBLAS
In an Anaconda Command Prompt, install the OpenBLAS build dependencies:
conda update -n base conda
conda config --add channels conda-forge
conda install -y cmake flang clangdev perl libflang ninja
Load the VS2019 compiler environment:
"c:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Auxiliary/Build/vcvars64.bat"
Point the build at the conda toolchain and generate the Ninja build files:
set "LIB=%CONDA_PREFIX%\Library\lib;%LIB%"
set "CPATH=%CONDA_PREFIX%\Library\include;%CPATH%"
mkdir build
cd build
cmake .. -G "Ninja" -DCMAKE_CXX_COMPILER=clang-cl -DCMAKE_C_COMPILER=clang-cl -DCMAKE_Fortran_COMPILER=flang -DBUILD_WITHOUT_LAPACK=no -DNOFORTRAN=0 -DDYNAMIC_ARCH=ON -DCMAKE_BUILD_TYPE=Release
Compile and build
cmake --build . --config Release
Then download the latest 64-bit OpenBLAS release zip from https://github.com/xianyi/OpenBLAS/releases and copy its lib and include directories into the build directory created above.
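The copy step can look like this (the extracted folder name OpenBLAS-x64 is a placeholder; use whatever directory the release zip unpacks to):

```shell
# Merge the release package's headers and import libraries into the
# build tree, so OpenBLAS_HOME can later point at a single directory.
cp -r OpenBLAS-x64/include build/
cp -r OpenBLAS-x64/lib build/
```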
3. Configure the environment
Set the following environment variables so CMake can locate the dependencies:
CUDA_PATH: path to the CUDA installation
OpenBLAS_HOME: path to the OpenBLAS build directory above
OpenCV_DIR: path to the OpenCV build directory
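For example, from a Command Prompt (the paths below are placeholders; substitute your actual install locations):

```shell
:: setx writes the variables into the user environment;
:: newly opened shells (and cmake-gui) will pick them up.
setx CUDA_PATH "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2"
setx OpenBLAS_HOME "C:\src\OpenBLAS\build"
setx OpenCV_DIR "C:\tools\opencv\build"
```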
4. Build
Use cmake-gui to configure and generate the VS2019 project, then open the generated solution in Visual Studio 2019 and build the Release configuration.
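Equivalently, the configure/generate/build cycle can be driven entirely from the command line. A sketch following the MXNet build-from-source guide; the flag values shown are typical, adjust them to your setup:

```shell
cd mxnet
mkdir build
cd build
:: Configure for the VS2019 x64 toolchain with CUDA, cuDNN, OpenCV and OpenBLAS.
cmake .. -G "Visual Studio 16 2019" -A x64 ^
    -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_OPENCV=ON -DUSE_BLAS=open
:: Build the Release configuration.
cmake --build . --config Release
```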
References:
https://mxnet.apache.org/get_started/build_from_source
https://github.com/xianyi/OpenBLAS/wiki/How-to-use-OpenBLAS-in-Microsoft-Visual-Studio