【Tensorflow】Building and installing the TensorFlow GPU version from source with bazel on Windows 10

Copyright notice: this is the author's original article; please do not repost without permission. https://blog.csdn.net/heiheiya https://blog.csdn.net/heiheiya/article/details/88946716

In 【Tensorflow】Building and installing the TensorFlow CPU version from source with bazel on Windows 10 I walked through the CPU build. This post builds the GPU version of TensorFlow; the preliminary setup and the build steps are the same as for the CPU version, so they are not repeated here.

Unlike the CPU post, this build uses TensorFlow 1.12, because TensorFlow 1.9 does not yet support GPU builds on Windows and compiling it fails with:

No toolchain found for cpu 'x64_windows'. Valid cpus are: [
  k8,
  piii,
  arm,
  darwin,
  ppc,
].

No amount of tweaking the configuration files got it to build, so switching to a newer version was the only way forward.

Correspondingly, bazel was switched to version 0.15.0.
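To confirm that the expected bazel is the one on the PATH before configuring, bazel's built-in version command can be used; it should report 0.15.0 here:

bazel version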

Switch to the r1.12 branch.

git checkout r1.12
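If the TensorFlow source tree has not been cloned yet, the usual sequence is roughly the following (the URL is the official GitHub repository; the last step is the same checkout as above):

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout r1.12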

Run the configuration script.

python ./configure.py

Below are the options I chose.

WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.15.0 installed.
Please specify the location of python. [Default is D:\software\Anaconda3\envs\mytensorflow_gpu\python.exe]:


Found possible Python library paths:
  D:\software\Anaconda3\envs\mytensorflow_gpu\lib\site-packages
Please input the desired Python library path to use.  Default is [D:\software\Anaconda3\envs\mytensorflow_gpu\lib\site-packages]

Do you wish to build TensorFlow with Apache Ignite support? [Y/n]: n
No Apache Ignite support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: N
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 9.2


Please specify the location where CUDA 9.2 toolkit is installed. Refer to README.md for more details. [Default is C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.2]:


Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: 7.1


Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.2]:


Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 6.1


Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is /arch:AVX]: /arch:AVX2


Would you like to override eigen strong inline for some C++ compilation to reduce the compilation time? [Y/n]: Y
Eigen strong inline overridden.

Start the build.

bazel build --config=opt --config=cuda --copt=-nvcc_options=disable-warnings //tensorflow/tools/pip_package:build_pip_package
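The GPU build takes a long time and is memory-hungry on Windows. If bazel exhausts RAM or saturates the machine, its generic resource flags can be appended to the same command; the numbers below are only an illustration and should be tuned for your machine:

bazel build --config=opt --config=cuda --copt=-nvcc_options=disable-warnings --jobs=4 --local_resources 2048,.5,1.0 //tensorflow/tools/pip_package:build_pip_package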

The build completed successfully.

Once the build finishes, the pip package can be created.

bazel-bin\tensorflow\tools\pip_package\build_pip_package C:/tmp/tensorflow_pkg

A .whl file is generated successfully in C:/tmp/tensorflow_pkg.
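Install the generated wheel into the same Anaconda environment that was used for the build. The exact filename depends on the TensorFlow and Python versions, so the name below is only an example; use whatever file actually appears in C:/tmp/tensorflow_pkg:

pip install C:/tmp/tensorflow_pkg/tensorflow-1.12.0-cp36-cp36m-win_amd64.whl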

Now test it. Enter the Python environment and run:

import tensorflow as tf
a = tf.constant(3)
b = tf.constant(4)
sess = tf.Session()
print(sess.run(a+b))

The printed result is 7, and the console also shows GPU device log messages, which confirms that the GPU version of TensorFlow is installed correctly.
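For a more explicit check that this build can actually see the GPU, the standard TensorFlow 1.x helpers below can be used (a generic sanity check, not specific to this build):

import tensorflow as tf
from tensorflow.python.client import device_lib

# True if the CUDA build can see at least one GPU.
print(tf.test.is_gpu_available())

# Lists all local devices; a working GPU build includes a '/device:GPU:0' entry.
print(device_lib.list_local_devices())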


One error that came up during the build:

more than one instance of overloaded function "__hadd" matches the argument list:
            function "__hadd(int, int)"
            function "__hadd(__half, __half)"
            argument types are: (const Eigen::half, const Eigen::half)

For the fix, see: 【Tensorflow】Solution to the 'more than one instance of overloaded function "__hadd" matches the argument list' error.
