TensorFlow Internals (2): Building and Installing

This article covers installing TensorFlow. There are two approaches: installing a pre-built package, and building from source.

For most users, installing a pre-built package is enough, and it is the simpler, more convenient route. It comes in several flavors: pip-based, Docker-based, VirtualEnv-based, and Anaconda-based. In all of them the basic procedure is the same: prepare a Python environment, then use pip (Python's package manager) to download and install the TensorFlow Python package. Since this is straightforward I won't repeat it here; you can search for it yourself or consult the very detailed writeups linked here: wikidoc

Building from source is for users who cannot find a suitable pre-built package for their platform, or who want to study TensorFlow's implementation in depth. Here we walk through the source-based installation in detail.

The TensorFlow website also has an article on building from source that is worth consulting: installing from source

So how does this article differ from those? I aim to complement the official manual: here the emphasis is on explaining how things work, while those references focus on the hands-on steps.

The reading order is entirely up to you: you can follow those manuals through a build first and then come back to this explanation, or read this article first and then practice.

Preparing the Environment

When building the TensorFlow project, many optional features can be toggled: GPU support, HDFS support, OpenGL, Google Cloud, XLA optimization, and so on. The more features you enable, the more dependencies TensorFlow has, and the more involved the environment setup becomes.

1. First we need to choose a platform and operating system:
Ubuntu and macOS are the two officially recommended platforms; in this article we use 64-bit Ubuntu 16.04 as the build platform.

2. Install the build tool Bazel:
Bazel is an open-source build system, also from Google, where it is widely used internally.

The need for a build system grows with the size of the software. When a project is small, we can invoke gcc by hand to compile and link the targets. As the project grows this becomes inefficient, so build tools appeared: we write a rule file that defines the build targets, and the build tool parses that file and invokes gcc to compile and produce the targets. As projects grew further, cross-platform builds became a requirement, and build tools added the ability to define different build rules for different platforms.

Similar build tools include Make, Maven, Gradle, GYP, and GN (the build tool currently used by Chromium), among others.

Modern build tools have grown ever more capable; many support multiple platforms, multiple languages, remote dependencies, and more.
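To make the rule-file idea concrete, here is a toy Python sketch of how a build tool might turn a declarative rule into a compiler invocation. The rule format and the compile_command helper are invented for illustration; real tools are far richer:

```python
# Toy model of a build rule file: each target declares its sources.
# This format is invented for illustration only.
RULES = {
    "hello-world": {"srcs": ["hello-world.cc"], "deps": []},
}

def compile_command(target, rules, compiler="g++"):
    """Derive the command a build tool would run for one rule."""
    srcs = " ".join(rules[target]["srcs"])
    return "{} -o {} {}".format(compiler, target, srcs)

print(compile_command("hello-world", RULES))
# → g++ -o hello-world hello-world.cc
```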

Installing Bazel is also simple; for details see: Install Bazel

Let's test the installation and get familiar with Bazel through an example, taken from the official Bazel examples.

First, fetch the example code:

git clone https://github.com/bazelbuild/examples/

The directory examples/cpp-tutorial has the following structure:

examples
└── cpp-tutorial
    ├──stage1
    │  ├── main
    │  │   ├── BUILD
    │  │   └── hello-world.cc
    │  └── WORKSPACE
    ├──stage2
    │  ├── main
    │  │   ├── BUILD
    │  │   ├── hello-world.cc
    │  │   ├── hello-greet.cc
    │  │   └── hello-greet.h
    │  └── WORKSPACE
    └──stage3
       ├── main
       │   ├── BUILD
       │   ├── hello-world.cc
       │   ├── hello-greet.cc
       │   └── hello-greet.h
       ├── lib
       │   ├── BUILD
       │   ├── hello-time.cc
       │   └── hello-time.h
       └── WORKSPACE

Let's look at the contents of a BUILD file; cpp-tutorial/stage1/main/BUILD reads:

# The cc_binary rule defines a binary target named
# hello-world, built from the source file hello-world.cc.
cc_binary(
    name = "hello-world",
    srcs = ["hello-world.cc"],
)

Building hello-world is simple; run:

bazel build //main:hello-world

cpp-tutorial/stage2/main/BUILD reads:

# The cc_library rule defines a library target named
# hello-greet, built from hello-greet.cc and
# hello-greet.h.
cc_library(
    name = "hello-greet",
    srcs = ["hello-greet.cc"],
    hdrs = ["hello-greet.h"],
)


# The cc_binary rule defines a binary target named
# hello-world, built from hello-world.cc, which
# depends on the in-package target hello-greet.
cc_binary(
    name = "hello-world",
    srcs = ["hello-world.cc"],
    deps = [
        ":hello-greet",
    ],
)

The build command is unchanged:

bazel build //main:hello-world

cpp-tutorial/stage3/main/BUILD reads:

# The cc_library rule defines a library target named
# hello-greet, built from hello-greet.cc and
# hello-greet.h.
cc_library(
    name = "hello-greet",
    srcs = ["hello-greet.cc"],
    hdrs = ["hello-greet.h"],
)

# The cc_binary rule defines a binary target named
# hello-world, built from hello-world.cc, which depends
# on the in-package target hello-greet and on the
# target //lib:hello-time from the lib package.
cc_binary(
    name = "hello-world",
    srcs = ["hello-world.cc"],
    deps = [
        ":hello-greet",
        "//lib:hello-time",
    ],
)

Now look at the BUILD file in the lib package:

# The cc_library rule defines a library target named
# hello-time, built from hello-time.cc and
# hello-time.h.
cc_library(
    name = "hello-time",
    srcs = ["hello-time.cc"],
    hdrs = ["hello-time.h"],
    visibility = ["//main:__pkg__"],
)

It indeed defines a target hello-time and makes it visible to the main package.
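Conceptually, that visibility check works like the sketch below. This is greatly simplified (real Bazel visibility also handles package groups, subpackages, and private defaults); is_visible is purely an illustration:

```python
def is_visible(visibility, consumer_pkg):
    """Simplified visibility rule: a target may be used from consumer_pkg
    if it is public or explicitly grants that package via __pkg__."""
    return ("//visibility:public" in visibility
            or "//%s:__pkg__" % consumer_pkg in visibility)

# //lib:hello-time declares visibility = ["//main:__pkg__"], so:
assert is_visible(["//main:__pkg__"], "main")        # main may depend on it
assert not is_visible(["//main:__pkg__"], "other")   # other packages may not
```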

The build command is still the same:

bazel build //main:hello-world

3. Install Python and its dependencies:

The main Python dependencies are:

  • numpy: the widely used scientific computing package for Python; it supports many matrix operations and provides optimized algorithms for high-dimensional computation.

  • dev: the Python development package, used for building extensions for Python; it contains the header files, static libraries, and other files that compiled extensions (written in C/C++) need at build time. TensorFlow is not written purely in Python: its core execution modules are written in C++ and CUDA, so this package is required.

  • pip: the Python package manager; it finds, downloads, installs, and uninstalls Python packages.

  • wheel: manages Python packages in the wheel (.whl) format.

The exact installation steps vary slightly with your Python version; for details see: installing the Python dependencies
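As a quick sanity check before building, you can verify that the pure-Python dependencies are importable. The check_deps helper below is hypothetical, not part of TensorFlow's tooling:

```python
import importlib.util

def check_deps(names):
    """Map each module name to whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None
            for name in names}

# On a correctly prepared machine, something like
# check_deps(["numpy", "pip", "wheel"]) should map every name to True.
```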

4. Install the GPU dependencies:

  • An Nvidia graphics card

  • The Nvidia graphics driver

  • CUDA Toolkit: Nvidia's SDK for general-purpose computation on the GPU. TensorFlow's core compute layer drives the GPU through the CUDA interfaces. The CUDA installer usually bundles a graphics driver.

  • cuDNN: Nvidia's highly optimized implementations of the CNN and RNN primitives used in deep learning. It relies internally on advanced techniques and interfaces that are not open-sourced, which is why its performance is so much higher.

The installation steps are not repeated here; see: installing the GPU dependencies

Building from Source

At this point the preparation is done and we can start building the project. The remaining work is fairly mechanical. The basic flow is: git clone the source, run the configure script to choose build options, run bazel build to build the target, run the resulting script to generate the TensorFlow package, and pip install that package. The exact commands are in the official manual: building TensorFlow. Here we focus on understanding a few questions:

1. How does the configure script set up the build options?

Before building, you run the configure script, which prompts for build options such as CUDA, OpenGL, HDFS, and Google Cloud support. How do these choices affect the subsequent build?

Let's take the Python configuration as an example of how this works, looking at the setup_python function in the configure script:

function setup_python {
  ## Set up python-related environment settings:
  ##
  ## The while loop below sets PYTHON_BIN_PATH, the path to the Python
  ## executable. `which` supplies a default; the user may type another path.
  ##
  while true; do
    fromuser=""
    if [ -z "$PYTHON_BIN_PATH" ]; then
      default_python_bin_path=$(which python || which python3 || true)
      read -p "Please specify the location of python. [Default is $default_python_bin_path]: " PYTHON_BIN_PATH
      fromuser="1"
      if [ -z "$PYTHON_BIN_PATH" ]; then
        PYTHON_BIN_PATH=$default_python_bin_path
      fi
    fi
    if [ -e "$PYTHON_BIN_PATH" ]; then
      break
    fi
    echo "Invalid python path. ${PYTHON_BIN_PATH} cannot be found" 1>&2
    if [ -z "$fromuser" ]; then
      exit 1
    fi
    PYTHON_BIN_PATH=""
    # Retry
  done


  ##
  ## The if block below sets PYTHON_LIB_PATH
  ##
  if [ -z "$PYTHON_LIB_PATH" ]; then
    # Split python_path into an array of paths, this allows path containing spaces
    IFS=',' read -r -a python_lib_path <<< "$(python_path)"

    if [ 1 = "$USE_DEFAULT_PYTHON_LIB_PATH" ]; then
      PYTHON_LIB_PATH=${python_lib_path[0]}
      echo "Using python library path: $PYTHON_LIB_PATH"

    else
      echo "Found possible Python library paths:"
      for x in "${python_lib_path[@]}"; do
        echo "  $x"
      done
      set -- "${python_lib_path[@]}"
      echo "Please input the desired Python library path to use.  Default is [$1]"
      read b || true
      if [ "$b" == "" ]; then
        PYTHON_LIB_PATH=${python_lib_path[0]}
        echo "Using python library path: $PYTHON_LIB_PATH"
      else
        PYTHON_LIB_PATH="$b"
      fi
    fi
  fi

  ##
  ## Check that PYTHON_BIN_PATH is valid; if not, abort the configure script
  ##
  if [ ! -x "$PYTHON_BIN_PATH" ]  || [ -d "$PYTHON_BIN_PATH" ]; then
    echo "PYTHON_BIN_PATH is not executable.  Is it the python binary?"
    exit 1
  fi

  local python_major_version
  python_major_version=$("${PYTHON_BIN_PATH}" -c 'from __future__ import print_function; import sys; print(sys.version_info[0]);' | head -c1)
  if [ -z "$python_major_version" ]; then
    echo -e "\n\nERROR: Problem getting python version.  Is $PYTHON_BIN_PATH the correct python binary?"
    exit 1
  fi

  # Convert python path to Windows style before writing into bazel.rc
  if is_windows; then
    PYTHON_BIN_PATH="$(cygpath -m "$PYTHON_BIN_PATH")"
    PYTHON_LIB_PATH="$(cygpath -m "$PYTHON_LIB_PATH")"
  fi


  ##
  ## The logic below persists the configuration to disk, in two files:
  ## .tf_configure.bazelrc and tools/python_bin_path.sh. Their roles are
  ## described below.
  ## 

  # Set-up env variables used by python_configure.bzl
  write_action_env_to_bazelrc "PYTHON_BIN_PATH" "$PYTHON_BIN_PATH"
  write_action_env_to_bazelrc "PYTHON_LIB_PATH" "$PYTHON_LIB_PATH"
  write_to_bazelrc "build --define PYTHON_BIN_PATH=\"$PYTHON_BIN_PATH\""
  write_to_bazelrc "build --define PYTHON_LIB_PATH=\"$PYTHON_LIB_PATH\""
  write_to_bazelrc "build --force_python=py$python_major_version"
  write_to_bazelrc "build --host_force_python=py$python_major_version"
  write_to_bazelrc "build --python${python_major_version}_path=\"$PYTHON_BIN_PATH\""
  write_to_bazelrc "test --force_python=py$python_major_version"
  write_to_bazelrc "test --host_force_python=py$python_major_version"
  write_to_bazelrc "test --define PYTHON_BIN_PATH=\"$PYTHON_BIN_PATH\""
  write_to_bazelrc "test --define PYTHON_LIB_PATH=\"$PYTHON_LIB_PATH\""
  write_to_bazelrc "run --define PYTHON_BIN_PATH=\"$PYTHON_BIN_PATH\""
  write_to_bazelrc "run --define PYTHON_LIB_PATH=\"$PYTHON_LIB_PATH\""

  # Write tools/python_bin_path.sh
  echo "export PYTHON_BIN_PATH=\"$PYTHON_BIN_PATH\"" > tools/python_bin_path.sh
}

Two helper functions are used here to save the configuration:

function write_to_bazelrc() {
  echo "$1" >> .tf_configure.bazelrc
}

function write_action_env_to_bazelrc() {
  write_to_bazelrc "build --action_env $1=\"$2\""
}

On success, this produces the file .tf_configure.bazelrc, with contents like:

build --action_env PYTHON_BIN_PATH="/usr/bin/python"
build --action_env PYTHON_LIB_PATH="/usr/local/lib/python2.7/dist-packages"
build --define PYTHON_BIN_PATH="/usr/bin/python"
build --define PYTHON_LIB_PATH="/usr/local/lib/python2.7/dist-packages"
build --force_python=py2
build --host_force_python=py2
build --python2_path="/usr/bin/python"
test --force_python=py2
test --host_force_python=py2
test --define PYTHON_BIN_PATH="/usr/bin/python"
test --define PYTHON_LIB_PATH="/usr/local/lib/python2.7/dist-packages"
run --define PYTHON_BIN_PATH="/usr/bin/python"
run --define PYTHON_LIB_PATH="/usr/local/lib/python2.7/dist-packages"
build:opt --cxxopt=-march=native --copt=-march=native
build --action_env TF_NEED_CUDA="0"
build --action_env TF_NEED_OPENCL="0"

This file is consumed by the bazel build later. As you can see, it records the parameters that need to be passed to bazel at build time. Its exact contents will vary with your configuration choices, which is normal.
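Since every line of .tf_configure.bazelrc has the shape `command option`, it is easy to inspect programmatically. A small sketch (parse_bazelrc is a hypothetical helper, written here only for illustration):

```python
from collections import defaultdict

def parse_bazelrc(text):
    """Group bazelrc lines by the bazel command they apply to."""
    opts = defaultdict(list)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        command, _, option = line.partition(" ")
        opts[command].append(option)
    return dict(opts)

sample = (
    'build --action_env PYTHON_BIN_PATH="/usr/bin/python"\n'
    "build --force_python=py2\n"
    "test --force_python=py2\n"
)
print(parse_bazelrc(sample))
# → {'build': ['--action_env PYTHON_BIN_PATH="/usr/bin/python"',
#              '--force_python=py2'], 'test': ['--force_python=py2']}
```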

2. What is the build target, and how is it built?

For a CPU-only build, the command is:

$ bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package

With GPU support, the command is:

$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package 

Let's look at the build target build_pip_package. Recalling the earlier Bazel example, we find the BUILD file that defines it, tensorflow/tools/pip_package/BUILD; the target is defined as follows:

sh_binary(
    name = "build_pip_package",
    srcs = ["build_pip_package.sh"],
    data = select({
        "//tensorflow:windows": [":simple_console_for_windows"],
        "//tensorflow:windows_msvc": [":simple_console_for_windows"],
        "//conditions:default": [
            ":licenses",
            "MANIFEST.in",
            "README",
            "setup.py",
            ":included_headers",
            ":simple_console",
            "//tensorflow:tensorflow_py",
            "//tensorflow/contrib/graph_editor:graph_editor_pip",
            "//tensorflow/contrib/keras:keras",
            "//tensorflow/contrib/labeled_tensor:labeled_tensor_pip",
            "//tensorflow/contrib/ndlstm:ndlstm",
            "//tensorflow/contrib/nn:nn_py",
            "//tensorflow/contrib/session_bundle:session_bundle_pip",
            "//tensorflow/contrib/signal:signal_py",
            "//tensorflow/contrib/slim:slim",
            "//tensorflow/contrib/slim/python/slim/data:data_pip",
            "//tensorflow/contrib/slim/python/slim/nets:nets_pip",
            "//tensorflow/contrib/tpu:tpu_estimator",
            "//tensorflow/contrib/tpu:tpu_helper_library",
            "//tensorflow/contrib/tpu:tpu_py",
            "//tensorflow/contrib/specs:specs",
            "//tensorflow/contrib/tensor_forest:init_py",
            "//tensorflow/contrib/tensor_forest/hybrid:hybrid_pip",
            "//tensorflow/contrib/predictor:predictor_pip",
            "//tensorflow/examples/tutorials/mnist:package",
            "//tensorflow/python:distributed_framework_test_lib",
            "//tensorflow/python:meta_graph_testdata",
            "//tensorflow/python:util_example_parser_configuration",
            "//tensorflow/python/debug:debug_pip",
            "//tensorflow/python/saved_model:saved_model",
            "//tensorflow/python/tools:tools_pip",
        ],
    }) + if_mkl(["//third_party/mkl:intel_binary_blob"]),
)

Here we run into a new Bazel rule, sh_binary, as well as a select function. Let's look at what they do:

sh_binary defines an executable Bourne shell script target: name is the target's name, srcs is the script file, which must be executable, and any other files the script needs at run time are listed in the data attribute. Once the target is built, all of these dependencies appear in the target's runfiles directory.

select returns different values depending on bazel's command-line parameters.
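Conceptually, select is a lookup keyed by the active build configuration, with //conditions:default as the fallback. A rough Python analogue (this is an illustration, not Bazel's implementation — Bazel evaluates real config_setting conditions):

```python
def select(branches, active_condition):
    """Pick the branch matching the active configuration; fall back
    to //conditions:default when no condition matches."""
    return branches.get(active_condition, branches["//conditions:default"])

data = select({
    "//tensorflow:windows": [":simple_console_for_windows"],
    "//conditions:default": [":licenses", ":simple_console"],
}, active_condition="//conditions:default")
print(data)  # → [':licenses', ':simple_console']
```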

Putting this together: the executable script behind build_pip_package is build_pip_package.sh, and the value of data depends on bazel's command line. Ignoring the Windows branches for now, build_pip_package depends on //tensorflow:tensorflow_py, //tensorflow/contrib/graph_editor:graph_editor_pip, and many other targets; we won't examine each of them here. When building build_pip_package, bazel recursively builds all of these dependencies.
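That recursive build amounts to a post-order walk of the dependency graph: every dependency is built before the target that needs it. A toy sketch, over an invented miniature of the real graph:

```python
def build_order(target, deps, seen=None, order=None):
    """Post-order traversal: dependencies appear before dependents."""
    seen = set() if seen is None else seen
    order = [] if order is None else order
    if target in seen:
        return order
    seen.add(target)
    for dep in deps.get(target, []):
        build_order(dep, deps, seen, order)
    order.append(target)
    return order

# Invented miniature of the real dependency graph:
DEPS = {
    "build_pip_package": ["tensorflow_py", "tools_pip"],
    "tensorflow_py": ["tensorflow_core"],
}
print(build_order("build_pip_package", DEPS))
# → ['tensorflow_core', 'tensorflow_py', 'tools_pip', 'build_pip_package']
```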

Next, let's look at the shell script build_pip_package.sh; its main work happens in the main function:

function main() {

  ## 
  ## The code below checks the arguments: the caller must supply a
  ## destination directory, where the final whl package will be written.
  ## 
  if [ $# -lt 1 ] ; then
    echo "No destination dir provided"
    exit 1
  fi

  DEST=$1
  TMPDIR=$(mktemp -d -t tmp.XXXXXXXXXX)

  GPU_FLAG=""
  while true; do
    if [[ "$1" == "--gpu" ]]; then
      GPU_FLAG="--project_name tensorflow_gpu"
    fi
    shift

    if [[ -z "$1" ]]; then
      break
    fi
  done

  echo $(date) : "=== Using tmpdir: ${TMPDIR}"

  if [ ! -d bazel-bin/tensorflow ]; then
    echo "Could not find bazel-bin.  Did you run from the root of the build tree?"
    exit 1
  fi


  ##
  ## The code below copies the build outputs to the destination. The
  ## source layout differs between systems, and bazel updates have also
  ## changed the runfiles layout between versions; the code here handles
  ## all of these layouts. The runfiles directory is where the outputs of
  ## all the dependencies built earlier end up.
  ##

  if is_windows; then
    rm -rf ./bazel-bin/tensorflow/tools/pip_package/simple_console_for_window_unzip
    mkdir -p ./bazel-bin/tensorflow/tools/pip_package/simple_console_for_window_unzip
    echo "Unzipping simple_console_for_windows.zip to create runfiles tree..."
    unzip -o -q ./bazel-bin/tensorflow/tools/pip_package/simple_console_for_windows.zip -d ./bazel-bin/tensorflow/tools/pip_package/simple_console_for_window_unzip
    echo "Unzip finished."
    # runfiles structure after unzip the python binary
    cp -R \
      bazel-bin/tensorflow/tools/pip_package/simple_console_for_window_unzip/runfiles/org_tensorflow/tensorflow \
      "${TMPDIR}"
    mkdir "${TMPDIR}/external"
    # Note: this makes an extra copy of org_tensorflow.
    cp_external \
      bazel-bin/tensorflow/tools/pip_package/simple_console_for_window_unzip/runfiles \
      "${TMPDIR}/external"
    RUNFILES=bazel-bin/tensorflow/tools/pip_package/simple_console_for_window_unzip/runfiles/org_tensorflow
  elif [ ! -d bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow ]; then
    # Really old (0.2.1-) runfiles, without workspace name.
    cp -R \
      bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/tensorflow \
      "${TMPDIR}"
    mkdir "${TMPDIR}/external"
    cp_external \
      bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/external \
      "${TMPDIR}/external"
    RUNFILES=bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles
    # Copy MKL libs over so they can be loaded at runtime
    if [ -d bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/_solib_k8/_U_S_Sthird_Uparty_Smkl_Cintel_Ubinary_Ublob___Uthird_Uparty_Smkl ]; then
      mkdir "${TMPDIR}/_solib_k8"
        cp -R \
            bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/_solib_k8/_U_S_Sthird_Uparty_Smkl_Cintel_Ubinary_Ublob___Uthird_Uparty_Smkl \
        "${TMPDIR}/_solib_k8"
    fi
  else
    if [ -d bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/external ]; then
      # Old-style runfiles structure (--legacy_external_runfiles).
      cp -R \
        bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/tensorflow \
        "${TMPDIR}"
      mkdir "${TMPDIR}/external"
      cp_external \
        bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/external \
        "${TMPDIR}/external"
      # Copy MKL libs over so they can be loaded at runtime
      if [ -d bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/_solib_k8/_U_S_Sthird_Uparty_Smkl_Cintel_Ubinary_Ublob___Uthird_Uparty_Smkl ]; then
        mkdir "${TMPDIR}/_solib_k8"
        cp -R \
          bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/_solib_k8/_U_S_Sthird_Uparty_Smkl_Cintel_Ubinary_Ublob___Uthird_Uparty_Smkl \
          "${TMPDIR}/_solib_k8"
      fi
    else
      # New-style runfiles structure (--nolegacy_external_runfiles).
      cp -R \
        bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/tensorflow \
        "${TMPDIR}"
      mkdir "${TMPDIR}/external"
      # Note: this makes an extra copy of org_tensorflow.
      cp_external \
        bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles \
        "${TMPDIR}/external"
      # Copy MKL libs over so they can be loaded at runtime
      if [ -d bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/_solib_k8/_U_S_Sthird_Uparty_Smkl_Cintel_Ubinary_Ublob___Uthird_Uparty_Smkl ]; then
        mkdir "${TMPDIR}/_solib_k8"
            cp -R \
                bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/_solib_k8/_U_S_Sthird_Uparty_Smkl_Cintel_Ubinary_Ublob___Uthird_Uparty_Smkl \
          "${TMPDIR}/_solib_k8"
      fi
    fi
    RUNFILES=bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow
  fi

  # protobuf pip package doesn't ship with header files. Copy the headers
  # over so user defined ops can be compiled.
  mkdir -p ${TMPDIR}/google
  mkdir -p ${TMPDIR}/third_party
  pushd ${RUNFILES%org_tensorflow}
  for header in $(find protobuf -name \*.h); do
    mkdir -p "${TMPDIR}/google/$(dirname ${header})"
    cp "$header" "${TMPDIR}/google/$(dirname ${header})/"
  done
  popd
  cp -R $RUNFILES/third_party/eigen3 ${TMPDIR}/third_party


  #
  # The code below copies the files a Python whl package
  # requires: MANIFEST.in, README, setup.py
  #
  cp tensorflow/tools/pip_package/MANIFEST.in ${TMPDIR}
  cp tensorflow/tools/pip_package/README ${TMPDIR}
  cp tensorflow/tools/pip_package/setup.py ${TMPDIR}

  # Before we leave the top-level directory, make sure we know how to
  # call python.
  source tools/python_bin_path.sh


  # 
  # Finally, the code below invokes Python to produce the whl package
  # 
  pushd ${TMPDIR}
  rm -f MANIFEST
  echo $(date) : "=== Building wheel"
  "${PYTHON_BIN_PATH:-python}" setup.py bdist_wheel ${GPU_FLAG} >/dev/null
  mkdir -p ${DEST}
  cp dist/* ${DEST}
  popd
  rm -rf ${TMPDIR}
  echo $(date) : "=== Output wheel file is in: ${DEST}"
}

As you can see, build_pip_package.sh simply packages the build results into a wheel-format Python package.

Once the script target has been built, run it to generate the wheel:

$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

Finally, install the generated wheel with pip:

$ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.6.0-py2-none-any.whl
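The wheel filename produced above encodes compatibility information in the PEP 427 form name-version-pythontag-abitag-platformtag.whl (an optional build tag can also appear; the sketch below ignores it):

```python
def parse_wheel_name(filename):
    """Split a build-tag-free wheel filename into its PEP 427 fields."""
    stem = filename[: -len(".whl")]
    name, version, py_tag, abi_tag, plat_tag = stem.split("-")
    return {"name": name, "version": version,
            "python": py_tag, "abi": abi_tag, "platform": plat_tag}

print(parse_wheel_name("tensorflow-1.6.0-py2-none-any.whl"))
# → {'name': 'tensorflow', 'version': '1.6.0', 'python': 'py2',
#    'abi': 'none', 'platform': 'any'}
```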

Testing the Installation

To check that the installation succeeded, run a short test; the following example comes from the official site:

Invoke Python:

$ python

In the interactive Python shell, enter this short program:

# Python
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

If the system prints the following, everything went well:

Hello, TensorFlow!

Reposted from blog.csdn.net/gaofeipaopaotang/article/details/80499025