"TensorFlow2.0 official version of the tutorial" minimalist installation TF2.0 official version (CPU & GPU) Tutorial

0 Introduction

This morning, TensorFlow officially released version 2.0.

Many netizens say that TensorFlow 2.0 is easier to use than PyTorch, and that this comprehensive upgrade is ready to change the deep learning framework landscape.

This article walks you through installing the official TF2.0 release (CPU and GPU) in the simplest way I found after stepping on the pitfalls myself, so you can quickly try out the official release.

Without further ado, let's officially begin the tutorial.

 

1 Environment Preparation

I am currently on Windows 10, using conda to manage my Python environments, installing cuda and cudnn via conda (for GPU support), and installing tensorflow2.0 via pip. After some trial and error this is the simplest installation, with no complex environment configuration required.

(Ubuntu and macOS users can follow the same method; since conda supports multiple platforms, there should be no problems. If many people run into issues, leave a comment and I will add an Ubuntu installation tutorial later.)

1.0 Preparing the conda environment

conda is an easy-to-use Python management tool that lets you conveniently maintain multiple Python environments. Before the installation steps, I will introduce some commonly used conda commands.

For conda I recommend installing Miniconda, which you can think of as a streamlined Anaconda: it keeps only the necessary components, so installation is much faster while still meeting our Python environment-management needs. (Anaconda typically takes up several GB of disk space and 1-2 hours to install; Miniconda is typically a few hundred MB and installs in about 10 minutes.)

Recommended Miniconda download mirror (Tsinghua): https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/

Just choose the version that matches your system.

Below I use the Windows version of Miniconda as a demo: download the appropriate installer from the link above, then open it with administrator privileges and click through the installation.

Note that both options should be checked: the first lets us use conda commands directly in cmd, and the second makes Miniconda's Python 3.7 the system Python.

After installation, conda commands can be used in cmd. To open cmd, press Windows key + R, type cmd in the box that pops up, and press Enter; you can also search for cmd in Windows and run it directly.
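To confirm that the installation succeeded and that cmd can find conda, you can run, for example:

conda --version

which should print the installed conda version.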

The commonly used conda commands in cmd are described below:

  1. List conda environments: conda env list
  2. Create a new conda environment (env_name is the name of the new environment; you can customize it): conda create -n env_name
  3. Activate a conda environment (on Ubuntu and macOS, replace conda with source): conda activate env_name
  4. Deactivate the current conda environment: conda deactivate
  5. Install and uninstall a Python package: conda install numpy # conda uninstall numpy
  6. List the Python packages installed in an environment: conda list -n env_name
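For instance, a typical session with a hypothetical environment name my_env (not used elsewhere in this tutorial) might look like this:

conda create -n my_env python=3.6   # create the environment with Python 3.6
conda activate my_env               # enter the environment
conda install numpy                 # install a package into it
conda list -n my_env                # check what is installed
conda deactivate                    # leave the environment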

Once you know these commands, you can use conda to create a new environment and install TF2.0.

1.1 Installing the TF2.0 CPU version

Installing the CPU version of TF is relatively simple because there is no GPU to configure, so the steps are similar on Windows, Ubuntu, and macOS. The downside is slower execution, but it is fine for everyday learning.

The demonstration below uses Windows; everything that follows is done on the command line.

1.1.0 Create the TF2.0 CPU environment (using the conda create command; python=3.6 means Python 3.6 is installed when the environment is created)

conda create -n TF_2C python=3.6

When prompted Proceed ([y]/n)?, type y and press Enter.

Once it finishes, you can enter this environment.

1.1.1 Enter the TF_2C environment

conda activate TF_2C

After activation you will see (TF_2C) prefixed to the prompt, which indicates that you are inside this environment. Use conda deactivate to exit.

Enter the environment again with conda activate TF_2C so that the following commands run inside it.

1.1.2 Install the TF2.0 CPU version (the trailing -i means download from the domestic Tsinghua mirror, which is much faster than the default source)

pip install tensorflow==2.0.0 -i https://pypi.tuna.tsinghua.edu.cn/simple

If your network is unstable, run the command a few more times; it will finish installing after a while. Next, let's do a quick test.

1.1.3 Test the TF2.0 CPU version (save the code below as demo.py and run it with the TF_2C environment's Python)

import tensorflow as tf
version = tf.__version__
gpu_ok = tf.test.is_gpu_available()
print("tf version:",version,"\nuse GPU",gpu_ok)

If everything works, the output is as follows: the tf version is 2.0.0, and because this is the CPU version, GPU is False.

tf version: 2.0.0
use GPU False
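An alternative way to check for devices, which also works in TF 2.0, is to list the physical devices explicitly. A minimal sketch using the tf.config.experimental API:

import tensorflow as tf

# List the physical devices TensorFlow can see; on a CPU-only install the GPU list is empty
print("CPUs:", tf.config.experimental.list_physical_devices('CPU'))
print("GPUs:", tf.config.experimental.list_physical_devices('GPU'))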

1.2 Installing the TF2.0 GPU version

The GPU version is similar to the CPU version, but there is one extra step to install GPU support. Let's go through it step by step. Before installing, confirm that your computer has an Nvidia GPU.

1.2.0 Create the TF2.0 GPU environment (using the conda create command; python=3.6 means Python 3.6 is installed when the environment is created)

conda create -n TF_2G python=3.6

When prompted Proceed ([y]/n)?, type y and press Enter.

Once it finishes, you can enter this environment.

1.2.1 Enter the TF_2G environment

conda activate TF_2G

1.2.2 Install GPU support. On Windows, the Nvidia GPU driver is generally installed by default, so you only need to install the cudatoolkit and cudnn packages. Note that you need cudatoolkit version 10.0; if the cudatoolkit on your system is older than 10.0, it needs to be updated to 10.0.

conda install cudatoolkit=10.0 cudnn
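If you want to confirm that the Nvidia driver is present before (or after) this step, you can run the following in cmd (assuming the driver is installed); it prints the driver version and the GPUs it can see, and the driver must be recent enough to support CUDA 10.0:

nvidia-smi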

1.2.3 Install the TF2.0 GPU version (the trailing -i means download from the domestic Tsinghua mirror, which is much faster than the default source)

pip install tensorflow-gpu==2.0.0 -i https://pypi.tuna.tsinghua.edu.cn/simple

If your network is unstable, run the command a few more times; it will finish installing after a while. Next, let's do a quick test.

1.2.4 Test the TF2.0 GPU version (save the code below as demo.py and run it with the TF_2G environment's Python)

import tensorflow as tf
version = tf.__version__
gpu_ok = tf.test.is_gpu_available()
print("tf version:",version,"\nuse GPU",gpu_ok)

If everything works, the output is as follows: the tf version is 2.0.0, and because this is the GPU version, GPU is True, which means the GPU version installation is complete.

tf version: 2.0.0
use GPU True
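If you want to further confirm that operations actually execute on the GPU, TF 2.0 can log device placement; a minimal sketch (the small matrices here are just an illustrative example, not part of the original tutorial):

import tensorflow as tf

tf.debugging.set_log_device_placement(True)   # log which device each op runs on

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
print(tf.matmul(a, b))                        # the log should show this matmul placed on GPU:0 if a GPU is used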

1.3 Finally, we test TF2.0 by writing a linear-fitting example

Save the following code as main.py and run it.

import tensorflow as tf

X = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = tf.constant([[10.0], [20.0]])

class Linear(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(
            units=1,
            activation=None,
            kernel_initializer=tf.zeros_initializer(),
            bias_initializer=tf.zeros_initializer()
        )

    def call(self, input):
        output = self.dense(input)
        return output

# The structure of the code below is similar to the previous section
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
for i in range(100):
    with tf.GradientTape() as tape:
        y_pred = model(X)  # call the model as y_pred = model(X) instead of explicitly writing y_pred = a * X + b
        loss = tf.reduce_mean(tf.square(y_pred - y))
    grads = tape.gradient(loss, model.variables)  # the model.variables property directly gives all variables in the model
    optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    if i % 10 == 0:
        print(i, loss.numpy())
print(model.variables)

Output:

0 250.0
10 0.73648137
20 0.6172349
30 0.5172956
40 0.4335389
50 0.36334264
60 0.3045124
70 0.25520816
80 0.2138865
90 0.17925593
[<tf.Variable 'linear/dense/kernel:0' shape=(3, 1) dtype=float32, numpy=
array([[0.40784496],
       [1.191065  ],
       [1.9742855 ]], dtype=float32)>, <tf.Variable 'linear/dense/bias:0' shape=(1,) dtype=float32, numpy=array([0.78322077], dtype=float32)>]
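As an aside, the same linear fit can also be written with Keras's built-in training loop instead of a manual GradientTape loop; the sketch below is a minimal equivalent (the Sequential model and SGD settings are my own choices, not part of the original tutorial):

import tensorflow as tf

X = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = tf.constant([[10.0], [20.0]])

# A single Dense layer with one unit is the same affine model y = XW + b
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1)])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mse')
model.fit(X, y, epochs=100, verbose=0)
print(model.weights)  # kernel and bias after fitting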
