
Overview of Tensorflow

What’s TensorFlow?

“Open source software library for numerical computation using data flow graphs”

TensorFlow is an open-source software library for numerical computation using data flow graphs. It was open-sourced by Google in November 2015 under the Apache license (the core implementation has not been fully open-sourced), and supports Python, C++, and CUDA.

Goals

  • Understand TF’s computation graph approach
  • Explore TF’s built-in functions and classes
  • Learn how to build and structure models best suited for a deep learning project

Data Flow Graphs

TensorFlow separates definition of computations from their execution

TensorFlow separates the definition of a computation (the computation graph) from its execution, and represents the computation as a data flow graph.


A data flow graph consists of nodes and edges:

  • Nodes: operators, variables, and constants

  • Edges: tensors

We can use TensorBoard to visualize the computation graph.

Because every TensorFlow program must first define a (static) computation graph and only then run it, which is cumbersome, TensorFlow 2.0 adopted eager mode to streamline this workflow.
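As a quick illustration (not from the original post), the same addition runs immediately in eager mode, with no separate graph-definition and Session step. This sketch assumes TensorFlow 1.7+ (where `tf.enable_eager_execution` exists) or 2.x (where eager mode is the default):

```python
import tensorflow as tf

# Eager mode is the default in TF 2.x; under TF 1.x (1.7+) it must be enabled first.
if hasattr(tf, "enable_eager_execution"):
    tf.enable_eager_execution()

x = tf.add(3, 5)   # executes immediately; no graph definition / Session step
print(x.numpy())   # 8
```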

Tensor

A tensor is an n-dimensional array:

  • 0-d tensor: scalar (a number)

  • 1-d tensor: vector

  • 2-d tensor: matrix

  • and so on (tensors of rank 3 and higher have no special names)
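A minimal sketch of these ranks (the rank of a tensor is the length of its shape); `tf.constant` behaves the same under TF 1.x and 2.x:

```python
import tensorflow as tf

scalar = tf.constant(3)                 # 0-d tensor: shape ()
vector = tf.constant([1., 2., 3.])      # 1-d tensor: shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])  # 2-d tensor: shape (2, 2)

print(scalar.shape, vector.shape, matrix.shape)
```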

Session

A Session object encapsulates the environment in which Operation objects are executed, and Tensor objects are evaluated.

Session will also allocate memory to store the current values of variables.

A Session allocates resources (memory, compute) and provides the environment in which the graph is run.
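To illustrate the point about variable storage, a minimal sketch (the `tf.compat.v1` fallback is an assumption for running under TensorFlow 2.x, where the graph/Session API is no longer the default):

```python
import tensorflow as tf

# Use the 1.x-style graph/Session API even under TensorFlow 2.x
if hasattr(tf, "compat") and hasattr(tf.compat, "v1"):
    tf.compat.v1.disable_eager_execution()
    tf = tf.compat.v1

w = tf.Variable(10, name="w")
add_op = tf.assign_add(w, 5)

with tf.Session() as sess:
    # The session allocates and initializes the variable's storage;
    # the updated value of w lives inside this session.
    sess.run(tf.global_variables_initializer())
    print(sess.run(add_op))  # 15
```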

Subgraphs

It is possible to break a graph into several chunks (subgraphs) and run them in parallel across multiple devices (CPUs, GPUs, TPUs, or others), enabling distributed computation.
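A minimal device-placement sketch (an illustration, assuming only a CPU is present; on multi-device machines, replace `/cpu:0` with `/gpu:0` and so on; the `tf.compat.v1` fallback is for TensorFlow 2.x):

```python
import tensorflow as tf

if hasattr(tf, "compat") and hasattr(tf.compat, "v1"):
    tf.compat.v1.disable_eager_execution()
    tf = tf.compat.v1

# Pin this subgraph to a device; independent subgraphs pinned to
# different devices can then run in parallel.
with tf.device("/cpu:0"):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    c = tf.multiply(a, b)

with tf.Session() as sess:
    print(sess.run(c))  # [3. 8.]
```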

fetches is a list of tensors whose values you want.

tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)

The fetches argument selects which parts of the data flow graph the session executes.
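A sketch of fetches together with feed_dict (again with a `tf.compat.v1` fallback for 2.x): only the subgraph needed to produce the fetched tensors is run.

```python
import tensorflow as tf

if hasattr(tf, "compat") and hasattr(tf.compat, "v1"):
    tf.compat.v1.disable_eager_execution()
    tf = tf.compat.v1

x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
total = tf.add(x, y)
product = tf.multiply(x, y)

with tf.Session() as sess:
    # fetches: the list of tensors we want back;
    # feed_dict: concrete values for the placeholders
    t, p = sess.run([total, product], feed_dict={x: 2.0, y: 3.0})
    print(t, p)  # 5.0 6.0
```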


Example Code

import os
import tensorflow as tf

# Silence TensorFlow's CPU-feature warning (explained below)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

a = tf.add(3, 5)

# Python's with statement releases the session's resources automatically
with tf.Session() as sess:
    print(sess.run(a))

Setting TF_CPP_MIN_LOG_LEVEL silences the warning "Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2". Briefly, the warning means that your CPU supports more efficient instructions (AVX2) that the TensorFlow binary you installed was not compiled to use.

There are two ways to deal with it:
(1) install a TensorFlow build from the official site that matches your CPU's instruction set;
(2) simply ignore the warning (as in the code above).
The second is usually preferred: for better performance we generally install TensorFlow-GPU directly, so there is little point in going to great lengths over the CPU build.

Summary

  1. Save computation: only run the subgraphs that lead to the values you want to fetch.

  2. Break computation into small, differentiable pieces to facilitate auto-differentiation.

  3. Facilitate distributed computation: spread the work across multiple CPUs, GPUs, TPUs, or other devices.

  4. Many common machine learning models are taught and visualized as directed graphs
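Point 2 above can be seen directly with `tf.gradients`, which walks the graph's small differentiable pieces to build a gradient subgraph (a sketch; the `tf.compat.v1` fallback is for TensorFlow 2.x):

```python
import tensorflow as tf

if hasattr(tf, "compat") and hasattr(tf.compat, "v1"):
    tf.compat.v1.disable_eager_execution()
    tf = tf.compat.v1

x = tf.constant(3.0)
y = x * x                        # y = x^2
dy_dx = tf.gradients(y, [x])[0]  # auto-differentiation: dy/dx = 2x

with tf.Session() as sess:
    print(sess.run(dy_dx))  # 6.0
```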


Reprinted from blog.csdn.net/the_harder_to_love/article/details/90215483