TensorFlow Official Tutorial Notes -- Eager Execution Basics

import tensorflow as tf

tf.enable_eager_execution()  # in TF 1.x, eager execution must be enabled first (see below)

print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
print(tf.square(2) + tf.square(3))

tf.Tensor(3, shape=(), dtype=int32)
tf.Tensor([4 6], shape=(2,), dtype=int32)
tf.Tensor(25, shape=(), dtype=int32)
tf.Tensor(6, shape=(), dtype=int32)
tf.Tensor(b'aGVsbG8gd29ybGQ', shape=(), dtype=string)
tf.Tensor(13, shape=(), dtype=int32)
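
A side note (not in the tutorial code above): tensors also support the standard Python operators and NumPy-style broadcasting. A minimal sketch:

print(tf.constant([[1, 2], [3, 4]]) + 1)  # the scalar 1 is broadcast

tf.Tensor(
[[2 3]
 [4 5]], shape=(2, 2), dtype=int32)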

Eager execution (the dynamic graph mechanism):

import tensorflow as tf
tf.enable_eager_execution()
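
To confirm that eager mode is actually on, there is a quick check (tf.executing_eagerly() is part of the TF 1.x API):

print(tf.executing_eagerly())
True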

tf.reduce_sum(input_tensor, axis=None)

Sums input_tensor, eliminating the dimension given by axis.

input_tensor: the data to reduce.

axis: if None, all elements are summed; otherwise, it is the dimension to eliminate.

print(tf.reduce_sum([[1, 2, 3], [1, 2, 3]]))
print(tf.reduce_sum([[1, 2, 3], [1, 2, 3]], 0))
print(tf.reduce_sum([[1, 2, 3], [1, 2, 3]], 1))

tf.Tensor(12, shape=(), dtype=int32)
tf.Tensor([2 4 6], shape=(3,), dtype=int32)
tf.Tensor([6 6], shape=(2,), dtype=int32)
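
A related sketch (keepdims is a standard argument of tf.reduce_sum): passing keepdims=True keeps the reduced axis as a size-1 dimension instead of eliminating it.

print(tf.reduce_sum([[1, 2, 3], [1, 2, 3]], 1, keepdims=True))

tf.Tensor(
[[6]
 [6]], shape=(2, 1), dtype=int32)
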
x = tf.matmul([[1]], [[2, 3]])  # matrix multiplication

print(x)
print(x.shape)
print(x.dtype)
tf.Tensor([[2 3]], shape=(1, 2), dtype=int32)
(1, 2)
<dtype: 'int32'>
print(x[0][0])
tf.Tensor(2, shape=(), dtype=int32)
print(x[0][1])
tf.Tensor(3, shape=(), dtype=int32)

The most obvious differences between NumPy arrays and TensorFlow Tensors are:

Tensors can be backed by accelerator memory (like GPU, TPU).

Tensors are immutable.

In TensorFlow, tensors can live in accelerator (GPU/TPU) memory and are immutable.

x[0][1]=4
---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

<ipython-input-20-bcdb32372673> in <module>
----> 1 x[0][1]=4


TypeError: 'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment
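
When a value does need to change, the usual pattern is tf.Variable, which (unlike a Tensor) supports in-place updates via assign. A minimal sketch, assuming TF 1.x with eager enabled:

v = tf.Variable([[2, 3]])  # Variables are mutable, unlike Tensors
v.assign([[2, 4]])         # replaces the Variable's value in place
print(v.numpy())
[[2 4]]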

tf.multiply(x, y) multiplies the corresponding elements of the two tensors (element-wise multiplication).
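
For contrast, a small sketch of element-wise versus matrix multiplication (the numbers are illustrative):

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 6], [7, 8]])
print(tf.multiply(a, b))  # element-wise, same as a * b
print(tf.matmul(a, b))    # matrix product

tf.Tensor(
[[ 5 12]
 [21 32]], shape=(2, 2), dtype=int32)
tf.Tensor(
[[19 22]
 [43 50]], shape=(2, 2), dtype=int32)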

Tensors and NumPy arrays are converted to each other automatically:

import numpy as np

ndarray = np.ones([3, 3])

print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)


print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
TensorFlow operations convert numpy arrays to Tensors automatically
tf.Tensor(
[[42. 42. 42.]
 [42. 42. 42.]
 [42. 42. 42.]], shape=(3, 3), dtype=float64)
And NumPy operations convert Tensors to numpy arrays automatically
[[43. 43. 43.]
 [43. 43. 43.]
 [43. 43. 43.]]
The .numpy() method explicitly converts a Tensor to a numpy array
[[42. 42. 42.]
 [42. 42. 42.]
 [42. 42. 42.]]
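
The explicit conversion in the other direction is tf.convert_to_tensor; a quick sketch reusing ndarray from above:

print(tf.convert_to_tensor(ndarray))

tf.Tensor(
[[1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]], shape=(3, 3), dtype=float64)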

GPU acceleration:

x = tf.random_uniform([3, 3])

print("Is there a GPU available: "),
print(tf.test.is_gpu_available())

print("Is the Tensor on GPU #0:  "),
print(x.device.endswith('GPU:0'))
Is there a GPU available: 
True
Is the Tensor on GPU #0:  
True

The Tensor.device property provides a fully qualified string name of the device hosting the contents of the tensor.

x.device
'/job:localhost/replica:0/task:0/device:GPU:0'

Explicitly specifying the device:

The term “placement” in TensorFlow refers to how individual operations are assigned to (placed on) a device for execution. As mentioned above, when no explicit guidance is provided, TensorFlow automatically decides which device should execute an operation and copies tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the tf.device context manager. For example:

import time

def time_matmul(x):
  start = time.time()
  for _ in range(10):
    tf.matmul(x, x)

  result = time.time() - start

  print("10 loops: {:0.2f}ms".format(1000 * result))


# Force execution on CPU
with tf.device("CPU:0"):
  print("On CPU:")
  x = tf.random_uniform([1000, 1000])
  assert x.device.endswith("CPU:0")
  time_matmul(x)

# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
  print("On GPU:")
  with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
    x = tf.random_uniform([1000, 1000])
    assert x.device.endswith("GPU:0")
    time_matmul(x)
On CPU:
10 loops: 219.34ms
On GPU:
10 loops: 0.00ms
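
The 0.00ms on GPU is misleading: eager GPU kernels are launched asynchronously, so the loop above mostly measures launch overhead. A hedged variant that forces the work to finish before the clock stops (fetching the result with .numpy() blocks until the computation completes):

def time_matmul_sync(x):
  start = time.time()
  for _ in range(10):
    y = tf.matmul(x, x)
  y.numpy()  # copying the result to host blocks until the matmuls are done
  print("10 loops: {:0.2f}ms".format(1000 * (time.time() - start)))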

Datasets:

tf.data.Dataset.from_tensor_slices slices the input tensor along its first (0th) dimension.

ds_tensors = tf.data.Dataset.from_tensor_slices([[1, 1], [1, 2], [1, 3], [1, 4], [1, 5], [1, 6]])

# Create a sample text file (the official tutorial writes to a tempfile instead)
#import tempfile
#_, filename = tempfile.mkstemp()
#print(filename)

with open('dataset.txt','w') as f:
  f.write("""Line 1
Line 2
Line 3
  """)

ds_file = tf.data.TextLineDataset('dataset.txt')

Reads the lines of dataset.txt in as a dataset.

ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)  # square, shuffle, then batch with batch size 2
# Dataset.shuffle(buffer_size) draws elements at random from a buffer of
# buffer_size elements, so a larger buffer gives a more thorough shuffle

ds_file = ds_file.batch(2)
ds_tensors
<BatchDataset shapes: (?, 2), types: tf.int32>
ds_file
<BatchDataset shapes: (?,), types: tf.string>
print('Elements of ds_tensors:')
for x in ds_tensors:
  print(x)

print('\nElements in ds_file:')
for x in ds_file:
  print(x)
Elements of ds_tensors:
tf.Tensor(
[[1 4]
 [1 1]], shape=(2, 2), dtype=int32)
tf.Tensor(
[[ 1  9]
 [ 1 25]], shape=(2, 2), dtype=int32)
tf.Tensor(
[[ 1 16]
 [ 1 36]], shape=(2, 2), dtype=int32)

Elements in ds_file:
tf.Tensor([b'Line 1' b'Line 2'], shape=(2,), dtype=string)
tf.Tensor([b'Line 3' b'  '], shape=(2,), dtype=string)
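
A note on shuffle's buffer_size argument above: with buffer_size=2 the output stays close to the original order, while a buffer at least as large as the dataset gives a full uniform shuffle. A small sketch (a fresh six-element dataset for illustration):

ds_full = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
for x in ds_full.shuffle(6).batch(6):  # buffer covers the whole dataset
  print(x)  # e.g. tf.Tensor([4 1 6 2 5 3], shape=(6,), dtype=int32)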

Reposted from blog.csdn.net/yskyskyer123/article/details/86772965