TensorFlow Learning Notes (1)

Most of the text and code in this article comes from 《实战Google深度学习框架》; I am recording it here as study notes.

Introduction to TensorFlow

TensorFlow™ is an open-source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) passed between them. Its flexible architecture lets you run computation on many platforms, for example one or more CPUs (or GPUs) in a desktop machine, servers, mobile devices, and so on. —— [ TensorFlow中文社区 ]

Basic input and output

import tensorflow as tf

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1))

# A placeholder is the entry point for feeding training data into the graph
x = tf.placeholder(tf.float32, shape=(1, 2), name='input')

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)
y2 = tf.matmul(a * 2, w2)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # To inspect a value it must be fetched through run(): the first argument
    # lists the tensors to evaluate, and feed_dict supplies the placeholder
    # values as a dictionary, matching the placeholder defined above
    print(sess.run([y, y2], feed_dict={x: [[0.5, 0.9]]}))
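
The placeholder above is fixed to a single example with two features. A common variation (my own sketch, not a listing from the book) is to leave the batch dimension as None so that the same graph accepts any number of examples per run:

import tensorflow as tf

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1))

# None in the first dimension means "any batch size"
x = tf.placeholder(tf.float32, shape=(None, 2), name='input')
y = tf.matmul(tf.matmul(x, w1), w2)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # feed three examples in a single run() call
    print(sess.run(y, feed_dict={x: [[0.5, 0.9], [0.1, 0.4], [0.7, 0.2]]}))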

Averaging in the loss: tf.reduce_mean()

When computing the cross-entropy or another loss, the values of y - y_ over all examples have to be reduced to a single average, which is what tf.reduce_mean() is for.

import tensorflow as tf

# mean over all elements, along axis 0 (down each column), and along axis 1 (across each row)
mean = tf.reduce_mean([[2.0, 4.0], [1.0, 3.0]])
mean1 = tf.reduce_mean([[2.0, 4.0], [1.0, 3.0]], 0)
mean2 = tf.reduce_mean([[2.0, 4.0], [1.0, 3.0]], 1)

with tf.Session() as sess:
    print(sess.run([mean, mean1, mean2]))

#output 
#[2.5, array([ 1.5,  3.5], dtype=float32), array([ 3.,  2.], dtype=float32)]
#As you can see, the behaviour is very close to numpy.mean()
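
For reference, the same reductions in plain NumPy give the same numbers (a quick check, not from the book):

import numpy as np

m = np.array([[2.0, 4.0], [1.0, 3.0]])
print(np.mean(m))          # 2.5, mean over all elements
print(np.mean(m, axis=0))  # [1.5 3.5], mean down each column
print(np.mean(m, axis=1))  # [3. 2.], mean across each row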

tf.reduce_mean() is generally used inside a neural network's loss function; the usual pattern is sketched below.
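
A minimal sketch of that pattern, assuming y is the network's predicted probability distribution and y_ holds the true labels (toy values chosen here for illustration, not the book's exact listing):

import tensorflow as tf

# y: predicted probabilities, y_: ground-truth one-hot labels (toy values)
y = tf.constant([[0.1, 0.9], [0.8, 0.2]])
y_ = tf.constant([[0.0, 1.0], [1.0, 0.0]])

# clip to avoid log(0), then average the cross-entropy over the whole batch
cross_entropy = -tf.reduce_mean(
    y_ * tf.log(tf.clip_by_value(y, 1e-10, 1.0)))

with tf.Session() as sess:
    print(sess.run(cross_entropy))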

L1 and L2 regularization

Regularization is one way to prevent overfitting. L1 regularization can drive some of the weights w to exactly zero, removing those factors entirely, while L2 regularization keeps the weights from becoming too large, so that no single weight dominates (my own understanding).
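
For reference (my own addition), with scale factor λ the two regularizers compute:

R_L1(w) = λ * Σ |w_i|
R_L2(w) = (λ / 2) * Σ w_i^2

The extra factor of 1/2 in the L2 case comes from tf.nn.l2_loss and explains the 7.5 in the example below.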


# -*- coding: utf-8 -*-
import tensorflow as tf

weight = tf.constant([[1.0, -2.0], [-3.0, 4.0]])

with tf.Session() as sess:

    # L1: (|1| + |-2| + |-3| + |4|) * 0.5 = 5
    print(sess.run(tf.contrib.layers.l1_regularizer(0.5)(weight)))

    # L2: (1^2 + (-2)^2 + (-3)^2 + 4^2) * 0.5 / 2 = 7.5
    # (TensorFlow's L2 regularizer divides the sum of squares by 2)
    print(sess.run(tf.contrib.layers.l2_regularizer(0.5)(weight)))

Collecting loss terms: tf.add_to_collection('losses', mse_loss)

A TensorFlow collection can be written to and read from anywhere in the program, so several loss terms can be gathered into one place:

import tensorflow as tf

# y_ and cur_layer can both be tensors (placeholders, layer outputs, ...);
# small constants are used here only so that the snippet runs on its own
y_ = tf.constant([[1.0], [2.0]])
cur_layer = tf.constant([[1.5], [1.5]])

mse_loss1 = tf.reduce_mean(tf.square(y_ - cur_layer))
mse_loss2 = tf.reduce_mean(tf.square(y_ - cur_layer))

tf.add_to_collection('losses', mse_loss1)
tf.add_to_collection('losses', mse_loss2)

# Here all the terms added to 'losses', possibly from different places
# in the code, are summed into a single loss
loss = tf.add_n(tf.get_collection('losses'))
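
Putting the two previous sections together, a sketch of the pattern used in the book (layer shapes and the regularization strength here are made-up example values): each layer's L2 penalty is registered in the same 'losses' collection as the weights are created, and everything is summed at the end.

import tensorflow as tf

lambda_reg = 0.001  # regularization strength, example value

def get_weight(shape):
    # create a weight variable and register its L2 penalty in 'losses'
    w = tf.Variable(tf.random_normal(shape))
    tf.add_to_collection(
        'losses', tf.contrib.layers.l2_regularizer(lambda_reg)(w))
    return w

x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))

w1 = get_weight([2, 3])
w2 = get_weight([3, 1])
y = tf.matmul(tf.matmul(x, w1), w2)

# the data term (MSE) goes into the same collection, then everything is summed
tf.add_to_collection('losses', tf.reduce_mean(tf.square(y_ - y)))
loss = tf.add_n(tf.get_collection('losses'))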

TensorFlow eager execution

TensorFlow normally requires the whole computation graph to be built before any value can be evaluated. With the Eager API, operations are executed immediately as they are called:

'''
Basic introduction to TensorFlow's Eager API.
Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
What is Eager API?
" Eager execution is an imperative, define-by-run interface where operations are
executed immediately as they are called from Python. This makes it easier to
get started with TensorFlow, and can make research and development more
intuitive. A vast majority of the TensorFlow API remains the same whether eager
execution is enabled or not. As a result, the exact same code that constructs
TensorFlow graphs (e.g. using the layers API) can be executed imperatively
by using eager execution. Conversely, most models written with Eager enabled
can be converted to a graph that can be further optimized and/or extracted
for deployment in production without changing code. " - Rajat Monga
'''
from __future__ import absolute_import, division, print_function

import numpy as np
import tensorflow as tf
import tensorflow.contrib.eager as tfe

# Set Eager API
print("Setting Eager mode...")
tfe.enable_eager_execution()

# Define constant tensors
print("Define constant tensors")
a = tf.constant(2)
print("a = %i" % a)
b = tf.constant(3)
print("b = %i" % b)

# Run the operation without the need for tf.Session
print("Running operations, without tf.Session")
c = a + b
print("a + b = %i" % c)
d = a * b
print("a * b = %i" % d)


# Full compatibility with Numpy
print("Mixing operations with Tensors and Numpy Arrays")

# Define constant tensors
a = tf.constant([[2., 1.],
                 [1., 0.]], dtype=tf.float32)
print("Tensor:\n a = %s" % a)
b = np.array([[3., 0.],
              [5., 1.]], dtype=np.float32)
print("NumpyArray:\n b = %s" % b)

# Run the operation without the need for tf.Session
print("Running operations, without tf.Session")

c = a + b
print("a + b = %s" % c)

d = tf.matmul(a, b)
print("a * b = %s" % d)

print("Iterate through Tensor 'a':")
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        print(a[i][j])


# Output:
Setting Eager mode...
Define constant tensors
a = 2
b = 3
Running operations, without tf.Session
a + b = 5
a * b = 6
Mixing operations with Tensors and Numpy Arrays
Tensor:
 a = tf.Tensor(
[[ 2.  1.]
 [ 1.  0.]], shape=(2, 2), dtype=float32)
NumpyArray:
 b = [[ 3.  0.]
 [ 5.  1.]]
Running operations, without tf.Session
a + b = tf.Tensor(
[[ 5.  1.]
 [ 6.  1.]], shape=(2, 2), dtype=float32)
a * b = tf.Tensor(
[[ 11.   1.]
 [  3.   0.]], shape=(2, 2), dtype=float32)
Iterate through Tensor 'a':
tf.Tensor(2.0, shape=(), dtype=float32)
tf.Tensor(1.0, shape=(), dtype=float32)
tf.Tensor(1.0, shape=(), dtype=float32)
tf.Tensor(0.0, shape=(), dtype=float32)

Source: blog.csdn.net/loovelj/article/details/79977685