One Article to Get You Started with TensorFlow


Getting Started with TensorFlow

This article introduces how to start programming with TensorFlow. Before reading, please install TensorFlow. In addition, to better understand the content, you should have the following background:

  1. Basic Python programming: you can read Python code and, ideally, write scripts with an IDE such as PyCharm.
  2. At least a basic understanding of arrays.
  3. Ideally, some foundational knowledge of machine learning. If you have never studied machine learning, this article still works as a starting point, and a later article will use MNIST to cover the basics.

TensorFlow offers a wide range of APIs, of which TensorFlow Core is the lowest-level interface and provides the basic building blocks for TensorFlow development. The official recommendation is to use TensorFlow Core for machine learning research and data modeling. On top of TensorFlow Core there are higher-level APIs that are easier to use and make it faster to implement common tasks. For example, the tf.contrib.learn interface provides functions for managing data sets, evaluation, training, and inference. Note that APIs whose names begin with contrib are still evolving and may be changed or removed in a future release.

This article introduces TensorFlow Core and also demonstrates simple modeling with tf.contrib.learn. Learning TensorFlow Core helps developers understand how the lower layer works when they use the higher-level abstractions, so they can build models that better fit their training data.

Tensors

The basic unit of data in TensorFlow is the tensor. A tensor can be thought of as a set of values arranged into an array of any number of dimensions (if the term is unfamiliar, a web search for "tensor analysis" helps, but for our purposes a tensor is simply a multi-dimensional array). A tensor's rank is its number of dimensions. Here are some examples:

3 # a rank-0 tensor; a scalar with shape []
[1., 2., 3.] # a rank-1 tensor; a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank-2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank-3 tensor with shape [2, 1, 3]
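
As a side note, here is a minimal sketch (an illustrative addition, assuming the TensorFlow 1.x API used throughout this article) that creates the same values as constants and prints their shapes without running a session:

import tensorflow as tf

t0 = tf.constant(3.0)                               # rank 0, shape ()
t1 = tf.constant([1., 2., 3.])                      # rank 1, shape (3,)
t2 = tf.constant([[1., 2., 3.], [4., 5., 6.]])      # rank 2, shape (2, 3)
t3 = tf.constant([[[1., 2., 3.]], [[7., 8., 9.]]])  # rank 3, shape (2, 1, 3)
for t in (t0, t1, t2, t3):
    print(t.shape)  # the static shape is known without running a session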

TensorFlow Core Tutorial

Import TensorFlow

The canonical way to import the TensorFlow package is:

import tensorflow as tf

After this import, the name tf provides access to all of TensorFlow's classes, methods, and symbols.

The Computational Graph

Developing with TensorFlow Core can be thought of as two separate steps:

  1. Building the computational graph (modeling).
  2. Running the computational graph (execution).

A graph is formed by connecting multiple points (nodes). Here the graph refers to the modeled sequence of TensorFlow operations; the whole graph can be inspected with TensorBoard.

A node is one of the points in the graph, and each node represents a computational task.

In short: programming with TensorFlow Core means arranging a series of node computations in advance and then running them.

Below we first construct a simple graph. Each node takes zero or more tensors as input and produces a tensor as output. A typical node type is the constant. TensorFlow constants are fixed when the graph is built and require no input when the graph is run. The following code creates two floating-point constant nodes, node1 and node2:

node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
print(node1, node2)

Running this prints:

Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)

Notice that the printed result is not the expected values 3.0 and 4.0, but information about the node objects. This is because we have not yet performed the second step: running the computational graph. Only at run time are the actual values 3.0 and 4.0 produced. To run the graph we need to create a session; a Session object encapsulates the control and state of the TensorFlow runtime (its context).

The following code creates a Session object and then calls its run method to evaluate the nodes:

sess = tf.Session()
print(sess.run([node1, node2]))

Running this prints 3.0 and 4.0:

[3.0, 4.0]

Next we apply an addition operation to node1 and node2; this operation is itself part of the computational graph. The following code builds node3, which represents the sum of node1 and node2, and then evaluates it with sess.run:

node3 = tf.add(node1, node2)
print("node3: ", node3)
print("sess.run(node3): ",sess.run(node3))

This outputs:

node3:  Tensor("Add_2:0", shape=(), dtype=float32)
sess.run(node3):  7.0

With that, we have completed the full process of building a TensorFlow graph and executing it.

As mentioned, TensorFlow provides a tool called TensorBoard that can visualize the nodes of a computational graph. Below is an example of how TensorBoard displays this graph:
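
To produce that visualization, a minimal sketch (an illustrative addition, assuming the TensorFlow 1.x API and the sess object created above) writes the graph to a log directory that TensorBoard then reads; the "logs" directory name is an arbitrary choice:

# export the current graph so TensorBoard can render it
writer = tf.summary.FileWriter("logs", sess.graph)
writer.close()

After running it, the command line tool tensorboard --logdir=logs starts the visualization server.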

A graph built only from constants is of limited value because it always produces the same fixed result. A graph can accept external inputs through placeholders; a placeholder stands in for a value that is supplied dynamically when the model is run:

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # the + operator is shorthand for tf.add(a, b)

The three lines above work a bit like a function or lambda expression that takes input parameters. We can feed in different values at run time to evaluate the graph:

print(sess.run(adder_node, {a: 3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))

The output is:

7.5
[ 3.  7.]

In TensorBoard, the computational graph is displayed as:

 

We can make the computation more complex by adding further operations:

add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b:4.5}))

The output is:

22.5

TensorBoard displays this as:

A machine learning model typically needs to accept a variety of inputs. To make the model trainable, we must be able to modify the graph so that the same input can produce new outputs. Variables add trainable parameters to the graph; each is created with a type and an initial value:

W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b

As mentioned above, constants created with tf.constant are initialized immediately and can never change. Variables created with tf.Variable, by contrast, are not initialized at creation time. To initialize all variables before running the model with sess.run, an explicit init step is required:

init = tf.global_variables_initializer()
sess.run(init)

Here init is a handle to the TensorFlow sub-graph that initializes all global variables. Until sess.run(init) is called, the variables remain uninitialized.

In the following, x is a placeholder, and {x:[1,2,3,4]} supplies the value [1,2,3,4] for x during evaluation:

print(sess.run(linear_model, {x:[1,2,3,4]}))

Output:

[ 0.          0.30000001  0.60000002  0.90000004]
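
These values follow from the initial parameters: with W = 0.3 and b = -0.3 the model computes 0.3 * x - 0.3, which is 0.0, 0.3, 0.6, and 0.9 for x = 1, 2, 3, 4, up to floating-point rounding.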

We have now created a model, but we do not yet know how good it is. To evaluate it on training data, we need a placeholder named y to provide the desired values, and we need to write a "loss function".

A "loss function" is used to measure the current model for those who want to achieve the goal of how much output from the tool. The following example uses a linear regression model as a loss. Regression process are: loss calculation model output variable ( ydifference), and then squaring this difference (variance), and then the result vector and the variance calculation is performed. The following code,  linear_model - y create a vector, each vector represents a value corresponding to the error increment. Then call  tf.square on incremental error squaring. Finally, all of the variance results are added to create a scalar value to abstract an error difference, use  tf.reduce_sumto do this work. As the following code:

# define a placeholder for the desired values
y = tf.placeholder(tf.float32)
# squared error terms
squared_deltas = tf.square(linear_model - y)
# define the loss model
loss = tf.reduce_sum(squared_deltas)
# print the computed loss
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))

The computed loss is:

23.66
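
This can be checked by hand: the model outputs [0, 0.3, 0.6, 0.9], the desired values are [0, -1, -2, -3], so the error deltas are [0, 1.3, 2.6, 3.9] and the sum of their squares is 0 + 1.69 + 6.76 + 15.21 = 23.66.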

We could reduce the loss by manually changing W to -1 and b to 1. Variables created with tf.Variable can be modified with tf.assign. For this model W = -1 and b = 1 are in fact the optimal parameters, so we can assign those values directly:

fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))

After the assignment, the model outputs [0, -1, -2, -3] exactly, and the loss becomes:

0.0

The tf.train API

A complete discussion of machine learning is beyond the scope of this article; here we only illustrate the training process. TensorFlow provides optimizers that gradually (iteratively) adjust each parameter to make the loss as small as possible. The simplest optimizer is gradient descent: it differentiates the loss with respect to each input variable (W and b) and adjusts the variables so that the derivatives gradually move toward zero. Computing derivatives by hand is tedious and error prone, so TensorFlow provides tf.gradients to do it automatically. The following example trains the model with gradient descent:

# set up the optimizer; 0.01 is the learning rate (step size)
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init) # reset the variables to their initial values
for i in range(1000): # run 1000 training passes over the data, updating W and b each time
  sess.run(train, {x:[1,2,3,4], y:[0,-1,-2,-3]})

print(sess.run([W, b]))

Running this prints:

[array([-0.9999969], dtype=float32), array([ 0.99999082], dtype=float32)]
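
The optimizer above uses automatic differentiation internally rather than requiring us to call tf.gradients directly. As a minimal sketch (an illustrative addition, assuming the same session, loss, W, and b as above), the derivatives can also be inspected by hand:

grad_W, grad_b = tf.gradients(loss, [W, b])  # d(loss)/dW and d(loss)/db
# near the trained optimum these gradients should be close to zero
print(sess.run([grad_W, grad_b], {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))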

We have now walked through a complete machine learning workflow. This simple linear regression did not require much TensorFlow code, but real applications with more complex models typically require considerably more. For that reason TensorFlow provides higher-level abstractions over common patterns, structures, and functionality.

A Complete Training Program

Based on the discussion above, here is a complete linear regression program:

import numpy as np
import tensorflow as tf

# model parameters
W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
# model input
x = tf.placeholder(tf.float32)
# model output
linear_model = W * x + b
# desired values used by the loss
y = tf.placeholder(tf.float32)
# loss model
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of squared errors
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
  sess.run(train, {x:x_train, y:y_train})

# evaluate the accuracy of the trained model
curr_W, curr_b, curr_loss  = sess.run([W, b, loss], {x:x_train, y:y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))

Running this outputs:

W: [-0.9999969] b: [ 0.99999082] loss: 5.69997e-11

This more complex program can also be visualized with TensorBoard:

tf.contrib.learn

As mentioned earlier, in addition to TensorFlow Core, TensorFlow provides more abstract interfaces to simplify development. tf.contrib.learn is a high-level TensorFlow library that simplifies the mechanics of machine learning, including:

  1. Running training loops
  2. Running evaluation loops
  3. Managing data sets
  4. Managing the feeding of training data

tf.contrib.learn also defines many common models.

Basic Usage

Let's look at how to implement linear regression using tf.contrib.learn.

import tensorflow as tf
# NumPy is often used to load, manipulate, and preprocess data.
import numpy as np

# Declare a list of features.
# Only a single real-valued feature is used here; many other feature types are available.
features = [tf.contrib.layers.real_valued_column("x", dimension=1)]

# An estimator is the front end for training (fitting) and evaluation (inference).
# Many estimator types are predefined, such as linear regression,
# logistic regression, linear classification, and various regressors.
# The estimator used here provides linear regression.
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)

# TensorFlow provides many helper methods to read and set up data sets.
# Here we use 'numpy_input_fn'.
# We have to tell it how many epochs of data we want and how large each batch should be.
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, batch_size=4,
                                              num_epochs=1000)

# The 'fit' method is told how many training steps to run via the steps argument.
estimator.fit(input_fn=input_fn, steps=1000)

# Finally we evaluate how well the model did. In a real application we would use separate validation and test data sets to avoid overfitting.
estimator.evaluate(input_fn=input_fn)

Running this outputs:

    {'global_step': 1000, 'loss': 1.9650059e-11}

Custom models

tf.contrib.learn is not limited to its predefined models. Suppose we need a model that TensorFlow does not provide out of the box. We can still use tf.contrib.learn's high-level abstractions for data sets, feeding, and training. To demonstrate, we will implement our own equivalent of LinearRegressor using the lower-level TensorFlow API.

A custom model that works with tf.contrib.learn uses tf.contrib.learn.Estimator (tf.contrib.learn.LinearRegressor is itself a subclass of tf.contrib.learn.Estimator). The following code passes Estimator a model_fn function that tells tf.contrib.learn how to evaluate predictions, training steps, and loss:

import numpy as np
import tensorflow as tf
# Declare the model function; only a single real-valued feature is used here
def model(features, labels, mode):
  # build the linear model and its parameters
  W = tf.get_variable("W", [1], dtype=tf.float64)
  b = tf.get_variable("b", [1], dtype=tf.float64)
  y = W*features['x'] + b
  # loss sub-graph
  loss = tf.reduce_sum(tf.square(y - labels))
  # training sub-graph
  global_step = tf.train.get_global_step()
  optimizer = tf.train.GradientDescentOptimizer(0.01)
  train = tf.group(optimizer.minimize(loss),
                   tf.assign_add(global_step, 1))
  # ModelFnOps connects the sub-graphs we built into our custom model.
  return tf.contrib.learn.ModelFnOps(
      mode=mode, predictions=y,
      loss=loss,
      train_op=train)

estimator = tf.contrib.learn.Estimator(model_fn=model)
# define the data set
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x": x}, y, 4, num_epochs=1000)

# train the model
estimator.fit(input_fn=input_fn, steps=1000)
# evaluate the model
print(estimator.evaluate(input_fn=input_fn, steps=10))

Running this outputs:

{'loss': 5.9819476e-11, 'global_step': 1000}


There is no such thing as an unearned harvest. I hope that young friends who want to learn these techniques will overcome every obstacle on the road, stay determined, read the books, write the code, understand the principles, and put them into practice. It will bring rewards to your life, your work, and your future.

Origin blog.csdn.net/weixin_41663412/article/details/104860505