TensorFlow (1): installation and simple examples

Installation

TensorFlow depends on NumPy, and both packages should be installed with the same package manager; otherwise you may run into import errors. There are two ways to install TensorFlow, and they should not be mixed: for example, NumPy installed through pip and TensorFlow installed through conda may be incompatible.

Method 1: install with pip
pip install numpy
CPU version:
pip install tensorflow
GPU version:
pip install tensorflow-gpu

Method 2: install with conda
conda install numpy
CPU version:
conda install tensorflow
GPU version:
conda install tensorflow-gpu
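
As a quick sanity check (this snippet is not part of the original post), you can verify the installation from a Python shell. The version and device queries below assume a TensorFlow 1.x install:

import numpy as np
import tensorflow as tf

print(np.__version__)               # installed NumPy version
print(tf.__version__)               # should be a 1.x version for this tutorial
print(tf.test.is_gpu_available())   # True only for the GPU build with a working CUDA setup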

Simple example

The following example uses TensorFlow for a simple regression task: the data are generated from the line y = 0.1 * x + 0.3, and the goal is to fit the weight (slope) and bias (intercept) by gradient descent.

import numpy as np
import tensorflow as tf

# Training data from the line y = 0.1 * x + 0.3 (float32 to match the TF variables)
x = np.random.rand(100).astype(np.float32)
y = 0.1 * x + 0.3

# Define the weight, initialized uniformly in [-1, 1]
Weights = tf.Variable(tf.random_uniform([1], -1, 1))
# Define the bias, initialized to zero
biases = tf.Variable(tf.zeros([1]))

# Linear model and mean squared error loss
y_pre = Weights * x + biases
loss = tf.reduce_mean(tf.square(y - y_pre))

optimizer = tf.train.GradientDescentOptimizer(0.5)
# Training op: one gradient descent step on the loss
train = optimizer.minimize(loss)
# Op that initializes the weight and bias
init = tf.global_variables_initializer()

sess = tf.Session()
# Run the initializer
sess.run(init)

for step in range(201):
    # Run one training step
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(Weights), sess.run(biases))
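
Because the data are generated exactly from the line y = 0.1 * x + 0.3, the printed weight should converge to 0.1 and the bias to 0.3. The code above uses the TensorFlow 1.x API. As a rough sketch (not from the original post), if only TensorFlow 2.x is installed, the same graph-style code can usually be run through the compatibility module, assuming tf.compat.v1 is available in your version:

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # restore TF1-style graph execution

x = np.random.rand(100).astype(np.float32)
y = 0.1 * x + 0.3

Weights = tf.Variable(tf.random_uniform([1], -1, 1))
biases = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(y - (Weights * x + biases)))
train = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(201):
        sess.run(train)
        if step % 20 == 0:
            print(step, sess.run(Weights), sess.run(biases))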

Interesting example

The following example uses a small neural network to fit a quadratic function and visualizes the current fit at regular intervals during training. Because the animation relies on plt.pause(), run the program from the command line rather than inside Spyder so that the plot updates dynamically.

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

def add_layer(inputs, in_size, out_size, activation_function = None):
    # Fully connected layer: weights, biases, and an optional activation
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_biases = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_biases
    else:
        outputs = activation_function(Wx_plus_biases)
    return outputs

x_data = np.linspace(-1,1,300)[:,np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

l1 = add_layer(xs, 1, 10, activation_function = tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function = None)
# reduction_indices is the legacy name for the axis argument of tf.reduce_sum
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init  = tf.global_variables_initializer()

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)
plt.ion()   # interactive mode so plt.pause() can update the figure without blocking

with tf.Session() as sess:
    sess.run(init)
    for step in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if step % 50 == 0:
            # Remove the previously drawn fit line (no line exists on the first pass)
            try:
                ax.lines.remove(lines[0])
            except Exception:
                pass
            print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
            prediction_value = sess.run(prediction, feed_dict={xs: x_data})
            # Plot the current fit and pause briefly so the window refreshes
            lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
            plt.pause(0.1)
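
One small addition not in the original post: when the script is run from the command line, the figure window closes as soon as the loop finishes. Turning interactive mode off and calling plt.show() at the end keeps the final fit on screen:

plt.ioff()   # leave interactive mode
plt.show()   # block here so the final fitted curve stays visible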
