[Deep Learning_2.3_1] A first look at applying TensorFlow to neural networks

Import the required libraries

import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict


%matplotlib inline
np.random.seed(1)

Example: computing a loss function with TensorFlow


Code:

y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y')                    # Define y. Set to 39


loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss


init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                 # the loss variable will be initialized and ready to be computed
with tf.Session() as session:                    # Create a session and print the output
    session.run(init)                            # Initializes the variables
    print(session.run(loss))                     # Prints the loss

Note: The line above only defines the loss tensor; it does not compute anything yet. You must create a session, run `session.run(init)` to initialize the variable, and then call `session.run(loss)` to actually evaluate the loss.

Placeholder code example

When creating a placeholder, you don't need to pass in a value; you supply a concrete value through `feed_dict` only when the graph is executed.

sess = tf.Session()
x = tf.placeholder(tf.int64, name='x')
print(sess.run(2 * x, feed_dict={x: 3}))   # prints 6
sess.close()

Linear function code implementation

The code implements the formula Y = WX + b:

    X = tf.constant(np.random.randn(3, 1), name="X")
    W = tf.constant(np.random.randn(4, 3), name="W")
    b = tf.constant(np.random.randn(4, 1), name="b")
    Y = tf.add(tf.matmul(W,X),b)

Run it:

    sess = tf.Session()
    result = sess.run(Y)

    sess.close()

Code implementation of sigmoid function

Define placeholders:

x = tf.placeholder(tf.float32, name="x")

Define the sigmoid:

sigmoid = tf.sigmoid(x)

Run it:

with tf.Session() as session:
    result = session.run(sigmoid, feed_dict={x: z})

Loss function code implementation


Define placeholders:

    z = tf.placeholder(tf.float32, name="z")
    y = tf.placeholder(tf.float32, name="y")

Define the loss computation:

cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)

Run it:

sess = tf.Session()

cost = sess.run(cost, feed_dict={z: logits, y: labels})

sess.close()

One-hot matrix conversion

Code implementation: tf.one_hot(labels, depth, axis)

Initializing matrices of zeros and ones

  • tf.zeros(shape)
  • tf.ones(shape)
