My first PyTorch program

Last night I learned how to build a simple neural network, so here are some notes about it.

First, you need to install the PyTorch library in Anaconda.

Go to https://www.lfd.uci.edu/~gohlke/pythonlibs/#pytorch and download the wheel that matches the Python version on your computer.

For example, my Python is version 3.5 on a 64-bit computer, so I choose the cp35m-win_amd64 file (it must match exactly!!).
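If you are not sure which wheel to pick, a quick check (this snippet is my own addition, not from the original post) prints the Python version and whether the interpreter is 64-bit:

import platform
import sys

print(sys.version)                 # e.g. 3.5.x  -> pick a cp35 wheel
print(platform.architecture()[0])  # '64bit'     -> pick a win_amd64 wheel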

After the download finishes, open the Anaconda Prompt.

When the Anaconda Prompt opens, the default path is the user folder on the C drive. If the downloaded file is not under that default path but on the D drive, for example, first type the drive letter (d:) to switch drives and then cd into the folder where you saved it.

Once the path is correct, enter the command pip install <file name>, and the installation will start.

If an error occurs during the installation, it may be that your numpy version does not match.

In that case, download a numpy wheel that matches your computer's Python version from

https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy

and install it with the same steps as above.
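After both wheels are installed, a quick import check (my own addition, not in the original post) confirms that the libraries can be loaded and shows which versions ended up installed:

import numpy
import torch

print(numpy.__version__)   # the installed numpy version
print(torch.__version__)   # the installed PyTorch version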

Second, start building a simple neural network

1. Import torch and define the variables

import torch

batch_n = 100         # the amount of input data in one batch is 100
hidden_layer = 100    # 100 features are kept after the hidden layer (only one hidden layer)
input_data = 1000     # each data point has 1000 features
output_data = 10      # the output has 10 values

Variable definitions: batch_n is the number of input items in one batch; its value is 100, which means one batch contains 100 input items. Each item has input_data features, and since input_data is 1000, every item has 1000 features. hidden_layer defines the number of features kept after the hidden layer, here 100; because the model only considers a single hidden layer, only this one hidden-layer parameter is defined. output_data is the number of output values, here 10; the output can be treated as classification scores, and the value 10 means we want 10 classification result values at the end.
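As a small sanity check on these sizes (my own sketch, not part of the original post), you can multiply random matrices of the stated shapes and confirm how the dimensions chain from input to output:

import torch

x = torch.randn(100, 1000)    # (batch_n, input_data)
w1 = torch.randn(1000, 100)   # (input_data, hidden_layer)
w2 = torch.randn(100, 10)     # (hidden_layer, output_data)

print(x.mm(w1).shape)         # torch.Size([100, 100]) -> hidden layer output
print(x.mm(w1).mm(w2).shape)  # torch.Size([100, 10])  -> network output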

2. Initialize the weights

x = torch.randn(batch_n, input_data)    # randomly generate 100 data points, each with 1000 features, as a 100*1000 matrix; each row is one data point
y = torch.randn(batch_n, output_data)   # randomly generate a 100*10 matrix; y is the ground-truth value

w1 = torch.randn(input_data, hidden_layer)     # w1 holds the weights from the input layer to the hidden layer
w2 = torch.randn(hidden_layer, output_data)    # w2 holds the weights from the hidden layer to the output layer

3. Define the number of training iterations and the learning rate

epoch_n = 20           # train for 20 epochs
learning_rate = 1e-6   # the learning rate is 1e-6

4. Optimize the parameters of the neural network with gradient descent

for epoch in range(epoch_n):
    # forward propagation
    h = x.mm(w1)                  # h = x * w1, a 100*100 matrix
    h1 = h.clamp(min=0)           # h1 = relu(h)
    y_pred = h1.mm(w2)            # y_pred = h1 * w2, a 100*10 matrix of predicted values
    # print(y_pred)

    # back propagation
    loss = (y_pred - y).pow(2).sum()    # sum-of-squares loss function
    print("Epoch:{} , Loss:{:.4f}".format(epoch, loss))

    grad_y_pred = 2 * (y_pred - y)      # gradient of y_pred: ∂loss/∂y_pred
    grad_w2 = h1.t().mm(grad_y_pred)    # gradient of w2: (∂loss/∂y_pred) * (∂y_pred/∂w2)

    grad_h = grad_y_pred.clone()        # clone returns a copy with the same size and dtype, so grad_y_pred itself is not modified
    grad_h = grad_h.mm(w2.t())          # ∂loss/∂h1
    grad_h[h < 0] = 0                   # the ReLU passes gradient only where h > 0
    grad_w1 = x.t().mm(grad_h)          # gradient of w1

    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

[Figure: the process of forward propagation and back propagation]

For a deeper understanding of back propagation, you can read this blogger's post: https://www.cnblogs.com/wj-1314/p/9830950.html
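As a cross-check of the hand-written gradients (this sketch is my own addition, not part of the original post), you can set requires_grad=True on the weights, call loss.backward(), and compare what autograd stores in w1.grad and w2.grad against the chain-rule formulas used in the training loop:

import torch

batch_n, input_data, hidden_layer, output_data = 100, 1000, 100, 10

x = torch.randn(batch_n, input_data)
y = torch.randn(batch_n, output_data)
w1 = torch.randn(input_data, hidden_layer, requires_grad=True)
w2 = torch.randn(hidden_layer, output_data, requires_grad=True)

# forward pass, same as in the training loop
h = x.mm(w1)
h1 = h.clamp(min=0)
y_pred = h1.mm(w2)
loss = (y_pred - y).pow(2).sum()

loss.backward()   # autograd fills w1.grad and w2.grad

# manual gradients, same chain-rule formulas as the training loop
with torch.no_grad():
    grad_y_pred = 2 * (y_pred - y)      # ∂loss/∂y_pred
    grad_w2 = h1.t().mm(grad_y_pred)    # ∂loss/∂w2
    grad_h = grad_y_pred.mm(w2.t())     # ∂loss/∂h1
    grad_h[h < 0] = 0                   # ReLU passes gradient only where h > 0
    grad_w1 = x.t().mm(grad_h)          # ∂loss/∂w1

print(torch.allclose(w1.grad, grad_w1))   # expected to print True
print(torch.allclose(w2.grad, grad_w2))   # expected to print True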

Complete code:

import torch

batch_n = 100         # the amount of input data in one batch is 100
hidden_layer = 100    # 100 features are kept after the hidden layer (only one hidden layer)
input_data = 1000     # each data point has 1000 features
output_data = 10      # the output has 10 values

x = torch.randn(batch_n, input_data)    # randomly generate 100 data points, each with 1000 features, as a 100*1000 matrix; each row is one data point
y = torch.randn(batch_n, output_data)   # randomly generate a 100*10 matrix; y is the ground-truth value

w1 = torch.randn(input_data, hidden_layer)     # w1 holds the weights from the input layer to the hidden layer
w2 = torch.randn(hidden_layer, output_data)    # w2 holds the weights from the hidden layer to the output layer

epoch_n = 20           # train for 20 epochs
learning_rate = 1e-6   # the learning rate is 1e-6
 
for epoch in range(epoch_n):
    # forward propagation
    h = x.mm(w1)                  # h = x * w1, a 100*100 matrix
    h1 = h.clamp(min=0)           # h1 = relu(h)
    y_pred = h1.mm(w2)            # y_pred = h1 * w2, a 100*10 matrix of predicted values
    # print(y_pred)

    # back propagation
    loss = (y_pred - y).pow(2).sum()    # sum-of-squares loss function
    print("Epoch:{} , Loss:{:.4f}".format(epoch, loss))

    grad_y_pred = 2 * (y_pred - y)      # gradient of y_pred: ∂loss/∂y_pred
    grad_w2 = h1.t().mm(grad_y_pred)    # gradient of w2: (∂loss/∂y_pred) * (∂y_pred/∂w2)

    grad_h = grad_y_pred.clone()        # clone returns a copy with the same size and dtype, so grad_y_pred itself is not modified
    grad_h = grad_h.mm(w2.t())          # ∂loss/∂h1
    grad_h[h < 0] = 0                   # the ReLU passes gradient only where h > 0
    grad_w1 = x.t().mm(grad_h)          # gradient of w1

    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

Result:

Epoch:0 , Loss:43615284.0000
Epoch:1 , Loss:91471992.0000
Epoch:2 , Loss:330298272.0000
Epoch:3 , Loss:683488832.0000
Epoch:4 , Loss:122780872.0000
Epoch:5 , Loss:20005414.0000
Epoch:6 , Loss:9135615.0000
Epoch:7 , Loss:5064396.5000
Epoch:8 , Loss:3213300.2500
Epoch:9 , Loss:2285546.5000
Epoch:10 , Loss:1784441.1250
Epoch:11 , Loss:1492424.2500
Epoch:12 , Loss:1307289.6250
Epoch:13 , Loss:1179117.7500
Epoch:14 , Loss:1082826.7500
Epoch:15 , Loss:1005753.7500
Epoch:16 , Loss:940913.1250
Epoch:17 , Loss:884514.8750
Epoch:18 , Loss:834361.0000
Epoch:19 , Loss:789175.0625

 
