Deep learning framework PyTorch, getting started and practice (1): Basic use of torch

 

Main content:

      1. Defining tensors

      2. Converting between tensors and numpy arrays

      3. Accelerating tensors with CUDA

      4. Wrapping tensors in Variable for autograd

 

# -*- coding: utf-8 -*-
"""
Created on Thu Aug 8 16:40:47 2019
PyTorch quick start tutorial
Reference book: "Deep learning framework pytorch: Getting Started and Practice"
@author: zhaoqidong
"""
import torch as t
import numpy as np


##################### 1. Basic use of Tensor #####################
# 1.1 Construct a 5x3 matrix, allocated but uninitialized
x1 = t.Tensor(5, 3)
print(x1)

# 1.2 Construct a 5x3 matrix initialized with random values uniformly distributed in [0, 1]
x2 = t.rand(5, 3)
print(x2)

# 1.3 Inspect the shape of the matrix
print(x2.shape)
print(x2.size())  # equivalent way of writing it
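
Besides t.Tensor and t.rand, a few other constructors come up constantly; the lines below are an aside added here for reference (not part of the original post), using only standard torch functions with illustrative variable names.

# Aside (not in the original post): other common ways to create tensors
x3 = t.zeros(5, 3)               # all zeros
x4 = t.ones(5, 3)                # all ones
x5 = t.tensor([[1, 2], [3, 4]])  # build directly from a Python list
x6 = t.arange(0, 10, 2)          # 0, 2, 4, 6, 8
print(x3, x4, x5, x6)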

# 1.4 Tensor addition
y = t.rand(5, 3)
z = x1 + y
print(z)
y2 = t.add(y, x1)  # equivalent way of writing the addition
print(y2)
y2.add_(100)
print(y2)
# Note: add_ is not the same as add.
# Functions ending in _ (such as add_) modify the tensor they are called on in place;
# add returns a new tensor and leaves its operands unchanged.
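
To make the in-place versus out-of-place distinction concrete, here is a small sketch added for illustration (not from the original post); it also uses the out= argument of t.add, which writes the result into a pre-allocated tensor. The variable names a, b, c, result are illustrative.

# Sketch added for illustration: in-place vs. out-of-place addition
a = t.ones(2, 2)
b = t.ones(2, 2)
c = t.add(a, b)          # out-of-place: returns a new tensor, a is unchanged
result = t.Tensor(2, 2)
t.add(a, b, out=result)  # out-of-place, but writes into a pre-allocated tensor
a.add_(b)                # in-place: a itself now holds the sum
print(a, c, result)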
######################## 2. Combining Tensor and numpy ###########################
# An operation that a tensor does not support can be done by converting to numpy first,
# running the operation there, and converting the result back to a tensor afterwards.
# 2.1 Convert a tensor to a numpy array
a_tensor = t.ones(5)
print(a_tensor)
b_np = a_tensor.numpy()  # tensor -> numpy
print(b_np)
# 2.2 Convert a numpy array to a tensor
c_np = np.ones(5)
print(c_np)
d_tensor = t.from_numpy(c_np)  # numpy -> tensor
print(d_tensor)
# Note: a tensor and the numpy array converted from/to it share memory!
# Shared memory means the conversion is fast,
# but it also means that changing the value on one side changes the other side as well.
# Verified as follows:
a_tensor.add_(1000)
print("shared memory:", a_tensor)
print("shared memory:", b_np)
################## 3. Tensors can be accelerated with CUDA ######################
if t.cuda.is_available():
    y = y.cuda()
    y2 = y2.cuda()
    print(y + y2)
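
A more device-agnostic way to write the same thing, standard since PyTorch 0.4, is to pick a torch.device once and move tensors with .to(); this is an aside added here, not part of the original post.

# Aside: device-agnostic style using t.device and .to()
device = t.device("cuda" if t.cuda.is_available() else "cpu")
y = y.to(device)
y2 = y2.to(device)
print(y + y2)  # runs on the GPU when available, otherwise on the CPU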
    
##################### 4. Autograd: automatic differentiation #######################
# autograd.Variable is the core class of autograd. After wrapping a tensor in it,
# calling backward() computes the gradients of the backward pass automatically.

from torch.autograd import Variable
x_var = Variable(t.ones(2, 2), requires_grad=True)
print(x_var)
y_var = x_var.sum()
print(y_var)
print(y_var.grad_fn)
y_var.backward()
print("first backward pass:", x_var.grad)
y_var.backward()
print("second backward pass:", x_var.grad)
# Note: grad accumulates across backward passes. Deep learning uses multilayer
# neural networks, and each backward pass adds its result onto whatever is
# already stored in grad from the previous pass.
# For this reason, gradients are zeroed before each backward pass during training.
print(x_var.grad)

# Zero the gradient
x_var.grad.data.zero_()
y_var.backward()
print("backward pass after zeroing:", x_var.grad)

  
