There are two main ways to build a neural network in PyTorch: defining a class that inherits from torch.nn.Module, or using torch.nn.Sequential to build the network quickly.
1) First, we load the data:
import torch
import torch.nn.functional as F
# Regression data: y = x^2 plus noise
x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)
y = x.pow(2) + 0.2 * torch.rand(x.size())
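A quick sanity check on this data: torch.unsqueeze turns the 1-D output of torch.linspace into a 100x1 column, which is the 2-D (batch, features) shape that nn.Linear expects. A minimal sketch:

```python
import torch

# Same regression data as above: y = x^2 plus uniform noise
x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)
y = x.pow(2) + 0.2 * torch.rand(x.size())

print(x.shape)  # torch.Size([100, 1])
print(y.shape)  # torch.Size([100, 1])
```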
2) Templates for the two methods:
2.1: class method: the format is largely fixed. __init__ defines the neural layers (the number of layers and the number of neurons in each), while forward, overriding the nn.Module method, implements the forward pass (including the activation functions).
# method 1
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        pass

    def forward(self, x):
        pass
For example:
# method 1
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(1, 10)
        self.prediction = torch.nn.Linear(10, 1)

    def forward(self, x):
        x = F.relu(self.hidden(x))  # use relu as the activation function
        x = self.prediction(x)      # from the last hidden layer to the output layer, no activation is applied (one can be added, but usually is not)
        return x

net = Net()
print(net)
'''
Output:
Net(
  (hidden): Linear(in_features=1, out_features=10, bias=True)   # "hidden" is just the attribute name self.hidden; any other name would work
  (prediction): Linear(in_features=10, out_features=1, bias=True)
)
'''
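A net built this way is trained like any other nn.Module. Here is a minimal training sketch on the regression data from step 1; the optimizer (SGD, lr=0.2), loss (MSELoss), and step count are illustrative choices, not part of the original:

```python
import torch
import torch.nn.functional as F

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(1, 10)
        self.prediction = torch.nn.Linear(10, 1)

    def forward(self, x):
        x = F.relu(self.hidden(x))  # activation on the hidden layer
        return self.prediction(x)   # no activation on the output layer

torch.manual_seed(1)  # make the run reproducible
x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)
y = x.pow(2) + 0.2 * torch.rand(x.size())

net = Net()
optimizer = torch.optim.SGD(net.parameters(), lr=0.2)
loss_func = torch.nn.MSELoss()

for step in range(200):
    prediction = net(x)                 # forward pass
    loss = loss_func(prediction, y)     # mean squared error
    optimizer.zero_grad()               # clear old gradients
    loss.backward()                     # backpropagate
    optimizer.step()                    # update parameters

print(loss.item())  # small value: the net has fit y = x^2 + noise
```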
2.2: Quick build with Sequential
template:
net2=torch.nn.Sequential( )
For example:
net2 = torch.nn.Sequential(
    torch.nn.Linear(1, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 1)
)
print(net2)
'''
Sequential(
  (0): Linear(in_features=1, out_features=10, bias=True)
  (1): ReLU()
  (2): Linear(in_features=10, out_features=1, bias=True)
)
'''
The two approaches are essentially equivalent. One small difference: the quick-build version treats the activation function (ReLU, etc.) as a layer of its own, so it appears in the printed structure, whereas in the class version the activation is applied inside forward.
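The equivalence can be checked concretely: if we copy the parameters of the class-based net into the Sequential one, both produce identical outputs. The weight-copying step below is my addition for illustration, not part of the original:

```python
import torch
import torch.nn.functional as F

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(1, 10)
        self.prediction = torch.nn.Linear(10, 1)

    def forward(self, x):
        x = F.relu(self.hidden(x))
        return self.prediction(x)

net = Net()
net2 = torch.nn.Sequential(
    torch.nn.Linear(1, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 1)
)

# Copy weights so both nets compute the same function
net2[0].load_state_dict(net.hidden.state_dict())
net2[2].load_state_dict(net.prediction.state_dict())

x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)
print(torch.allclose(net(x), net2(x)))  # True
```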