PyTorch in practice: an RNN on a sine wave

Theory walkthrough:

Defining the RNN network

Dataset

Input time points, shape (50,):

[ 1.          1.20408163  1.40816327  1.6122449   1.81632653  2.02040816
  2.2244898   2.42857143  2.63265306  2.83673469  3.04081633  3.24489796
  3.44897959  3.65306122  3.85714286  4.06122449  4.26530612  4.46938776
  4.67346939  4.87755102  5.08163265  5.28571429  5.48979592  5.69387755
  5.89795918  6.10204082  6.30612245  6.51020408  6.71428571  6.91836735
  7.12244898  7.32653061  7.53061224  7.73469388  7.93877551  8.14285714
  8.34693878  8.55102041  8.75510204  8.95918367  9.16326531  9.36734694
  9.57142857  9.7755102   9.97959184 10.18367347 10.3877551  10.59183673
 10.79591837 11.        ]
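These 50 points match `np.linspace(start, start + 10, num_time_steps)` with `start = 1`; the actual training inputs and targets are the sine values at these points, offset from each other by one step. A sketch of the data preparation (the offset-by-one target construction is an assumption based on the standard version of this example):

```python
import numpy as np
import torch

num_time_steps = 50
start = 1  # the printed sequence starts at 1.0
time_steps = np.linspace(start, start + 10, num_time_steps)
data = np.sin(time_steps).reshape(num_time_steps, 1)

# input x: the first 49 sine values; target y: the value one step ahead
x = torch.tensor(data[:-1]).float().view(1, num_time_steps - 1, 1)
y = torch.tensor(data[1:]).float().view(1, num_time_steps - 1, 1)
print(x.shape)  # torch.Size([1, 49, 1])
```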

Network parameters:

  1. input_size: number of features per time step
  2. hidden_size: size of the hidden state h0
  3. num_layers: number of stacked RNN layers

Input x (with batch_first=True): [batch, seq_len, input_size] → torch.Size([1, 49, 1]); the RNN output is [batch, seq_len, hidden_size] → torch.Size([1, 49, 16])

ht dimensions: [num_layers, batch, hidden_size] → torch.Size([1, 1, 16])
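These shapes can be confirmed with a standalone check (not part of the tutorial code):

```python
import torch
import torch.nn as nn

# same configuration as the tutorial: 1 input feature, 16 hidden units, 1 layer
rnn = nn.RNN(input_size=1, hidden_size=16, num_layers=1, batch_first=True)
x = torch.randn(1, 49, 1)    # [batch, seq_len, input_size]
h0 = torch.zeros(1, 1, 16)   # [num_layers, batch, hidden_size]
out, ht = rnn(x, h0)
print(out.shape)  # torch.Size([1, 49, 16])
print(ht.shape)   # torch.Size([1, 1, 16])
```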

#define hyperparameters
num_time_steps = 50
input_size = 1
hidden_size = 16
output_size = 1
lr = 0.01
hidden_prev = torch.zeros(1,1,hidden_size)     #h0 initialized to zeros
#define the RNN network
class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.rnn = nn.RNN(
            input_size = input_size,   #1 feature per time step
            hidden_size = hidden_size, #16 hidden units
            num_layers = 1,            #number of stacked RNN layers
            batch_first = True,        #True: input shape is [batch, seq_len, input_size]
         )
        for p in self.rnn.parameters():  #parameters(): iterator over all model parameters
            nn.init.normal_(p,mean=0.0,std=0.001)   #initialize weights from a normal distribution
        #fully connected layer, maps each time step [49,16] → [49,1]
        self.linear = nn.Linear(hidden_size,output_size) #linear transform y = Ax + b; arguments are (in_features, out_features); the bias b is included by default
    
    #forward pass
    def forward(self,x,hidden_prev):    #arguments: input x, initial hidden state h0
        #x: [batch, seq, input_size] = torch.Size([1, 49, 1])
        out,hidden_prev = self.rnn(x,hidden_prev)
        #out: torch.Size([1, 49, 16]); hidden_prev (ht): torch.Size([1, 1, 16])
        #reshape the output: [1,49,16] → [49,16]
        out = out.view(-1,hidden_size)  #view: returns a tensor with the same data in a new shape
        out = self.linear(out)          #linear layer: [49,16] → [49,1]
        out = out.unsqueeze(dim=0)      #add back the batch dimension: [49,1] → [1,49,1]
        return out,hidden_prev   #[1, 49, 1], [1, 1, 16]

Model initialization

 model = Net()   #instantiate the network
 criterion = nn.MSELoss()  #mean squared error loss
 optimizer = optim.Adam(model.parameters(),lr)  #register the network parameters with the optimizer
 hidden_prev = torch.zeros(1,1,hidden_size)     #h0 initialized to zeros

Training the model

output,hidden_prev = model(x,hidden_prev)  #x → output, h0 → ht.  shapes: [1, 49, 1], [1, 1, 16]
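Only the forward call is shown above; a full training loop consistent with the pieces defined so far might look like the following sketch (the iteration count and the random window start are assumptions, not from the source):

```python
import numpy as np
import torch
import torch.nn as nn
from torch import optim

num_time_steps, input_size, hidden_size, output_size, lr = 50, 1, 16, 1, 0.01

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.rnn = nn.RNN(input_size=input_size, hidden_size=hidden_size,
                          num_layers=1, batch_first=True)
        for p in self.rnn.parameters():
            nn.init.normal_(p, mean=0.0, std=0.001)
        self.linear = nn.Linear(hidden_size, output_size)

    def forward(self, x, hidden_prev):
        out, hidden_prev = self.rnn(x, hidden_prev)
        out = self.linear(out.view(-1, hidden_size)).unsqueeze(dim=0)
        return out, hidden_prev

model = Net()
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr)
hidden_prev = torch.zeros(1, 1, hidden_size)

for it in range(300):  # iteration count is an assumption; increase for a better fit
    start = np.random.randint(3)  # random window start per iteration (assumption)
    time_steps = np.linspace(start, start + 10, num_time_steps)
    data = np.sin(time_steps).reshape(num_time_steps, 1)
    x = torch.tensor(data[:-1]).float().view(1, num_time_steps - 1, 1)
    y = torch.tensor(data[1:]).float().view(1, num_time_steps - 1, 1)

    output, hidden_prev = model(x, hidden_prev)
    hidden_prev = hidden_prev.detach()  # truncate backprop between iterations
    loss = criterion(output, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Detaching `hidden_prev` each iteration keeps the hidden state's value while cutting the gradient graph, so backpropagation does not extend across iterations.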

Performance evaluation
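No evaluation code appears in the source; a common approach for this example is autoregressive prediction, feeding each output back in as the next input. A sketch of that loop (freshly initialized layers are used here just to make it runnable; in practice you would use the trained model):

```python
import numpy as np
import torch
import torch.nn as nn

hidden_size = 16
rnn = nn.RNN(input_size=1, hidden_size=hidden_size, num_layers=1, batch_first=True)
linear = nn.Linear(hidden_size, 1)
hidden_prev = torch.zeros(1, 1, hidden_size)

time_steps = np.linspace(0, 10, 50)
data = np.sin(time_steps).reshape(50, 1)
x = torch.tensor(data[:-1]).float().view(1, 49, 1)

predictions = []
inp = x[:, 0, :].view(1, 1, 1)  # seed with the first true point
for _ in range(x.shape[1]):
    out, hidden_prev = rnn(inp, hidden_prev)
    inp = linear(out.view(-1, hidden_size)).view(1, 1, 1)  # feed prediction back
    predictions.append(inp.detach().numpy().ravel()[0])
# `predictions` can then be plotted against np.sin(time_steps[1:])
```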

Reposted from blog.csdn.net/qq_31244453/article/details/110743029