【Error】Trying to backward through the graph a second time, but the buffers have already been freed

Error:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

Code:

        optimizer.zero_grad()  # word loss
        features = encoder(src)
        loss = decoder(features, trg)
        loss.backward()  # frees the buffers of the graph built through encoder(src)
        grad_norm = torch.nn.utils.clip_grad_norm_(decoder.parameters(), opt.grad_clip)
        optimizer.step()

        optimizer.zero_grad()  # pos loss
        # features = encoder(src)  # commented out: the stale features are reused
        loss = decoder.forward_pos(features, trg_pos)
        loss.backward()  # second backward through the already-freed graph -> error
        grad_norm = torch.nn.utils.clip_grad_norm_(decoder.parameters(), opt.grad_clip)
        optimizer.step()
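
The cause: the first `loss.backward()` frees the intermediate buffers of the graph built through `encoder(src)` (PyTorch frees them by default to save memory). Because the second step reuses the stale `features` instead of recomputing them, its `loss.backward()` has to traverse that already-freed graph and raises the error. A minimal, self-contained sketch of the same failure mode, with a hypothetical `torch.nn.Linear` standing in for the real encoder and plain sums standing in for the decoder losses:

        import torch

        x = torch.randn(4, 3)
        encoder = torch.nn.Linear(3, 5)   # stand-in for the real encoder
        features = encoder(x)             # builds a graph whose buffers backward() frees

        loss_word = features.sum()        # stand-in for decoder(features, trg)
        loss_word.backward()              # first backward: the graph's buffers are freed here

        loss_pos = (features ** 2).sum()  # stand-in for decoder.forward_pos(features, trg_pos)
        loss_pos.backward()               # RuntimeError: Trying to backward through the graph a second time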

The code was changed to:

        optimizer.zero_grad()  # word loss
        features = encoder(src)
        loss = decoder(features, trg)
        loss.backward()
        grad_norm = torch.nn.utils.clip_grad_norm_(decoder.parameters(), opt.grad_clip)
        optimizer.step()

        optimizer.zero_grad()  # pos loss
        features = encoder(src)  # re-run the encoder so a fresh graph exists
        loss = decoder.forward_pos(features, trg_pos)
        loss.backward()  # backward through the new graph succeeds
        grad_norm = torch.nn.utils.clip_grad_norm_(decoder.parameters(), opt.grad_clip)
        optimizer.step()

The error disappeared: re-running `encoder(src)` builds a fresh graph, so the second `loss.backward()` no longer has to traverse the freed one.
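
Recomputing `features` costs one extra encoder forward pass per batch. The alternative the error message itself suggests is to keep the single forward pass and retain the graph on the first backward. A sketch of that variant, under the assumption that reusing the pre-`optimizer.step()` activations for the second loss is acceptable:

        optimizer.zero_grad()  # word loss
        features = encoder(src)
        loss = decoder(features, trg)
        loss.backward(retain_graph=True)  # keep the graph's buffers for the second backward
        grad_norm = torch.nn.utils.clip_grad_norm_(decoder.parameters(), opt.grad_clip)
        optimizer.step()

        optimizer.zero_grad()  # pos loss
        loss = decoder.forward_pos(features, trg_pos)
        loss.backward()  # last backward through this graph: no retain needed
        grad_norm = torch.nn.utils.clip_grad_norm_(decoder.parameters(), opt.grad_clip)
        optimizer.step()

Note that retaining the graph keeps the encoder activations in memory between the two steps. If the POS loss should not backpropagate into the encoder at all, passing `features.detach()` to `decoder.forward_pos` also avoids the error without retaining anything.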


Other reference: https://blog.csdn.net/u010829672/article/details/79538853



Reposted from blog.csdn.net/ccbrid/article/details/80050399