RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be same

In short, the input tensor is of type torch.FloatTensor (on the CPU), while the model's weights are of type torch.cuda.FloatTensor (on the GPU). This usually happens when, after editing the code locally, you forget to move some of the tensors passed into forward() to the device with .to(device), which was exactly my case.
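
A minimal sketch that reproduces the mismatch, assuming a CUDA-capable machine: the model is moved to the GPU while the input tensor is left on the CPU.

import torch
import torch.nn as nn

model = nn.Linear(10, 2).to('cuda')   # weights become torch.cuda.FloatTensor
x = torch.randn(4, 10)                # input stays torch.FloatTensor (CPU)
out = model(x)                        # raises the RuntimeError above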

The solution is as follows:

Apply the fix where each batch is unpacked: move every tensor in the batch to the device. This usually goes inside the for batch in self.train_data loop (or the train_dataloader loop):

if self.args.device != 'cpu':
    # Move every tensor in the batch to the target device; leave non-tensor items untouched.
    batch = tuple(tup.to(self.args.device) if isinstance(tup, torch.Tensor) else tup for tup in batch)
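
For context, a sketch of how this line typically sits inside a training loop; self.args.device, self.train_data, self.model and self.optimizer are placeholder names assumed from the snippet above, not a specific framework API.

import torch

def train_epoch(self):
    self.model.train()
    for batch in self.train_data:  # e.g. a DataLoader yielding tuples of tensors
        if self.args.device != 'cpu':
            # move only the tensors; leave any non-tensor items untouched
            batch = tuple(t.to(self.args.device) if isinstance(t, torch.Tensor) else t for t in batch)
        inputs, labels = batch
        self.optimizer.zero_grad()
        loss = self.model(inputs, labels)   # assumes the model computes and returns the loss
        loss.backward()
        self.optimizer.step()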

Conversely, if the error is

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be same

then it is the model/network that still needs to(device):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)   # move the network's parameters to the same device as the inputs
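
Putting both fixes together, a minimal sketch that keeps the network and its inputs on the same device (the nn.Linear model and random input are placeholders):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)   # weights live on the chosen device
x = torch.randn(4, 10).to(device)     # inputs moved to the same device
out = model(x)                        # no input/weight type mismatch in either direction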

Origin blog.csdn.net/weixin_42455006/article/details/125268319