pytorch_lightning RuntimeError: The size of tensor a (5) must match the size of tensor b (2) at non-singleton dimension 1

root@7128c9029a5b:~/AI-lab/power_dekor_danyang# python train.py
<class 'list'> [0]
fold_i:  0
soft_labels is None
soft_labels is None
/opt/conda/lib/python3.8/site-packages/pytorch_lightning/utilities/warnings.py:18: RuntimeWarning: Displayed epoch numbers in the progress bar start from "1" until v0.6.x, but will start from "0" in v0.8.0.
  warnings.warn(*args, **kwargs)
/opt/conda/lib/python3.8/site-packages/pytorch_lightning/utilities/warnings.py:18: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` in the `DataLoader` init to improve performance.
  warnings.warn(*args, **kwargs)
/opt/conda/lib/python3.8/site-packages/pytorch_lightning/utilities/warnings.py:18: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` in the `DataLoader` init to improve performance.
  warnings.warn(*args, **kwargs)
/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py:780: UserWarning: Note that order of the arguments: ceil_mode and return_indices will changeto match the args list in nn.MaxPool2d in a future release.
  warnings.warn("Note that order of the arguments: ceil_mode and return_indices will change"
Traceback (most recent call last):
  File "train.py", line 179, in <module>
    trainer.fit(model, train_dataloader, val_dataloader)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 704, in fit
    self.single_gpu_train(model)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 477, in single_gpu_train
    self.run_pretrain_routine(model)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 864, in run_pretrain_routine
    self.train()
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 363, in train
    self.run_training_epoch()
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 435, in run_training_epoch
    _outputs = self.run_training_batch(batch, batch_idx)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 604, in run_training_batch
    loss, batch_output = optimizer_closure()
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 571, in optimizer_closure
    output_dict = self.training_forward(
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 728, in training_forward
    output = self.model.training_step(*args)
  File "train.py", line 51, in training_step
    loss = self.criterion(scores, labels)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/AI-lab/power_dekor_danyang/loss_function.py", line 16, in forward
    return torch.mean(torch.sum(-labels * self.log_softmax(preds), -1))
RuntimeError: The size of tensor a (5) must match the size of tensor b (2) at non-singleton dimension 1
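The cause of this error is visible in the last frame of the traceback: the custom soft-label cross-entropy in loss_function.py multiplies `labels` element-wise with `log_softmax(preds)`, so both tensors must share the same class dimension. Here the labels are 5-dimensional one-hot vectors while the model head only outputs 2 logits, and broadcasting fails at dimension 1. A minimal reproduction (the loss function is copied from the traceback; the batch size of 4 is just an illustration):

```python
import torch
import torch.nn as nn

# Same soft-label cross-entropy as loss_function.py line 16
log_softmax = nn.LogSoftmax(dim=-1)

def soft_ce(preds, labels):
    return torch.mean(torch.sum(-labels * log_softmax(preds), -1))

labels_5cls = torch.eye(5)[torch.tensor([0, 1, 2, 3])]  # (4, 5) one-hot labels

# Head configured for 2 classes -> shapes (4, 5) vs (4, 2) cannot broadcast
try:
    soft_ce(torch.randn(4, 2), labels_5cls)
except RuntimeError as e:
    print(e)  # The size of tensor a (5) must match the size of tensor b (2) ...

# Head configured for 5 classes -> shapes match and the loss computes
loss = soft_ce(torch.randn(4, 5), labels_5cls)
print(loss.shape)  # torch.Size([])
```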

Fix: modify models.py

This is a 5-class classification task, so the classification head must output 5 logits. Change num_class to 5 in the head's constructor:

def __init__(self, num_class=5, emb_size=2048, s=16.0):

and change the first argument passed to BinaryHead to 5 as well:

self.binary_head = BinaryHead(5, emb_size=2048, s=1)
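To check that the change resolves the shape mismatch, the head can be tested in isolation. The sketch below is a hypothetical stand-in for the BinaryHead in models.py, whose real implementation is not shown in the post; the common variant (an L2-normalized linear layer scaled by `s`) is assumed here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryHead(nn.Module):
    """Hypothetical sketch: L2-normalized linear head scaled by s."""
    def __init__(self, num_class=5, emb_size=2048, s=16.0):
        super().__init__()
        self.s = s
        self.fc = nn.Linear(emb_size, num_class, bias=False)

    def forward(self, feat):
        feat = F.normalize(feat, dim=-1)  # unit-norm embedding
        return self.fc(feat) * self.s     # logits: (batch, num_class)

# With num_class=5 the head emits 5 logits per sample, matching the
# 5-dimensional one-hot labels expected by the loss.
head = BinaryHead(5, emb_size=2048, s=1)
print(head(torch.randn(4, 2048)).shape)  # torch.Size([4, 5])
```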

Origin blog.csdn.net/Qingyou__/article/details/125182929