Notes on the PyTorch official documentation for the DataLoader data iterator

class torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=<function default_collate>, pin_memory=False, drop_last=False) [source]

Data loader. Combines a dataset and a sampler, and provides single- or multi-process iterators over the dataset.

Parameters:
  • dataset (Dataset) – dataset from which to load the data.
  • batch_size (int, optional) – how many samples per batch to load (default: 1).
  • shuffle (bool, optional) – set to True to have the data reshuffled at every epoch (default: False).
  • sampler (Sampler, optional) – defines the strategy to draw samples from the dataset. If specified, shuffle must be False.
  • batch_sampler (Sampler, optional) – like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.
  • num_workers (int, optional) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process (default: 0)
  • collate_fn (callable, optional) – merges a list of samples to form a mini-batch.
  • pin_memory (bool, optional) – If True, the data loader will copy tensors into CUDA pinned memory before returning them.
  • drop_last (bool, optional) – set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. (default: False)
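A minimal usage sketch of these parameters is shown below. The dataset, tensor shapes, and parameter values here are illustrative assumptions rather than part of the original documentation; the point is simply how batch_size, shuffle, and drop_last interact when iterating.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical dataset: 10 samples with 3 features each, plus integer labels.
features = torch.randn(10, 3)
labels = torch.randint(0, 2, (10,))
dataset = TensorDataset(features, labels)

# Batch the dataset, reshuffling at every epoch and dropping the last
# incomplete batch (10 samples / batch_size 4 -> two full batches per epoch).
loader = DataLoader(dataset, batch_size=4, shuffle=True, drop_last=True)

for batch_features, batch_labels in loader:
    # Each iteration yields tensors of shape (4, 3) and (4,).
    print(batch_features.shape, batch_labels.shape)

Setting num_workers > 0 would instead load these batches in worker subprocesses, and pin_memory=True would return the tensors in CUDA pinned memory for faster host-to-GPU transfer.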


Reposted from blog.csdn.net/weixin_41797117/article/details/80063162