Too many open files. Communication with the workers is no longer possible.

Problem Description

When PyTorch's DataLoader is used with a large batch_size and num_workers, training runs for a while and then raises the following error:

RuntimeError: Too many open files. Communication with the workers is no longer possible. Please increase the limit using ulimit -n in the shell or change the sharing strategy by calling torch.multiprocessing.set_sharing_strategy('file_system') at the beginning of your code
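For reference, a setup of the kind described might look like the sketch below. The dataset, batch size, and worker count are made up for illustration; whether the limit is actually hit depends on the shell's open-file limit (`ulimit -n`) and how many batches are in flight at once.

import torch
from torch.utils.data import Dataset, DataLoader

class RandomImages(Dataset):
    # Hypothetical dataset: every item is a tensor that a worker
    # process has to hand back to the main process.
    def __len__(self):
        return 10_000

    def __getitem__(self, idx):
        return torch.randn(3, 224, 224), 0

if __name__ == "__main__":
    # Large batch_size / num_workers as described above (values are made up).
    loader = DataLoader(RandomImages(), batch_size=256, num_workers=16)
    for epoch in range(100):
        for images, labels in loader:
            pass  # training step would go here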

The message offers two fixes: raise the shell's open-file limit with `ulimit -n`, or switch the tensor sharing strategy to `file_system`. The second approach is shown below.

Solution

After importing torch at the beginning of your code, add the following two lines:

import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')
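For context: on Linux, PyTorch's default sharing strategy (file_descriptor) shares each tensor's storage between worker and main process through an open file descriptor, so a DataLoader with many workers and many in-flight batches can exhaust the per-process limit reported by `ulimit -n`. The file_system strategy passes file names in shared memory instead, so it does not keep descriptors open; the trade-off is that shared-memory files can leak if a process dies abruptly (PyTorch runs a torch_shm_manager daemon to clean them up). Either way, the call has to execute before the DataLoader workers are created, which is why it belongs at the very beginning of the script.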

Reference blog: https://www.cnblogs.com/ltkekeli1229/p/16897522.html


Origin: blog.csdn.net/jacke121/article/details/131443170