Problem Description
When loading data with PyTorch's DataLoader using a relatively large batch_size and num_workers, training runs for a while and then raises an error:
RuntimeError: Too many open files. Communication with the workers is no longer possible. Please increase the limit using `ulimit -n` in the shell or change the sharing strategy by calling torch.multiprocessing.set_sharing_strategy('file_system') at the beginning of your code
Solution
At the top of the script, right after importing torch, add:
```python
import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')
```
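As a usage illustration only, here is a minimal sketch of where the call sits in a training script; the dataset, batch_size, and num_workers values below are placeholders rather than the original setup. The key point is that the sharing strategy is set before the DataLoader spawns its worker processes.

```python
import torch
import torch.multiprocessing
from torch.utils.data import DataLoader, TensorDataset

# Switch worker-to-main-process tensor sharing to the file_system strategy
# before any DataLoader workers are created.
torch.multiprocessing.set_sharing_strategy('file_system')

# Placeholder dataset for illustration; replace with your own Dataset.
dataset = TensorDataset(torch.randn(1000, 3, 32, 32), torch.randint(0, 10, (1000,)))
loader = DataLoader(dataset, batch_size=256, num_workers=8, shuffle=True)

for epoch in range(10):
    for images, labels in loader:
        pass  # training step goes here
```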