RuntimeError: CUDA out of memory when training YOLOv5

The usual advice found online is to reduce the batch_size, but after many experiments I found it did not help: the error still occurred even after I lowered the batch size to 8.

In my case the full error was: RuntimeError: CUDA out of memory. Tried to allocate 150.00 MiB (GPU 0; 8.00 GiB total capacity; 204.53 MiB already allocated; 5.99 GiB free; 220.00 MiB reserved in total by PyTorch).

The message itself reports 5.99 GiB of free memory, yet PyTorch refuses to use it and raises an out-of-memory error anyway.
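Plugging the numbers from the error message into a quick check makes the contradiction concrete: the requested block is far smaller than the reported free memory, so raw capacity is not the problem (the failure typically comes from fragmentation or memory held outside PyTorch's allocator).

```python
MIB = 1 << 20  # bytes in one MiB
GIB = 1 << 30  # bytes in one GiB

# Values taken directly from the error message above
requested = 150 * MIB           # "Tried to allocate 150.00 MiB"
total = 8 * GIB                 # "8.00 GiB total capacity"
allocated = int(204.53 * MIB)   # "204.53 MiB already allocated"
free = int(5.99 * GIB)          # "5.99 GiB free"

# The requested allocation easily fits in the reported free memory
print(requested < free)   # True
print(round(free / total, 2))  # ~0.75: most of the card is reported free
```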

Solution:

1. Open utils/dataloaders.py inside the yolov5 folder.

2. Ctrl+F to search for num_workers and reduce the value of nw passed as num_workers=nw. I set it to 4; the default on my machine was 8.

After that change, training ran without the error and GPU memory was allocated normally.
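The change described in the steps above can be sketched as follows. This is an approximation of how YOLOv5's dataloaders.py caps nw (the exact expression varies between versions, and the helper name here is mine, not YOLOv5's): lowering the worker cap, e.g. from 8 to 4, lowers nw and with it the memory pressure from dataloader worker processes.

```python
import os

def num_workers(batch_size: int, workers: int, n_devices: int = 1) -> int:
    """Sketch of the nw computation in YOLOv5's utils/dataloaders.py:
    nw is capped by CPUs per device, the batch size, and the requested
    worker count. Reducing `workers` (8 -> 4) is the fix from this post."""
    return min(os.cpu_count() // max(n_devices, 1),
               batch_size if batch_size > 1 else 0,
               workers)

# With the cap lowered to 4, at most 4 worker processes are spawned
print(num_workers(batch_size=16, workers=4))
```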


Origin blog.csdn.net/weixin_43945848/article/details/126266421