[relion] ERROR: "CudaCustomAllocator out of memory"

RELION manages GPU memory in two ways: "static" and fully dynamic. Static memory is allocated at the start of an iteration and mostly holds large volumes and reconstructions throughout the iteration. Dynamic memory is allocated and released on a per-particle basis. Because the need for dynamic space is unpredictable, RELION currently grabs as much as it can up front and manages it through a custom allocator, hence "CudaCustomAllocator out of memory".

This error means you are running out of dynamic memory, as opposed to memory for large volumes etc.; the box size does not matter. It tends to indicate very noisy, uncertain, or otherwise difficult data that RELION struggles to align or classify. In these cases RELION is extremely cautious and evaluates a very large number of points of possible interest. This level of caution is rarely necessary, but it is the default behavior so as to maintain fidelity. You can recognize this situation by very large (>10,000) numbers of significant points of interest, listed under the label _rlnNumberOfSignificants in the _data.star file.
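To check whether this applies to your run, you can scan the _data.star file and count how many particles have very large _rlnNumberOfSignificants values. Below is a minimal sketch in Python; the parsing assumes the standard STAR `loop_` layout, and the threshold of 10,000 and the inline example values are illustrative, not from the original post.

```python
# Minimal sketch: scan a RELION _data.star file and report how many
# particles exceed a threshold in the _rlnNumberOfSignificants column.
def count_large_significants(lines, threshold=10000):
    col = None              # 1-based column index, once the header is found
    n_total = n_large = 0
    in_loop = False
    for line in lines:
        line = line.strip()
        if line == "loop_":
            in_loop = True
            col = None
            continue
        if in_loop and line.startswith("_rln"):
            # header lines look like "_rlnNumberOfSignificants #12"
            if line.split()[0] == "_rlnNumberOfSignificants":
                col = int(line.split("#")[1])
            continue
        if in_loop and col is not None and line and not line.startswith("_"):
            fields = line.split()
            if len(fields) >= col:
                n_total += 1
                if float(fields[col - 1]) > threshold:
                    n_large += 1
    return n_total, n_large

# Tiny inline example with hypothetical values:
example = """\
data_particles
loop_
_rlnImageName #1
_rlnNumberOfSignificants #2
img1.mrcs 23456
img2.mrcs 120
"""
total, large = count_large_significants(example.splitlines())
print(total, large)  # → 2 1
```

In practice you would pass `open("run_data.star")` instead of the inline example; if most particles land above the threshold, the --maxsig workaround described below is likely to help.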

You can impose a limit on the number of points to examine with the flag --maxsig, which truncates the detailed search below what RELION would otherwise use. For 2D, --maxsig 50 has worked well, both speeding up classification and better differentiating classes. For 3D, --maxsig 2000 is a safe setting with minimal effect on results while still safeguarding against large dynamic allocations. As suggested by Pablo, you can also reduce the number of threads, but I expect this will have a smaller effect and will slow you down, whereas --maxsig tends to speed things up.
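In practice the flag is simply appended to the refinement command (or entered in the GUI's "Additional arguments" field). A hypothetical 3D classification invocation might look like the sketch below; every argument except --maxsig is a placeholder for whatever your job already uses.

```shell
# Hypothetical example: capping the number of significant points in 3D.
# Keep your existing arguments and append --maxsig; the file names and
# job directory here are placeholders.
relion_refine --i particles_data.star --o Class3D/job_maxsig/run \
    --ref initial_model.mrc --gpu --j 4 \
    --maxsig 2000
```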

Disclaimer: setting --maxsig too low will start to compromise the Bayesian approach, but generally only in the low-resolution regime, which is where the maxsig limit takes effect.


Reposted from blog.csdn.net/LJL_1003/article/details/110230936