[Solved] Deep learning failed to allocate 4.67G (5019415296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY

This problem occurs in two situations: either the GPU is not specified, which leads to insufficient memory on the default card, or the memory limit is exceeded because of your own settings.

Solution to the first situation:

Step 1: specify the GPU by adding the following at the top of your script:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # use physical GPU 1; change the index to the card you want

Step 2: limit the GPU memory available to the current script. Add the first line below near the top of your code and modify the Session statement as shown in the second line:

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
# 0.333 is the fraction of the GPU's memory this process is allowed to use

# If you are using TensorFlow 2, the code above is incompatible; use the following instead
gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.33)
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
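If you are on TensorFlow 2.x and would rather not go through the compat.v1 Session API, a rough equivalent is sketched below (not from the original post): either enable memory growth so the process only allocates what it needs, or hard-cap the process with a logical device configuration. The memory_limit value of 4096 MB is an illustrative assumption, roughly a third of a 12 GB card; adjust it to your hardware. Both calls must run before any GPU work starts, and the two options cannot be combined on the same device.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Option A: allocate GPU memory on demand instead of grabbing the whole card up front.
    tf.config.experimental.set_memory_growth(gpus[0], True)

    # Option B (alternative to A): cap this process at about 4 GB of GPU memory.
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])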

Solution to the second situation:

See this CSDN post: "The GPU is very idle but always prompts GPU out of memory" on zhuiyuan2012's blog.


Origin blog.csdn.net/qq_43681154/article/details/128884456