Solving a TensorRT problem when deploying with Flask (pycuda._driver.LogicError: explicit_context_dependent failed)

Problem Description:

Today I used Flask to build a server for model inference. When running the model with TensorRT, the following error occurred:

line 39, in allocate_buffers
    stream = cuda.Stream()  # pycuda buffer operations
pycuda._driver.LogicError: explicit_context_dependent failed: invalid device context - no currently active context?
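
For context, allocate_buffers here is the usual buffer-allocation helper from the TensorRT Python samples. A simplified sketch (using the pre-TensorRT-8.5 binding API, which matches the era of this post) looks like this:

import pycuda.driver as cuda
import tensorrt as trt

def allocate_buffers(engine):
    # Allocate a pinned host buffer and a device buffer for every binding,
    # plus the CUDA stream used for asynchronous copies.
    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()  # <- the line that raises the LogicError
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append((host_mem, device_mem))
        else:
            outputs.append((host_mem, device_mem))
    return inputs, outputs, bindings, stream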

 

I searched online and found that this error means pycuda.driver has not been initialized, so there is no active CUDA context to use. The commonly suggested fix is to add the following two imports:

import pycuda.driver as cuda
import pycuda.autoinit

I added these two imports to my code, but the same error persisted. Further testing showed that the error only occurs when the TensorRT inference runs inside the Flask app; running the same inference code outside of Flask raises no error.

After a lot more searching and experimentation, it turned out to be a problem with how Flask is started: launching the app with debug=True triggers the error, while starting it without debug mode does not. My best guess at the cause is that debug=True enables Werkzeug's auto-reloader, which re-launches the application in a child process, so the CUDA context created at import time may not be current in the process and thread that actually serve the requests.
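
Condensed down, the setup that failed had this shape (the route name is illustrative, and the engine loading and real inference code are omitted, since cuda.Stream() alone is enough to show the structure):

import pycuda.autoinit  # noqa: F401 -- CUDA context is created here, at import time
import pycuda.driver as cuda
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

@app.route('/infer')
def infer():
    # The same kind of call that fails inside allocate_buffers: it needs
    # an active CUDA context in the serving process/thread.
    stream = cuda.Stream()
    return 'ok'

if __name__ == '__main__':
    socketio.run(app, host='127.0.0.1', port=12340, debug=True)  # fails with debug=True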

 

Solution:

Remove debug=True, or change it to debug=False.

socketio.run(app, host='127.0.0.1', port=12340, debug=True)

Change to:

socketio.run(app, host='127.0.0.1', port=12340)
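
If you want to keep debug mode, disabling just the auto-reloader should also work, since the reloader's extra child process is the likely culprit; this is an untested assumption on my part (use_reloader is one of the keyword arguments Flask-SocketIO forwards to the underlying server):

socketio.run(app, host='127.0.0.1', port=12340, debug=True, use_reloader=False)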

 

Reference:

https://blog.csdn.net/weixin_42279044/article/details/102819670
