[japronto] About GPU deployment of deep learning models

Deep Learning Deployment

The project will be published on GitHub later.


Question 1

The service tests fine without the model, but problems appear once the model is added. CUDA does not support fork-based multiprocessing, so the spawn start method has to be used. However, the model is loaded in the main process while each request is handled in a subprocess, and an error occurs when the subprocess tries to inherit the model. The message suggests that the required objects are not all global variables, yet the model is already a global variable, and not every parameter can be turned into one. Is there another way to solve this?


Origin blog.csdn.net/m0_37661841/article/details/109243262