Running an OCR model on the GPU with ONNX Runtime in cross-platform C++
There is little sample code online showing ONNX Runtime running an OCR model in a GPU environment. After some investigation, it turns out the model initialization looks like this:
void DbNet::setNumThread(int numOfThread) {
    numThread = numOfThread;
    //===session options===
    // Sets the number of threads used to parallelize the execution within nodes
    // A value of 0 means ORT will pick a default
    //sessionOptions.SetIntraOpNumThreads(numThread);
    //set OMP_NUM_THREADS=16
    // Sets the number of threads used to parallelize the execution of the graph (across nodes)
    // If sequential execution is enabled this value is ignored
    // A value of 0 means ORT will pick a default
    sessionOptions.SetInterOpNumThreads(numThread);
    // Sets graph optimization level
    // ORT_DISABLE_ALL -> disable all optimizations
    // ORT_ENABLE_BASIC -> enable basic optimizations (such as redundant node removals)
    // ORT_ENABLE_EXTENDED -> enable extended optimizations (level 1 plus more complex optimizations like node fusions)
    // ORT_ENABLE_ALL -> enable all possible optimizations
    sessionOptions.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED);
}
void DbNet::initModel(const std::string &pathStr) {
#ifdef _WIN32
    std::wstring dbPath = strToWstr(pathStr);
    session = new Ort::Session(env, dbPath.c_str(), sessionOptions);
#else
    OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);
    session = new Ort::Session(env, pathStr.c_str(), sessionOptions);
#endif
    getInputName(session, inputName);
    getOutputName(session, outputName);
}
OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);
This line appends the CUDA execution provider so the model runs on GPU 0; the second argument is the CUDA device id. It must be called before the Ort::Session is created.
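As a minimal sketch (assuming an ONNX Runtime build with the CUDA execution provider), the same thing can also be expressed through the C++ wrapper API, which lets you set the device id via OrtCUDAProviderOptions; the function name makeGpuOptions here is my own illustration, not part of the original project:

```cpp
#include <onnxruntime_cxx_api.h>

// Sketch: build session options that place execution on a chosen GPU.
// Requires an onnxruntime package built with CUDA support.
Ort::SessionOptions makeGpuOptions(int deviceId) {
    Ort::SessionOptions options;
    OrtCUDAProviderOptions cudaOptions{};
    cudaOptions.device_id = deviceId; // 0 = first GPU, 1 = second GPU, ...
    // Equivalent to OrtSessionOptionsAppendExecutionProvider_CUDA(options, deviceId);
    // must run before the Ort::Session is constructed.
    options.AppendExecutionProvider_CUDA(cudaOptions);
    return options;
}
```

If the CUDA provider cannot be loaded (wrong CUDA/cuDNN version, CPU-only build), ONNX Runtime falls back to the CPU provider, so it is worth checking the logs when performance looks suspiciously slow.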
sessionOptions.SetInterOpNumThreads(numThread);
This line sets the number of threads used to run independent operators in parallel (inter-op parallelism). Note that it only takes effect when the parallel execution mode is enabled; under the default sequential mode, the intra-op thread count is what matters.
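To make the distinction concrete, here is a sketch (assuming the standard ONNX Runtime C++ API; configureThreads is my own illustrative helper) showing both knobs together:

```cpp
#include <onnxruntime_cxx_api.h>

// Sketch: intra-op threads parallelize work inside a single operator;
// inter-op threads parallelize independent operators, and are only
// consulted when the execution mode is ORT_PARALLEL (the default
// execution mode is sequential).
void configureThreads(Ort::SessionOptions &options, int numThread) {
    options.SetIntraOpNumThreads(numThread);
    options.SetExecutionMode(ExecutionMode::ORT_PARALLEL);
    options.SetInterOpNumThreads(numThread);
}
```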
For CRNN text recognition, prediction is done in batches: multiple text-line crops are packed into one input tensor and run through the model in a single forward pass.
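Batching requires all lines in the batch to share the same tensor shape, so variable-width crops are typically right-padded to the widest line. The helper below is a self-contained sketch of that packing step (GrayImage and packBatch are illustrative names, not from the original project):

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Single-channel text-line image, already resized to a common height.
struct GrayImage {
    int width = 0;
    int height = 0;
    std::vector<float> pixels; // row-major, size = width * height
};

// Pack a batch of text lines into one flat NCHW buffer (C = 1) by
// right-padding each line with zeros up to the widest width in the batch.
// batchWidth receives the padded width, so the tensor shape is
// {images.size(), 1, height, batchWidth}.
std::vector<float> packBatch(const std::vector<GrayImage> &images,
                             int &batchWidth) {
    batchWidth = 0;
    const int height = images.empty() ? 0 : images[0].height;
    for (const auto &img : images)
        batchWidth = std::max(batchWidth, img.width);

    std::vector<float> tensor(images.size() * height * batchWidth, 0.0f);
    for (std::size_t n = 0; n < images.size(); ++n) {
        const GrayImage &img = images[n];
        for (int y = 0; y < img.height; ++y) {
            const float *src = img.pixels.data() + y * img.width;
            float *dst = tensor.data() + (n * height + y) * batchWidth;
            std::copy(src, src + img.width, dst);
        }
    }
    return tensor;
}
```

The resulting buffer can then be wrapped with Ort::Value::CreateTensor and fed to the recognition session in one Run call.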
All the code and project files can be downloaded from my profile.