Matching Python preprocessing with C++ when deploying with libtorch (C++) on Win10

1. Converting BGR to RGB

OpenCV reads images in BGR channel order by default; they must be converted to RGB order.

In Python:

img = img[:, :, (2, 1, 0)]

In C++:

cv::cvtColor(testimg, testimg, cv::COLOR_BGR2RGB);  // CV_BGR2RGB in older OpenCV versions
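The Python fancy indexing above simply reverses the channel axis. A quick numpy check on a hypothetical 2×2 image:

```python
import numpy as np

# Hypothetical 2x2 BGR image with distinct values per channel
bgr = np.arange(12).reshape(2, 2, 3)

# Index (2, 1, 0) on the last axis reverses the channels: BGR -> RGB
rgb = bgr[:, :, (2, 1, 0)]

# Equivalent to a plain reversed slice on the channel axis
assert np.array_equal(rgb, bgr[:, :, ::-1])

print(rgb[0, 0])  # first pixel's channels, reversed: [2 1 0]
```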

2. Image matrix layout transformation

1) OpenCV reads an image into a (height, width, channels) matrix, i.e. channels-last.

Deep learning frameworks, because convolutions operate per channel, usually expect channels-first layout, i.e. (channels, height, width). To meet this requirement:

print(img.shape)

img = img.transpose(2,0,1)

print(img.shape)

Output:

(480, 640, 3)

(3, 480, 640)

 

2) When building a CNN, the image data usually needs an extra batch dimension, giving (batch_size, channels, height, width). For this requirement:

img = np.expand_dims(img, axis=0)

print(img.shape)

Output:

(1, 3, 480, 640)
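Putting sections 1 and 2 together, the whole Python-side preprocessing can be sketched as follows (a random array stands in for the cv2.imread result; the 480×640 shape is just an example):

```python
import numpy as np

# Stand-in for cv2.imread(...): a hypothetical 480x640 BGR float image
img = np.random.rand(480, 640, 3).astype(np.float32)

img = img[:, :, (2, 1, 0)]          # BGR -> RGB
img = img.transpose(2, 0, 1)        # (H, W, C) -> (C, H, W)
img = np.expand_dims(img, axis=0)   # add batch dim -> (1, C, H, W)

print(img.shape)  # (1, 3, 480, 640)
```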

 

The corresponding C++ code:

// testimg must already hold float data (e.g. CV_32FC3) to match torch::kFloat32
auto img_tensor = torch::from_blob(testimg.data, { 1, 480, 640, 3 }, torch::kFloat32);
img_tensor = img_tensor.permute({ 0, 3, 1, 2 });  // NHWC -> NCHW
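Note the C++ code builds the tensor as (1, H, W, 3) first and then permutes to (1, 3, H, W), while the Python code transposes first and then adds the batch dimension. A numpy check (hypothetical sizes) confirms the two routes agree:

```python
import numpy as np

img = np.random.rand(480, 640, 3).astype(np.float32)

# Python route: HWC -> CHW, then add the batch dimension
a = np.expand_dims(img.transpose(2, 0, 1), axis=0)

# C++ route mirrored in numpy: add the batch dim first, then permute {0,3,1,2}
b = np.transpose(img[np.newaxis, ...], (0, 3, 1, 2))

assert a.shape == b.shape == (1, 3, 480, 640)
assert np.array_equal(a, b)
```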

3. The return value of the model's forward in C++

The return type is torch::jit::IValue:

torch::jit::IValue result = module->forward(inputs);

 

If there is only one return value, it can be converted directly to a tensor:

auto outputs = module->forward(inputs).toTensor();

 

Note that if there are multiple return values, you need to convert to a tuple first:

auto outputs = module->forward(inputs).toTuple();

torch::Tensor out1 = outputs->elements()[0].toTensor();

torch::Tensor out2 = outputs->elements()[1].toTensor();

 

4. Using the GPU

Move both the model and the inputs onto the GPU:

std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[2]);

//put to cuda

module->to(at::kCUDA);

 

// Note: move the tensor to the GPU, not the std::vector<torch::jit::IValue>

std::vector<torch::jit::IValue> inputs;

image_tensor = image_tensor.to(at::kCUDA);  // to() returns a new tensor; reassign it

inputs.push_back(image_tensor);

You can also specify a GPU id: to(torch::Device(torch::kCUDA, id))



Origin blog.csdn.net/u013925378/article/details/103385742