MMDeploy SDK usage record (ncnn)

Home: mmpose/projects/rtmpose at main · open-mmlab/mmpose · GitHub

        MMDeploy provides a series of tools that make it easier to deploy OpenMMLab algorithms to various devices and platforms. Currently, MMDeploy can convert PyTorch models to device-independent IR models such as ONNX and TorchScript, and it can also convert an ONNX model into an inference-backend model. Combining the two enables end-to-end model conversion, that is, one-click deployment from the training side to the production side.

1. Model conversion

        The MMDeploy SDK provides both Python and C++ inference APIs. Before running inference through the SDK, you need to convert the corresponding model files on the PC (e.g. pth -> onnx, pth -> ncnn, ...) and dump the information the SDK requires: just add --dump-info to the model conversion command.

        Convert the hand-detection model in RTMPose to ONNX/ncnn; the resulting work directory looks like the listing below (a sample conversion command is sketched after it). Because I did not install TensorRT, there is no end2end.engine file, so I will do the ncnn deployment.

|----{work-dir}
     |----end2end.onnx    # ONNX model
     |----end2end.bin     # ncnn model weights
     |----end2end.param   # ncnn model structure
     |----pipeline.json   #
     |----deploy.json     # JSON meta files for the SDK
     |----detail.json     #
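
        For reference, these files are produced by MMDeploy's tools/deploy.py converter; below is a sketch of the command I would expect here, with the deploy config, model config, checkpoint and test image left as placeholders to fill in for your own setup:

python tools/deploy.py \
    <mmdeploy-ncnn-deploy-config>.py \
    <rtmdet-hand-model-config>.py \
    <rtmdet-hand-checkpoint>.pth \
    <test-image> \
    --work-dir {work-dir} \
    --device cpu \
    --dump-info    # dumps pipeline.json / deploy.json / detail.json for the SDK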

2. Model SDK inference

1. Define the model paths and inputs

DEFINE_string(det_model, "../model/rtm/rtmdet", "Detection model directory");
// pose model directory used in step 4 (flag added here; the path is an assumption, adjust to your layout)
DEFINE_string(pose_model, "../model/rtm/rtmpose", "Pose estimation model directory");
DEFINE_string(image, "../images/test.bmp", "Input image path");
DEFINE_string(device, "cpu", R"(Device name, e.g. "cpu", "cuda")");
DEFINE_double(det_thr, .5, "Detection score threshold");
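
        These gflags definitions follow the MMDeploy C++ demo style; here is a minimal sketch of the surrounding main(), assuming gflags and OpenCV are linked (the image-loading check is my addition):

#include <gflags/gflags.h>
#include <opencv2/opencv.hpp>
#include <iostream>

int main(int argc, char* argv[]) {
    // parse --det_model, --pose_model, --image, --device, --det_thr from the command line
    gflags::ParseCommandLineFlags(&argc, &argv, true);

    // load the input image that is fed to the detector below
    cv::Mat img = cv::imread(FLAGS_image);
    if (img.empty()) {
        std::cerr << "failed to load image: " << FLAGS_image << std::endl;
        return -1;
    }

    // ... hand detection and pose estimation steps from the following sections ...
    return 0;
}

        The code snippets in the steps below run inside this main() after img has been loaded.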

2. Run inference to get the detection and its confidence score, then crop the hand region inside the bbox; the crop is used as the input for the subsequent keypoint detection.

    // hand detection
    mmdeploy::Detector detector(mmdeploy::Model{FLAGS_det_model}, mmdeploy::Device{FLAGS_device});

    mmdeploy::Detector::Result dets = detector.Apply(img);

    // guard against an empty result before indexing it
    if (dets.size() == 0) {
        std::cout << "no hand detected." << std::endl;
        return 0;
    }

    // get the max score detection
    mmdeploy_detection_t target_det = dets[0];
    std::cout << "detection max score: " << target_det.score << std::endl;
    if (target_det.score < FLAGS_det_thr) {
        std::cout << "failed to detect hand." << std::endl;
        return 0;
    }
    std::cout << "successfully detected hand." << std::endl;

    // crop detection result
    cv::Point2f tl(target_det.bbox.left, target_det.bbox.top);
    cv::Point2f br(target_det.bbox.right, target_det.bbox.bottom);
    cv::Mat det_crop = img(cv::Rect(tl, br)).clone();
    cv::imwrite("../images/det_crop.bmp", det_crop);

 3. Visualize the bbox

    // visualize detection
    utils::Visualize v;
    auto det_sess = v.get_session(img);
    det_sess.add_det(target_det.bbox, target_det.label_id, target_det.score, target_det.mask, 0);
    cv::imwrite("../images/det_out.bmp", det_sess.get());

 4. Predict the hand keypoints. In the same way, the cropped hand image is used as the input of RTMPose.

    // pose detection
    mmdeploy::PoseDetector pose_detector(mmdeploy::Model{FLAGS_pose_model}, mmdeploy::Device{FLAGS_device});
    mmdeploy::PoseDetector::Result pose_dets = pose_detector.Apply(det_crop);
    mmdeploy_pose_detection_t target_pose = pose_dets[0];
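
        The keypoints are returned in the coordinate frame of det_crop; to get coordinates in the original image, add back the top-left corner of the detection bbox. A sketch using the same fields that add_pose consumes below (point, score, length):

    // map each keypoint from crop coordinates back to the original image
    for (int i = 0; i < target_pose.length; ++i) {
        float x = target_pose.point[i].x + target_det.bbox.left;
        float y = target_pose.point[i].y + target_det.bbox.top;
        std::cout << "keypoint " << i << ": (" << x << ", " << y
                  << "), score: " << target_pose.score[i] << std::endl;
    }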

        Visualization:

    v.set_skeleton(utils::Skeleton::get("coco-wholebody-hand"));
    auto pose_sess = v.get_session(det_crop);
    pose_sess.add_pose(target_pose.point, target_pose.score, target_pose.length, 0.);
    cv::imwrite("../images/pose_out.bmp", pose_sess.get());

 


Origin: blog.csdn.net/weixin_44855366/article/details/131287853