Fatigue driving detection and recognition 3: Android implements fatigue driving detection and recognition (including source code, real-time detection)

Table of contents

Fatigue driving detection and recognition 3: Android implements fatigue driving detection and recognition (including source code, real-time detection)

1. Fatigue driving detection and recognition method

2. Face detection method

3. Fatigue driving detection and recognition model training

4. Android deployment of fatigue driving detection and recognition model

(1) Convert the Pytorch model to the ONNX model

(2) Convert ONNX model to TNN model

(3) Deploying the model on Android

(4) Android test results 

(5) Running the APP crashes: dlopen failed: library "libomp.so" not found

5. Project source code download

6. C++ implements fatigue driving detection and recognition


This is the third article in the "Fatigue Driving Detection and Recognition" series: "Android implements fatigue driving detection and recognition (including source code, real-time detection)". It mainly shares how the fatigue driving detection and recognition model trained in Python is ported to the Android platform, building a simple Android demo that runs fatigue driving detection and recognition in real time. Accuracy is quite high: even with the lightweight mobilenet_v2 model, fatigue driving recognition accuracy reaches 97.8682%, which meets the business performance requirements.

The project walks you through deploying the trained fatigue driving detection and recognition model to the Android platform, including how to convert the model to ONNX and then to TNN, port it to Android, and implement an Android demo APP for fatigue driving detection and recognition. The APP achieves real-time detection and recognition on an ordinary Android phone, taking about 30ms on CPU (4 threads) and about 25ms on GPU, which basically meets the performance requirements of the business.

[Please respect the original work and cite the source when reposting] https://blog.csdn.net/guyuealian/article/details/131834970

Let's first look at the Android demo of fatigue driving detection and recognition in action:

To try the Android fatigue driving detection and recognition APP demo, download it here: https://download.csdn.net/download/guyuealian/88088257


For more articles in the "Fatigue Driving Detection and Recognition" series, please refer to:

  1. Fatigue driving detection and recognition 1: Fatigue driving detection and recognition dataset (including download link) https://blog.csdn.net/guyuealian/article/details/131718648
  2. Fatigue driving detection and recognition 2: Pytorch implements fatigue driving detection and recognition (including fatigue driving dataset and training code) https://blog.csdn.net/guyuealian/article/details/131834946
  3. Fatigue driving detection and recognition 3: Android implements fatigue driving detection and recognition (including source code, real-time detection) https://blog.csdn.net/guyuealian/article/details/131834970
  4. Fatigue driving detection and recognition 4: C++ implements fatigue driving detection and recognition (including source code, real-time detection) https://panjinquan.blog.csdn.net/article/details/131834980


1. Fatigue driving detection and recognition method

There are many possible implementations of fatigue driving detection and recognition. Here is the most conventional one: face detection + fatigue driving classification. That is, first use a general-purpose face detection model to detect and localize the face region, crop the detected face region according to certain rules, and then train a fatigue driving behavior classifier to complete the fatigue driving detection and recognition task.

The advantage of this approach is that an existing face detection model can be reused as-is, so no new face bounding boxes need to be annotated for the fatigue driving data, which reduces the cost of manual labeling; fatigue driving classification data is also relatively easy to collect, and the classification model can be optimized in a targeted manner.
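To make the data flow concrete, here is a minimal Java sketch of this two-stage pipeline. The FaceDetector and FatigueClassifier interfaces are hypothetical placeholders for illustration, not the project's actual classes; in the real Android demo both stages sit behind a single JNI call, shown later.

import android.graphics.Bitmap;
import android.graphics.Rect;

import java.util.ArrayList;
import java.util.List;

public class DrowsyPipeline {

    /** Hypothetical face detection stage (placeholder interface). */
    interface FaceDetector {
        List<Rect> detect(Bitmap image);
    }

    /** Hypothetical fatigue classification stage (placeholder interface). */
    interface FatigueClassifier {
        float drowsyScore(Bitmap face);
    }

    private final FaceDetector detector;
    private final FatigueClassifier classifier;

    DrowsyPipeline(FaceDetector detector, FatigueClassifier classifier) {
        this.detector = detector;
        this.classifier = classifier;
    }

    /** Returns one drowsiness score per detected face in the frame. */
    List<Float> run(Bitmap frame) {
        List<Float> scores = new ArrayList<>();
        for (Rect box : detector.detect(frame)) {
            // Crop the detected face region (expansion/clipping rules omitted).
            Bitmap face = Bitmap.createBitmap(frame, box.left, box.top,
                    box.width(), box.height());
            scores.add(classifier.drowsyScore(face));
        }
        return scores;
    }
}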


2. Face detection method

For the face detection training code of this project, please refer to: https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB 

This is a lightweight face detection model improved from SSD. It is very slim: the whole model is only about 1.7M, and it detects faces in real time on an ordinary Android phone. There are plenty of ready-made face detection methods available online; you are by no means limited to the one used here.

For more on face detection methods, you can refer to my other blog posts. For example, a face detection model can be trained with YOLOv5: Face detection and pedestrian detection 2: YOLOv5 implements face detection and pedestrian detection (including dataset and training code)


3. Fatigue driving detection and recognition model training

For the training method of the fatigue driving detection and recognition model, please refer to my other blog post "Fatigue Driving Detection and Recognition 2: Pytorch Implements Fatigue Driving Detection and Recognition (Including Fatigue Driving Dataset and Training Code)" https://blog.csdn.net/guyuealian/article/details/131834946


4. Android deployment of fatigue driving detection and recognition model

There are currently many ways to deploy a CNN model, using tools such as TNN, MNN, NCNN, or TensorRT. This project uses TNN for the Android deployment. The deployment process can be divided into four steps: train the model -> convert the model to an ONNX model -> convert the ONNX model to a TNN model -> deploy the TNN model on Android.

(1) Convert the Pytorch model to the ONNX model

After training the Pytorch model, we need to convert it to an ONNX model for subsequent deployment.

  • The project provides a conversion script; you only need to change model_file to your own model path
  • convert_torch_to_onnx.py implements the Pytorch-to-ONNX conversion:
python libs/convert/convert_torch_to_onnx.py
"""
This script converts a Pytorch model into an ONNX format model.
"""
import sys
import os

sys.path.insert(0, os.getcwd())
import torch.onnx
import onnx
from classifier.models.build_models import get_models
from basetrainer.utils import torch_tools


def build_net(model_file, net_type, input_size, num_classes, width_mult=1.0):
    """
    :param model_file: 模型文件
    :param net_type: 模型名称
    :param input_size: 模型输入大小
    :param num_classes: 类别数
    :param width_mult:
    :return:
    """
    model = get_models(net_type, input_size, num_classes, width_mult=width_mult, is_train=False, pretrained=False)
    state_dict = torch_tools.load_state_dict(model_file)
    model.load_state_dict(state_dict)
    return model


def convert2onnx(model_file, net_type, input_size, num_classes, width_mult=1.0, device="cpu", onnx_type="default"):
    model = build_net(model_file, net_type, input_size, num_classes, width_mult=width_mult)
    model = model.to(device)
    model.eval()
    model_name = os.path.basename(model_file)[:-len(".pth")] + ".onnx"
    onnx_path = os.path.join(os.path.dirname(model_file), model_name)
    # dummy_input = torch.randn(1, 3, 240, 320).to("cuda")
    dummy_input = torch.randn(1, 3, input_size[1], input_size[0]).to(device)
    # torch.onnx.export(model, dummy_input, onnx_path, verbose=False,
    #                   input_names=['input'],output_names=['scores', 'boxes'])
    do_constant_folding = True
    if onnx_type == "default":
        torch.onnx.export(model, dummy_input, onnx_path, verbose=False, export_params=True,
                          do_constant_folding=do_constant_folding,
                          input_names=['input'],
                          output_names=['output'])
    elif onnx_type == "det":
        torch.onnx.export(model,
                          dummy_input,
                          onnx_path,
                          do_constant_folding=do_constant_folding,
                          export_params=True,
                          verbose=False,
                          input_names=['input'],
                          output_names=['scores', 'boxes', 'ldmks'])
    elif onnx_type == "kp":
        torch.onnx.export(model,
                          dummy_input,
                          onnx_path,
                          do_constant_folding=do_constant_folding,
                          export_params=True,
                          verbose=False,
                          input_names=['input'],
                          output_names=['output'])
    onnx_model = onnx.load(onnx_path)
    onnx.checker.check_model(onnx_model)
    print(onnx_path)


if __name__ == "__main__":
    net_type = "mobilenet_v2"
    width_mult = 1.0
    input_size = [112, 112]
    num_classes = 2
    model_file = "work_space/mobilenet_v2_1.0_CrossEntropyLoss/model/best_model_022_98.1848.pth"
    convert2onnx(model_file, net_type, input_size, num_classes, width_mult=width_mult)

(2) Convert ONNX model to TNN model

As mentioned above, this project uses TNN for the Android deployment, so the ONNX model must first be converted into a TNN model.

TNN conversion tool: the official converter is provided in the TNN repository (https://github.com/Tencent/TNN) under tools/convert2tnn; online one-click converters such as https://convertmodel.com/ are also available.
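For reference, a typical convert2tnn invocation looks roughly like the following; treat it as a sketch, since flags differ between TNN versions, and check tools/convert2tnn/README.md before running:

python3 converter.py onnx2tnn mobilenet_v2.onnx -optimize -v=v1.0 -o ./tnn_models

This should produce a .tnnproto/.tnnmodel pair, which is exactly what the Android demo below loads from its assets directory.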

(3) Deploying the model on Android

The project implements an Android demo of fatigue driving detection and recognition. The deployment framework is TNN, which supports multi-threaded CPU and GPU accelerated inference and can process frames in real time on an ordinary mobile phone. In the project's Android source code, the core algorithm is implemented in C++ and the Java layer calls it through a JNI interface.

If you want to deploy your self-trained classification model in this Android demo, convert your trained Pytorch model to ONNX, then convert it to a TNN model, and replace the bundled TNN model with your own. Note that the init code below loads <name>.tnnproto and <name>.tnnmodel from the assets root directory, so your model files must follow that naming convention.

  • This is the Java part of the JNI interface in the project's Android source code:
package com.cv.tnn.model;

import android.graphics.Bitmap;

public class Detector {

    static {
        System.loadLibrary("tnn_wrapper");
    }


    /***
     * Initialize the detection and recognition models.
     * @param det_model: detection model name (without file extension)
     * @param cls_model: recognition model name (without file extension)
     * @param root: root directory of the model files, placed under the assets folder
     * @param model_type: model type
     * @param num_thread: number of threads to use
     * @param useGPU: whether to enable GPU acceleration
     */
    public static native void init(String det_model, String cls_model, String root, int model_type, int num_thread, boolean useGPU);

    /***
     * Run detection and recognition and return the results.
     * @param bitmap: input image (Bitmap) in ARGB_8888 format
     * @param score_thresh: confidence threshold
     * @param iou_thresh: IOU threshold
     * @return detection and recognition results
     */
    public static native FrameInfo[] detect(Bitmap bitmap, float score_thresh, float iou_thresh);
}
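As a usage sketch (not part of the project source), the interface above can be driven as follows. The model names and thresholds are illustrative placeholders: the names must match the TNN files bundled under assets in your own build.

package com.cv.tnn.model;

import android.graphics.Bitmap;

public class DetectorUsage {

    static {
        // Placeholder model names: use the file names (without extension)
        // of the .tnnproto/.tnnmodel pairs stored under assets/<root>.
        Detector.init("face_detector", "drowsy_classifier", "model",
                0 /* model_type */, 4 /* num_thread */, true /* useGPU */);
    }

    /** Runs detection + recognition on one ARGB_8888 camera frame. */
    public static FrameInfo[] process(Bitmap frame) {
        // 0.5f / 0.3f are typical starting thresholds; tune them for your model.
        return Detector.detect(frame, 0.5f, 0.3f);
    }
}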

  • This is the C++ part of the JNI interface in the project's Android source code:
#include <jni.h>
#include <string>
#include <fstream>
#include "src/object_detection.h"
#include "src/classification.h"
#include "src/Types.h"
#include "debug.h"
#include "android_utils.h"
#include "opencv2/opencv.hpp"
#include "file_utils.h"

using namespace dl;
using namespace vision;

static ObjectDetection *detector = nullptr;
static Classification *classifier = nullptr;

JNIEXPORT jint JNI_OnLoad(JavaVM *vm, void *reserved) {
    return JNI_VERSION_1_6;
}

JNIEXPORT void JNI_OnUnload(JavaVM *vm, void *reserved) {

}


extern "C"
JNIEXPORT void JNICALL
Java_com_cv_tnn_model_Detector_init(JNIEnv *env,
                                    jclass clazz,
                                    jstring det_model,
                                    jstring cls_model,
                                    jstring root,
                                    jint model_type,
                                    jint num_thread,
                                    jboolean use_gpu) {
    if (detector != nullptr) {
        delete detector;
        detector = nullptr;
    }
    // also release any previously created classifier to avoid leaking it on re-init
    if (classifier != nullptr) {
        delete classifier;
        classifier = nullptr;
    }
    std::string parent = env->GetStringUTFChars(root, 0);
    std::string det_model_ = env->GetStringUTFChars(det_model, 0);
    std::string cls_model_ = env->GetStringUTFChars(cls_model, 0);
    string det_model_file = path_joint(parent, det_model_ + ".tnnmodel");
    string det_proto_file = path_joint(parent, det_model_ + ".tnnproto");
    string cls_model_file = path_joint(parent, cls_model_ + ".tnnmodel");
    string cls_proto_file = path_joint(parent, cls_model_ + ".tnnproto");
    DeviceType device = use_gpu ? GPU : CPU;
    LOGW("parent     : %s", parent.c_str());
    LOGW("useGPU     : %d", use_gpu);
    LOGW("device_type: %d", device);
    LOGW("model_type : %d", model_type);
    LOGW("num_thread : %d", num_thread);
    ObjectDetectionParam model_param = FACE_MODEL;
    detector = new ObjectDetection(det_model_file,
                                   det_proto_file,
                                   model_param,
                                   num_thread,
                                   device);

    //ClassificationParam ClassParam = FACE_MASK_MODEL;
    ClassificationParam ClassParam = DROWSY_MODEL;
    classifier = new Classification(cls_model_file,
                                    cls_proto_file,
                                    ClassParam,
                                    num_thread,
                                    device);
}

extern "C"
JNIEXPORT jobjectArray JNICALL
Java_com_cv_tnn_model_Detector_detect(JNIEnv *env, jclass clazz, jobject bitmap,
                                      jfloat score_thresh, jfloat iou_thresh) {
    cv::Mat bgr;
    BitmapToMatrix(env, bitmap, bgr);
    int src_h = bgr.rows;
    int src_w = bgr.cols;
    // the detection region defaults to the whole image
    FrameInfo resultInfo;
    // run face detection
    if (detector != nullptr) {
        detector->detect(bgr, &resultInfo, score_thresh, iou_thresh);
    } else {
        ObjectInfo objectInfo;
        objectInfo.x1 = 0;
        objectInfo.y1 = 0;
        objectInfo.x2 = (float)src_w;
        objectInfo.y2 = (float)src_h;
        objectInfo.label = 0;
        resultInfo.info.push_back(objectInfo);
    }

    int nums = resultInfo.info.size();
    LOGW("object nums: %d\n", nums);
    if (nums > 0) {
        // run the fatigue driving classifier on each detected face
        classifier->detect(bgr, &resultInfo);
        // visualization code
        //classifier->visualizeResult(bgr, &resultInfo);
    }
    //cv::cvtColor(bgr, bgr, cv::COLOR_BGR2RGB);
    //MatrixToBitmap(env, bgr, dst_bitmap);
    auto BoxInfo = env->FindClass("com/cv/tnn/model/FrameInfo");
    auto init_id = env->GetMethodID(BoxInfo, "<init>", "()V");
    auto box_id = env->GetMethodID(BoxInfo, "addBox", "(FFFFIF)V");
    auto ky_id = env->GetMethodID(BoxInfo, "addKeyPoint", "(FFF)V");
    jobjectArray ret = env->NewObjectArray(resultInfo.info.size(), BoxInfo, nullptr);
    for (int i = 0; i < nums; ++i) {
        auto info = resultInfo.info[i];
        env->PushLocalFrame(1);
        //jobject obj = env->AllocObject(BoxInfo);
        jobject obj = env->NewObject(BoxInfo, init_id);
        // set bbox
        //LOGW("rect:[%f,%f,%f,%f] label:%d,score:%f \n", info.rect.x,info.rect.y, info.rect.w, info.rect.h, 0, 1.0f);
        env->CallVoidMethod(obj, box_id, info.x1, info.y1, info.x2 - info.x1, info.y2 - info.y1,
                            info.category.label, info.category.score);
        // set keypoint
        for (const auto &kps : info.landmarks) {
            //LOGW("point:[%f,%f] score:%f \n", lm.point.x, lm.point.y, lm.score);
            env->CallVoidMethod(obj, ky_id, (float) kps.x, (float) kps.y, 1.0f);
        }
        obj = env->PopLocalFrame(obj);
        env->SetObjectArrayElement(ret, i, obj);
    }
    return ret;
}

(4) Android test results 

The Android demo achieves real-time detection and recognition on the CPU/GPU of an ordinary mobile phone: about 30ms on CPU (4 threads) and about 25ms on GPU, which basically meets the performance requirements of the business.

(5) Running the APP crashes: dlopen failed: library "libomp.so" not found

Reference solution: Solve dlopen failed: library “libomp.so” not found (PKing666666's CSDN blog).

For Android SDK and NDK version information, please refer to the original post.


5. Project source code download

Android project source code download address: Fatigue driving detection and recognition 3: Android implements fatigue driving detection and recognition (including source code, real-time detection)

The complete Android project source code includes:

  1. The Android version of the face detection model
  2. The complete Android demo source code for fatigue driving detection and recognition
  3. An Android demo that runs detection and recognition in real time on an ordinary phone CPU/GPU, at about 30ms per frame
  4. Support for image, video, and camera testing in the Android demo
  5. All dependency libraries already configured, so the project can be built and run directly; if it crashes at runtime, refer to dlopen failed: library “libomp.so” not found above

To try the Android fatigue driving detection and recognition APP demo, download it here: https://download.csdn.net/download/guyuealian/88088257

If you need the training code for fatigue driving detection and recognition, please refer to: "Fatigue Driving Detection and Recognition 2: Pytorch Implements Fatigue Driving Detection and Recognition (Including Fatigue Driving Dataset and Training Code)" https://blog.csdn.net/guyuealian/article/details/131834946

6. C++ implements fatigue driving detection and recognition

Reference article: Fatigue driving detection and recognition 4: C++ implements fatigue driving detection and recognition (including source code, real-time detection) https://panjinquan.blog.csdn.net/article/details/131834980
