1. Introduction
Through the first two articles, "Android imports ncnn-android-yolov8-seg: realizing human body recognition and portrait segmentation" and "Android ncnn-android-yolov8-seg source code analysis: realizing portrait segmentation", we have run the program and analyzed its source code. In this article we put it into practice: we extract the core code of the Demo, use the Camera API at the Java layer, and use OpenCV+YOLOv8+NCNN at the JNI layer to implement human body recognition and portrait segmentation.
The effect is as follows. The whole frame is the original camera image; the upper left corner shows the image obtained after portrait recognition and portrait segmentation. (No mirroring has been applied yet, so for now it is horizontally flipped relative to the original.)
>>> The source code demo of this article can be viewed directly here:
Android implements portrait segmentation based on OpenCV+YOLOv8+NCNN Demo source code download
2. Create a new project
2.1 Create a new main project
2.2 Create a new Native library
2.3 Add MyNcnnLib dependency to the app
implementation(project(mapOf("path" to ":MyNcnnLib")))
2.4 Configure NDK version
Remember to configure the NDK version in local.properties in the project root directory. The NDK version here needs to be between NDK16 and NDK20.
# Change this path to the actual location of the NDK on your machine
ndk.dir=C\:\\Developer\\Android_SDK\\ndk\\20.0.5594570
3. Connect to OpenCV+YOLOv8+NCNN
3.1 Import NCNN and OpenCV
Copy ncnn-20221128-android-vulkan and opencv-mobile-4.6.0-android into the cpp folder.
3.2 Copy cpp files
Copy yolo.cpp and yolo.h into the cpp folder.
3.3 Configure Cmake
Note that the location of CMakeLists.txt differs between Android Studio versions (Android Studio 3.6 puts it in a different place than newer versions), so the paths you need to set may differ as well.
The initial CMakeLists.txt:
cmake_minimum_required(VERSION 3.22.1)
project("myncnnlib")
add_library(${CMAKE_PROJECT_NAME} SHARED
        myncnnlib.cpp)
target_link_libraries(${CMAKE_PROJECT_NAME}
        android
        log)
The CMakeLists.txt after configuration:
cmake_minimum_required(VERSION 3.22.1)
project("myncnnlib")
set(OpenCV_DIR ${CMAKE_SOURCE_DIR}/opencv-mobile-4.6.0-android/sdk/native/jni)
find_package(OpenCV REQUIRED core imgproc)
set(ncnn_DIR ${CMAKE_SOURCE_DIR}/ncnn-20221128-android-vulkan/${ANDROID_ABI}/lib/cmake/ncnn)
find_package(ncnn REQUIRED)
add_library(${CMAKE_PROJECT_NAME} SHARED
        myncnnlib.cpp
        yolo.cpp)
target_link_libraries(${CMAKE_PROJECT_NAME}
        ncnn
        camera2ndk
        mediandk
        ${OpenCV_LIBS}
        android
        log)
4. Create JNI interface
4.1 Create a new JNI interface
In NcnnNativeLib.kt, add two new JNI methods:
/**
 * Initialize NCNN
 *
 * @return whether loading succeeded
 */
external fun load(mgr: AssetManager, modelid: Int, cpugpu: Int): Boolean
/**
 * Portrait detection
 */
external fun detect(data: ByteArray?, width: Int, height: Int, cameraId: Int): ByteArray
4.2 Add corresponding JNI methods in cpp
In myncnnlib.cpp, add the corresponding JNI methods:
extern "C"
JNIEXPORT jboolean JNICALL
Java_com_heiko_myncnnlib_NativeLib_load(JNIEnv *env, jobject thiz, jobject assetManager,
jint modelid, jint cpugpu) {
}
extern "C"
JNIEXPORT jbyteArray JNICALL
Java_com_heiko_myncnnlib_NativeLib_detect(JNIEnv *env, jobject thiz, jbyteArray data_,
jint w, jint h, jint camera_id) {
}
4.3 Declare includes
#include <jni.h>
#include <string>
#include <platform.h>
#include <benchmark.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include "opencv2/opencv.hpp"
#include <iostream>
#include "yolo.h"
static Yolo *g_yolo = 0;
static ncnn::Mutex lock;
4.4 Load model
Copy the loadModel code from the Demo here:
extern "C"
JNIEXPORT jboolean JNICALL
Java_com_heiko_myncnnlib_NativeLib_load(JNIEnv* env, jobject thiz, jobject assetManager, jint modelid, jint cpugpu)
{
if (modelid < 0 || modelid > 6 || cpugpu < 0 || cpugpu > 1)
{
return JNI_FALSE;
}
AAssetManager* mgr = AAssetManager_fromJava(env, assetManager);
__android_log_print(ANDROID_LOG_DEBUG, "ncnn", "loadModel %p", mgr);
const char* modeltypes[] =
{
"n",
"s",
};
const int target_sizes[]