Using OpenCV in Android JNI (Color Block Tracking)

Copyright notice: This is the blogger's original article and may not be reproduced without permission. https://blog.csdn.net/u011608180/article/details/86522621

Prerequisites: Android NDK development: https://blog.csdn.net/u011608180/article/details/85063634

         Using OpenCV in Android JNI: https://blog.csdn.net/u011608180/article/details/85244329

         HSV color space explained: https://blog.csdn.net/u011608180/article/details/86525766

Steps:

1. Pass each camera preview frame from the Android side down to the JNI layer as a byte array;

2. In the JNI layer, wrap each frame's byte array in an OpenCV Mat and convert it from YUV420sp to BGR;

3. In the JNI layer, use OpenCV's cvtColor() function to convert each frame from the BGR color space to the HSV color space; a minimal sketch of steps 2 and 3 follows.
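
The snippet below is a minimal sketch of steps 2 and 3, assuming the byte array holds an NV21 (YUV420sp) preview frame of imageWidth x imageHeight pixels; the function and variable names are illustrative only, and the newer cv::COLOR_* constants used here are equivalent to the legacy CV_* names in the full source further down. An NV21 buffer has imageHeight rows of Y followed by imageHeight/2 rows of interleaved VU, which is why the wrapping Mat has imageHeight * 3 / 2 rows.

#include <opencv2/opencv.hpp>

// yuvData points to imageWidth * imageHeight * 3 / 2 bytes of NV21 data (hypothetical buffer)
cv::Mat wrapAndConvertToHsv(unsigned char *yuvData, int imageWidth, int imageHeight) {
   // Wrap the raw NV21 bytes without copying: 1.5 * height rows, single channel
   cv::Mat yuv(imageHeight * 3 / 2, imageWidth, CV_8UC1, yuvData);
   cv::Mat bgr, hsv;
   cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_NV21);  // step 2: YUV420sp -> BGR
   cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);       // step 3: BGR -> HSV
   return hsv;
}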

4. Tune the Hue, Saturation, and Value ranges and pass them to the inRange() function to color-segment / binarize the image, producing the Mat mask; the following line is only an example (a note on the red hue range follows it):

inRange(hsv, cv::Scalar(156, 63, 129), cv::Scalar(180, 255, 255), mask);
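
After CV_BGR2HSV, OpenCV scales the hue channel to [0, 179], and red sits at both ends of that range (roughly 0-10 and 156-179). The sample range above covers only the upper band; a common refinement, sketched below with illustrative thresholds, is to threshold both bands and OR the two masks together.

cv::Mat maskLow, maskHigh, mask;
cv::inRange(hsv, cv::Scalar(0, 63, 129),   cv::Scalar(10, 255, 255),  maskLow);   // red near hue 0
cv::inRange(hsv, cv::Scalar(156, 63, 129), cv::Scalar(180, 255, 255), maskHigh);  // red near hue 179
cv::bitwise_or(maskLow, maskHigh, mask);                                          // combine both red bands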

5. Use the findContours() function to find the contours of the objects in the mask image and return the contour information (a note on cleaning up the mask first follows the snippet);

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, Point());
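
The binary mask produced by inRange often contains speckle noise that turns into many tiny contours. An optional pre-processing step, sketched below, is a morphological opening (erode then dilate) on the mask before calling findContours; the 5x5 kernel size is an assumption to be tuned per scene.

// Optional: suppress small speckles in the mask before extracting contours
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);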

6. Iterate over the contours, keep those that meet the area requirement, and return an int array to the Android layer; the last four elements of the array are the center coordinates x and y of the bounding rectangle and the rectangle's width and height (an improvement that keeps only the largest contour is sketched after the snippet).

if(!contours.empty()) {
   for(vector<vector<Point>>::iterator it = contours.begin(); it != contours.end(); it++) {
      double area = cv::contourArea(*it);
      if(area > 1000) {                                // ignore small blobs (area threshold in pixels)
         cv::Rect rect = cv::boundingRect(*it);
         arr[0] = 1;                                   // flag: a target was found
         arr[1] = rect.x + cvRound(rect.width/2.0);    // bounding-box center x
         arr[2] = rect.y + cvRound(rect.height/2.0);   // bounding-box center y
         arr[3] = rect.width;
         arr[4] = rect.height;
         // note: if several contours pass the threshold, the last one overwrites the others
      }
   }
}
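
As written, the loop reports the last contour whose area exceeds 1000, which is not necessarily the most prominent blob. A small refinement, sketched below with the same variable names, is to keep only the contour with the largest area:

// Keep only the single largest contour instead of the last one over the threshold
double bestArea = 0;
cv::Rect bestRect;
for (size_t i = 0; i < contours.size(); i++) {
   double area = cv::contourArea(contours[i]);
   if (area > 1000 && area > bestArea) {
      bestArea = area;
      bestRect = cv::boundingRect(contours[i]);
   }
}
if (bestArea > 0) {
   arr[0] = 1;
   arr[1] = bestRect.x + cvRound(bestRect.width / 2.0);
   arr[2] = bestRect.y + cvRound(bestRect.height / 2.0);
   arr[3] = bestRect.width;
   arr[4] = bestRect.height;
}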

Steps 1, 2, and 3 above are covered by the second link in the Prerequisites section.

The JNI-layer source code is as follows:

#include <jni.h>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/types_c.h>   // legacy CV_* constants used below (may be needed depending on the OpenCV version)

using namespace cv;
using namespace std;

extern "C"
JNIEXPORT jintArray JNICALL Java_com_example_robot_vision_library_ComputerVisionInterface_colorFollow
  (JNIEnv *env, jobject thiz, jbyteArray imageData, jint imageWidth, jint imageHeight, jint color)
{
   jintArray jarr = env->NewIntArray(5);
   jint *arr = env->GetIntArrayElements(jarr, NULL);

   jbyte *p_imageData = env->GetByteArrayElements(imageData, NULL);
   if(NULL == p_imageData) {
       // nothing was pinned for the byte array, so only the int array buffer needs to be released
       env->ReleaseIntArrayElements(jarr, arr, 0);
       return NULL;
   }
   unsigned char *imageCharData = (unsigned char*)p_imageData;

   // NV21 (YUV420sp) frame: imageHeight rows of Y followed by imageHeight/2 rows of interleaved VU
   Mat frame = Mat((int)(imageHeight * 1.5), imageWidth, CV_8UC1, imageCharData);
   cvtColor(frame, frame, CV_YUV420sp2BGR);
   Mat hsv, mask;
   cvtColor(frame, hsv, CV_BGR2HSV);
   // TODO: the HSV thresholds should be calibrated from the Android side at runtime rather than hard-coded
   switch(color){
      case 0:
        inRange(hsv, cv::Scalar(156,63,129),cv::Scalar(180,255,255), mask); //red
      break;
      case 1:
        inRange(hsv, cv::Scalar(19, 130, 100), cv::Scalar(34, 255, 255), mask); //yellow
      break;
      case 2:
        inRange(hsv, cv::Scalar(92,100,46), cv::Scalar(117,255,255), mask); //blue
      break;
      default:
        // unknown color code: fall back to the red range so mask is never left empty
        inRange(hsv, cv::Scalar(156,63,129), cv::Scalar(180,255,255), mask);
      break;
   }
   //save and test
   //imwrite("/sdcard/DCIM/Camera/mask.jpg", mask);

   vector<vector<Point>> contours;
   vector<Vec4i> hierarchy;
   findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE,Point());
   if(!contours.empty()) {
      for(vector<vector<Point>>::iterator it = contours.begin(); it != contours.end(); it++) {
         double area = cv::contourArea(*it);
         if(area > 1000) {
            cv::Rect rect = cv::boundingRect(*it);
            arr[0] = 1;
            arr[1] = rect.x + cvRound(rect.width/2.0);
            arr[2] = rect.y + cvRound(rect.height/2.0);
            arr[3] = rect.width;
            arr[4] = rect.height;
         }
      }
   }
   env->ReleaseIntArrayElements(jarr, arr, 0);

   env->ReleaseByteArrayElements(imageData, p_imageData, 0);
   return jarr;
}

Test result:

Postscript: the demo algorithm still has plenty of room for optimization.

I have also reposted a blog post on object tracking algorithms.

Next step: have the humanoid robot autonomously follow a small ball.
