Robot Vision Project: Visual Detection and Recognition + Robot Following (11)

Task Progress

Reference: the blog post "OpenCV3 feature extraction and brute-force matching" ("SURF feature extraction + matching").

That post does feature extraction on OpenCV 3. The SURF API changed after version 3.0 relative to the 2.x series, so the code needs modification; following the approach used there, I rewrote the feature-matching code.
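
For context, the core of that API change looks roughly like this (a minimal sketch; minHessian is an illustrative threshold, and the 3.x form assumes the opencv_contrib xfeatures2d module is installed):

// OpenCV 2.x: SURF is a concrete class in the nonfree module
cv::SurfFeatureDetector detector(minHessian);
detector.detect(image, keypoints);

// OpenCV 3.x: SURF lives in the xfeatures2d contrib module and is created through a factory
cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(minHessian);
surf->detectAndCompute(image, cv::noArray(), keypoints, descriptors);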

The modified tracking_node code:

/******************************************************************************
*
* The MIT License (MIT)
*
* Copyright (c) 2018 Bluewhale Robot
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
* Author: Randoms
*******************************************************************************/
​
#include "tracking_node.h"
​
// begin: extra headers for feature extraction
#include <iostream>
#include <stdio.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/flann.hpp>
#include <opencv2/xfeatures2d.hpp>
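// NOTE: the xfeatures2d headers come from the opencv_contrib modules; SURF is
// non-free and is only available when the contrib modules are built in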
​
// end: extra headers
​
using namespace cv;
using namespace std;
using namespace cv::xfeatures2d;
using namespace XiaoqiangTrack;
​
sensor_msgs::Image last_frame;
XiaoqiangTrack::Tracker *tracker = NULL;
Rect2d body_rect;
ros::Publisher image_pub;
ros::Publisher target_pub;
std::mutex update_track_mutex;
int track_ok_flag = 0;
cv::Rect2d previous_body_rect;
cv::Rect2d body_track_rect;
​
sensor_msgs::Image get_one_frame() { return last_frame; }
​
void update_frame(const sensor_msgs::ImageConstPtr &new_frame) // update the latest frame
{
  last_frame = *new_frame;
  cv_bridge::CvImagePtr cv_ptr =
      cv_bridge::toCvCopy(new_frame, "bgr8"); // convert the ROS image to a cv::Mat
  cv::Mat cv_image = cv_ptr->image;
  if (tracker == NULL)
    return;
  unique_lock<mutex> lock(update_track_mutex);
  previous_body_rect = body_rect; // save the current detected rect as previous_body_rect
  track_ok_flag = tracker->updateFrame(cv_image, body_rect);
  cv::rectangle(cv_image, body_rect, cv::Scalar(0, 255, 0));
  image_pub.publish(cv_ptr->toImageMsg());
  xiaoqiang_track::TrackTarget target;
  target.x = body_rect.x + body_rect.width / 2;
  target.y = body_rect.y + body_rect.height / 2;
  target.width = body_track_rect.width;
  target.height = body_track_rect.height;
  if (track_ok_flag == 0) {
    // send stop
    target.x = 0;
    target.y = 0;
  }
  target_pub.publish(target); // target.x, target.y are the coordinates of the tracked point; Kalman...
}
​
int main(int argc, char **argv) {
  ros::init(argc, argv, "xiaoqiang_track_node"); // initialize the ROS node
  // AsyncSpinner runs multiple spin threads in one node; the thread count can be
  // passed to the constructor, e.g. (4), and AsyncSpinner provides start() and stop()
  ros::AsyncSpinner spinner(4);
  spinner.start();
  ros::NodeHandle private_nh("~");
  ros::Publisher talk_pub = private_nh.advertise<std_msgs::String>("text", 10);
  image_pub = private_nh.advertise<sensor_msgs::Image>("processed_image", 10);
  target_pub = private_nh.advertise<xiaoqiang_track::TrackTarget>("target", 10);
  int watch_dog;
  private_nh.param("watch_dog", watch_dog, 2);
  ros::Subscriber image_sub = private_nh.subscribe("image", 10, update_frame);
  PoseTracker *client;
  std::string pose_tracker_type;
  ros::param::param<std::string>("~pose_tracker", pose_tracker_type, "");
  if (pose_tracker_type == "baidu") // choose the tracker type: Baidu track or body track
  {
    client = (PoseTracker *)new BaiduTrack(private_nh);
  } else if (pose_tracker_type == "body_pose") {
    client = (PoseTracker *)new BodyTrack(private_nh);
  } else {
    ROS_FATAL_STREAM("unknown pose tracker type " << pose_tracker_type);
    ROS_FATAL_STREAM("supported pose trakers are body_pose and baidu");
    exit(1);
  }
​
  std::string tracker_main_type;  // main tracker type
  std::string tracker_aided_type; // aided (secondary) tracker type
  ros::param::param<std::string>("~tracker_main", tracker_main_type, "");
  ros::param::param<std::string>("~tracker_aided", tracker_aided_type, "");
  tracker = new XiaoqiangTrack::Tracker(tracker_main_type, tracker_aided_type); // set up the tracker
​
  // tell the user to stand in front of the robot
  std_msgs::String words;
  words.data = "请站在我前面"; // "Please stand in front of me"
  talk_pub.publish(words);
  // remind the user to adjust their distance
  sensor_msgs::Image frame = get_one_frame(); // grab one frame
  body_rect.x = -1;
  body_rect.y = -1;
  while (!ros::isShuttingDown()) {
    if (frame.data.size() != 0) {
      cv::Rect2d rect = client->getBodyRect(frame); // get the body region from the frame
      if (rect.x <= 1 || rect.y <= 1) {
        words.data = "我没有看到人,请站到我前面"; // "I don't see anyone, please stand in front of me"
        talk_pub.publish(words);
      } else if (rect.x + rect.width / 2 > 440 ||
                 rect.x + rect.width / 2 < 200) {
        body_rect = rect;
        words.data = "请站到镜头中间来"; // "Please move to the center of the frame"
        talk_pub.publish(words);
      } else {
        body_rect = rect;
        words.data = "我看到人了,开始追踪"; // "I see you, starting to track"
        talk_pub.publish(words);
        break;
      }
    }
    sleep(1);
    frame = get_one_frame();
  }
​
  /*
  After analyzing the code, the initial idea is to add feature extraction and OpenCV
  feature matching at this point. The plan:
  - Feature extraction pulls features out of a single frame; the features could be ORB,
    FAST, SIFT, SURF, and so on. The frame = get_one_frame() call above fetches the
    latest frame (it returns last_frame).
  - From that frame, extract the desired feature information, obtain the keypoints, and
    save the detected target's features so they can later be matched against the image
    when detection runs again.
  - This spot only does the extraction; the matching happens after the track is lost,
    when detection restarts and recognizes multiple candidates that must be matched
    against the saved target.
  */
  /* Note: I lost everything I wrote today by closing the editor without saving,
     so I am starting over from the beginning here. */

  // begin: extract features
  // earlier attempts, kept for reference:
  //   int minHessian = 2000;
  //   int hessianThreshold = 2000;
  //   SurfFeatureDetector detector(minHessian);  // OpenCV 2.x style
  //   detector = SURF::create(hessianThreshold); // OpenCV 3.x factory
  Ptr<SURF> surf;
  surf = SURF::create(800);
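  // note: 800 is the Hessian threshold; larger values keep fewer but more stable keypoints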
​
  vector<KeyPoint> keypoint1, keypoint2;
​
  frame = get_one_frame();
  body_rect = client->getBodyRect(frame);
  // intended: image1 = frame cropped to [rect.y : rect.y+rect.height, rect.x : rect.x+rect.width]
  Mat image1;
/*
  Mat img1,img_1,image1;
  CvRect rectInImage1;
  rectInImage1 = cvRect(body_rect.x, body_rect.y,body_rect.width, body_rect.height);
  CvSize size1;
  size1.width = rectInImage1.width;
  size1.height = rectInImage1.height;
  //img1 = CvCreatImage(size1, frame->depth, frame->nChannels);
  img1 = CvCreatImage(size1, IPL_DEPTH_8U, 3);
  CvSetImageROI(frame, rectInImage1);
  cvCopy(img1, frame); // img1 is the target box region cropped from frame
  cvtcolor(img1, img_1, CV_RGB2GRAY);
*/
  //cvtcolor(frame, frame, CV_RGB2GRAY)
  // detect keypoints and compute descriptors
  //detector.detect(img_1, keypoint1)
  surf->detectAndCompute(frame, Mat(), keypoint1, image1); // the output Mat holds all descriptor vectors
  // end: extract features
​
  // tell the user they can start walking
  sensor_msgs::Image tracking_frame = get_one_frame();
  cv_bridge::CvImagePtr cv_ptr = cv_bridge::toCvCopy(tracking_frame, "bgr8");
  cv::Mat cv_image = cv_ptr->image;
  tracker->initTracker(cv_image, body_rect); // init
  int repeat_count = 0;
  int watch_dog_count = 0;
  while (ros::ok()) {
    usleep(1000 * 100); // 100ms
    // if the position hasn't changed, the target may be lost
    if (previous_body_rect == body_rect) {
      repeat_count += 100;
      if (repeat_count == 5 * 1000) // rect unchanged for 5 s: declare the target lost
      {
        ROS_WARN_STREAM("Target not move, may lost");
        repeat_count = 0;
        track_ok_flag = 0; // flag = 0 -> reset
      }
    } // end of the loss-of-track check
    else
    {
      repeat_count = 0;
    }
    if (body_rect.width < 300 && body_rect.height < 300 && track_ok_flag == 2 &&
        body_rect.height > body_rect.width) // confirm the detected rect meets these constraints
    {
​
      watch_dog_count += 100;
      if (watch_dog_count <= watch_dog * 1000) // watch_dog defaults to 2 (seconds)
      {
        continue;
      }
    } // end of the valid-rect check
    watch_dog_count = 0;
​
    tracking_frame = get_one_frame(); // grab a fresh frame
    body_track_rect = client->getBodyRect(tracking_frame); // detect the body again
​
    // user code: begin
    if (track_ok_flag == 0)
    {
      Mat image2;
      /*
      Mat img2, img_2, image2;
      CvRect rectInImage2;
      rectInImage2 = cvRect(body_track_rect.x, body_track_rect.y,
                            body_track_rect.width, body_track_rect.height);
      CvSize size2;
      size2.width = rectInImage2.width;
      size2.height = rectInImage2.height;
      //img2 = CvCreatImage(size2, tracking_frame->depth, tracking_frame->nChannels);
      img2 = CvCreatImage(size2, IPL_DEPTH_8U, 3);
      CvSetImageROI(tracking_frame, rectInImage2);
      cvCopy(img2, tracking_frame); // img2 is the target box region cropped from tracking_frame
      cvtcolor(img2, img_2, CV_RGB2GRAY);
      */
      //cvtcolor(tracking_frame, tracking_frame, CV_RGB2GRAY);
      // detect keypoints and compute descriptors
      //detector.detect(img_2, keypoint2);
      surf->detectAndCompute(tracking_frame, Mat(), keypoint2, image2);
​
      // descriptors are already computed by detectAndCompute above; the separate
      // SurfDescriptorExtractor of the 2.x API is no longer needed:
      //   SurfDescriptorExtractor extractor;
      //   Mat descriptor1, descriptor2;
      //   extractor.compute(img1, keypoint1, descriptor1);
      //   extractor.compute(img2, keypoint2, descriptor2);

      // match with FLANN
      FlannBasedMatcher matcher;
      vector<DMatch> matches;
      matcher.match(image1, image2, matches); // matching done
​
      // std::sort orders DMatch ascending by distance via cv::DMatch::operator<
      sort(matches.begin(), matches.end());
​
      vector<DMatch> good_matches;
      int ptsPairs = std::min(50, (int)(matches.size() * 0.15)); // keep at most 50 of the best 15%
      for (int i = 0; i < ptsPairs; i++)
      {
        good_matches.push_back(matches[i]);
      }
      double averagematch = 0;
      for (int i = 0; i < ptsPairs; i++)
      {
        averagematch = averagematch + (int)(good_matches[i]) / ptsPairs;
      }
      if (averagematch > 10.0) // treat a large average distance as a failed match
        continue;
      /*
      double max_dist = 0;
      double min_dist = 100;
      for (int i = 0; i < descriptor1.rows; i++)
      {
        double dist = matches[i].distance;
        if (dist < min_dist)
          min_dist = dist;
        if (dist > max_dist)
          max_dist = dist;
      } // find the min and max distances among the matches

      // Using the match result: can the current match be judged as the target
      // from the min and max distances alone?
      if (!(min_dist < 1.0 || (max_dist - min_dist) < 2.0)) // matching failed
      {
        //tracking_frame = tracking_frame;
        //body_track_rect = body_track_rect;
        continue;
      }
      */
    }
    // user code: end
​
    if (body_track_rect.x <= 1 || body_track_rect.y <= 1) // stop tracking once the detected rect.x or rect.y falls to 1 or below
    {
      tracker->stop();
    }
    else
    {
      {
        unique_lock<mutex> lock(update_track_mutex);
        body_rect = body_track_rect;
        cv_bridge::CvImagePtr cv_ptr =
            cv_bridge::toCvCopy(tracking_frame, "bgr8");
        cv::Mat cv_image = cv_ptr->image;
        if (track_ok_flag == 0) // tracking flag 0: reset the tracker
        {
          tracker->reset(cv_image, body_rect, true); // watch out for the reset parameter
        }
​
        else
          tracker->reset(cv_image, body_rect); /// reset
      }
    }
  }
}

The parts of the code that crop the target bounding-box region are commented out, because the cropping calls such as CvCreatImage and CvSetImageROI do not compile: the compiler reports that these functions cannot be found in the current workspace. (The legacy C API actually spells them cvCreateImage and cvSetImageROI, which likely explains the "not found" errors.)
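
In the OpenCV 3 C++ API the legacy IplImage calls are unnecessary anyway: a rectangular crop is just a Mat ROI. A minimal sketch of the intended crop, assuming cv_image is the cv::Mat obtained from the frame via cv_bridge, as elsewhere in this file:

// crop the body rectangle out of the frame with the C++ API (no IplImage needed)
cv::Rect roi = body_rect;                            // Rect2d converts to an integer Rect
roi &= cv::Rect(0, 0, cv_image.cols, cv_image.rows); // clamp the ROI to the image bounds
cv::Mat img1 = cv_image(roi).clone();                // deep copy of the cropped region
cv::Mat img_1;
cv::cvtColor(img1, img_1, cv::COLOR_BGR2GRAY);       // grayscale for feature extraction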

After commenting out the cropping code, some errors remain whose cause I don't understand. The errors look like this:

/home/sz/catkin_hf/src/xiaoqiang_track/src/tracking_node.cpp: In function ‘int main(int, char**)’:
/home/sz/catkin_hf/src/xiaoqiang_track/src/tracking_node.cpp:194:57: error: no matching function for call to ‘cv::xfeatures2d::SURF::detectAndCompute(sensor_msgs::Image&, cv::Mat, std::vector<cv::KeyPoint>&, cv::Mat&)’
   surf->detectAndCompute(frame, Mat(), keypoint1, image1); // the output Mat holds all descriptor vectors
                                                         ^
In file included from /opt/ros/kinetic/include/opencv-3.3.1-dev/opencv2/calib3d.hpp:48:0,
                 from /opt/ros/kinetic/include/opencv-3.3.1-dev/opencv2/opencv.hpp:56,
                 from /home/sz/catkin_hf/src/xiaoqiang_track/include/tracking_node.h:31,
                 from /home/sz/catkin_hf/src/xiaoqiang_track/src/tracking_node.cpp:28:
/opt/ros/kinetic/include/opencv-3.3.1-dev/opencv2/features2d.hpp:198:26: note: candidate: virtual void cv::Feature2D::detectAndCompute(cv::InputArray, cv::InputArray, std::vector<cv::KeyPoint>&, cv::OutputArray, bool)
     CV_WRAP virtual void detectAndCompute( InputArray image, InputArray mask,
                          ^
/opt/ros/kinetic/include/opencv-3.3.1-dev/opencv2/features2d.hpp:198:26: note:   no known conversion for argument 1 from ‘sensor_msgs::Image {aka sensor_msgs::Image_<std::allocator<void> >}’ to ‘cv::InputArray {aka const cv::_InputArray&}’
/home/sz/catkin_hf/src/xiaoqiang_track/src/tracking_node.cpp:256:66: error: no matching function for call to ‘cv::xfeatures2d::SURF::detectAndCompute(sensor_msgs::Image&, cv::Mat, std::vector<cv::KeyPoint>&, cv::Mat&)’
   surf->detectAndCompute(tracking_frame, Mat(), keypoint2, image2);
                                                                  ^
In file included from /opt/ros/kinetic/include/opencv-3.3.1-dev/opencv2/calib3d.hpp:48:0,
                 from /opt/ros/kinetic/include/opencv-3.3.1-dev/opencv2/opencv.hpp:56,
                 from /home/sz/catkin_hf/src/xiaoqiang_track/include/tracking_node.h:31,
                 from /home/sz/catkin_hf/src/xiaoqiang_track/src/tracking_node.cpp:28:
/opt/ros/kinetic/include/opencv-3.3.1-dev/opencv2/features2d.hpp:198:26: note: candidate: virtual void cv::Feature2D::detectAndCompute(cv::InputArray, cv::InputArray, std::vector<cv::KeyPoint>&, cv::OutputArray, bool)
     CV_WRAP virtual void detectAndCompute( InputArray image, InputArray mask,
                          ^
/opt/ros/kinetic/include/opencv-3.3.1-dev/opencv2/features2d.hpp:198:26: note:   no known conversion for argument 1 from ‘sensor_msgs::Image {aka sensor_msgs::Image_<std::allocator<void> >}’ to ‘cv::InputArray {aka const cv::_InputArray&}’
/home/sz/catkin_hf/src/xiaoqiang_track/src/tracking_node.cpp:282:54: error: invalid cast from type ‘__gnu_cxx::__alloc_traits<std::allocator<cv::DMatch> >::value_type {aka cv::DMatch}’ to type ‘int’
   averagematch = averagematch + (int)(good_matches[i]) / ptsPairs;
                                                      ^
xiaoqiang_track/CMakeFiles/tracking_node.dir/build.make:62: recipe for target 'xiaoqiang_track/CMakeFiles/tracking_node.dir/src/tracking_node.cpp.o' failed
make[2]: *** [xiaoqiang_track/CMakeFiles/tracking_node.dir/src/tracking_node.cpp.o] Error 1
CMakeFiles/Makefile2:1400: recipe for target 'xiaoqiang_track/CMakeFiles/tracking_node.dir/all' failed
make[1]: *** [xiaoqiang_track/CMakeFiles/tracking_node.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
​

[1]  The compiler reports "no matching function" for the detectAndCompute calls.
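
The candidate signature in the error message points at the cause: detectAndCompute expects a cv::InputArray (for example a cv::Mat), but the code passes a sensor_msgs::Image. The same cv_bridge conversion already used in update_frame should fix it; a minimal sketch:

// convert the ROS image message to a cv::Mat before feature extraction
cv_bridge::CvImagePtr ptr = cv_bridge::toCvCopy(frame, "bgr8");
cv::Mat frame_mat = ptr->image;
surf->detectAndCompute(frame_mat, cv::noArray(), keypoint1, image1);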

[2]  The line averagematch = averagematch + (int)(good_matches[i]) / ptsPairs; is meant to take the (at most 50) smallest-distance matches and average them, using a forced cast. The error occurs because good_matches is a container of cv::DMatch: a DMatch cannot be cast to int, nor combined directly in integer arithmetic. The open question is how to use the matching result, a vector of DMatch, to decide whether the match succeeded, i.e. what criterion on this data counts as an acceptable match.
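
What the cast was reaching for is the distance field of each match: cv::DMatch carries a float distance member, so the average can be computed on it directly. A minimal sketch, keeping the 10.0 threshold from the code above (whether that value is a sensible cut-off still needs testing):

// average the descriptor distances of the best matches and threshold the result
double averagematch = 0;
for (int i = 0; i < ptsPairs; i++)
{
  averagematch += good_matches[i].distance / ptsPairs; // DMatch::distance is a float
}
if (averagematch > 10.0) // large average distance: treat the match as failed
  continue;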


Reposted from blog.csdn.net/Synioe/article/details/82836752