LK optical flow tracking

 Personal blog: http://www.chenjianqu.com/

Original link: http://www.chenjianqu.com/show-89.html

Optical flow is a method of describing pixel motion between images over time. As time passes, the same pixel moves within the image, and we want to track its motion. Computing the motion of only a subset of pixels is called sparse optical flow; computing it for all pixels is called dense optical flow. The representative sparse method is Lucas-Kanade (LK) optical flow, which can be used in SLAM to track the positions of feature points.

 

Lucas-Kanade optical flow

    The Lucas-Kanade algorithm is a two-frame differential method for optical flow estimation. In LK optical flow, the images coming from the camera are considered to change over time, so an image can be seen as a function of time I(t): at time t, the grayscale of the pixel located at (x, y) can be written I(x, y, t). Viewed this way, the image is a function of position and time, whose range is the grayscale values of its pixels. Now consider a fixed point in space whose image coordinates at time t are (x, y). Because the camera moves, its image coordinates change, and we want to estimate the position of this spatial point in the image at other times. The LK method rests on three basic assumptions:

    1. Constant brightness: the brightness (grayscale) value of a pixel does not change over time. This is the basic premise of optical flow, and every optical flow method must satisfy it. Grayscale invariance is a strong assumption that often fails in practice: because objects are made of different materials, pixels may fall in highlights or shadows, and a camera may automatically adjust its exposure so that the whole image becomes brighter or darker. In such cases the assumption does not hold, and the optical flow result is not necessarily reliable.

    2. Small motion: the passage of time does not cause a drastic change in position. Only then can the grayscale change caused by the position change between adjacent frames be used to take partial derivatives of grayscale with respect to position. Every optical flow method must satisfy this as well.

    3. Spatial consistency: pixels within a certain window undergo the same motion. This assumption is unique to LK optical flow. To solve for the velocities in both the x and y directions, several equations must be set up and solved simultaneously; the spatial consistency assumption lets us use n pixels in a neighborhood to establish n equations.

[Figure 1: a pixel at (x, y) at time t moves to (x + dx, y + dy) at time t + dt]

    For the pixel located at (x, y) at time t, suppose that at time t + dt it has moved to (x + dx, y + dy). Since the grayscale is unchanged, we have:

I(x+dx, y+dy, t+dt) = I(x, y, t)

    Taking the first-order Taylor expansion of the left-hand side gives:

I(x+dx, y+dy, t+dt) ≈ I(x, y, t) + (∂I/∂x)dx + (∂I/∂y)dy + (∂I/∂t)dt

    Because the grayscale is unchanged, the left side equals I(x, y, t), so:

 (∂I/∂x)dx + (∂I/∂y)dy + (∂I/∂t)dt = 0

    Dividing both sides by dt gives:

(∂I/∂x)(dx/dt) + (∂I/∂y)(dy/dt) = - ∂I/∂t

    dx/dt is the velocity of the pixel along the x-axis and dy/dt its velocity along the y-axis, denoted u and v. These are the variables we want to solve for.

    ∂I/∂x is the gradient of the image in the x direction at that point and ∂I/∂y the gradient in the y direction, denoted Ix and Iy; both are computed from the image itself.

    ∂I/∂t is the change of image grayscale over time, denoted It, i.e. the grayscale difference between the two instants.

    Written in matrix form, the equation above becomes:

[Ix Iy] [u v]^T = -It

    We want to compute the pixel motion (u, v), but this is one equation in two unknowns, so we introduce the spatial consistency assumption. Consider a window of size w × w; the pixels inside it share the same motion, which gives w × w equations in total:

[Ix Iy]_k [u v]^T = -It_k ,  k = 1, …, w²

    Stacked together, the whole system is:

A [u v]^T = -b ,  where A = [[Ix Iy]_1; … ; [Ix Iy]_{w²}] and b = [It_1, … , It_{w²}]^T

    This is an overdetermined linear system in u and v, which can be solved by least squares:

[u v]^T = -(A^T A)^(-1) A^T b

    This gives the pixel's velocity (u, v) in the image. Taking discrete times t, the positions of a block of pixels can then be estimated across several images.

 

Improved LK optical flow

    The original LK assumptions of constant brightness, small motion, and spatial consistency are all fairly strong and not easily satisfied. When an object moves quickly the algorithm produces large errors, so we would like to reduce the apparent motion in the image. Suppose that in a 400×400 image an object moves at [16, 16]; when the image is shrunk to 200×200 the velocity becomes [8, 8], and at 100×100 it drops to [4, 4]. After the source image is scaled down, the original algorithm becomes applicable again. Optical flow can therefore be computed by building a pyramid from the original image and solving level by level with increasing accuracy. Roughly speaking, one pixel in an upper (low-resolution) level represents two pixels in the level below, and the solution at each level is multiplied by 2 and added to the next level down. The main steps are: build the pyramid; track based on the pyramid; iterate the process.

[Figure 6: image pyramid used for coarse-to-fine LK tracking]

OpenCV code

    OpenCV implements LK optical flow in the function calcOpticalFlowPyrLK(), whose parameters are as follows:

void calcOpticalFlowPyrLK( InputArray prevImg, InputArray nextImg,
                           InputArray prevPts, CV_OUT InputOutputArray nextPts,
                           OutputArray status, OutputArray err,
                           Size winSize=Size(21,21), int maxLevel=3,
                           TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 0.01),
                           int flags=0, double minEigThreshold=1e-4);
                           
prevImg – first 8-bit input image, or a pyramid constructed by buildOpticalFlowPyramid().
nextImg – second input image, or a pyramid of the same size and type as prevImg.
prevPts – vector of 2D points for which the flow needs to be found; the point coordinates must be single-precision floating-point numbers.
nextPts – output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of the input features in the second image; when the OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as the input.
status – output status vector (of unsigned chars); each element is set to 1 if the flow for the corresponding feature has been found, otherwise it is set to 0.
err – output vector of errors; each element is set to an error for the corresponding feature; the type of error measure can be set in the flags parameter; if the flow was not found, the error is not defined (use the status parameter to find such cases).
winSize – size of the search window at each pyramid level.
maxLevel – 0-based maximal pyramid level number; if set to 0, pyramids are not used (a single level); if set to 1, two levels are used, and so on. If pyramids are passed as input, the algorithm uses as many levels as the pyramids have, but no more than maxLevel.
criteria – termination criteria of the iterative search algorithm: stop after the maximum number of iterations criteria.maxCount, or when the search window moves by less than criteria.epsilon.
flags – operation flags, optionally:
OPTFLOW_USE_INITIAL_FLOW – use the initial estimates stored in nextPts; if the flag is not set, prevPts is copied to nextPts and treated as the initial estimate.
OPTFLOW_LK_GET_MIN_EIGENVALS – use the minimum eigenvalue as the error measure (see the minEigThreshold description); if the flag is not set, the L1 distance between patches around the original and the moved point, divided by the number of pixels in the window, is used as the error measure.
minEigThreshold – the algorithm computes the minimum eigenvalue of the 2×2 normal matrix of the optical flow equations (called the spatial gradient matrix in [Bouguet00]), divided by the number of pixels in the window; if this value is less than minEigThreshold, the corresponding feature is filtered out and its flow is not processed. This allows bad points to be removed and gives a performance boost.

 

    The C++ code is as follows:

CMakeLists.txt

cmake_minimum_required(VERSION 2.6)
project(lk_flow)

set( CMAKE_BUILD_TYPE Release )
set( CMAKE_CXX_FLAGS "-std=c++11 -O3" )

find_package( OpenCV )
include_directories( ${OpenCV_INCLUDE_DIRS} )

add_executable(lk_flow main.cpp)
target_link_libraries( lk_flow ${OpenCV_LIBS} )

install(TARGETS lk_flow RUNTIME DESTINATION bin)

 

main.cpp

#include <iostream>
#include <fstream>
#include <list>
#include <vector>
using namespace std; 
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/video/tracking.hpp>

using namespace cv;

int main( int argc, char** argv )
{
    // feature points; use a list so that points that fail to track can be removed
    list<Point2f> keypoints;      
    Mat color, last_color;
    color = imread("1.png");
    Mat showFlowImg = imread("9.png");
    
    // extract FAST feature points from the first frame
    vector<KeyPoint> kps;
    Ptr<FastFeatureDetector> detector = FastFeatureDetector::create();
    detector->detect(color, kps);
    for(auto kp:kps)
        keypoints.push_back( kp.pt );
    last_color = color;
    
    // track with optical flow
    for ( int index=2; index<10; index++ ){
        // read the image file
        color = imread(to_string(index)+".png");
        
        vector<Point2f> next_keypoints; 
        vector<Point2f> prev_keypoints;
        for(auto kp:keypoints)
            prev_keypoints.push_back(kp);
        // track the feature points with LK
        vector<unsigned char> status;
        vector<float> error; 
        calcOpticalFlowPyrLK(last_color, color, prev_keypoints, next_keypoints, status, error );
        
        // remove the lost points and assign the new positions to keypoints
        int i=0; 
        for(auto iter=keypoints.begin(); iter!=keypoints.end(); i++){
            if(status[i] == 0 ){
                iter = keypoints.erase(iter);
                continue;
            }
            
            // draw the tracking line segments
            Point2f beforePoint=*iter;
            Point2f afterPoint=next_keypoints[i];
            line(showFlowImg, beforePoint,afterPoint, Scalar(0,240,0), 1);
    
    
            *iter = next_keypoints[i];
            iter++;
        }
        
        cout<<"tracked keypoints: "<<keypoints.size()<<endl;
        // stop if all points are lost
        if (keypoints.size() == 0){
            cout<<"all keypoints are lost."<<endl;
            break; 
        }
        
        // draw the keypoints
        Mat img_show = color.clone();
        for (auto kp:keypoints)
            circle(img_show, kp, 5, Scalar(0, 240, 0), 1);
        imshow("corners", img_show);
        waitKey(0);
        last_color = color;
    }
    
    imshow("showFlowImg", showFlowImg);
    waitKey(0);
    
    return 0;
}

    Feature points:

[Figure 7: detected feature points]

    Trajectories of the feature points:

[Figure 8: trajectories of the feature points]

    The program's output shows that most feature points in the image are tracked successfully, but some are lost. Lost feature points have either moved out of the field of view or been occluded by other objects; if no new feature points are extracted, the optical flow tracks fewer and fewer points over time.

    Features located at object corners are more stable. Features on edges slide along the edge, because the content of the feature patch barely changes as it moves along the edge, so the program takes it to be the same place; feature points elsewhere jump around frequently. Corners are therefore the best features to extract, followed by edge points.

    LK optical flow does not require computing and matching descriptors, but it still carries some computational cost. It directly yields correspondences between feature points and rarely mismatches, but it requires the camera motion to be small.

 

 

