OpenCV in Action (28) - Optical Flow Estimation

0. Preface

When a camera takes a picture, the captured brightness patterns are projected onto the image sensor to form an image. In video sequences, we usually want to capture the motion pattern, that is, the projection of the 3D motion of the different scene elements onto the image plane. The image of these projected 3D motion vectors is called the motion field (motion field). However, we cannot directly measure the 3D motion of scene points from the camera sensor; what we observe is only the frame-by-frame motion of the brightness patterns, and this apparent motion of brightness patterns is called optical flow (optical flow). The motion field is not always identical to the optical flow. A simple example is filming an object with no visible texture: if the camera moves in front of a plain white wall, no optical flow is produced. Another classic example is the illusion of motion created by a rotating pole:

(figure: the rotating pole illusion)

In the case shown above, the motion field consists of horizontal motion vectors, since the vertical cylinder rotates around its main axis. Visually, however, the red and blue stripes appear to move upwards, and that apparent motion is what the optical flow exhibits. Despite these differences, optical flow can generally be considered a usable approximation of the motion field. In this section, we will learn how to estimate the optical flow of an image sequence.

1. Optical flow estimation principle

Optical flow estimation means quantifying the motion of the brightness patterns in an image sequence. Consider one frame of a video at a given instant: if we look at a point $(x, y)$ on the current frame, we want to know where that point moves in subsequent frames. The coordinates of the point moving over time can be expressed as $(x(t), y(t))$, and our goal is to estimate its velocity $(\frac{dx}{dt}, \frac{dy}{dt})$. The brightness of this particular point at a given time $t$ can be read from the corresponding frame of the sequence, i.e., $I(x(t), y(t), t)$. According to the image brightness constancy assumption, we can assume that the brightness of the point does not change over time:
$$\frac{dI(x(t), y(t), t)}{dt} = 0$$
According to the chain rule, the following equation can be obtained:
$$\frac{\partial I}{\partial x}\frac{dx}{dt} + \frac{\partial I}{\partial y}\frac{dy}{dt} + \frac{\partial I}{\partial t} = 0$$
This equation is called the brightness constancy equation; it relates the optical flow components (the derivatives of $x$ and $y$ with respect to time) to the image derivatives. It is exactly the equation we derived in the previous section on tracking feature points in a video.
However, this single equation (which contains two unknowns) is insufficient to compute the optical flow at a pixel location, so we need an additional constraint. A common choice is to assume that the optical flow field is smooth, meaning that neighboring flow vectors should be similar. This constraint can be expressed through the Laplacian of the optical flow:
$$\frac{\partial^2}{\partial x^2}\frac{dx}{dt} + \frac{\partial^2}{\partial y^2}\frac{dy}{dt}$$
The goal is therefore to find the optical flow field that minimizes both the deviation from the brightness constancy equation and the Laplacian of the flow vectors.
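To make this objective concrete, one common way to write it is as the classical Horn-Schunck energy functional; the notation $u = \frac{dx}{dt}$, $v = \frac{dy}{dt}$ and the weight $\alpha$ are introduced here purely for illustration:

$$E(u, v) = \iint \left( \frac{\partial I}{\partial x} u + \frac{\partial I}{\partial y} v + \frac{\partial I}{\partial t} \right)^2 + \alpha \left( \| \nabla u \|^2 + \| \nabla v \|^2 \right) \, dx \, dy$$

The first term penalizes deviations from the brightness constancy equation, the second penalizes non-smooth flow, and minimizing the functional leads to equations involving the Laplacian of the flow discussed above. The Dual TV-L1 method used below replaces these quadratic penalties with absolute values.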

2. Implementation of the optical flow algorithm

We can solve the dense optical flow estimation problem with the cv::optflow::DualTVL1OpticalFlow class, which is built as a subclass of the common base class cv::Algorithm.

(1) First, create an instance of the cv::optflow::DualTVL1OpticalFlow class and obtain a pointer to it:

    // create the optical flow algorithm
    cv::Ptr<cv::optflow::DualTVL1OpticalFlow> tvl1 = cv::optflow::createOptFlow_DualTVL1();

(2) Call the method of the created object that computes the optical flow field between two frames:

    cv::Mat oflow;
    tvl1->calc(frame1, frame2, oflow);
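The computed flow is stored as a two-channel floating-point image (type CV_32FC2). As a quick sanity check, here is a minimal sketch of how one might read an individual displacement vector (the pixel coordinates used are arbitrary):

    // the flow is a two-channel float image: one (dx, dy) vector per pixel
    CV_Assert(oflow.type() == CV_32FC2);
    // read the displacement of the pixel at column 100, row 50
    // (note that at<>() takes (row, column))
    cv::Point2f d = oflow.at<cv::Point2f>(50, 100);
    std::cout << "displacement at (100,50): (" << d.x << ", " << d.y << ")" << std::endl;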

The result of the evaluation is an image of 2D vectors (cv::Point2f) representing the per-pixel displacement between the two frames. To display the result, we need to draw these vectors, so we create a function that generates an image map of the optical flow field.

(3) To control the visibility of the vectors, we use two parameters. The first is a stride value: only one vector every given number of pixels is drawn, which leaves room to display each vector. The second is a scale factor that expands the vector length to make it more visible. Each drawn optical flow vector is a simple line ending with a filled circle that acts as an arrowhead. The mapping function is as follows:

// draw the optical flow vectors on an image
void drawOpticalFlow(const cv::Mat &oflow,  // the optical flow
            cv::Mat &flowImage,             // the resulting image
            int stride,                     // the stride of the displayed vectors
            float scale,                    // multiplier for the vectors
            const cv::Scalar &color) {      // color of the vectors
    // allocate the image if necessary
    if (flowImage.size() != oflow.size()) {
        flowImage.create(oflow.size(), CV_8UC3);
        flowImage = cv::Scalar(255, 255, 255);  // white background
    }
    for (int y = 0; y < oflow.rows; y += stride) {
        for (int x = 0; x < oflow.cols; x += stride) {
            // get the flow vector
            cv::Point2f vector = oflow.at<cv::Point2f>(y, x);
            // draw the line
            cv::line(flowImage, cv::Point(x, y),
                    cv::Point(static_cast<int>(x+scale*vector.x+0.5),
                            static_cast<int>(y+scale*vector.y+0.5)),
                    color);
            // draw the arrow tip as a filled circle
            cv::circle(flowImage, cv::Point(static_cast<int>(x+scale*vector.x+0.5),
                            static_cast<int>(y+scale*vector.y+0.5)),
                    1, color, -1);
        }
    }
}
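An alternative to this arrow map, not used in this example but common in practice, is to color-code the dense flow field: the direction of each vector becomes the hue and its magnitude becomes the brightness. A minimal sketch of such a function (drawFlowHSV is a hypothetical helper, not part of the original program; it relies only on headers the program already includes):

// color-code a dense flow field: direction -> hue, magnitude -> brightness
void drawFlowHSV(const cv::Mat &oflow, cv::Mat &flowImage) {
    std::vector<cv::Mat> xy;
    cv::split(oflow, xy);                     // separate the dx and dy channels
    cv::Mat magnitude, angle;
    cv::cartToPolar(xy[0], xy[1], magnitude, angle, true);  // angle in degrees
    cv::normalize(magnitude, magnitude, 0, 255, cv::NORM_MINMAX);
    angle *= 0.5;                             // 8-bit hue in OpenCV spans [0,180]
    cv::Mat channels[] = { angle,
                           cv::Mat(oflow.size(), CV_32F, cv::Scalar(255)),
                           magnitude };
    cv::Mat hsv, hsv8;
    cv::merge(channels, 3, hsv);
    hsv.convertTo(hsv8, CV_8U);               // convert to 8-bit for cvtColor
    cv::cvtColor(hsv8, flowImage, cv::COLOR_HSV2BGR);
}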

(4) We use the following two frames:

(figure: the two sample frames)
(5) Using the above frames, visualize the estimated optical flow field by calling the drawing function:

    // draw the optical flow image
    cv::Mat flowImage;
    drawOpticalFlow(oflow, flowImage, 8, 2, cv::Scalar(0, 0, 0));

(figure: the resulting optical flow field)
As explained above, the optical flow field is estimated by minimizing a function that combines the brightness constancy constraint and a smoothness term; the approach used here is called the Dual TV-L1 method. It has two main components. The first is a smoothness constraint that minimizes the absolute value (the total variation) of the optical flow gradient rather than its square; this choice reduces the impact of the smoothness term in discontinuous regions, for example where the flow vectors of a moving object differ strongly from those of the background. The second is a first-order Taylor approximation that linearizes the brightness constancy constraint; this linearization facilitates iterative estimation of the flow field, but it is only valid for small displacements.
In this section we used the Dual TV-L1 method with its default parameters. Through its setter and getter methods, it is possible to modify parameters that affect both the quality of the solution and the speed of computation. For example, one can change the number of scales used in the pyramidal estimation, or specify a stricter stopping criterion. Another important parameter is the weight that balances the brightness constancy constraint against the smoothness constraint; for example, if we halve the importance of the brightness constancy constraint, we obtain a smoother optical flow field:

    // obtain a smoother optical flow
    tvl1->setLambda(0.05);
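The other parameters follow the same setter/getter pattern; for instance, a sketch of trading some accuracy for speed might look like this (the values are illustrative, not tuned for any particular sequence):

    // illustrative speed/accuracy trade-off
    tvl1->setScalesNumber(3);    // fewer pyramid levels
    tvl1->setWarpingsNumber(3);  // fewer warpings per scale
    tvl1->setEpsilon(0.05);      // looser stopping threshold
    tvl1->calc(frame1, frame2, oflow);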

(figure: the smoother optical flow field)

3. Complete code

For the complete code of the header file (videoprocessor.h), refer to the video sequence processing section. The complete code of the main function file (flow.cpp) is as follows:

#include <string>
#include <iostream>
#include <sstream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/video/tracking.hpp>
#include <opencv2/optflow.hpp>
#include "videoprocessor.h"

// draw the optical flow vectors on an image
void drawOpticalFlow(const cv::Mat &oflow,  // the optical flow
            cv::Mat &flowImage,             // the resulting image
            int stride,                     // the stride of the displayed vectors
            float scale,                    // multiplier for the vectors
            const cv::Scalar &color) {      // color of the vectors
    // allocate the image if necessary
    if (flowImage.size() != oflow.size()) {
        flowImage.create(oflow.size(), CV_8UC3);
        flowImage = cv::Scalar(255, 255, 255);  // white background
    }
    for (int y = 0; y < oflow.rows; y += stride) {
        for (int x = 0; x < oflow.cols; x += stride) {
            // get the flow vector
            cv::Point2f vector = oflow.at<cv::Point2f>(y, x);
            // draw the line
            cv::line(flowImage, cv::Point(x, y),
                    cv::Point(static_cast<int>(x+scale*vector.x+0.5),
                            static_cast<int>(y+scale*vector.y+0.5)),
                    color);
            // draw the arrow tip as a filled circle
            cv::circle(flowImage, cv::Point(static_cast<int>(x+scale*vector.x+0.5),
                            static_cast<int>(y+scale*vector.y+0.5)),
                    1, color, -1);
        }
    }
}

int main() {
    // read the two input frames as gray-level images
    cv::Mat frame1 = cv::imread("3.png", cv::IMREAD_GRAYSCALE);
    cv::Mat frame2 = cv::imread("4.png", cv::IMREAD_GRAYSCALE);
    // display the two frames side by side
    cv::Mat combined(frame1.rows, frame1.cols + frame2.cols, CV_8U);
    frame1.copyTo(combined.colRange(0, frame1.cols));
    frame2.copyTo(combined.colRange(frame1.cols, frame1.cols + frame2.cols));
    cv::imshow("Frames", combined);
    // create the optical flow algorithm
    cv::Ptr<cv::optflow::DualTVL1OpticalFlow> tvl1 = cv::optflow::createOptFlow_DualTVL1();
    std::cout << "regularization coefficient: " << tvl1->getLambda() << std::endl;
    std::cout << "Number of scales: " << tvl1->getScalesNumber() << std::endl;
    std::cout << "Scale step: " << tvl1->getScaleStep() << std::endl;
    std::cout << "Number of warpings: " << tvl1->getWarpingsNumber() << std::endl;
    std::cout << "Stopping criteria: " << tvl1->getEpsilon() << " and " << tvl1->getOuterIterations() << std::endl;
    // compute the optical flow between the two frames
    cv::Mat oflow;
    tvl1->calc(frame1, frame2, oflow);
    // draw the optical flow image
    cv::Mat flowImage;
    drawOpticalFlow(oflow, flowImage, 8, 2, cv::Scalar(0, 0, 0));
    cv::imshow("Optical Flow", flowImage);
    // obtain a smoother optical flow
    tvl1->setLambda(0.05);
    tvl1->calc(frame1, frame2, oflow);
    // draw the optical flow image
    cv::Mat flowImage2;
    drawOpticalFlow(oflow, flowImage2, 8, 2, cv::Scalar(0, 0, 0));
    cv::imshow("Smoother Optical Flow", flowImage2);
    cv::waitKey();
}

Summary

Optical flow estimation (Optical Flow estimation) has important applications in video understanding, action recognition, object tracking, panoramic stitching, and other fields; in various video analysis tasks it reflects the motion information within the video and is an important visual cue. In this section, we introduced the basic principles of optical flow estimation and used the cv::optflow::DualTVL1OpticalFlow class to solve the dense optical flow estimation problem.
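Note that the implementation used here comes from the opencv_contrib optflow module. If that module is not available, the main video module offers cv::calcOpticalFlowFarneback as another dense optical flow estimator; here is a minimal sketch using its commonly cited default parameter values:

    // dense optical flow with the Farneback method (opencv2/video/tracking.hpp)
    cv::Mat flow;
    cv::calcOpticalFlowFarneback(frame1, frame2, flow,
                                 0.5,   // pyramid scale between levels
                                 3,     // number of pyramid levels
                                 15,    // averaging window size
                                 3,     // iterations at each pyramid level
                                 5,     // pixel neighborhood for polynomial expansion
                                 1.2,   // Gaussian standard deviation for the expansion
                                 0);    // flags
    // the result is again a CV_32FC2 image, so drawOpticalFlow() can display it
    drawOpticalFlow(flow, flowImage, 8, 2, cv::Scalar(0, 0, 0));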

Series links

OpenCV in Action (1) - OpenCV and Image Processing Fundamentals
OpenCV in Action (2) - OpenCV Core Data Structures
OpenCV in Action (3) - Image Regions of Interest
OpenCV in Action (4) - Pixel Operations
OpenCV in Action (5) - Image Operations in Detail
OpenCV in Action (6) - OpenCV Strategy Design Pattern
OpenCV in Action (7) - OpenCV Color Space Conversion
OpenCV in Action (8) - Histograms in Detail
OpenCV in Action (9) - Detecting Image Content with the Backprojection Histogram
OpenCV in Action (10) - Integral Images in Detail
OpenCV in Action (11) - Morphological Transformations in Detail
OpenCV in Action (12) - Image Filtering in Detail
OpenCV in Action (13) - High-Pass Filters and Their Applications
OpenCV in Action (14) - Image Line Extraction
OpenCV in Action (15) - Contour Detection in Detail
OpenCV in Action (16) - Corner Detection in Detail
OpenCV in Action (17) - FAST Feature Point Detection
OpenCV in Action (18) - Feature Matching
OpenCV in Action (19) - Feature Descriptors
OpenCV in Action (20) - Image Projection Relations
OpenCV in Action (21) - Matching Images with Random Sample Consensus
OpenCV in Action (22) - Homographies and Their Applications
OpenCV in Action (23) - Camera Calibration
OpenCV in Action (24) - Camera Pose Estimation
OpenCV in Action (25) - 3D Scene Reconstruction
OpenCV in Action (26) - Video Sequence Processing
OpenCV in Action (27) - Tracking Feature Points in a Video
