OpenCV background modeling, image tracking

Background modeling

Introduction

Background modeling: simply put, in video surveillance we are interested in moving people, not in the environment and other stationary objects in the scene. The moving person we care about is therefore the foreground, and the relatively static environment is the background. A method for extracting the motion region is called background modeling.

Frame Difference

Frame difference method: as an object moves, pixel values in the target region change, while values in static regions do not. Subtracting the pixel values of two consecutive video frames therefore gives a very small difference in static regions and a noticeable difference in moving regions; by thresholding this difference, the target can be detected. Formula:

D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|

R_n(x, y) = \begin{cases} 255, & D_n(x, y) > T \\ 0, & \text{otherwise} \end{cases}

where f_n is the current frame and f_{n-1} is the previous frame. The per-pixel difference between the two frames is compared with the threshold T: pixels whose difference exceeds T are set to 255, the rest to 0. Code:

  • Capture two frames from the video.
  • Implement the formula above in code (note: color images have 3 color channels).

    [Figure: the first frame]

    [Figure: the second frame]
# Import the OpenCV library
import cv2

# Load the two frames
img1 = cv2.imread("C:/Users/98046/Desktop/video/image1.jpg")  # first frame
img2 = cv2.imread("C:/Users/98046/Desktop/video/image2.jpg")  # second frame

# Get the image size
height = img1.shape[0]
width = img1.shape[1]
print(img1.shape)

# Absolute difference (plain uint8 subtraction would wrap around)
val = cv2.absdiff(img1, img2)

# Threshold the difference image pixel by pixel
for i in range(height):
    for j in range(width):
        if val[i, j][0] > 40 and val[i, j][1] > 40 and val[i, j][2] > 40:
            val[i, j] = 255
        else:
            val[i, j] = 0

cv2.imshow("val", val)
cv2.waitKey(0)
cv2.destroyAllWindows()

Running result:

[Figure: frame-difference result showing the person in motion]

The person in motion has been extracted. Although the frame difference method is simple, it is prone to noise and holes.
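The per-pixel loop above is slow in Python; the same D_n/R_n computation can be vectorized with NumPy. A minimal sketch (the `frame_difference` helper and the toy 4×4 frames are illustrative, not from the original post):

```python
import numpy as np

def frame_difference(prev, curr, T=40):
    # D_n = |f_n - f_{n-1}|, computed in a wider dtype to avoid uint8 wrap-around
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    # R_n = 255 where D_n > T, else 0
    return np.where(diff > T, 255, 0).astype(np.uint8)

prev = np.zeros((4, 4), dtype=np.uint8)   # "previous" frame: all black
curr = prev.copy()
curr[1:3, 1:3] = 100                       # a small "moving" patch
mask = frame_difference(prev, curr)        # 255 inside the patch, 0 elsewhere
```

This does in one array expression what the double loop does pixel by pixel.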

Gaussian mixture model method

Principle

The full derivation of the Gaussian mixture model is fairly involved; for a detailed explanation, see the linked article: Gaussian mixture model method (click to view).
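As a rough intuition before the code: MOG2 models each pixel with a mixture of Gaussians and flags pixels that fit none of the learned Gaussians as foreground. A deliberately simplified sketch using a single Gaussian per pixel (the class name, learning rate `alpha`, and threshold `k` are illustrative assumptions, not OpenCV's actual implementation):

```python
import numpy as np

class SingleGaussianBackground:
    """One Gaussian per pixel; MOG2 maintains a *mixture* of these per pixel."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mu = first_frame.astype(np.float64)            # running mean
        self.var = np.full(first_frame.shape, 15.0 ** 2)    # running variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        x = frame.astype(np.float64)
        d2 = (x - self.mu) ** 2
        # a pixel is foreground if it lies more than k sigma from the mean
        fg = d2 > (self.k ** 2) * self.var
        bg = ~fg
        # update mean and variance only where the pixel matched the background
        self.mu[bg] += self.alpha * (x - self.mu)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return np.where(fg, 255, 0).astype(np.uint8)

background = np.full((4, 4), 50, dtype=np.uint8)
model = SingleGaussianBackground(background)
mask1 = model.apply(background)        # unchanged frame -> all background
moving = background.copy()
moving[1:3, 1:3] = 200                 # a bright "object" appears
mask2 = model.apply(moving)            # the patch is flagged as foreground
```

MOG2 extends this idea with several Gaussians per pixel plus per-component weights, which lets it absorb repetitive background motion such as swaying leaves.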

Code

import cv2

# Test video
cap = cv2.VideoCapture("C:/Users/98046/Desktop/test.avi")
# cap = cv2.VideoCapture(0)  # or use the camera instead

# Structuring element for the morphological operation
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
# Mixture-of-Gaussians background subtractor
fgbg = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = fgbg.apply(frame)
    # Morphological opening to remove noise
    fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)
    # Find contours in the foreground mask
    contours, hierarchy = cv2.findContours(fgmask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        # Perimeter of each contour
        p = cv2.arcLength(c, True)
        if p > 200:
            x, y, w, h = cv2.boundingRect(c)
            # Draw a bounding rectangle around the moving object
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("imshow", frame)
    cv2.imshow("fgmask", fgmask)
    k = cv2.waitKey(150) & 0xff
    if k == 27:  # ESC to quit
        break

cap.release()
cv2.destroyAllWindows()
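The opening applied to the foreground mask above is an erosion followed by a dilation: it removes isolated noise pixels while larger blobs survive roughly intact. A small pure-NumPy illustration of the idea (the helpers below are a toy re-implementation, not `cv2.morphologyEx`):

```python
import numpy as np

def erode(mask, k=3):
    # Binary erosion: minimum (AND) over a k x k window
    pad = k // 2
    p = np.pad(mask, pad, constant_values=0)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k=3):
    # Binary dilation: maximum (OR) over a k x k window
    pad = k // 2
    p = np.pad(mask, pad, constant_values=0)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def opening(mask, k=3):
    return dilate(erode(mask, k), k)

mask = np.zeros((7, 7), dtype=np.uint8)
mask[1:6, 1:6] = 1    # a large blob: survives opening
mask[0, 6] = 1        # a single noise pixel: removed by opening
cleaned = opening(mask)
```

Erosion kills any pixel whose neighborhood is not entirely foreground (destroying the lone noise pixel), and dilation then grows the surviving blob back to approximately its original size.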

Running results:

1. Picture demonstration:

[Figure: detection result on a sample frame]

2. Video demonstration: click to watch.

[Figure: the tested video file]


Origin blog.csdn.net/weixin_44736584/article/details/105067620