MeanShift image segmentation and video background subtraction in OpenCV (Python implementation)

1. Principle of MeanShift

(1) Strictly speaking, this method does not segment the image; it performs smoothing filtering in the color domain.
(2) It averages colors with similar distributions, smoothing out color details and eroding smaller regions.
(3) Taking any point P on the image as the center, with spatial radius sp and color amplitude sr, it iterates until convergence.

pyrMeanShiftFiltering(src, sp, sr, dst=None, maxLevel=None, termcrit=None)

src: input original image (8-bit, 3-channel);
sp: spatial window radius; the larger the value, the stronger the smoothing;
sr: color window radius; the larger the value, the larger the regions that get merged together;
dst: output image, same size and format as src;
maxLevel: maximum level of the segmentation pyramid (default 1);
termcrit: termination criterion, i.e. when to stop the mean-shift iteration.

import cv2

img = cv2.imread('images/lenna.png')
img = cv2.resize(src=img, dsize=(450, 450))
# Mean-shift filtering (color-level smoothing / segmentation)
dst = cv2.pyrMeanShiftFiltering(src=img, sp=20, sr=30)
# Edge detection on the smoothed image
canny = cv2.Canny(image=dst, threshold1=30, threshold2=100)
# Find contours from the edge map
contours, hierarchy = cv2.findContours(image=canny, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
# Draw the contours on the original image
cv2.drawContours(image=img, contours=contours, contourIdx=-1, color=(0, 255, 0), thickness=3)

cv2.imshow('img', img)
cv2.imshow('dst', dst)
cv2.imshow('canny', canny)
cv2.waitKey(0)
cv2.destroyAllWindows()


Canny edge detection algorithm:
https://mydreamambitious.blog.csdn.net/article/details/125116318
Image search with findHomography:
https://mydreamambitious.blog.csdn.net/article/details/125385752


2. Separating video foreground and background

(1) MOG2 background subtraction

An improvement on createBackgroundSubtractorMOG: a foreground/background segmentation algorithm based on a Gaussian mixture model.

createBackgroundSubtractorMOG2(history=None, varThreshold=None, detectShadows=None)

history: length of the frame history used for modeling the background (default 500);
varThreshold: threshold on the squared Mahalanobis distance that decides whether a pixel is well described by the background model (default 16);
detectShadows: whether to detect and mark shadows (default True; shadows are marked gray, value 127).
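A minimal sketch of constructing the subtractor with these parameters spelled out; the synthetic frames are only stand-ins for real video, and with detectShadows=True the returned mask can contain only 0 (background), 127 (shadow), and 255 (foreground):

```python
import cv2
import numpy as np

# Defaults written out explicitly for clarity
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=True)

rng = np.random.default_rng(1)
# Feed a few noisy "static background" frames to build the model
for _ in range(20):
    frame = rng.integers(90, 110, size=(100, 100, 3), dtype=np.uint8)
    mask = mog2.apply(frame)

# A bright square simulates a foreground object entering the scene
frame[30:60, 30:60] = 255
mask = mog2.apply(frame)

# Mask values are limited to the background/shadow/foreground labels
print(sorted(np.unique(mask)))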

import cv2

# Open the video file
cap = cv2.VideoCapture('video/University_Traffic.mp4')
# Create the background-subtractor object
bgsegment = cv2.createBackgroundSubtractorMOG2()

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(src=frame, dsize=(500, 500))
    # Apply the subtractor to get the foreground mask
    fgmask = bgsegment.apply(frame)
    cv2.imshow('img', fgmask)

    # Press Esc to exit
    if cv2.waitKey(1) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()



As the video frames show, MOG2 produces a lot of noise, so an improved method was proposed:
GMG background subtraction: it combines static background image estimation with per-pixel Bayesian segmentation, and is more robust to noise.


Origin blog.csdn.net/Keep_Trying_Go/article/details/125451244