Eliminating the delay when displaying camera images with Python and OpenCV

1. Problem description

When displaying a camera feed with OpenCV in Python, a common problem arises as soon as any processing is performed on each video frame: the displayed image lags the real scene by several seconds or more, which makes for a poor user experience.

Note the difference between delay and stuttering: stuttering is choppy playback, typically occurring when the playback rate drops below 10 frames per second. A stuttering camera feed usually causes delay as well.

2. Cause of the delay

In video-processing applications, handling each image frame usually takes some time. Meanwhile, OpenCV's low-level frame buffer keeps accumulating unread frames, and the read() method returns the oldest frame in that buffer, not the camera's current frame. When many frames pile up in the buffer, the displayed picture falls behind reality.
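The buffering effect described above can be reproduced without a camera. In this sketch (all names are illustrative, not from the original article), a producer thread stands in for the camera driver filling the buffer at a high frame rate, and a slow consumer stands in for frame processing; the backlog of unread frames grows steadily, which is exactly the accumulating delay:

```python
import queue
import threading
import time

buf = queue.Queue()            # unbounded, like an unread driver buffer
stop = threading.Event()

def producer():
    """Fake camera: delivers a new frame number every 10 ms (~100 fps)."""
    n = 0
    while not stop.is_set():
        buf.put(n)
        n += 1
        time.sleep(0.01)

threading.Thread(target=producer, daemon=True).start()

lags = []
for _ in range(20):
    time.sleep(0.05)           # "processing" is 5x slower than capture
    frame = buf.get()          # like read(): returns the OLDEST buffered frame
    lags.append(buf.qsize())   # number of unread frames still queued
stop.set()

print(lags[0], lags[-1])       # the backlog grows over time
```

Because the consumer always receives the oldest queued frame, the gap between the displayed frame and the camera's current frame keeps widening for as long as processing is slower than capture.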

3. Solution

Create a custom VideoCapture wrapper class that reads video without buffering, as a drop-in replacement for OpenCV's VideoCapture class.
Development steps:
1) Create a queue.
2) Start a worker thread that reads camera frames in real time, always keeping only the latest frame in the queue and discarding old ones.
3) For display, read frames from the queue of the new wrapper class.

4. Implementation code

import cv2
import queue
import threading
import time

# Custom unbuffered video-capture class
class VideoCapture:
    """Customized VideoCapture that always returns the latest frame."""

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.q = queue.Queue(maxsize=3)
        self.stop_threads = False          # flag to close the worker thread gracefully
        self.th = threading.Thread(target=self._reader)
        self.th.daemon = True              # run the worker thread in the background
        self.th.start()

    # Read frames in real time, keeping only the most recent one
    def _reader(self):
        while not self.stop_threads:
            ret, frame = self.cap.read()
            if not ret:
                break
            if not self.q.empty():
                try:
                    self.q.get_nowait()    # discard the previous (unprocessed) frame
                except queue.Empty:
                    pass
            self.q.put(frame)

    def read(self):
        return self.q.get()

    def terminate(self):
        self.stop_threads = True
        self.th.join()                     # wait for the worker thread to exit
        self.cap.release()

# Test the custom VideoCapture class
cap = VideoCapture(0)
while True:
    frame = cap.read()
    time.sleep(0.10)                       # simulate a time-consuming operation (seconds)
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        cap.terminate()
        break
cv2.destroyAllWindows()

Even if frame processing takes so long that playback stutters, the new class discards unprocessed frames and always reads the camera's current frame, so the delay is eliminated and the picture stays real-time.

In real applications, this example code can serve as a starting point for further optimization.
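One possible optimization, sketched below (this variant is my own illustration, not from the original article), is to replace the queue with a single lock-protected slot guarded by a condition variable: the reader thread simply overwrites the slot, so no queue bookkeeping is needed. The class wraps any source exposing `read() -> (ret, frame)`, so in real use you would pass it a `cv2.VideoCapture`; here a hypothetical `FakeCamera` that numbers its frames demonstrates that stale frames are skipped:

```python
import threading
import time

class LatestFrameCapture:
    """Keeps only the newest frame in a single slot instead of a queue."""

    def __init__(self, source):
        self.source = source               # anything with read() -> (ret, frame)
        self._cond = threading.Condition()
        self._frame = None
        self._stopped = False
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while not self._stopped:
            ret, frame = self.source.read()
            if not ret:
                break
            with self._cond:
                self._frame = frame        # overwrite: the old frame is dropped
                self._cond.notify()

    def read(self):
        with self._cond:
            while self._frame is None:     # block until a fresh frame arrives
                self._cond.wait()
            frame, self._frame = self._frame, None
            return frame

    def terminate(self):
        self._stopped = True

# Demo with a fake camera that numbers its frames (hypothetical stand-in).
class FakeCamera:
    def __init__(self):
        self.n = 0
    def read(self):
        time.sleep(0.005)                  # ~200 fps capture
        self.n += 1
        return True, self.n

cap = LatestFrameCapture(FakeCamera())
a = cap.read()
time.sleep(0.1)                            # simulate slow processing
b = cap.read()                             # jumps ahead: intermediate frames were dropped
cap.terminate()
print(a, b)
```

The design choice here is the same as in the article's queue version: drop everything but the newest frame; the condition variable just removes the queue's allocation and size-check overhead.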

Origin blog.csdn.net/captain5339/article/details/128857313