Embedding an OpenCV video stream in a wxPython frame

The previous posts shared the environment setup and source code for face recognition and emotion judgment, but there was no UI, and the interface was ugly: running it just pops up a bare OpenCV video window. As a Virgo, I found it hard to look at, so I decided to build a UI and make it a bit more formal and good-looking. After all, this is a small piece of software now, not just source code being run.

After searching around online, I found this to be a gap: nobody seems to have done it. The closest thing is building a video player with wxPython, which is a little different from displaying a real-time OpenCV video stream, but it still served as a useful guide.

Versions used: python-3.6.3 (Anaconda), opencv-3.4.1, wxpython-4.0.1

Running process:

1. When the program starts, it first displays a cover page.

2. After the user clicks [Start], OpenCV begins reading the video feed, and dlib processes each frame and makes an emotion judgment.

3. After the user clicks [Close], the video stops and the display returns to the cover page, waiting for [Start] to be clicked again.
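The start/close flow above can be sketched as a tiny state machine (a hypothetical `UIState` helper for illustration, not part of the original program):

```python
class UIState:
    """Minimal sketch of the cover/video flow driven by the two buttons."""
    COVER, PLAYING = "cover", "playing"

    def __init__(self):
        self.state = self.COVER

    def on_start(self):
        # [Start] switches from the cover page to the live video
        if self.state == self.COVER:
            self.state = self.PLAYING

    def on_close(self):
        # [Close] stops the video and returns to the cover page
        if self.state == self.PLAYING:
            self.state = self.COVER
```

The two button handlers in the real program play exactly these two roles.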

 

1. Instantiate the frame and add controls

    def __init__(self, parent, title):
        wx.Frame.__init__(self, parent, title=title, size=(600, 600))
        self.panel = wx.Panel(self)
        self.Center()

        # Cover image (COVER is the file path of the cover picture, defined elsewhere)
        self.image_cover = wx.Image(COVER, wx.BITMAP_TYPE_ANY).Scale(350, 300)
        # Display the image on the panel
        self.bmp = wx.StaticBitmap(self.panel, -1, wx.Bitmap(self.image_cover))

        start_button = wx.Button(self.panel, label='Start')
        close_button = wx.Button(self.panel, label='Close')

        self.Bind(wx.EVT_BUTTON, self.learning_face, start_button)
        self.Bind(wx.EVT_BUTTON, self.close_face, close_button)

This initialization adds a picture and two buttons, and binds each button to its action function. The wx.Image method can read a photo in almost any format; the image is then handed to a wx.StaticBitmap, which displays it on the panel.

2. Interface layout based on GridBagSizer

There are many ways to lay out an interface; here we use GridBagSizer.

It abstracts the panel into a grid, which you can picture like a spreadsheet in Excel. Individual cells grow and shrink according to their own settings, while the total area stays the same.

        # Interface layout based on GridBagSizer

        # First instantiate a sizer object
        self.grid_bag_sizer = wx.GridBagSizer(hgap=5, vgap=5)
        # Note: in pos, the row index comes first, then the column index
        self.grid_bag_sizer.Add(self.bmp, pos=(0, 0), flag=wx.ALL | wx.EXPAND, span=(4, 4), border=5)
        self.grid_bag_sizer.Add(start_button, pos=(4, 1), flag=wx.ALL | wx.ALIGN_CENTER_VERTICAL, span=(1, 1), border=5)
        self.grid_bag_sizer.Add(close_button, pos=(4, 2), flag=wx.ALL | wx.ALIGN_CENTER_VERTICAL, span=(1, 1), border=5)

        self.grid_bag_sizer.AddGrowableCol(0, 1)
        self.grid_bag_sizer.AddGrowableRow(0, 1)

        self.panel.SetSizer(self.grid_bag_sizer)
        # Resize the frame to fit the sizer's contents
        self.grid_bag_sizer.Fit(self)

Controls are added to the grid with the Add method: pos gives the cell coordinates, and span gives the number of rows and columns the control spans. Together these two parameters largely determine a control's position and size.
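To make the coordinate convention concrete (row first, then column), here is a small pure-Python sketch, not part of wxPython itself, that computes which grid cells a control occupies:

```python
def cells(pos, span):
    """Return the set of (row, col) grid cells covered by a control.

    Mirrors wx.GridBagSizer's convention: pos = (row, col), span = (rowspan, colspan).
    """
    row, col = pos
    rowspan, colspan = span
    return {(r, c) for r in range(row, row + rowspan)
                   for c in range(col, col + colspan)}

# The layout above: the bitmap fills rows 0-3 / cols 0-3,
# and the two buttons sit in row 4, so nothing overlaps.
bmp_cells = cells((0, 0), (4, 4))
start_cells = cells((4, 1), (1, 1))
close_cells = cells((4, 2), (1, 1))
```

The bitmap covers 16 cells and is disjoint from both button cells, which is why the three controls never collide.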

The AddGrowableCol / AddGrowableRow calls let the cells grow proportionally when the frame is resized, and the final Fit call sizes the frame to match the sizer's contents.

3. Button actions

Now that the controls are neatly positioned and formatted, the next step is binding their actions.

When you click Start, OpenCV begins reading the video feed, hands each frame to dlib for face recognition and feature-point calibration, and then displays the result.

Originally the display was done by calling OpenCV's imshow method, with the line cv2.imshow("camera", im_rd). So what should we do if we want the frames displayed inside the wxPython frame instead?

Wouldn't it be enough to simply replace the cover page with each frame from OpenCV?

cv2.imshow("camera", im_rd) displays each frame in a loop; we only change the display method, but the principle stays the same: showing one picture after another in a loop produces a video.

So at the end of the day, the problem reduces to displaying a single picture.

Still, something seems off: the cover page we displayed was a JPG picture, so what format is this im_rd?

Remember this piece of code:

            # cap.read() returns two values:
            #   a boolean: True/False, whether the frame was read successfully
            #   (i.e. whether the end of the video has been reached)
            #   an image object: the three-dimensional matrix of the frame
            flag, im_rd = self.cap.read()

It is a three-dimensional matrix (height × width × BGR channels).

At this point we need to transform the frame data with the following method before displaying it:

            # Convert the BGR frame captured by OpenCV to RGB,
            # then display the picture inside the UI frame
            height, width = im_rd.shape[:2]
            image1 = cv2.cvtColor(im_rd, cv2.COLOR_BGR2RGB)
            pic = wx.Bitmap.FromBuffer(width, height, image1)
            # Display the picture on the panel
            self.bmp.SetBitmap(pic)
            self.grid_bag_sizer.Fit(self)

In this way, every frame of the video can be displayed on the wxPython panel.
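To see what the BGR→RGB step does without pulling in OpenCV or wxPython, here is a pure-Python sketch (a hypothetical `bgr_to_rgb_buffer` helper, not the blog's code) that swaps the channel order and flattens a frame into the kind of RGB byte buffer that wx.Bitmap.FromBuffer expects:

```python
def bgr_to_rgb_buffer(frame):
    """Swap BGR pixel triples to RGB and flatten to a byte buffer.

    `frame` is a nested list (rows of [B, G, R] pixels), standing in for
    the three-dimensional matrix returned by cap.read().
    """
    out = bytearray()
    for row in frame:
        for b, g, r in row:
            out += bytes((r, g, b))  # reorder channels: BGR -> RGB
    return bytes(out)

# A 1x2 "frame": one pure-blue and one pure-red pixel, in BGR order
frame = [[[255, 0, 0], [0, 0, 255]]]
buf = bgr_to_rgb_buffer(frame)
```

cv2.cvtColor with COLOR_BGR2RGB performs the same reordering, vectorized over the whole NumPy array.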

4. Simple multi-threading

Once I could click Start and see the video inside the frame, I tried dragging the frame, and the program froze! No response at all. What happened?

After some research, the cause became clear: the program has only one thread. While the video is being displayed in the while loop, that loop occupies the only thread, so any UI operation hangs the program.

The solution is to create a new thread and let the button's action function run in that sub-thread, while the main thread handles UI operations such as dragging and resizing the frame.

    def learning_face(self, event):
        """Use multi-threading: the sub-thread runs the background work
        while the main thread updates the UI, so they don't block each other."""
        import _thread
        # Create a sub-thread; the button calls this method
        _thread.start_new_thread(self._learning_face, (event,))

Now when we click the Start button, this method runs first: it creates a sub-thread, and that thread calls the earlier method. It is simply a thin wrapper around the previous method.

This way the interface no longer freezes.
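The same idea can be sketched with the standard threading module (a hypothetical `VideoWorker`, not the blog's code): a daemon thread runs the capture loop in the background, while a threading.Event plays the role of the Close button's stop flag:

```python
import threading
import time

class VideoWorker:
    """Sketch of the start/close pattern: a worker thread loops in the
    background while the main thread stays free for UI events."""

    def __init__(self):
        self._stop = threading.Event()
        self.frames_shown = 0

    def _loop(self):
        # Stand-in for: grab a frame, run dlib, convert BGR->RGB, SetBitmap
        while not self._stop.is_set():
            self.frames_shown += 1
            time.sleep(0.01)

    def start(self):
        self._stop.clear()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def close(self):
        self._stop.set()
        self._thread.join()

worker = VideoWorker()
worker.start()
time.sleep(0.1)   # the main thread stays responsive meanwhile
worker.close()
```

_thread.start_new_thread works too, but threading.Thread plus an Event gives a clean way to stop the loop when Close is clicked.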

 

Complete program code: https://gitee.com/Andrew_Qian/codes/acm80fr6ekjwgpz27bthd35

 
