Using a local camera with a remote Jupyter server: WebRTC chat room and real-time video processing


Foreword

Use the ipywebrtc component to obtain the local video stream and transmit it to the remote Jupyter server. After the server processes the video, it is sent back to the browser and finally displayed by an ipywidgets.Image component.

Demo

The Chrome browser is recommended.

To see it in action, open the official example and allow camera access.
If you want to dig deeper into the demonstration code, open any .ipynb file on Binder and run it step by step.


Preparation

The ipywebrtc component needs to be installed first. There are two ways. The simple way is to install it directly with pip (requires Jupyter 5.3 or above):

pip install ipywebrtc

The second is to install the latest development version from GitHub:

git clone https://github.com/maartenbreddels/ipywebrtc
cd ipywebrtc
pip install -e .
jupyter nbextension install --py --symlink --sys-prefix ipywebrtc
jupyter nbextension enable --py --sys-prefix ipywebrtc

If you are using JupyterLab, just run the following in a terminal:

jupyter labextension install jupyter-webrtc

Instructions

After completing the preparations, first import the ipywebrtc library in your Jupyter notebook, then create a stream. For the available streams, see the component introduction section below. Here CameraStream is used as an example to open the local front-facing camera:

from ipywebrtc import CameraStream
camera = CameraStream.facing_user(audio=False)
camera

If all goes well, Chrome will pop up a window asking whether to allow the page to use the camera. After you choose Allow, the video captured by the camera appears in the output area.

If Chrome does not show a prompt but instead displays Error creating view for media stream: Only secure origins are allowed, the browser considers the current site insecure (the connection does not use https) and has disabled the camera. The easiest workaround is to append --unsafely-treat-insecure-origin-as-secure="http://host_ip:port" to the end of the "Target" field of the Chrome shortcut and restart the browser (replace host_ip:port with your own server address).

At this point, however, the video is only displayed locally; it has not been uploaded to the server where the Python kernel runs (assuming you are using a remote server), so there is no way to access the video content from the Python context.
So next, we create an ImageRecorder to record the stream and send frames as pictures to the Python kernel on the server:

from ipywebrtc import ImageRecorder
image_recorder = ImageRecorder(stream=camera)
image_recorder

Run this code and the ImageRecorder component is displayed. Click the camera icon on the component to capture a frame from the stream. After that, you can access the picture in Python through image_recorder.image.value and convert it into a Pillow image:

import PIL.Image
import io
im = PIL.Image.open(io.BytesIO(image_recorder.image.value))

If you don't need to process the frame and only want to preview it, you can display image_recorder.image directly; it is an ipywidgets.Image component, which can render pictures by itself.
If you need to process the image with opencv-python, canvas = numpy.array(im)[..., :3] gives you an array that cv2 can work with; after processing you could show the result with matplotlib, but I recommend using an ipywidgets.Image component instead:
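A minimal sketch of that conversion, assuming im is the Pillow image obtained above (the Gaussian blur is only a placeholder for whatever processing you actually need):

import numpy as np
import cv2

canvas = np.array(im)[..., :3]                  # Pillow image -> HxWx3 RGB array
canvas = cv2.GaussianBlur(canvas, (15, 15), 0)  # placeholder processing step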

from ipywidgets import Image
out = Image()
out.value = cv2.imencode('.jpg', cv2.cvtColor(canvas, cv2.COLOR_RGB2BGR))[1].tobytes()  # canvas comes from the sketch above; imencode expects BGR
out

Or, if you would rather not pull in cv2, you can follow the official way:

from ipywidgets import Image
out = Image()
im_out = PIL.Image.fromarray(canvas)
f = io.BytesIO()
im_out.save(f, format='png')
out.value = f.getvalue()
out

Processing video

So far we have only covered how to grab a single picture, process it, and display it, all of which can be learned from the official documents. So how do we continuously capture every frame of a video, process each one, and display them in turn? Below is the method of continuous capture with ImageRecorder that I have worked out.

The reason continuous capture still uses ImageRecorder rather than VideoRecorder is that a video captured by VideoRecorder must have a start and an end, and the whole clip can only be processed after it ends, which does not meet the "real-time" requirement.

After analyzing the author's code, I found that there is no function such as recorder.record() or recorder.take_photo() on ImageRecorder. To make the front end grab the next frame, the ImageRecorder's recording attribute must be set to True; after a picture has been grabbed, the attribute automatically changes back to False. So I wanted to try a loop: after each frame is processed, set recording back to True to grab the next one. However, experiments show that neither while nor for loops work. This is probably because the Jupyter front end is rendered by JavaScript, and simply changing the attribute in the back-end Python environment without notifying the front end cannot keep the capture going.

This explanation is just my personal guess. Due to time constraints I did not examine the front-end files and underlying logic carefully, so my understanding may be wrong; corrections in the comments are welcome.

Finally, referring to Martin Renou's post on GitHub, I came up with the following pattern to achieve continuous real-time video processing:

import io
import PIL.Image
import numpy as np
from ipywidgets import Image, VBox, HBox, Widget, Button
from IPython.display import display
from ipywebrtc import CameraStream, ImageRecorder

VIDEO_WIDTH = 640   # window width, adjust as needed
VIDEO_HEIGHT = 480  # window height, adjust as needed

# Another way to create a CameraStream; see the component introduction below
camera = CameraStream(constraints={
    'facing_mode': 'user',
    'audio': False,
    'video': {'width': VIDEO_WIDTH, 'height': VIDEO_HEIGHT}
})
image_recorder = ImageRecorder(stream=camera)
out = Image(width=VIDEO_WIDTH, height=VIDEO_HEIGHT)

FLAG_STOP = False   # stop flag

def cap_image(_):   # handles each picture grabbed by the ImageRecorder
    if FLAG_STOP:
        return      # stop processing
    im_in = PIL.Image.open(io.BytesIO(image_recorder.image.value))
    im_array = np.array(im_in)[..., :3]
    canvas = process(im_array)   # process() is your own image-processing function; write it to suit your needs
    im_out = PIL.Image.fromarray(canvas)
    f = io.BytesIO()
    im_out.save(f, format='png')
    out.value = f.getvalue()
    image_recorder.recording = True   # reset the attribute so the ImageRecorder grabs the next frame

# Register the capture event; see my other blog post:
# https://qxsoftware.blog.csdn.net/article/details/86708381
image_recorder.image.observe(cap_image, names=['value'])

# Button to stop capturing
btn_stop = Button(description="Stop",
                  tooltip='click this to stop webcam',
                  button_style='danger')

def close_cam(_):   # btn_stop's handler
    global FLAG_STOP   # without this, the assignment would only create a local variable
    FLAG_STOP = True
    Widget.close_all()
btn_stop.on_click(close_cam)   # register the click event

# Run this section and press the camera button to display the demo
display(VBox([HBox([camera, image_recorder, btn_stop]), out]))

After running this code in Jupyter, you will see a local camera preview box, an ImageRecorder capture box, a red Stop button, and an Image component with no picture yet.
Clicking the camera button on the ImageRecorder triggers the cap_image function; the processed image is then shown in the Image component, and the process repeats until Stop is clicked.
Note that only after the camera button has been clicked can later Jupyter cells access image_recorder.image normally; otherwise an OSError: cannot identify image file is raised.
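As a sketch of a simple guard against that error (this check is my own addition, not part of the original code), read the frame only once the recorder has produced data:

raw = image_recorder.image.value
if raw:
    im = PIL.Image.open(io.BytesIO(raw))   # safe to decode now
else:
    print('No frame captured yet -- click the camera button first')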

The most critical part here is the image_recorder.recording = True statement added inside the cap_image function registered with observe on ImageRecorder.image. As for why adding this statement outside the registered function has no effect, the connection between the front and back ends would need further study. Note that an error raised inside the registered function does not interrupt execution, so if the Image component shows no picture after running, cap_image may have raised an error; you can verify this by extracting the body of cap_image into a new cell.
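For example, a quick way to surface an exception that the handler swallowed is to call it by hand in a fresh cell:

cap_image(None)   # any hidden exception now shows up as a normal traceback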


Component introduction

In ipywebrtc, the available media streams are:

  • VideoStream: use VideoStream.from_file(path) to load a local video, or VideoStream.from_url(url) to load a network video

  • CameraStream: obtains a media stream from a local camera device or webcam. There are two ways to create it:

    • The first:

      camera = CameraStream(constraints={
          'facing_mode': 'user',   # 'user' means the front camera, 'environment' the rear camera
          'audio': False,          # whether to capture audio as well (requires hardware support)
          'video': {'width': 640, 'height': 480}   # width and height of the captured video
      })
    
    • The second:
      front_camera = CameraStream.facing_user(audio=False): front camera
      back_camera = CameraStream.facing_environment(audio=False): rear camera
  • AudioStream: an audio stream, created in the same way as VideoStream

  • WidgetStream: WidgetStream(widget=target) creates a media stream from the output of any ipywidget instance (see the sketch after this list)

These media streams all inherit from the MediaStream class.
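A combined sketch of the creation styles listed above (demo.mp4 is a hypothetical local file; the variable names are illustrative):

from ipywebrtc import VideoStream, CameraStream, WidgetStream
from ipywidgets import FloatSlider

video = VideoStream.from_file('demo.mp4')        # hypothetical local file
front_camera = CameraStream.facing_user(audio=False)
slider = FloatSlider()
widget_stream = WidgetStream(widget=slider)      # streams the slider's rendered output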

In addition to the stream components, there are recorder components, which record a stream captured by the front-end JavaScript and send it to the Python kernel. ImageRecorder has been covered in detail in the example above; VideoRecorder is used similarly, see the official documentation.
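For reference, a minimal VideoRecorder sketch following the pattern in the official docs (capture.webm is just an example filename):

from ipywebrtc import VideoRecorder

video_recorder = VideoRecorder(stream=camera)
video_recorder    # click the record button in the widget to start and stop

# after stopping, the raw bytes are in video_recorder.video.value;
# save() writes them to a file
video_recorder.save('capture.webm')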
Finally, there are the chat components in ipywebrtc.webrtc. In my tests they still have some bugs; you can refer to the chat video chat room demo.


More content (such as the ipyvolume series) will be added in the future.
Originally published on CSDN; please indicate the source when reprinting: https://qxsoftware.blog.csdn.net/article/details/89513815. If you have any questions or ideas, please leave a comment below~~
