How to use WebRTC to take pictures and implement filters

Before we start taking pictures, we need to cover a bit of background: non-encoded frames and encoded frames.

1. Basic knowledge

1.1 Non-encoded frames

First of all, we need to know that video has the concept of frame rate. Movies and TV are commonly 24 frames per second, gamers talk about how many frames a game renders, and playing a video works the same way. When a video is played, the player continuously pulls frames from the video file at a fixed interval. These frames are non-encoded frames, also called decoded frames. For a 20 fps video, for example, one frame is fetched every 50 ms, which is why playback looks continuous. Playing video from a camera works the same way, except that camera frames are already raw (unencoded), so there is no need to decode them again.
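The interval is just simple arithmetic, shown here as a tiny TypeScript sketch using the numbers from the paragraph above:

// Interval between frames in milliseconds for a given frame rate
const frameIntervalMs = (fps: number) => 1000 / fps;

frameIntervalMs(20); // 50 ms, the example above
frameIntervalMs(24); // ≈ 41.7 ms for a 24 fps film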

1.2 Encoded frames

An encoded frame, as the name implies, is a frame that has been compressed by an encoder (such as H264/H265 or VP8/VP9). In H264 there are three types of frame data: the I frame, the P frame, and the B frame.

  • I frame: key frame. Its compression ratio is the lowest, and it can be decoded into a complete image on its own.

  • P frame: forward-predicted frame. Its compression ratio is higher, and decoding it depends on previously decoded frames.

  • B frame: bidirectionally predicted frame. It has the highest compression ratio; decoding it depends not only on previously decoded frames but also on the P frame that follows it. To decode a B frame, the decoder needs both the earlier cached picture and the later decoded picture, combining them with the current frame's data to obtain the final image. B frames compress very well, but they put a comparatively heavy load on the CPU during decoding. The sketch below illustrates the resulting ordering.
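As a rough illustration (a typical pattern, not taken from any particular stream), display order and decode order differ once B frames are involved:

// Display order: B frames sit between the I/P frames they reference
const displayOrder = ['I', 'B', 'B', 'P', 'B', 'B', 'P'];

// Decode order: each P frame must arrive before the B frames that
// reference it, so the decoder receives frames out of display order
const decodeOrder = ['I', 'P', 'B', 'B', 'P', 'B', 'B'];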

2. Obtain the media video stream

First, we need to use the API from the previous article to get the video stream and play it in a video element. Below is the Vue 3 code:

<script setup lang="ts">
import { ref } from 'vue';

const videos = ref<HTMLVideoElement>();
const getVideo = async () => {
  videos.value!.srcObject = await navigator.mediaDevices.getUserMedia({
    // Capture from the default device; require at least 360p, ideally 720p
    video: {
      width: { min: 640, ideal: 1280 },
      height: { min: 360, ideal: 720 },
    }
  });
}
</script>

<template>
  <div>
    <video ref="videos" width="300" style="aspect-ratio: 16 / 9;" muted autoplay playsinline></video>
    <button @click="getVideo">Get video</button>
  </div>
</template>
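One thing the snippet above glosses over: getUserMedia returns a promise that rejects when the user denies camera permission or when no device can satisfy the constraints. A variant of getVideo with basic error handling could look like this (a sketch, not part of the original article):

const getVideo = async () => {
  try {
    videos.value!.srcObject = await navigator.mediaDevices.getUserMedia({
      video: {
        width: { min: 640, ideal: 1280 },
        height: { min: 360, ideal: 720 },
      }
    });
  } catch (err) {
    // e.g. NotAllowedError (permission denied) or
    // OverconstrainedError (no camera matches the constraints)
    console.error('Failed to open the camera:', err);
  }
}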

 

Through the code above, the video stream is obtained and played successfully. So how do we take a snapshot? Previously we used a canvas to render the video stream; in fact, we can use canvas's drawImage to draw a video frame directly.

3. Draw the picture with canvas (take a photo)

First, let's add a canvas and a button for taking the photo:

const canvas1 = ref<HTMLCanvasElement>();
const ctx = ref<CanvasRenderingContext2D>();
const takePhoto = () => {
  // Grab the 2D context and draw the current video frame onto the canvas
  ctx.value = canvas1.value!.getContext('2d')!;
  ctx.value.drawImage(videos.value!, 0, 0, 640, 360);
}

<div>
  <button @click="takePhoto">Take photo</button>
  <canvas ref="canvas1" width="640" height="360"></canvas>
</div>

If the drawImage API is unfamiliar, you can consult its documentation. We only need to pass the video element as the first argument to drawImage and it draws the current frame onto the canvas.
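A small variation (our own sketch with a hypothetical name, not from the original): sizing the canvas to the video's intrinsic resolution before drawing, so the capture is not stretched when the camera delivers a resolution other than 640x360:

const takePhotoAtIntrinsicSize = () => {
  const video = videos.value!;
  const canvas = canvas1.value!;
  canvas.width = video.videoWidth;   // actual frame width from the camera
  canvas.height = video.videoHeight; // actual frame height from the camera
  canvas.getContext('2d')!.drawImage(video, 0, 0, canvas.width, canvas.height);
}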

4. How to implement the filter

First of all, we need to know that CSS has a filter property, which applies graphical effects such as blur or color shifts to an element. Let's start by defining some CSS filter styles:

<style>
.none {
  filter: none;
}
/** Blur */
.blur {
  filter: blur(5px);
}
/** Contrast */
.contrast {
  filter: contrast(2);
}
/** Brightness */
.brightness {
  filter: brightness(2);
}
/** Grayscale */
.grayscale {
  filter: grayscale(1);
}
/** Invert colors */
.invert {
  filter: invert(1);
}
</style>

Next, add a select box so the user can choose a filter:

<div>
  <select @change="handleSelectFilter">
    <option value="none">None</option>
    <option value="blur">Blur</option>
    <option value="contrast">Contrast</option>
    <option value="brightness">Brightness</option>
    <option value="grayscale">Grayscale</option>
    <option value="invert">Invert</option>
  </select>
</div>

const canvasFilter = ref('none');
const handleSelectFilter = (e: Event) => {
  // The change event fires on the <select> element itself
  canvasFilter.value = (e.target as HTMLSelectElement).value;
}

Isn't that easy? Note that for the selection to take effect, the chosen class must be bound to the canvas with :class="canvasFilter", as in the full code at the end. Of course you can also stack several of these effects, and add numeric inputs to control each property more precisely.
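For instance, here is a minimal sketch (the control names are hypothetical, not from the article) that combines two filter functions and drives them from numeric refs through a computed inline style:

import { computed, ref } from 'vue';

// Hypothetical numeric controls for the sketch
const blurPx = ref(2);
const contrastLevel = ref(1.2);

// Several CSS filter functions can be combined in one value
const filterStyle = computed(() => `blur(${blurPx.value}px) contrast(${contrastLevel.value})`);

// In the template, bind it as an inline style instead of a class:
// <canvas ref="canvas1" width="640" height="360" :style="{ filter: filterStyle }"></canvas>

We have taken the photo and adjusted its look, but how do we save it locally or upload it to a server?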


5. Get the image data (download)

A Blob can be obtained through canvas's toBlob method, or a data URL through toDataURL; either can then be downloaded through a link or uploaded to the server.

// Save
const save = () => {
  // Method 1: toBlob + object URL
  canvas1.value?.toBlob((blob) => {
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob!);
    a.download = 'photo';
    a.click();
    URL.revokeObjectURL(a.href); // release the object URL after the click
    a.remove();
  })

  // Method 2: toDataURL (an alternative to method 1)
  // const a = document.createElement('a');
  // a.href = canvas1.value?.toDataURL()!;
  // a.download = 'photo';
  // a.click();
  // a.remove();
}
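The section above also mentions uploading; as a hedged sketch (the /upload endpoint is an assumption, not part of the article), the same blob can be posted with fetch and FormData:

// Send the captured frame to a server instead of downloading it
const upload = () => {
  canvas1.value?.toBlob(async (blob) => {
    const form = new FormData();
    form.append('photo', blob!, 'photo.png'); // toBlob defaults to PNG
    await fetch('/upload', { method: 'POST', body: form });
  });
}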

We will find that the filter we set does not appear in the saved image. That is because the filter is applied with CSS to the canvas element on the page, while the pixel data inside the canvas stays untouched. If you need the filter baked into the downloaded picture, you have to transform the image's RGB data yourself; we only sketch the idea briefly below.
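Here is a minimal sketch of the pixel approach (our own addition, assuming a grayscale effect; the other filters need their own math). Browsers that support the 2D context's filter property can instead set ctx.filter = 'grayscale(1)' before calling drawImage:

const bakeGrayscale = (canvas: HTMLCanvasElement) => {
  const ctx = canvas.getContext('2d')!;
  const image = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const d = image.data; // RGBA bytes, 4 per pixel
  for (let i = 0; i < d.length; i += 4) {
    // Rec. 601 luma weights for R, G, B
    const gray = 0.299 * d[i] + 0.587 * d[i + 1] + 0.114 * d[i + 2];
    d[i] = d[i + 1] = d[i + 2] = gray;
  }
  ctx.putImageData(image, 0, 0);
}

The complete code is attached below.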

<script setup lang="ts">
import { ref } from 'vue';

// Get the video stream
const videos = ref<HTMLVideoElement>();
const getVideo = async () => {
  videos.value!.srcObject = await navigator.mediaDevices.getUserMedia({
    // Capture from the default device; require at least 360p, ideally 720p
    video: {
      width: { min: 640, ideal: 1280 },
      height: { min: 360, ideal: 720 },
    }
  });
}

// Take a photo
const canvas1 = ref<HTMLCanvasElement>();
const ctx = ref<CanvasRenderingContext2D>();
const takePhoto = () => {
  ctx.value = canvas1.value!.getContext('2d')!;
  ctx.value.drawImage(videos.value!, 0, 0, 640, 360);
}

// Filter selection
const canvasFilter = ref('none');
const handleSelectFilter = (e: Event) => {
  canvasFilter.value = (e.target as HTMLSelectElement).value;
}

// Save
const save = () => {
  // Method 1: toBlob + object URL
  canvas1.value?.toBlob((blob) => {
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob!);
    a.download = 'photo';
    a.click();
    URL.revokeObjectURL(a.href);
    a.remove();
  })

  // Method 2: toDataURL
  // const a = document.createElement('a');
  // a.href = canvas1.value?.toDataURL()!;
  // a.download = 'photo';
  // a.click();
  // a.remove();
}
</script>
<template>
  <div>
    <video ref="videos" width="300" style="aspect-ratio: 16 / 9;" muted autoplay playsinline></video>
    <button @click="getVideo">Get video</button>
  </div>
  <div>
    <button @click="takePhoto">Take photo</button>
    <canvas ref="canvas1" width="640" height="360" :class="canvasFilter"></canvas>
  </div>
  <div>
    <select @change="handleSelectFilter">
      <option value="none">None</option>
      <option value="blur">Blur</option>
      <option value="contrast">Contrast</option>
      <option value="brightness">Brightness</option>
      <option value="grayscale">Grayscale</option>
      <option value="invert">Invert</option>
    </select>
  </div>
  <button @click="save">Save photo</button>
</template>
<style>
.none {
  filter: none;
}
/** Blur */
.blur {
  filter: blur(5px);
}
/** Contrast */
.contrast {
  filter: contrast(2);
}
/** Brightness */
.brightness {
  filter: brightness(2);
}
/** Grayscale */
.grayscale {
  filter: grayscale(1);
}
/** Invert colors */
.invert {
  filter: invert(1);
}
</style>
