Removing a video's green-screen background with native JS canvas



Note: Removing the video background here does not mean removing the background from the video file.

If you need to actually strip the background out of the video file and export it, use a library such as ffmpeg; the method here is for playback only.

Since the canvas in uniapp is encapsulated and its drawImage cannot draw video frames, this method is not applicable in uniapp.


The implementation uses a canvas to capture the video frame by frame, processes each captured image, and then displays the processed image in the canvas.

Finally, a timer runs this capture-and-replace at high speed to produce the video playback effect, shown below ⬇

(figure: playback demo with the green screen removed)

Some green-screen pixels remain at the edges; these can be improved with the additional processing described later.


Principle

First, use the canvas drawImage method to draw the current video frame into a canvas.

Then use getImageData to obtain an array composed of the rgba values of all pixels in the current canvas.

The value obtained has the form [r, g, b, a, r, g, b, a, ...], where each rgba group is one pixel, so the array's length is the number of pixels in the canvas * 4.

The effect is achieved by checking each group's rgb values to decide whether it is a green-screen pixel, then setting its alpha (transparency) channel to 0.
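A minimal sketch of this check on one ImageData frame (the function name is illustrative; the thresholds are the ones used in the code below):

// Chroma-key an ImageData in place: zero the alpha of every pixel whose
// rgb values fall in the green-screen range.
function keyOutGreen(frame) {
    for (let i = 0; i < frame.data.length; i += 4) {
        const r = frame.data[i];
        const g = frame.data[i + 1];
        const b = frame.data[i + 2];
        if (r < 100 && g > 120 && b < 200) {
            frame.data[i + 3] = 0;
        }
    }
}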


Code

Because a cross-origin video source would taint the canvas (making getImageData throw), you first need to download the test video locally.

Test video address
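Alternatively, if the server hosting the video sends CORS headers (an assumption about the host; the URL below is hypothetical), setting crossOrigin on the video element also keeps the canvas untainted:

// Must be set before assigning src; works only if the response carries
// an Access-Control-Allow-Origin header.
const v = document.querySelector('#video');
v.crossOrigin = 'anonymous';
v.src = 'https://example.com/example.mp4'; // hypothetical URL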

<template>
    <div class="videoBgRemove">
        <video id="video" src="/images/example.mp4" loop autoplay muted ref="video" style="width: 240px;height: 135px;"></video>
        <canvas id="output-canvas" width="240" height="135" willReadFrequently="true" ref="canvas"></canvas>
    </div>
</template>

<script setup>
import { ref, onMounted } from 'vue';

const video = ref(null);
const canvas = ref(null);
const ctx = ref(null);
const canvas_tmp = ref(null);
const ctx_tmp = ref(null);

const init = () => {
    // willReadFrequently hints that pixel data will be read back often
    ctx.value = canvas.value.getContext('2d', { willReadFrequently: true });

    // The temp canvas should match the display canvas and video dimensions
    canvas_tmp.value = document.createElement('canvas');
    canvas_tmp.value.setAttribute('width', 240);
    canvas_tmp.value.setAttribute('height', 135);
    ctx_tmp.value = canvas_tmp.value.getContext('2d', { willReadFrequently: true });

    video.value.addEventListener('play', computeFrame);
}

const computeFrame = () => {
    if (!video.value || video.value.paused || video.value.ended) return;

    // If the video and canvas aspect ratios differ the image may be distorted;
    // adjust the divisor to correct the ratio
    ctx_tmp.value.drawImage(video.value, 0, 0, video.value.clientWidth / 1, video.value.clientHeight / 1);

    // Get the array of rgba values for every pixel of the drawn canvas
    let frame = ctx_tmp.value.getImageData(0, 0, video.value.clientWidth, video.value.clientHeight);

    // Total number of pixels
    const pointLens = frame.data.length / 4;

    for (let i = 0; i < pointLens; i++) {
        let r = frame.data[i * 4];
        let g = frame.data[i * 4 + 1];
        let b = frame.data[i * 4 + 2];

        // If the rgb values fall in this range, treat the pixel as green-screen
        // background and set its alpha to 0; for a different background color,
        // adjust the rgb thresholds accordingly
        if (r < 100 && g > 120 && b < 200) {
            frame.data[i * 4 + 3] = 0;
        }
    }

    // Redraw onto the display canvas
    ctx.value.putImageData(frame, 0, 0);
    // Schedule the next frame (recursive call)
    setTimeout(computeFrame, 0);
}

onMounted(() => {
    init();
})
</script>

You can see that green pixels still flicker at the edges. Processing them with an algorithm gives better results, but the extra computation also increases resource consumption and can lower the frame rate.
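One possible variant (an assumption, not part of the original post) is to schedule the loop with requestAnimationFrame instead of setTimeout(computeFrame, 0), so each frame is processed at most once per display refresh:

// Inside computeFrame, replace the setTimeout call at the end with:
ctx.value.putImageData(frame, 0, 0);
requestAnimationFrame(computeFrame);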

The following sections implement feathering and color transition with a few simple algorithms.

Optimization

Feathering

After filtering by the rgb values above, some green-screen pixels still remain because their rgb values are close to the subject's colors and cannot be processed this way.

Widening the rgb filter range would hollow out the subject's pixels instead, so we need to treat the edge pixels specially.

  1. Get the pixels within the 3x3 neighborhood of the pixel being processed
Suppose x is the pixel being processed; collect all surrounding pixels -> 1, 2, 3, 4, 6, 7, 8, 9
[
    [1, 2, 3],
    [4, x, 6],
    [7, 8, 9],
]
  2. Count how many of the surrounding pixels have an alpha channel of 0
Suppose the pixels whose alpha channel is 0 are 1, 2, 3
[
    [0  , 0  ,  0 ],
    [255, x  , 255],
    [255, 255, 255],
]
  3. Recalculate the alpha value of the pixel being processed

Since there are 3 transparent pixels around x, the alpha value of x is (255 / 8) * (8 - 3).

In effect, 255 is divided into as many parts as x has surrounding pixels; for every surrounding pixel whose alpha is 0, one part is subtracted.

After the calculation, assign the result to x's alpha value (see the sketch below).
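A minimal sketch of that recalculation (the function name is illustrative, not from the original code):

// Feathered alpha: one part of 255 per neighbor, minus one part for
// each transparent neighbor.
const featherAlpha = (neighborCount, transparentCount) =>
    (255 / neighborCount) * (neighborCount - transparentCount);

featherAlpha(8, 3); // (255 / 8) * (8 - 3) = 159.375, truncated to 159 later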

Note:

During traversal, modifying an earlier pixel would change the neighboring values read for a later pixel.

Modifying x would affect the computation of y
[
    [1, 2, 3 , 4 ],
    [5, x, y , 8 ],
    [9, 1, 11, 12],
]

Therefore, the rgba data must be deep-copied after the first filtering pass, and neighbor values must be read from that copy, as shown below.
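A minimal sketch of that copy, matching the tempData used in the optimized code below:

// Snapshot the pixel data after the first pass; all neighbor reads in
// the feathering pass go through this copy.
const tempData = [...frame.data];
// A typed-array copy also works: new Uint8ClampedArray(frame.data)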


Color transition

When calculating the alpha value, also sum each rgb channel of the surrounding pixels and compute their averages.

When modifying the processed pixel's alpha value, modify its rgb values together, as in the sketch below.
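A rough sketch of that blend, using the same neighbor sums as the optimized code (the function name and parameters are illustrative):

// Average each channel of the surrounding pixels; the sums come from
// the copied data so the pass stays order-independent.
const blendChannels = (rSum, gSum, bSum, neighborCount) => [
    Math.trunc(rSum / neighborCount),
    Math.trunc(gSum / neighborCount),
    Math.trunc(bSum / neighborCount),
];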

The final processing result is shown below
(figure: result after feathering and color transition)


Optimized code

<template>
    <div class="videoBgRemove">
        <video id="video" src="/images/example.mp4" loop autoplay muted ref="video" style="width: 240px;height: 135px;"></video>
        <canvas id="output-canvas" width="240" height="135" willReadFrequently="true" ref="canvas"></canvas>
    </div>
</template>

<script setup>
import { ref, onMounted } from 'vue';

const video = ref(null);
const canvas = ref(null);
const ctx = ref(null);
const canvas_tmp = ref(null);
const ctx_tmp = ref(null);

const init = () => {
    // willReadFrequently hints that pixel data will be read back often
    ctx.value = canvas.value.getContext('2d', { willReadFrequently: true });

    // The temp canvas should match the display canvas and video dimensions
    canvas_tmp.value = document.createElement('canvas');
    canvas_tmp.value.setAttribute('width', 240);
    canvas_tmp.value.setAttribute('height', 135);
    ctx_tmp.value = canvas_tmp.value.getContext('2d', { willReadFrequently: true });

    video.value.addEventListener('play', computeFrame);
}

// Convert a 1-based pixel index to a 1-based [row, col] position
const numToPoint = (num, width) => {
    let col = num % width;
    let row = Math.floor(num / width);
    row = col === 0 ? row : row + 1;
    col = col === 0 ? width : col;
    return [row, col];
}

// Convert a 1-based [row, col] position back to a 1-based pixel index
const pointToNum = (point, width) => {
    let [row, col] = point;
    return (row - 1) * width + col;
}

// Collect the positions of the pixels surrounding `point` in an
// area x area neighborhood (excluding the center), clipped to the canvas
const getAroundPoint = (point, width, height, area) => {
    let [row, col] = point;
    let allAround = [];
    if (row > height || col > width || row < 0 || col < 0) return allAround;
    for (let i = 0; i < area; i++) {
        let pRow = row - 1 + i;
        for (let j = 0; j < area; j++) {
            let pCol = col - 1 + j;
            // Skip the center pixel itself
            if (i === area % 2 && j === area % 2) continue;
            allAround.push([pRow, pCol]);
        }
    }
    // Keep only positions that fall inside the canvas
    return allAround.filter(([iRow, iCol]) => {
        return (iRow > 0 && iCol > 0) && (iRow <= height && iCol <= width);
    })
}


const computeFrame = () => {
    if (!video.value || video.value.paused || video.value.ended) return;

    // If the video and canvas aspect ratios differ the image may be distorted;
    // adjust the divisor to correct the ratio
    ctx_tmp.value.drawImage(video.value, 0, 0, video.value.clientWidth / 1, video.value.clientHeight / 1);

    // Get the array of rgba values for every pixel of the drawn canvas
    let frame = ctx_tmp.value.getImageData(0, 0, video.value.clientWidth, video.value.clientHeight);

    //----- feathering ----------
    const height = frame.height;
    const width = frame.width;
    const pointLens = frame.data.length / 4;


    for (let i = 0; i < pointLens; i++) {
        let r = frame.data[i * 4];
        let g = frame.data[i * 4 + 1];
        let b = frame.data[i * 4 + 2];
        // Green-screen pixel: make it fully transparent
        if (r < 100 && g > 120 && b < 200) {
            frame.data[i * 4 + 3] = 0;
        }
    }

    // Deep-copy the pixel data so neighbor reads use the pre-pass values
    const tempData = [...frame.data]
    for (let i = 0; i < pointLens; i++) {
        // Skip pixels already keyed out
        if (frame.data[i * 4 + 3] === 0) continue
        const currentPoint = numToPoint(i + 1, width);
        const arroundPoint = getAroundPoint(currentPoint, width, height, 3);
        let opNum = 0; // number of non-opaque neighbors
        let rSum = 0;
        let gSum = 0;
        let bSum = 0;
        arroundPoint.forEach((position) => {
            const index = pointToNum(position, width);
            rSum = rSum + tempData[(index - 1) * 4];
            gSum = gSum + tempData[(index - 1) * 4 + 1];
            bSum = bSum + tempData[(index - 1) * 4 + 2];
            if (tempData[(index - 1) * 4 + 3] !== 255) opNum++;
        })
        // One part of 255 per neighbor; subtract one part per transparent neighbor
        let alpha = (255 / arroundPoint.length) * (arroundPoint.length - opNum);
        if (alpha !== 255) {
            // Edge pixel: blend rgb toward the neighbor average and feather alpha
            frame.data[i * 4] = parseInt(rSum / arroundPoint.length);
            frame.data[i * 4 + 1] = parseInt(gSum / arroundPoint.length);
            frame.data[i * 4 + 2] = parseInt(bSum / arroundPoint.length);
            frame.data[i * 4 + 3] = parseInt(alpha);
        }
    }

    //------------------------
    ctx.value.putImageData(frame, 0, 0);
    setTimeout(computeFrame, 0);
}

onMounted(() => {
    init();
})
</script>


Source: blog.csdn.net/Raccon_/article/details/132732976