YYEVA Dynamic Effect Player: A New Solution for the Perfect Presentation of Dynamic Elements

Author | Tornado

Introduction: As computer vision has developed, animations have become more and more impressive. While improving visual quality, motion-effect solutions also need to balance file size and performance. The transparent MP4 approach lets designers create animations in a WYSIWYG way, giving full play to their imagination and creativity. Baidu's YYEVA dynamic effect player is a lightweight, high-performance, cross-platform motion-effect solution based on transparent MP4. It supports the insertion of dynamic elements and provides a complete tool chain: a one-stop workflow covering resource export on the design side, online preview, and client-side rendering SDKs.

The full text is 3736 words, and the estimated reading time is 10 minutes.

01 Introduction to YYEVA

YYEVA provides a complete tool chain, including an AE plug-in on the resource output side, an online preview tool, and client-side rendering SDKs. Its resources are transparent MP4 files into which dynamic business elements can be inserted.

  • YYEVA is a lightweight, high-performance, cross-platform dynamic MP4 resource solution

  • YYEVA includes a complete tool chain, from the AE design plug-in to the online preview tool and the client-side rendering SDKs

  • Based on affine matrix operations, it obtains the position of each layer in every frame

  • It integrates the MP4 (AVC/HEVC) encapsulation protocol and related functions

  • It is highly extensible and can restore every detail of the designer's work

  • It supports Web, Android, iOS, etc.

YYEVA has been used in many projects; scenes in YY, Tieba, Baidu, Kankan, and other products use it to achieve complex dynamic effects, and it has also been provided to project teams at external companies.

Click the link to view the case demo video: https://mp.weixin.qq.com/s/bXgauBqtwUlT8IRXPLr45w

Open source project address: https://github.com/yylive/yyeva

YYEVA official website: https://yyeva.netlify.app/

02 YYEVA's Path of Exploration

2.1 Common approaches to implementing animation


1. Results-Oriented Recording

This approach records the image of every frame of the animation and, at playback time, restores the effect from the images' RGBA data. Because only the final result is recorded, the original design elements cannot be recovered, so it is difficult to modify animation elements, and inserting elements is also relatively complicated.

  • Advantages:

WYSIWYG: all design effects can be restored; no need to develop support for specific special effects; the number of elements and the complexity of the motion have little impact on playback performance

  • Disadvantages:

Large file size; not easy to support dynamic insertion or replacement of elements

2. Process-Oriented Recording

This approach records the creation process of the animation; at playback time, the motion trajectory of each element is computed from that process to restore the effect. Real-time computation is required, and the more complex the animation, the heavier the computation; calculations involving filters and Bezier curves, for example, are very performance-intensive.

  • Advantages:

Small file size; can be scaled arbitrarily without loss of quality; easy to insert elements dynamically

  • Disadvantages:

Lower performance, consuming both CPU and GPU; weaker support for complex animations: the more complex the animation, the more likely it is to stutter

Comparing the principles of the two approaches above, in order to achieve a WYSIWYG effect and give full play to the designer's imagination, YYEVA adopts the result-oriented approach: the transparent MP4 scheme.

Compared with the frame-sequence scheme, transparent MP4 offers a much higher compression rate, which solves the file-size problem, and the YYEVA tool chain we developed supports dynamic insertion and replacement of elements.

2.2 Video animation

With H.264 encoding, MPEG-4 uses the YUV color-sampling standard. YUV is a combination of luminance and chrominance components and does not carry an alpha channel. To make an MP4 video support transparency, the common industry practice is to use two regions to store the video's RGB data and alpha data separately. Because video animation is WYSIWYG and supports more complex special effects, it is widely used in various YY scenes and has become the preferred solution for YY animation playback.

[Figure: a transparent MP4 frame, with the RGB region on the left and the alpha region on the right]

If the animation resolution is 500x500, the MP4 resolution is 1000x500: the 500x500 region on the left holds the RGB data and the 500x500 region on the right holds the alpha data. The player combines the two into an RGBA texture, which is then rendered and displayed.
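As a concrete illustration (not the SDK's actual code), the following TypeScript sketch combines the two halves of a decoded frame on the CPU, assuming the alpha half stores the alpha value as grayscale:

```typescript
// Minimal CPU-side sketch: take a decoded 1000x500 frame whose left half is RGB
// and whose right half encodes alpha as grayscale, and produce a 500x500 RGBA image.
// Real players do this on the GPU; the function below only illustrates the layout.
function combineHalves(frame: ImageData): ImageData {
  const halfWidth = frame.width / 2;                 // e.g. 1000 / 2 = 500
  const out = new ImageData(halfWidth, frame.height);
  for (let y = 0; y < frame.height; y++) {
    for (let x = 0; x < halfWidth; x++) {
      const rgbIdx = (y * frame.width + x) * 4;                // pixel in the left (RGB) half
      const alphaIdx = (y * frame.width + x + halfWidth) * 4;  // matching pixel in the right (alpha) half
      const outIdx = (y * halfWidth + x) * 4;
      out.data[outIdx] = frame.data[rgbIdx];          // R
      out.data[outIdx + 1] = frame.data[rgbIdx + 1];  // G
      out.data[outIdx + 2] = frame.data[rgbIdx + 2];  // B
      out.data[outIdx + 3] = frame.data[alphaIdx];    // A, read from the alpha half's red channel
    }
  }
  return out;
}
```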

2.3 Mixed MP4 motion effects

To add business elements such as nicknames, avatars, and pictures to an MP4 motion effect, the common practice is either to superimpose a native View while the MP4 is playing, or to combine MP4 with SVGA/Y2A. Either way, this brings problems such as downloading multiple resources, keeping playback in sync, and maintaining several sets of resources.


2.4 YYEVA solution

YYEVA combines the video frames and the description information into a single MP4 resource and renders them together, which eliminates the problems of downloading multiple resources, synchronizing playback, and maintaining several sets of resources.


The JSON description information of YYEVA is organized as follows (a rough model is sketched below):


  • description: resolution, plug-in version, position of the RGB area, position of the alpha area

  • effect: type, key

  • datas: renderFrame, outputFrame

1. How YYEVA describes dynamic elements

Three key attributes describe a dynamic element in an animation: time, position, and deformation.

  • Time: the frameIndex field in the JSON indicates the video frame on which the element should appear

  • Position: the renderFrame field in the JSON describes the element's position and size on the canvas

  • Deformation: the outputFrame field in the JSON locates the mask that records the element's deformation
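Based only on the fields named above (description, effect, datas, frameIndex, renderFrame, outputFrame), a rough TypeScript model of the JSON might look like the following; the exact property names and shapes in the real plug-in output may differ:

```typescript
// Hypothetical model of the YYEVA description JSON, reconstructed from the fields
// mentioned in this article. Treat every property name and shape as an assumption.
type Rect = [x: number, y: number, width: number, height: number];

interface YYEVADescription {
  description: {
    width: number;            // overall video resolution
    height: number;
    version: string;          // plug-in version
    rgbFrame: Rect;           // position of the RGB area
    alphaFrame: Rect;         // position of the alpha area
  };
  effect: Array<{
    effectType: "text" | "image"; // type of dynamic element
    effectKey: string;            // key used to bind a business element (nickname, avatar, ...)
    effectId: number;
  }>;
  datas: Array<{
    frameIndex: number;           // time: which video frame this entry applies to
    data: Array<{
      effectId: number;
      renderFrame: Rect;          // position: where the element is drawn on the canvas
      outputFrame: Rect;          // deformation: where the element's mask lives in the video frame
    }>;
  }>;
}
```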

2. Demonstration of the mask

A mask shape can be recorded in two ways:

  • record a shape description of the graphic

  • save the graphic in its entirety


03 YYEVA implementation plan

The architecture diagram of YYEVA is as follows:

[Figure: YYEVA architecture diagram]

The flowchart of the toolchain is as follows:

[Figure: tool chain flowchart]

3.1 YYEVA plug-in

The YYEVA plug-in is an extension developed for AE. It analyzes the layer information that the designer has produced according to the specification, and exports a YYEVA resource through the YYConverterMP4 plug-in's layer parsing module, H264 module, and resource synthesis module.


1. Layer parsing module


  • Normative checks

    • Whether the selected composition contains a transparent-area layer: the resources currently processed by the plug-in are based on YY transparent MP4 resources, so the source material must be a transparent MP4 resource, i.e. with RGB on the left and alpha on the right

    • Whether the selected composition contains a mask area: a text mask area uses mask_text as the composition name; an image mask area uses mask_image as the composition name

    • Whether the YYConverterMP4 template is included; it is used by the H264 module below to convert to MP4

  • Layer handling (a small sketch of the matrix-to-RenderFrame step follows this list)

    • Calculate the affine matrix: Matrix

    • Calculate the mask position: RenderFrame

    • Extract the mask compositions

    • Calculate the RenderFrame of every frame of all layers under the mask composition

    • Copy an output composition, scale the alpha area by 0.5 and adjust its position, and divide the composition into three parts: RGB, alpha, and mask

    • Copy all valid mask layers into the output composition, adjust them to the mask area, and calculate the OutputFrame

    • Resize the area of the output composition

    • Combine the RenderFrame and OutputFrame calculated above into one JSON document

  • Copy composition

After making a copy of the specified output composition, scale the alpha area by 0.5 and adjust its position, dividing the composition into three parts: RGB, alpha, and mask.
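As a small illustration of the affine-matrix step (not the plug-in's actual code), the sketch below derives a RenderFrame-style rectangle for one frame by applying a layer's 2D affine matrix to its four corners and taking the bounding box:

```typescript
// Apply a 2D affine matrix to the four corners of a layer and return the
// axis-aligned bounding box, i.e. a RenderFrame-style { x, y, w, h } rectangle.
type Matrix2D = { a: number; b: number; c: number; d: number; tx: number; ty: number };

function renderFrameFromMatrix(layerWidth: number, layerHeight: number, m: Matrix2D) {
  const corners: Array<[number, number]> = [
    [0, 0], [layerWidth, 0], [0, layerHeight], [layerWidth, layerHeight],
  ];
  const xs = corners.map(([x, y]) => m.a * x + m.c * y + m.tx);
  const ys = corners.map(([x, y]) => m.b * x + m.d * y + m.ty);
  const minX = Math.min(...xs);
  const minY = Math.min(...ys);
  return { x: minX, y: minY, w: Math.max(...xs) - minX, h: Math.max(...ys) - minY };
}
```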


2. H264 module

  • Create the specified YYConverterMP4 template in AE, and have the unified render queue output AVI format

  • The ffmpeg command-line tool is integrated inside the plug-in and invoked through child_process; macOS and Windows use different ffmpeg executables (.app and .exe files)

  • When converting with ffmpeg, the output image quality and file size are standardized by wrapping the CRF parameter into three quality gears (see the sketch after this list)

  • H.264/H.265 resources are produced by selecting different encoders
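A minimal Node.js sketch of this ffmpeg step is shown below; the gear values, file paths, and encoder choices are illustrative assumptions rather than the plug-in's actual configuration:

```typescript
import { execFile } from "child_process";

// Convert the AVI exported from the AE render queue into an H.264/H.265 MP4,
// choosing one of three CRF "gears" to trade quality against file size.
const CRF_GEARS = { high: 18, medium: 23, low: 28 } as const; // assumed values

function encodeMp4(inputAvi: string, outputMp4: string, gear: keyof typeof CRF_GEARS, hevc = false): void {
  const args = [
    "-y",
    "-i", inputAvi,
    "-c:v", hevc ? "libx265" : "libx264", // different encoders for H.265 vs H.264 output
    "-crf", String(CRF_GEARS[gear]),      // unified quality/size control
    "-pix_fmt", "yuv420p",
    outputMp4,
  ];
  execFile("ffmpeg", args, (error) => {
    if (error) throw error;
  });
}
```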

3. Resource synthesis module

  • Packing data
    Assemble the outputFrame, renderFrame, and basic video information into one JSON

  • Data compression and encoding
    Encode the JSON as base64 and wrap it with the prefix and suffix: yyeffectmp4json[[ Json ]]yyeffectmp4json

  • Export to AVI
    Use the created YYConverterMP4 template, add the copied composition to the render queue, and output an AVI resource

  • Output to MP4
    Use the H264 module to encode it into an H.264/H.265 resource

  • Write data into MP4
    Use ffmpeg to write the JSON data into the metadata section of the H.264/H.265 resource (a sketch follows this list)
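The following sketch shows the wrapping step and one possible way to write the result into MP4 metadata with ffmpeg; the metadata key used here ("comment") is an assumption, and the real plug-in may write the data differently:

```typescript
import { execFileSync } from "child_process";

// Encode the description JSON as base64 and wrap it with the marker described above:
// yyeffectmp4json[[ <base64 JSON> ]]yyeffectmp4json
function wrapDescription(json: object): string {
  const base64 = Buffer.from(JSON.stringify(json)).toString("base64");
  return `yyeffectmp4json[[${base64}]]yyeffectmp4json`;
}

// Write the wrapped payload into the MP4's metadata with ffmpeg, without re-encoding.
// Using the "comment" tag here is an assumption for illustration only.
function writeIntoMp4(inputMp4: string, outputMp4: string, payload: string): void {
  execFileSync("ffmpeg", [
    "-y", "-i", inputMp4,
    "-c", "copy",
    "-metadata", `comment=${payload}`,
    outputMp4,
  ]);
}
```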

3.2 YYEVA client-side rendering

The overall architecture of YYEVA rendering SDK is shown in the following figure:

picture

After the client reads a YYEVA resource, it will go through the following rendering process:

  • The resource parsing module parses the JSON data embedded in the metadata segment

  • The element parsing module then models the dynamic JSON data into corresponding objects

  • The audio/video module decodes the audio and video tracks of the MP4 resource

  • The rendering module renders the video track plus the masked dynamic elements to the screen frame by frame
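To make the four steps concrete, here is a highly simplified sketch of the flow; every name in it is a hypothetical placeholder for illustration, not the actual API of the YYEVA SDKs:

```typescript
// Hypothetical stand-ins for the four modules described above.
declare function parseMetadataJson(mp4Path: string): unknown;                       // 1. resource parsing module
declare function buildEffectModel(desc: unknown): { frameAt(i: number): unknown };  // 2. element parsing module
declare function decodeVideoFrames(mp4Path: string): AsyncIterable<{ index: number; pixels: ImageBitmap }>; // 3. audio/video module
declare function drawFrame(pixels: ImageBitmap, effects: unknown, elements: Map<string, ImageBitmap>): void; // 4. rendering module

async function play(mp4Path: string, elements: Map<string, ImageBitmap>): Promise<void> {
  const description = parseMetadataJson(mp4Path);  // JSON pulled from the metadata segment
  const model = buildEffectModel(description);     // per-frame renderFrame/outputFrame objects
  for await (const frame of decodeVideoFrames(mp4Path)) {
    drawFrame(frame.pixels, model.frameAt(frame.index), elements); // RGB+alpha mix, then mask mixing
  }
}
```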

The entire rendering process is as follows:

[Figure: the overall rendering flow]

1. Extract the description information


There are two ways to extract the metadata information of an MP4:

  • Demultiplex the MP4 resource with ffmpeg and parse the metadata

  • Character matching

When the plug-in writes the JSON data into the MP4, it wraps the data in a layer of yyeffectmp4json[[jsondata]]yyeffectmp4json. This is mainly so that jsondata can be extracted quickly by character matching.
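A minimal sketch of this character-matching extraction, assuming the payload between the markers is base64-encoded JSON as described in the previous section:

```typescript
import { readFileSync } from "fs";

const MARK = "yyeffectmp4json";

// Scan the raw MP4 bytes for the yyeffectmp4json[[ ... ]]yyeffectmp4json wrapper
// and decode the base64 payload back into a JSON object.
function extractDescription(mp4Path: string): unknown | null {
  const raw = readFileSync(mp4Path).toString("latin1"); // 1:1 byte-to-char mapping
  const start = raw.indexOf(`${MARK}[[`);
  if (start < 0) return null;
  const end = raw.indexOf(`]]${MARK}`, start);
  if (end < 0) return null;
  const base64 = raw.slice(start + MARK.length + 2, end);
  return JSON.parse(Buffer.from(base64, "base64").toString("utf8"));
}
```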

2. Implementation logic of client-side rendering

[Figure: implementation logic of client-side rendering]

3. RGB+Alpha mixing

[Figure: RGB + alpha mixing]
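A minimal GLSL fragment-shader sketch of this step, held as a TypeScript string; the uniform and varying names are illustrative, not the SDK's actual ones:

```typescript
// Sample colour from the left half of the video frame and alpha from the right
// half, then emit a single RGBA value. vTexCoord covers the visible (left) half.
const rgbAlphaMixShader = `
  precision mediump float;
  varying vec2 vTexCoord;        // [0,1] over the visible animation area
  uniform sampler2D uVideoFrame; // the full double-width video frame

  void main() {
    vec2 rgbCoord   = vec2(vTexCoord.x * 0.5, vTexCoord.y);       // left half: RGB
    vec2 alphaCoord = vec2(0.5 + vTexCoord.x * 0.5, vTexCoord.y); // right half: alpha
    vec3 rgb = texture2D(uVideoFrame, rgbCoord).rgb;
    float a  = texture2D(uVideoFrame, alphaCoord).r;              // alpha stored as grayscale
    gl_FragColor = vec4(rgb, a);
  }
`;
```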

4. Mask mixing

[Figure: mask mixing]
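An equally rough shader sketch of mask mixing: the inserted business element (avatar, nickname, image) is drawn at its renderFrame position, and its alpha is modulated by the mask recorded in the outputFrame region of the video frame. The names and the use of the red channel as mask coverage are assumptions:

```typescript
const maskMixShader = `
  precision mediump float;
  varying vec2 vElementCoord;   // texture coordinate over the inserted element
  varying vec2 vMaskCoord;      // matching coordinate inside the outputFrame (mask) region
  uniform sampler2D uElement;   // business element texture (avatar / nickname / image)
  uniform sampler2D uVideoFrame;

  void main() {
    vec4 element = texture2D(uElement, vElementCoord);
    float mask   = texture2D(uVideoFrame, vMaskCoord).r;  // mask coverage from the video frame
    gl_FragColor = vec4(element.rgb, element.a * mask);   // element clipped by the mask shape
  }
`;
```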

5. Rendering diagram

[Figure: rendering result]

04 Conclusion

YYEVA is now open source, including the AE plug-in and the client-side rendering SDKs (Web/iOS/Android). If you have other needs or ideas, feel free to leave a message in the comment area or submit issues directly to the code repository.

You are welcome to star the project; your attention is our biggest motivation.

In today's live-streaming business, motion-effect playback solutions keep developing, and new ones emerge one after another. We will continue to explore and optimize in the future: rotation of dynamic elements, support for more AE features, and integration with AI technology, to give users a better experience.

————END————

Recommended reading:

A brief analysis of the asset system architecture of Baidu's trading platform

Understand APP speed test from zero to one

Flow logs can easily cope with complex scenarios of "1 billion IP pairs" and realize network traffic visualization of ultra-large-scale hybrid clouds

Baidu App Android Startup Performance Optimization - Tools

Application of digital human technology in live broadcast scenarios

Baidu engineers teach you to play with design mode (factory mode)

Super model engineering practice polishing, Baidu Intelligent Cloud releases cloud native AI 2.0 solution

Efficiency improvement practice of front-end and back-end data interface collaboration
