Audio and Video Fundamentals - 01

This article gives you a general grounding in audio and video. A short article obviously cannot turn you into an expert in the field overnight, but it covers the entry-level audio and video knowledge you need. It walks through the following topics:

  • Basic audio and video concepts
  • The audio and video playback process
  • Audio and video codecs
  • Audio and video encapsulation (container) formats
  • Common audio and video transmission protocols

1.1 Basic audio and video concepts

First, we need to understand the most common audio and video concepts and the simple principles behind them.

1.1.1 Sample Rate

Sampling is the process of turning a physical (analog) signal into a digital signal. The sampling rate is defined as the number of samples per second extracted from a continuous signal to form a discrete signal, measured in hertz (Hz). Put simply, it is how many sample points are taken per unit of time when an analog signal is converted to a digital one. For sound, the sampling rate describes the quality and pitch of an audio file and serves as a standard for measuring the quality of sound cards and audio files. The higher the sampling rate, the shorter the interval between samples, the more sample data the computer obtains per unit of time, and the more accurately the signal waveform is represented.

1.1.2 Bit rate

Bit rate is the amount of binary data per unit of time after an analog signal has been converted into a digital signal; it is one measure of audio and video quality and is expressed in bits per second (bps or bit/s). The higher the bit rate, the more data there is per unit of time, the higher the precision, the closer the processed file is to the original, and the higher the quality of the audio or video file.
1000 bit/s = 1 kbit/s (one kilobit per second)
1000 kbit/s = 1 Mbit/s (one megabit per second)
1000 Mbit/s = 1 Gbit/s (one gigabit per second)
For sound, the lowest bit rate at which speech remains intelligible is about 800 bps; telephone-call quality is generally 8 kbps, FM radio broadcasts about 96 kbps, MP3 files range from 8 to 320 kbps (usually 128 kbps), and 16-bit CD audio is about 1411 kbps.
The audio bit rate formula is: bit rate (kbps) = sample rate (kHz) × bit depth (bits per sample) × number of channels (typically 2).
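As a quick illustration, here is a minimal TypeScript sketch of that formula (the 44.1 kHz / 16-bit / stereo values below are just the usual CD-quality parameters, not something specific to this article):

```typescript
// Audio bit rate = sample rate × bit depth × number of channels.
function audioBitRateKbps(sampleRateHz: number, bitDepth: number, channels: number): number {
  return (sampleRateHz * bitDepth * channels) / 1000;
}

// CD-quality PCM: 44.1 kHz, 16 bits per sample, 2 channels.
console.log(audioBitRateKbps(44_100, 16, 2)); // ≈ 1411.2 kbps, the CD figure quoted above
```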

For video, bit rate is also commonly called the code rate. 16 kbps corresponds to videophone quality, 128-384 kbps to business-grade video conferencing, VCD is generally about 1.25 Mbps, DVD about 5 Mbps, and Blu-ray Disc about 40 Mbps. The formula is: bit rate (kbps) = file size (KB) × 8 / duration (s). For example, a 1 GB movie with a duration of 60 minutes has a bit rate of 1 × 1024 × 1024 × 8 / 3600 ≈ 2330 kbps.
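The same kind of sketch for the video formula, using the 1 GB / 60-minute example from the text:

```typescript
// Video bit rate (kbps) = file size (KB) × 8 / duration (s).
function videoBitRateKbps(fileSizeKB: number, durationSeconds: number): number {
  return (fileSizeKB * 8) / durationSeconds;
}

const oneGigabyteInKB = 1 * 1024 * 1024;                 // 1 GB expressed in KB
console.log(videoBitRateKbps(oneGigabyteInKB, 60 * 60)); // ≈ 2330 kbps
```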

1.1.3 Frame Rate

A frame can be understood as a single still image, and a video as a collection of such images. Frame rate is the number of frames displayed per second (frames per second, abbreviated FPS). Because of the physical structure of the human eye, when the frame rate is higher than about 16 frames per second (16 fps), persistence of vision makes the sequence of images appear as one continuous moving picture. Cartoons work on the same principle: the animators draw individual scenes and then switch between them at a certain frequency, which creates a coherent animated sequence. In general, the frame rate of film is 23.97 fps and that of television is 25 fps.

1.1.4 Resolution

Resolution falls into two categories: display resolution and pixel (image) resolution. Display resolution refers to the number of pixels a display can show. Pixel resolution refers to the number of pixels contained in an image per inch. Units used to describe resolution include dpi (dots per inch), lpi (lines per inch) and ppi (pixels per inch). lpi describes optical resolution and has a different meaning from dpi/ppi; dpi is more commonly used in the printing industry, while ppi is common in computing.
Common resolutions:
DV: 720 × 480
720P: 1280 × 720
1080P: 1920 × 1080
2K: 2048 × 1152
4K: 4096 × 2160

1.2 The audio and video playback process

The entire audio and video playback flow can be broken down into the following main stages: capture -> pre-processing -> encoding -> processing / distribution -> decoding -> post-processing -> rendering and playback. So where exactly do the uplink and the downlink sit in this pipeline, and what does each step actually do? Where do the beauty and filter effects we commonly see in live streaming fit in? The following sections explain each stage in detail, one by one.

1.2.1 Uplink and downlink

In a live-streaming scenario, we generally divide the audio and video stream into an uplink and a downlink relative to its source. The uplink is the capture side: the picture and sound are acquired by capture devices (camera, microphone), encoded and then pushed up to the server; we usually call this the anchor (broadcaster) side. The downlink is the path after the server has processed or forwarded the uplink stream and delivers it, via a CDN, to the viewer side.

1.2.2 Audio and video capture

This stage mainly uses a camera / microphone to capture the video and audio signals, turning physical images and sounds into digital signals. Take video as an example: if the camera is set to a resolution of 640 × 480 and a frame rate of 30 frames/s, and each frame is about 50 KB, then after conversion into a digital signal the camera produces capture data at roughly 50 KB × 30/s = 1500 KB/s, i.e. about 1.5 MB/s (around 12 Mbit/s).
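A small sketch of that arithmetic; the 50 KB-per-frame figure is the article's assumption rather than a property of any particular camera:

```typescript
// Raw capture data rate, using the article's example numbers.
const frameSizeKB = 50;      // assumed size of one captured 640×480 frame
const framesPerSecond = 30;

const kilobytesPerSecond = frameSizeKB * framesPerSecond;  // 1500 KB/s
const megabitsPerSecond = (kilobytesPerSecond * 8) / 1000; // ≈ 12 Mbit/s

console.log(`${kilobytesPerSecond} KB/s ≈ ${megabitsPerSecond} Mbit/s before encoding`);
```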

1.2.3 Pre-processing

Pre-processing sits between capture and encoding. By type it can be divided into audio pre-processing and video pre-processing. Audio pre-processing includes noise reduction, echo cancellation, voice activity detection, automatic gain control and so on; video pre-processing includes beauty effects, skin smoothing, contrast adjustment, image cropping and scaling, watermarking and so on. On the uplink, pre-processing is generally done on the client, because it consumes a lot of CPU resources and doing it on the server would be costly and perform poorly.

1.2.4 Audio and video encoding

The audio and video streams produced by capture are raw streams, i.e. data that has not been compressed by encoding. Consider another example: a 720p (1280×720), 30 fps, 60-minute movie stored as raw frames occupies on the order of terabytes of disk space (the original estimate is about 1.9 TB). Without compression, transmitting such a video would undoubtedly take a very long time: over a 100 Mbit/s broadband link the download would take roughly 43 hours. The raw stream therefore has to be compressed, and that is exactly what audio and video encoding is for: to encode is to compress.
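A back-of-the-envelope sketch of that kind of calculation. The 12-bytes-per-pixel constant is chosen to stay close to the article's terabyte-scale figure (plain RGB would be 3 bytes per pixel), so treat the exact numbers as illustrative of the order of magnitude only:

```typescript
// Raw (uncompressed) size of a 720p, 30 fps, 60-minute video.
const width = 1280;
const height = 720;
const fps = 30;
const seconds = 60 * 60;
const bytesPerPixel = 12; // the article's assumption; plain RGB would be 3

const rawBytes = width * height * bytesPerPixel * fps * seconds;
console.log(`raw size ≈ ${(rawBytes / 1024 ** 4).toFixed(2)} TiB`); // on the order of a terabyte

// Time to download it over a 100 Mbit/s link, without compression:
const hours = (rawBytes * 8) / (100 * 1_000_000) / 3600;
console.log(`≈ ${hours.toFixed(0)} hours at 100 Mbit/s`); // tens of hours
```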
Video coding is usually lossy compression: the goal is to greatly reduce the volume while only slightly reducing clarity, ideally to a level the human eye cannot perceive or can barely distinguish. The main approach is to remove redundant information from the video. Many scenes do not change dramatically, so adjacent frames contain a lot of duplicated information, which inter-frame prediction can analyse and remove; intra-frame prediction removes duplicated information within a single frame; the foreground that viewers pay more attention to can be encoded at a higher bit rate while the background is encoded at a lower bit rate; and so on, depending on the compression algorithm. Decoding is the opposite process: decompressing the stream and restoring the original pixels. Common audio and video codec algorithms are covered in section 1.4 below, so they are not detailed here.

1.2.5 Audio and video processing / distribution

The encoded audio and video data is transmitted over some uplink transport protocol (RTP, RTMP, RTSP, etc.) to an audio and video processing / distribution server. Depending on the specific business scenario, the server may mix multiple audio and video channels, transcode, or convert between transport protocols before forwarding the stream to the downlink.

1.2.6 Audio and video decoding

When the viewer side receives the audio and video stream, how does the browser turn that data into a picture on screen with accompanying sound? In Chromium (the core of Chrome), the flow is roughly as follows. When we play a video with HTML5, creating a video tag in the DOM instantiates a WebMediaPlayer. The player requests the multimedia data; resolving the protocol and pulling the data from the network into a DataSource yields the packaged audio and video in a container format such as MP4 or FLV. Next comes de-encapsulation, a step generally called demuxing, which produces the separate audio and video tracks. Take the video track: it contains I frames, B frames (possibly none) and P frames. An I frame (intra frame) is a key frame that contains the complete information of its image. The other frames only record the difference relative to reference frames and are called inter-predicted frames (inter frames); they are comparatively small. Among them are forward-predicted P frames and bidirectionally-predicted B frames: a P frame references the previously decoded image, while a B frame references in both directions. The corresponding decoders then decode the audio and video into raw data. The demuxing and decoding used here are provided by FFmpeg, the third-party open-source module built into Chrome.
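From the page's point of view, that whole pipeline is triggered simply by giving a video element a source. A minimal sketch (the file name is a placeholder):

```typescript
// Creating a <video> element and assigning a URL kicks off the pipeline described
// above (protocol resolution -> demux -> decode -> render) inside the browser.
const video = document.createElement("video");
video.src = "movie.mp4"; // container format (MP4, or FLV via MSE libraries) is detected while demuxing
video.controls = true;
document.body.appendChild(video);

video.addEventListener("loadedmetadata", () => {
  // At this point the container has been parsed and the track metadata is known.
  console.log(`duration: ${video.duration}s, size: ${video.videoWidth}x${video.videoHeight}`);
});

video.play().catch((err) => console.warn("autoplay was blocked:", err));
```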

1.2.7 Rendering and playback

The decoded data is handed to the corresponding renderer object, which draws the video so that the video tag can display it and sends the audio to the sound card for playback.

1.3 Audio and video encapsulation formats

Lead: so-called video encapsulation means combining already-encoded audio, video, subtitles, scripts and the like into one file according to a given specification, producing a file in some packaging format.

1.3.1 Encapsulation format

An encapsulation format packs already compressed and encoded video tracks and audio tracks into one file according to a certain format; it is just a shell, or you can think of it as a container that holds the video and audio together. To use a vivid analogy: the video is the rice and the audio is the dishes, while the encapsulation format is the bowl, the container that holds the food. Of course, different bowls (encapsulation formats) have different characteristics. In reality, some dishes, such as spare ribs, are relatively large and do not fit in a bowl, so you have to switch to a pot; some dishes are very hot and cannot be put in plastic containers; and personal preference plays a role too. So the choice of container basically comes down to the compatibility of the video and audio it has to hold and the intended usage scenario. Below are some common encapsulation formats.

AVI container (suffix .AVI)

AVI is a technology Microsoft launched in 1992 to compete with Apple's QuickTime. Although AVI has long been regarded internationally as outdated technology, it is still widely used because of the ubiquity of Windows and its simple, easy-to-develop API.

The advantage of this format is good image quality. Because lossless AVI can preserve an alpha channel, we still use it frequently. Its shortcomings are numerous: the files are large, and worse, its compression standards are not unified. The most common symptom is that newer versions of Windows Media Player cannot play AVI files encoded with early editing software, while older versions of Windows Media Player cannot play AVI files encoded with the latest editing software. As a result, playing AVI files often runs into problems: the video cannot be played at all, or it plays but with only sound and no picture, or seeking and playback behave strangely.

Matroska format (suffix .MKV)

Matroska is a newer multimedia container format. It can encapsulate multiple video streams in different encodings, 16 or more audio streams in different formats and subtitles in different languages into a single Matroska media file. It is also an open-source multimedia container format. Matroska offers very good interactive features and is more convenient and powerful than MPEG.

QuickTime File Format (suffix .MOV)

A video format developed by Apple in the US; its default player is Apple's QuickTime.

It offers a relatively high compression ratio and good video clarity, and it can preserve an alpha channel. You may have noticed that every time you install EDIUS you also have to install Apple's QuickTime; the purpose is to support importing JPG images and MOV-format video.

MPEG format

MPEG format (file extension may be .MPG, .MPEG, .MPE, .DAT, .VOB, .ASF, .3GP, .MP4, etc.): MPEG stands for Moving Picture Experts Group. It is a family of media encapsulation standards approved by the International Organization for Standardization (ISO) and is supported by the vast majority of devices. Its storage options are diverse and can adapt to different environments. MPEG offers rich control features and can contain multiple video streams (i.e. viewing angles), audio tracks, subtitles (bitmap subtitles) and so on.

WMV format (suffix .WMV .ASF)

WMV stands for Windows Media Video. It is a Microsoft file compression format with independent coding that allows video to be watched in real time directly over the network.

The main advantages of the WMV format include local or network playback, rich relationships between streams and scalability. Playing WMV on a website requires installing Windows Media Player (abbreviated WMP), which is inconvenient, and nowadays almost no websites use it.

Flash Video format (suffix .FLV)

A popular network video encapsulation format that grew out of Adobe Flash. With the rise of video-rich websites, this format became very popular.

1.4 Audio and video codecs

Distributing audio and video over a network involves the following stages: recording -> encoding -> transmission -> decoding -> playback. Codecs play a major role here, because they optimise the size of the packets: the media data a capture device acquires is typically encoded before transmission, which compresses the data substantially without noticeably affecting quality. Users usually demand high real-time performance from a live picture, so understanding the underlying principles of coding helps optimise the fluency of live streams. This section introduces common audio and video codecs.

1.4.1 Common audio encoding formats

Audio encoding converts the PCM audio sample stream into a compressed format in order to optimise transmission efficiency over the network. Common formats: FLAC, APE, WAV, Opus, MP3, WMA, AAC.

FLAC, APE and WAV are lossless encodings; their compression ratio is low, and they are typically used for music and other content that demands higher sound quality. Opus, MP3, WMA and AAC are lossy compression formats; their higher compression ratio is better suited to network transmission. Among them, Opus and OGG are completely free, open-source formats. Each encoding format has its own characteristics and suits different usage scenarios; there is no absolute best or worst.

The common encoding formats are detailed below:
1, FLAC

FLAC stands for Free Lossless Audio Codec, literally a free lossless audio compression encoding. Lossless means that when audio data read from an audio CD is compressed into a FLAC file, the FLAC file can later be restored, and the audio before compression and after restoration is exactly the same, with no loss whatsoever.

FLAC is a free audio compression encoding whose defining feature is lossless compression of audio files. Unlike lossy compression such as MP3 or AAC, it loses no quality after compression. It is now supported by many software and hardware audio products, and FLAC is the default lossless audio format of many popular music players.

Features: lossless compression format; larger files, but good compatibility, fast encoding and wide player support.

2, APE

APE is a common lossless audio compression encoding format. APE is the file extension used after encoding; the encoding format itself is called Monkey's Audio.

Monkey's Audio has a higher compression ratio than other common lossless audio formats, around 55%, but encoding and decoding are slightly slower. When seeking to a playback position, if the file's compression ratio is too high the computer may stutter with noticeable delay. In addition, because it provides no error handling, if a file becomes corrupted the data after the point of damage may be lost.

Monkey's Audio is free and open-source software, but its licence is not a free-software licence; it is semi-free software and rather marginal. This means that many GNU/Linux distributions and other operating systems that can only include free software cannot ship it. Compared with lossless audio encoders that use more liberal licences (such as FLAC), it is also supported by less software.

Features: lossless compression format; smaller files than other lossless formats, but somewhat slow encoding.

3, WAV

WAV stands for Waveform Audio File Format. It is a sound file format developed by Microsoft, also known as the wave sound file; it is the earliest digital audio format and is widely supported by the Windows platform and its applications.

The WAV format supports a number of compression algorithms as well as multiple bit depths, sampling rates and channel counts. Using a 44.1 kHz sampling rate and 16-bit quantisation, WAV sound quality is almost identical to CD, but the WAV format demands too much storage space to be convenient for exchange and distribution.

Features: lossless format; very large files.

4, Opus

Opus is a lossy audio coding format developed by the Xiph.Org Foundation and later standardised by the IETF (Internet Engineering Task Force). It is suited to low-latency, real-time voice transmission over the Internet; the format is defined in RFC 6716. Opus is an open format with no patent or usage restrictions.

Opus integrates two audio coding technologies: the speech-oriented SILK and the low-latency CELT. Opus can adjust its bit rate seamlessly: it uses linear predictive coding at lower bit rates and transform coding at higher bit rates (with a combination of the two around the transition between them). Opus has a very low algorithmic delay (26.5 ms by default), which makes it very suitable for low-delay voice communication such as real-time audio streaming, real-time synchronised voice-over and network voice chat. By reducing the encoding bit rate and frame size the delay can be lowered further, down to a minimum of about 5 ms. In multiple blind listening tests, Opus has shown lower latency and better sound compression than common formats such as MP3 and AAC.

In WebRTC implementations, support for Opus is mandatory, and it is also the default audio encoding format.
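A small sketch of how a page can confirm this at runtime; RTCRtpSender.getCapabilities is a standard WebRTC API, though what it reports still depends on the browser:

```typescript
// List the audio codecs the browser can send over WebRTC and check for Opus.
const audioCaps = RTCRtpSender.getCapabilities("audio");
const hasOpus =
  audioCaps?.codecs.some((c) => c.mimeType.toLowerCase() === "audio/opus") ?? false;
console.log("Opus supported for sending:", hasOpus);
```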

Features: lossy compression; dynamically adjustable bit rate, frame size and audio bandwidth; open and free, with no patent restrictions.

5, MP3

MP3 stands for Moving Picture Experts Group Audio Layer III. Because the full name of this compression scheme is MPEG Audio Layer 3, it is abbreviated as MP3.

MP3 technology uses MPEG Audio Layer 3 to compress music at ratios of 1:10 or even 1:12 into files of much smaller size; in other words, a file can be compressed considerably while losing only a small amount of sound quality, keeping the result close to the original.

It is precisely the small size and high sound quality of MP3 that made the format almost synonymous with online music.

Features: lossy compression; high compression ratio and small files; maximum bit rate of 320 kbps.

6, WMA

WMA stands for Windows Media Audio, an audio encoding format that Microsoft has pushed strongly. At the same sound quality, WMA files are generally smaller than MP3 files, and DRM (Digital Rights Management) can be added to prevent copying, to limit the number of plays or the playback period, or even to restrict playback to a specific machine, which effectively prevents piracy.

Features: supports DRM copy protection; higher compression ratio than MP3.

7, AAC

AAC stands for Advanced Audio Coding. It is a patented lossy digital audio compression standard based on MPEG-2 audio coding, jointly developed by Fraunhofer IIS, Dolby Laboratories, AT&T, Sony, Nokia and others. After the MPEG-4 standard was introduced, PNS (Perceptual Noise Substitution) technology was added on the original basis and a variety of extension tools were provided; to distinguish it from the traditional MPEG-2 AAC, this version is also known as MPEG-4 AAC. AAC was designed as the successor to MP3: at the same bit rate, AAC can usually achieve better sound quality than MP3.

AAC supports sampling rates from 8 kHz up to 96 kHz, higher bit depths (8, 16, 24, 32 bit), and any number of channels from 1 to 48.

Features: one of the best lossy formats, with a high compression ratio.

1.4.2 Common video encoding formats

Video encoding essentially compresses video pixel data into a video bitstream in order to reduce the size of the video, making it easier to transmit over the network and to store. Common formats include MPEG-2, H.264, H.265, VP8 and VP9.

1, MPEG-2

MPEG-2 is an international video and audio compression standard published by the MPEG group in 1994; video coding is Part 2 of the MPEG-2 standard. A video typically consists of multiple GOPs (Group Of Pictures), and each GOP contains multiple frames. The frame types are the I-frame, the P-frame and the B-frame: the I-frame is intra-coded, the P-frame uses forward prediction, and the B-frame uses bidirectional prediction. I-frames are generally called key frames and contain a complete image.

An I picture uses intra-frame coding, i.e. it exploits only the spatial correlation within a single image and not the temporal correlation between images. Because intra-frame compression uses no motion compensation and an I-frame does not depend on other frames, it is a random access point and at the same time serves as a reference frame for decoding. I-frames are used for receiver initialisation and channel acquisition, and for insertion points and programme switching; their compression ratio is relatively low. I-frames appear periodically in the image sequence, at a frequency chosen by the encoder.

P and B pictures use inter-frame coding, i.e. they exploit both spatial and temporal correlation. A P picture uses only forward prediction from an earlier picture, which improves compression efficiency and image quality. A P picture may also contain intra-coded parts, i.e. each macroblock in a P frame can be either forward-predicted or intra-coded.

A B picture uses bidirectional temporal prediction, which can greatly increase the compression ratio. Notably, because a B picture uses a future picture as a reference, the transmission order and the display order of the pictures in an MPEG-2 encoded stream are different.
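To make that last point concrete, here is a purely illustrative sketch (not taken from any real bitstream) of how decode/transmission order differs from display order once B frames are involved:

```typescript
// A B frame references both the previous and the next I/P frame, so the future
// reference has to be transmitted and decoded before the B frames that use it.
const displayOrder = ["I1", "B2", "B3", "P4", "B5", "B6", "P7"];
const decodeOrder  = ["I1", "P4", "B2", "B3", "P7", "B5", "B6"];

console.log("display order:", displayOrder.join(" "));
console.log("decode order: ", decodeOrder.join(" "));
```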

2, H.264

H.264, also known as MPEG-4 Part 10, Advanced Video Coding (MPEG-4 AVC), is a block-oriented, motion-compensation-based video coding standard. By 2014 it had become one of the most commonly used formats for recording, compressing and distributing high-definition video. The final draft of the first version of the standard was completed in May 2003. This coding standard is widely used for streaming media transmission over networks, and it is the encoding format most widely used in live streaming in China.

3, H.265

High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video compression standard regarded as the successor to ITU-T H.264 / MPEG-4 AVC. Development began in 2004, carried out jointly by MPEG (the ISO/IEC Moving Picture Experts Group) and VCEG (the ITU-T Video Coding Experts Group) as MPEG-H Part 2 and ITU-T H.265. The first version of the HEVC/H.265 video compression standard was accepted as an official International Telecommunication Union (ITU-T) standard on 13 April 2013.

HEVC is considered not only to improve video quality but also to achieve twice the compression ratio of H.264/MPEG-4 AVC (equivalent to a 50% reduction in bit rate at the same picture quality). It supports resolutions up to 4K Ultra HD (UHDTV) and even 8192 × 4320 (8K).

4, VP8

VP8 is an open, royalty-free encoding format. Google obtained the software when it acquired On2, with the aim of providing a free encoding format for HTML5 to use; it is typically packaged in the WebM container. Few companies use this encoding.

5, VP9

VP9 is a free, open-source video encoding format that Google developed to replace the older VP8 and to compete with the MPEG-led High Efficiency Video Coding (H.265/HEVC). VP9 video is generally packaged together with Opus-encoded audio in the WebM container.

Compared with H.265, VP9 is supported by more browsers: as of June 2018 about four fifths of browsers (including those on mobile devices) supported VP9 in the WebM container. Browsers such as Chrome, Microsoft Edge, Firefox and Opera have a built-in VP9 decoder and can play VP9 video in an HTML5 player, and Windows 10 also ships with a built-in WebM splitter and VP9 decoder.
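A small sketch of how a page can check at runtime which of these codecs the current browser can decode; canPlayType and MediaSource.isTypeSupported are standard APIs, and the codec strings are just common example values:

```typescript
// Query decoder support for a few common codec/container combinations.
const probe = document.createElement("video");
console.log("VP9/WebM :", probe.canPlayType('video/webm; codecs="vp9"'));
console.log("H.264/MP4:", probe.canPlayType('video/mp4; codecs="avc1.42E01E"'));
console.log("HEVC/MP4 :", probe.canPlayType('video/mp4; codecs="hvc1.1.6.L93.B0"'));

// The equivalent check for Media Source Extensions playback:
console.log("MSE VP9  :", MediaSource.isTypeSupported('video/webm; codecs="vp9"'));
```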

1.5 Audio and video transmission protocols

Lead: how do the audio and video on the server (for example, the anchor's live picture) get transmitted to the viewer (the client)? The details of that communication, such as the codec format used for transmission and how the data is transported, are defined by audio and video transmission protocols.

Currently there are two main schemes for delivering audio/video (A/V) multimedia over the network: downloading and streaming.

Download transmission

We know that audio and video files are generally large, and with limited network bandwidth a download often takes a long time, so this approach introduces a large delay. Moreover, we have to wait until the entire audio and video file has been downloaded before we can watch it in a player.

Streaming (Streaming Protocol)

With streaming, continuous time-based media such as audio, video or animation is transmitted in real time from an audio/video server to the user's computer. The user does not have to wait until the whole file has been downloaded: after a start-up delay of only a few seconds or tens of seconds, playback can begin, and while the time-based media is playing on the client, the remainder of the file continues to be downloaded from the server in the background. Streaming not only shortens the start-up delay tenfold or even a hundredfold, it also requires far less cache capacity, and it avoids the drawback of having to wait for the entire file to download from the Internet before viewing it. How the audio and video data is streamed is defined by the streaming protocol.

The RTP / RTCP / RTSP protocol family

This protocol family was the earliest set of video transmission protocols. RTSP handles session control for video on demand: a request starts with SETUP, playback is controlled with PLAY and PAUSE requests, and seeking within the video is supported through parameters of the PLAY request. RTP carries the actual video data stream. RTCP, where the C stands for Control, is the control channel that accompanies the data stream and carries statistics such as packet loss rate. RTSP is built on top of TCP, while RTP and RTCP are built on top of UDP, although RTP/RTCP can also be interleaved with RTSP over the same TCP connection. Because this family was designed first, it still shows its early characteristics in many respects. For example, it uses UDP for transmission efficiency, on the assumption that video itself tolerates a certain amount of packet loss; UDP, however, is clearly problematic on larger networks and has trouble traversing routers and firewalls in complex environments. From the video point of view, the strength of the RTSP family is that it can control the stream down to individual video frames, so it can support highly real-time applications; this is its biggest advantage over HTTP-based approaches. The H.323 video conferencing protocol generally uses this protocol family (RTP/RTCP) underneath. The complexity of the RTSP family is concentrated on the server side, because the server has to parse the video file, seek to a specific video frame, and possibly support variable-speed playback (the 2x and 4x playback of old DVD players); this variable-speed playback capability is something RTSP supports that the other protocols here do not. The drawback is that the server side is relatively complex and therefore harder to implement.

Note that WebRTC transmits audio and video on top of the RTP protocol.

RTMP

RTMP (Real Time Messaging Protocol) is an open protocol developed by Adobe Systems for transferring audio, video and data between a Flash Player and a server. It has three variants: 1, the plain protocol, which works on top of TCP and uses port 1935; 2, RTMPT, which is encapsulated in HTTP requests and can pass through firewalls; 3, RTMPS, which is similar to RTMPT but uses an HTTPS connection. RTMP is the protocol Flash uses for transmitting objects, video and audio. It is built on top of TCP or of HTTP polling. The RTMP protocol acts as a container for data packets; the data can be AMF-format data or FLV video and audio data. A single network connection can carry multiple streams over different channels, and the packets on these channels are transmitted in chunks of a fixed size.

HLS

HTTP Live Streaming (HLS) is Apple's HTTP-based streaming protocol. It supports both live and on-demand streaming and is used mainly on iOS, providing live and on-demand audio and video playback for iOS devices such as the iPhone and iPad. HLS on demand is basically ordinary segmented HTTP on demand, except that its segments are very short.

Compared with common live-streaming protocols such as RTMP, RTSP and MMS, the biggest difference of HLS live streaming is that the client does not receive a single complete data stream. Instead, the HLS server stores the live stream as a sequence of consecutive media files of very short duration (in MPEG-TS format), and the client keeps downloading and playing these small files. Because the server continually generates new small files from the latest live data, the client only has to keep fetching and playing the files from the server in order to achieve a live broadcast. Essentially, then, HLS implements live streaming by means of on-demand technology. Since the data travels over HTTP, there is no need to worry about firewalls or proxies, and because the segments are short, the client can quickly select and switch bit rates to adapt to playback under different bandwidth conditions. These characteristics of HLS, however, also mean that its latency will generally always be higher than that of ordinary live-streaming protocols.
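As a hedged sketch of what HLS playback looks like on the web side: Safari plays .m3u8 playlists natively through the video tag, while most other browsers need a JavaScript library such as hls.js, which downloads the small segments described above and feeds them to Media Source Extensions (the stream URL below is a placeholder):

```typescript
import Hls from "hls.js";

const src = "https://example.com/live/stream.m3u8"; // placeholder playlist URL
const video = document.createElement("video");
document.body.appendChild(video);

if (video.canPlayType("application/vnd.apple.mpegurl")) {
  video.src = src;              // native HLS support (Safari / iOS)
} else if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(src);          // fetch the playlist and keep pulling new segments
  hls.attachMedia(video);       // append the segments to the video element via MSE
}
video.play().catch((err) => console.warn("autoplay was blocked:", err));
```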

To sum up

If you want to enter the field of Web audio and video, the basic concepts above are essential; each of them could be taken out individually and studied in far more depth. With the arrival of the 5G era, I believe the Web will see even more extensive audio and video application scenarios, so mastering audio and video knowledge now is preparing for the future. Front-end development in the future will not be limited to HTML and JavaScript; I believe audio and video development will also be a skill that has to be mastered.


Source: blog.csdn.net/u010164190/article/details/105336485