Multimedia technology knowledge points (including real questions)

Key Points of Multimedia Technology Review

1. According to the CCITT definition, what are the types of media?

1). Sensory media: act directly on the human sense organs and produce direct perception.
2). Representation media: media constructed artificially in order to process and transmit sensory media, i.e., the various codings.
3). Display media: media that convert between sensory media and the electrical signals used for communication transmission, i.e., the interface between sensory media and computers; they divide into input display media and output display media.
4). Storage media: used to store representation media so that the computer can process and recall the stored information codes at any time.
5). Transmission media: the physical carriers of transmission, i.e., the physical carriers used to move media from one place to another.

2. Several characteristics of multimedia

Multidimensionality - the diversification of the media information a computer processes, so that human-computer interaction is no longer confined to sequential, monotonous, narrow channels but has room for full freedom.
Integration - the integration of media types, in two senses: on one hand, multimedia technology combines various media information organically and synchronously into complete multimedia information; on the other hand, different media devices are integrated into a multimedia system.
Interactivity - human-computer dialogue is a key feature of multimedia technology: in a multimedia system, besides controlling operation freely, the user can manipulate the media processing at will. (Human-Computer Interaction)
Digitization - media exist in digital form.
Real-time behavior - sound and dynamic images (video) change with time.

3. Which video coding technology are VCD and DVD playback systems based on?

VCD: MPEG-1 coding technology; DVD: MPEG-2 coding technology

4. Nyquist sampling theorem

To restore the original signal without distortion after sampling, the sampling frequency must be greater than twice the highest frequency in the signal's spectrum. Commonly used audio sampling frequencies are: 8 kHz (digital telephone), 11.025 kHz (AM), 22.05 kHz (FM), 44.1 kHz (CD), 48 kHz (studio, digital audio tape DAT).
There are currently three ways to measure sound quality: the bandwidth of the sound signal, the signal-to-noise ratio, and subjective quality measures.
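As a quick check for the usual exam calculation (raw data volume for one minute of CD audio), a short sketch:

```python
def pcm_bytes(sample_rate_hz, bits, channels, seconds):
    """Uncompressed PCM size in bytes: rate * bits * channels * time / 8."""
    return sample_rate_hz * bits * channels * seconds // 8

# CD-quality stereo (44.1 kHz, 16-bit, 2 channels) for 1 minute:
cd_minute = pcm_bytes(44_100, 16, 2, 60)
print(cd_minute)  # 10584000 bytes, about 10.1 MB
```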

5. Resolution

Resolution refers to the ability of an A/D converter to distinguish between input analog signals. In theory, an A/D converter with an n-bit binary output can distinguish 2^n different magnitudes of the input analog voltage, and the minimum distinguishable difference of the input analog voltage is 1/2^n of the full-scale input.
For example, if the output of the A/D converter is a 12-bit binary number and the maximum input analog signal is 10 V, its resolution is 10 V / 2^12 ≈ 2.44 mV.

7. Conversion time

Conversion time is the time from when the A/D converter receives the start-conversion signal until a stable digital signal appears at the output.
The conversion speed of an A/D converter depends mainly on the type of conversion circuit; speeds differ greatly between types.

① The dual-slope (double-integration) A/D converter is the slowest, taking several hundred milliseconds;
② The successive-approximation A/D converter is faster, taking tens of microseconds;
③ The parallel-comparison (flash) A/D converter is the fastest, taking only tens of nanoseconds.

8. Conversion error

Conversion error is the difference between the actual digital output of the A/D converter and the theoretically expected output, usually expressed as a multiple of the least significant bit. For example, a conversion error ≤ 1/2 LSB means that the difference between the actual output and the theoretical output is smaller than half of the lowest bit.
Example: a signal-acquisition system must use one A/D converter chip to convert the output voltages of 16 thermocouples within 1 s. The thermocouple output range is 0-25 mV (corresponding to a temperature range of 0-450 °C), and temperatures must be resolved to 0.1 °C. How many bits should the A/D converter have, and what conversion time is required?
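A sketch of the arithmetic for the example above (the answer follows directly from the numbers given):

```python
import math

levels = 450 / 0.1                   # temperature steps to distinguish: 4500
bits = math.ceil(math.log2(levels))  # smallest n with 2**n >= 4500
t_max_ms = 1000 / 16                 # each of the 16 conversions must fit in 1/16 s
print(bits, t_max_ms)                # 13 62.5
```

Since 2^12 = 4096 < 4500, a 12-bit converter is not enough, so at least 13 bits are needed; each conversion must finish within 62.5 ms, which (by the speed figures in section 7) rules out the slow dual-slope type.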

9. What is the audio and video compression standard for which China holds independent intellectual property rights?

AVS standard

10. Compression metrics

(1) Compression ratio: the ratio of the amount of output data to the amount of input data.
(2) Compression quality: compression is divided into lossy and lossless; lossy compression is assessed by subjective (perceptual) and objective methods (signal-to-noise ratio, etc.).
(3) Compression and decompression speed: closely tied to the compression and decompression algorithms.
(4) Standardization of compression and decompression.

11. Data redundancy (data compressibility) (very likely to test, understand)

Spatial redundancy: within a single image, the surface physical properties of regular objects and regular backgrounds are correlated; the colors of sample points on the same scene surface are spatially coherent, producing visual regions whose grayscale or color is essentially uniform.
Temporal redundancy: common in image sequences (TV images, animation) and in speech data; a group of consecutive pictures is usually correlated in time and space: ① adjacent frames (inter-frame) repeat or change only gradually; ② human visual persistence adds further redundancy.
Information-entropy redundancy: codeword redundancy in the coded symbol sequence (statistical redundancy).
Other redundancy: e.g., structural redundancy, knowledge redundancy, visual redundancy.

12. Common coding and classification

Entropy coding technology: mainly uses the entropy redundancy (statistical redundancy) of data to achieve the purpose of compression.
Entropy coding methods commonly used in data compression standards:
Huffman coding (required for major calculation questions)
Arithmetic coding (may be tested for major calculation questions)
Run-length encoding

13. Huffman coding and arithmetic coding

Huffman coding (mandatory)

① Compute the probability P(Xi) of each symbol in the source symbol set {Xi} and build the initial Huffman code table.
② Using the computed P(Xi), arrange the symbols of {Xi} into a binary tree in descending order of probability:
a. The leaf nodes at the bottom carry the probability values, larger on the left and smaller on the right;
b. Repeatedly take the sum of the two smallest probability items along the binary tree and merge them into a composite item, which becomes an intermediate node, until the last item reaches the root at the top;
c. It can be verified that the probability of the root node must be 1, i.e. ΣP(Xi) = 1.
③ Generate codewords from top to bottom along the Huffman tree:
a. Traverse the tree from the root, assigning the binary values (1, 0) to the branches of each node in order from left to right;
b. Concatenate the values on the path from the root to each leaf node to obtain that leaf's codeword; this is the encoding of the source symbol Xi;
c. Fill the codeword and code length Li of each source symbol into the Huffman code table.
④ Compute the average code length La of the source symbol set {Xi} and compare it with the minimum code length of {Xi}.
Process and result:
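The steps above can be sketched with a small heap-based builder (the symbols and probabilities here are hypothetical):

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code table from {symbol: probability}."""
    # Heap entries carry an insertion counter so ties never compare the trees.
    heap = [(p, i, sym) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)   # the two smallest probabilities...
        p2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, count, (t1, t2)))  # ...merge into one node
        count += 1
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):       # internal node: recurse into both children
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            code[node] = prefix or "0"    # edge case: single-symbol source
    walk(heap[0][2], "")
    return code

probs = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)
print(code)
print(round(avg_len, 2))  # 1.9 bits per symbol for this distribution
```

For the probabilities {0.4, 0.3, 0.2, 0.1} the code lengths come out as 1, 2, 3, 3, giving La = 1.9 bits/symbol.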

Arithmetic coding (may be tested)
Arithmetic coding replaces Huffman coding in the JPEG extended system. Advantages:
① It suits sources whose probability distribution is relatively uniform, so it complements Huffman coding;
② Its data compression efficiency is about 5% higher than that of Huffman coding.
⑴ Basic principle of arithmetic coding. Basic idea: binary coding based on recursive division of a probability interval. Specific process:
① The occurrence probabilities of the source symbol sequence {Xi | i = 1, 2, …, n} are represented as subintervals of the real interval [0, 1] (the value range of Xi);
② Symbol intervals are allocated according to symbol probability, so that [0, 1] narrows progressively as the iteration count grows;
③ The final interval obtained is the range of values of the code that represents the symbol string {Xi}.
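The interval-narrowing process can be sketched as follows (symbols and probabilities are hypothetical; a practical coder emits bits incrementally to avoid floating-point precision limits):

```python
def arithmetic_interval(message, probs):
    """Recursively narrow [0, 1); any number in the final interval codes the message."""
    cum, ranges = 0.0, {}
    for sym, p in probs.items():             # cumulative ranges, e.g. A:[0, 0.5)
        ranges[sym] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        lo_s, hi_s = ranges[s]
        low, high = low + span * lo_s, low + span * hi_s  # shrink to sub-interval
    return low, high

low, high = arithmetic_interval("AAB", {"A": 0.5, "B": 0.5})
print(low, high)  # 0.125 0.25
```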

14. Three subjective attributes of sound—pitch, timbre, and intensity

15. PCM, DPCM, APCM, ADPCM (understand)

PCM (pulse code modulation): conceptually the simplest and theoretically the most complete coding system; its block diagram is shown below.

  The anti-distortion filter can be regarded as a low-pass filter that removes signal components outside the audio band; the waveform coder can be viewed as a sampler, and the quantizer as a quantization-step generator.
  Quantization comes in two general kinds, uniform and non-uniform. Among non-uniform companding schemes, the μ-law algorithm is used mainly in North America and Japan, and the A-law algorithm mainly in mainland China and Europe.
Delta modulation (DM): a predictive coding technique and a variant of PCM. DM encodes the polarity of the difference between the actual sample and the predicted sample, turning the polarity into one of the two possible values "0" and "1".

DPCM: a data compression technique that exploits the information redundancy between samples. The idea of differential pulse code modulation is to predict the amplitude of the next sample from past samples; this value is called the predicted value. The difference between the actual signal value and the predicted value is then quantized and encoded, reducing the number of bits needed to represent each sample. It differs from pulse code modulation (PCM) in that PCM quantizes and encodes the sampled signal directly, whereas DPCM quantizes and encodes the difference between the actual value and the predicted value, storing or transmitting the difference rather than the absolute amplitude.
APCM: a waveform coding technique that changes the quantization step size with the magnitude of the input signal. The adaptation can be instantaneous, with the step size changing every few samples, or non-instantaneous, with the step size changing only over longer periods.
There are two ways to change the quantization step size: forward adaptation and backward adaptation. The former estimates the input signal level from the root-mean-square value of the unquantized samples, sets the quantization step accordingly, and transmits the level to the receiver as side information. The latter extracts the step-size information from the past samples just output by the quantizer.

ADPCM: adaptive differential pulse code modulation (the ideas of minimum prediction difference and adaptive quantization step). It combines the adaptive character of APCM with the differential character of DPCM, giving a waveform coder with better performance. Its core ideas are: ① adapt the quantization step size, using a small step (step-size) to encode small differences and a large step to encode large differences; ② use past sample values to predict the next input sample, so that the difference between the actual and predicted values is always as small as possible. Its simplified coding block diagram is shown below.
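A toy DPCM sketch, using a zero-order predictor and a fixed uniform quantization step (ADPCM would additionally adapt the step size to recent difference magnitudes):

```python
def dpcm_encode(samples, step=4):
    """Encode quantized differences between each sample and the predicted value."""
    pred, codes = 0, []
    for x in samples:
        q = round((x - pred) / step)   # quantize the prediction error
        codes.append(q)
        pred += q * step               # track what the decoder will reconstruct
    return codes

def dpcm_decode(codes, step=4):
    pred, out = 0, []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out

codes = dpcm_encode([0, 8, 12, 13, 9])
print(codes)               # [0, 2, 1, 0, -1]: small values, fewer bits needed
print(dpcm_decode(codes))  # [0, 8, 12, 12, 8]: approximate reconstruction
```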

16. Sub-band coding (SBC G.722) and linear predictive coding (LPC) (just understand)

SBC:
Basic idea: Divide the frequency band of the input audio signal into several consecutive subbands, and use a separate coding scheme to encode the audio signal in each subband. At the receiving end, the codes of each subband are decoded separately, and then combined to restore the original audio signal.
Pros: each sub-band can be processed separately according to its energy and perceptual importance.

G.722 combines sub-band coding with ADPCM: the signal is first split into two sub-bands, and each sub-band is then ADPCM-coded.
LPC:
Basic idea: the vocal tract is an inert cavity that cannot change abruptly, so the speech signal has short-term correlation. By analyzing the speech waveform to derive the parameters of the vocal-tract excitation and transfer function, coding of the sound waveform is turned into coding of these parameters, which greatly reduces the amount of sound data.

17. Common color spaces

(1) RGB
(2) HSI (hue, saturation, intensity) / HSV
(3) CMY: Cyan, Magenta, Yellow; K: Key plate (blacK) (very likely to be tested)
The conversion formula (not normalized) is:
C=255-R
M=255-G
Y=255-B
When normalized, 255 becomes 1, and the value is between 0 and 1.
(4) The YUV, YIQ, and YCbCr color spaces were developed for television systems.
YUV is used by the PAL and SECAM systems,
YIQ by the NTSC system,
and YCbCr by digital TV.
For compatibility between black-and-white and color TV signals, luminance Y is separated from chrominance U and V.
Application: RGB: display signal
HSI: human eye recognition
YUV: TV signal
CMY: color printing
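The CMY formula above in code, plus a YCbCr conversion for comparison (the YCbCr coefficients used here are the common full-range BT.601 values, an assumption of this sketch; check them against the textbook's matrix):

```python
def rgb_to_cmy(r, g, b):
    """Non-normalized CMY conversion from the notes (8-bit values)."""
    return 255 - r, 255 - g, 255 - b

def rgb_to_ycbcr(r, g, b):
    """Assumed full-range BT.601 matrix; verify against the course material."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, cb, cr

print(rgb_to_cmy(255, 0, 0))   # (0, 255, 255): pure red needs magenta + yellow inks
print([round(v) for v in rgb_to_ycbcr(255, 255, 255)])  # white -> [255, 128, 128]
```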

18. Image format and some standards (just understand)

Comparison of bitmaps and vector graphics.
BMP, GIF, TIFF, PNG, JPEG, etc.

GIF features: interlaced display (during download the image first appears at a coarser resolution, so the whole picture can be seen early); GIF can store multiple images in one file, enabling animation.

Audio standards generally have names beginning with G.


JPEG (Joint Photographic Experts Group), JBIG (Joint Bi-level Image Group), and MPEG (Moving Picture Experts Group) operate under ISO/IEC (the International Organization for Standardization and the International Electrotechnical Commission).
The International Telecommunication Union (ITU-T) produced:
H.261, H.263, H.264

19. JPEG Algorithm (mandatory; may appear as fill-in-the-blank or short answer)

(1) Overview: algorithm block diagram

Forward discrete cosine transform (FDCT)
Quantization
Zigzag scan
DPCM coding of the DC coefficient
Run-length encoding (RLE) of the AC coefficients
Entropy coding

(2) Forward discrete cosine transform:
For each individual image component, the whole component image is divided into 8×8 blocks and a two-dimensional discrete cosine transform is applied to each block. The DCT concentrates the energy into a few coefficients. After F(i, j) is transformed by the DCT, F(0, 0) (the first value in the upper-left corner) is the DC coefficient and the others are AC coefficients.
(3) Quantization
Quantization quantizes the frequency coefficients produced by the FDCT. Its purpose is to reduce the magnitude of the non-zero coefficients and increase the number of zero-valued coefficients. Quantization is the biggest cause of image-quality degradation.
As a lossy compression algorithm, JPEG uses a uniform quantizer, with the quantization step determined by the coefficient's position and by the color component it belongs to.
Different color components use different quantization tables.
(4) Zigzag scan
The quantized coefficients are rearranged to increase the number of consecutive "0" coefficients, i.e. the run length of "0"s. The method is the zigzag arrangement shown in the figure below, which turns the 8×8 matrix into a 1×64 vector with the lower-frequency coefficients at the front.
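The zigzag order can be generated rather than memorized; a minimal sketch:

```python
def zigzag(block):
    """Scan an 8x8 block into a 64-element list, low frequencies first."""
    # On each anti-diagonal i + j = s, JPEG alternates direction:
    # odd s runs top-right -> bottom-left (i ascending), even s the reverse.
    order = sorted(((i, j) for i in range(8) for j in range(8)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [block[i][j] for i, j in order]

block = [[8 * i + j for j in range(8)] for i in range(8)]  # cell = its raster index
v = zigzag(block)
print(v[:10])  # [0, 1, 8, 16, 9, 2, 3, 10, 17, 24]
```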

(5) DC coefficient (DC) coding
The DC coefficient obtained from the DCT of an 8×8 image block has two characteristics: its value is relatively large, and the DC values of adjacent 8×8 blocks change little (they are correlated). Exploiting this, the JPEG algorithm uses differential pulse code modulation (DPCM) to encode the difference between the quantized DC coefficients of adjacent image blocks: Delta = DC(0, 0)_i - DC(0, 0)_(i-1).

(6) AC coefficient (AC) coding
The quantized AC coefficients are characterized by many "0" values in the 1×64 vector, often consecutive, so they are encoded with the simple and intuitive run-length encoding (RLE).
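A sketch of (zero-run, value) RLE for the AC coefficients (the (0, 0) end-of-block marker is a simplification here; real JPEG combines each run with the value's size category before entropy coding):

```python
def rle_ac(coeffs):
    """Encode AC coefficients as (zero_run, value) pairs plus an end-of-block."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))  # run of zeros preceding a nonzero value
            run = 0
    pairs.append((0, 0))            # end-of-block: only zeros remain
    return pairs

print(rle_ac([5, 0, 0, -2, 0, 0, 0, 1, 0, 0]))  # [(0, 5), (2, -2), (3, 1), (0, 0)]
```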

(7) Entropy coding
To compress the data further, entropy coding based on statistical characteristics is applied to the codewords from the DC coding and the AC run-length coding. The entropy coding methods recommended by JPEG are Huffman coding and adaptive binary arithmetic coding.
Entropy coding proceeds in two steps:
Convert the DC code and the AC run-length code into an intermediate symbol sequence.
Assign variable-length codewords to these symbols.

JPEG progressive coding methods: spectral selection and successive (bitwise) approximation.
Image coding is completed over multiple scans. This kind of encoding takes longer to transmit, and the image at the receiving end builds up progressively from coarse to clear over the scans. Between the quantizer output and the entropy-coder input, a buffer large enough to store all quantized DCT coefficients must be added; the coefficients in the buffer are scanned several times and encoded in batches.
The two progressive methods:
Spectral selection: in each scan, only the coefficients of certain frequency bands among the 64 DCT quantized coefficients are encoded and transmitted, then other bands follow, until all coefficients have been sent. For example, they can be grouped as (0,1,2), (3,4,5), …, (61,62,63).
Successive approximation: progressive coding in segments along the significant bits of the DCT quantized coefficients. For example, each coefficient can be segmented into bits 7-4, bit 3, …, bit 0; the 4 most significant bits are encoded and transmitted first, then the remaining segments.

JPEG hierarchical (layered) coding: the image is coded at multiple spatial resolutions. When the channel rate is slow or the receiving display's resolution is low, only the low-resolution image need be decoded; high-resolution decoding is unnecessary.

20. MPEG image subsampling algorithm (very likely) P202

4:4:4 - each pixel is represented by 3 samples on average; every pixel has its own Y, Cb, and Cr components.
4:2:2 - each pixel is represented by 2 samples on average; 4 consecutive horizontal pixels carry 8 samples, 8/4 = 2.
4:1:1 - each pixel is represented by 1.5 samples on average; 4 consecutive horizontal pixels carry 6 samples, 6/4 = 1.5.
4:2:0 - each pixel is represented by 1.5 samples on average; a 2×2 block of pixels (2 horizontal × 2 vertical) carries 6 samples, 6/4 = 1.5.
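The samples-per-pixel arithmetic can be checked with a one-liner per pattern:

```python
def samples_per_pixel(y, cb, cr, pixels):
    """Average samples per pixel for a subsampling pattern over a pixel group."""
    return (y + cb + cr) / pixels

print(samples_per_pixel(4, 4, 4, 4))  # 3.0 -> 4:4:4
print(samples_per_pixel(4, 2, 2, 4))  # 2.0 -> 4:2:2
print(samples_per_pixel(4, 1, 1, 4))  # 1.5 -> 4:1:1, and 4:2:0 over a 2x2 block
```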

21. MPEG video compression technology (required) P272

Main ideas:
(1) Intra-frame image data compression: DCT-based compression is used to reduce the spatial redundancy within each frame of image. The algorithm is roughly the same as the JPEG algorithm, which is equivalent to static image compression.
(2) Inter-frame image data compression: 16×16 macroblock motion compensation technology is used to eliminate the temporal redundancy of the frame sequence (and a small part of the inter-frame spatial redundancy).
Three types of pictures:
I frames (Intra pictures, I): random-access points; the compression ratio is not large;
P frames (Predicted pictures, P): predicted with reference to the previous I or P frame;
B frames (Bidirectional Prediction pictures, B): interpolated frames using bidirectional prediction; the compression effect is the best, but both previous and following frames are needed for prediction. A B frame cannot serve as a prediction reference for other frames.
I-frame compression coding (spatial-domain compression)
An I frame uses only intra-frame coding, with no inter-frame motion estimation and no reference to other frames, so I frames provide a synchronization function; the price paid is compression efficiency.
Three stages (essentially the basic steps of the JPEG algorithm):
DCT;
transform-coefficient quantization (quantization, zigzag scanning, run-length coding), which compresses most of the data and requires the quantizer and encoder to output a bitstream matched to the channel transmission rate;
entropy coding.

P-frame compression coding
P-frame coding takes the image macroblock as its basic unit. First the differences between corresponding pixel values in the two macroblocks are computed; the differences undergo color-space conversion and sub-sampling to obtain Y, Cr, and Cb component values, and these differences are then encoded according to the JPEG compression algorithm, while the computed motion vector is Huffman-coded. That is, what is coded is the difference between the current macroblock and the reference image's macroblock, together with the macroblock's motion vector.
B-frame compression coding
codes the difference between the pixel values of the frames before and after it.

Time-domain redundancy reduction: macroblock motion compensation
Motion compensation predicts and compensates the current partial image from a previous partial image; it is an effective way to reduce the redundant information in a frame sequence.
To reduce frame-sequence (time-domain) redundancy, motion compensation on 16×16 macroblocks is used:
each macroblock gets one 2D motion vector; the macroblock is the prediction unit, and the current macroblock is treated as a displaced copy of a macroblock in the previous image. The displacement information includes the motion direction and magnitude. Using the displacement information and the previous image, the current image can be predicted. The 16×16 prediction error must also be coded and transmitted so the decoder can restore the image.

Note: when transmitting, the I and P reference frames are sent before the B frames that depend on them, so that the receiver can compute the B frames from them; the transmission order therefore differs from the display order.
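A toy reordering sketch for this note (display order in, transmission order out; real encoders handle open GOPs and multiple references with more machinery):

```python
def transmission_order(display):
    """Reorder display-order frames (e.g. 'IBBP...') so each B frame's
    backward reference (the next I or P) is transmitted before it."""
    out, pending_b = [], []
    for f in display:
        if f == "B":
            pending_b.append(f)    # hold B frames until their reference is sent
        else:                      # I or P: a reference frame
            out.append(f)
            out.extend(pending_b)
            pending_b = []
    out.extend(pending_b)
    return "".join(out)

print(transmission_order("IBBPBBP"))  # IPBBPBB
```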

22. Network Basics

Switching techniques:
Circuit-switched networks: before communication begins, the switching center establishes a physical connection between the two parties; the connection is held for as long as the information exchange requires, and the line is occupied exclusively.

Packet-switched networks: After the sender divides the long message into smaller packets, it uses the store and forward method to forward the packets to the output link.

Basics of the TCP/UDP protocols.
Generally, application-layer protocols run on top of the operating system, while transport-layer protocols are integrated into the operating system. When designing a network application, the designer must therefore choose one of the transport protocols; networked multimedia applications usually use UDP.

23.QoS (Quality of Service) (required)

QoS is the set of quantitative and qualitative characteristics that a distributed multimedia information system needs in order to achieve the capability an application requires; it is expressed by a set of parameters:

Typical parameters are throughput, delay, delay jitter, error rate, and service availability.
Throughput: the effective network bandwidth, usually defined as the physical link's transmission rate minus various transmission overheads and the losses caused by collisions, bottlenecks, congestion, and errors; it reflects the maximum capacity of the network.
Delay: an important parameter for measuring network performance; it mainly comprises propagation delay, transmission delay, processing delay, and queuing delay.
Delay jitter: the variation of delay.
Error rate: an important indicator of the reliability of network transmission; bit error rate, frame error rate, and packet error rate measure errors at different network protocol levels.
Service availability: within a given time frame, the ratio of the time the network can provide the service to that given time.
IETF has proposed two kinds of QoS assurance mechanisms: one is the guaranteed service provided by RSVP within integrated services (IntServ); the other is the differentiated service defined by differentiated services (DiffServ, DS). Guaranteed services are connection-oriented and are implemented through mechanisms such as QoS negotiation, admission control, bandwidth reservation, and real-time scheduling. Differentiated services are connectionless and are implemented mainly through buffer management and priority scheduling, with no need for QoS negotiation or bandwidth-reservation control. DiffServ directly reuses the relevant fields of existing IPv4 and IPv6 headers.
The advantages of differentiated services over integrated services (IntServ): first, routers need not maintain per-connection state, so the demand on system resources is low; second, the processing in IP network equipment is simpler; third, the ToS field of the IP packet is used for priority marking, so no additional labels are needed, which gives good compatibility and easy implementation. In addition, the priority classes need not grow as the network expands.

24. Multimedia communication protocol (very likely)

The protocols closely related to multimedia applications include IPv6 at the network layer, RSVP at the transport layer, and RTP, RTCP, and RTSP at the application layer.

Real-time Transport Protocol RTP (Real-time Transport Protocol): Application: RTP is widely used in streaming media communications, telephony, video conferencing, and television.
RTP provides end-to-end transport for real-time applications, but does not provide any quality of service guarantees.
After the multimedia data blocks are compressed and encoded, they are first sent to RTP to be encapsulated into RTP packets, and then loaded into the UDP user datagram of the transport layer, and then handed over to the IP layer.
Real-Time Transport Control Protocol RTCP:
RTCP is a protocol used in conjunction with RTP. Its main function is to provide applications with information about session quality or broadcast performance. RTCP packets encapsulate no audio or video data; instead they carry statistical reports from the sender and/or receiver, including the number of packets sent, the number of packets lost, and packet jitter. This feedback is very useful to the sender, the receiver, and network administrators.
RTCP does not specify what an application should do with this feedback information, that is entirely up to the application developer.

Real-time Streaming Protocol RTSP
RTSP works in client-server mode; it is a multimedia playback control protocol that lets the user control real-time data played from the Internet. RTSP describes interoperation with RTP: it is a protocol that controls RTP sessions, making control and on-demand delivery of real-time streaming data possible.
Its working principle is shown in the figure below and page P469: (very likely to test)

QoS guarantees require a mechanism that allows applications to reserve resources on the Internet. Resource Reservation Protocol (RSVP) is such a standard. The RSVP protocol allows applications to reserve bandwidth for their data streams. The host uses this protocol to request a specific amount of bandwidth from the network according to the characteristics of the data flow, and the router also uses the RSVP protocol to forward the bandwidth request. In order to implement the RSVP protocol, there must be software for implementing the RSVP protocol in the receiving end, the sending end, and the router.

Features of RSVP
RSVP is a transport-layer protocol.
RSVP is a signaling protocol.
RSVP is a receiver-initiated protocol.
Note: the RSVP standard does not specify how the network reserves resources for data streams; the protocol only lets applications request that the necessary link bandwidth be reserved. Once a reservation request is made, it is actually the routers on the Internet that reserve the bandwidth for the data flow, each router interface maintaining information about the data flows passing through it.

25. Hypertext and hypermedia (required)

The most distinctive feature of plain text is that its information organization is linear and sequential. A hypertext information network, by contrast, is a directed-graph structure, similar to the associative memory of the human brain: it organizes blocks of information in a non-linear, networked structure with no fixed order.
Hypertext is an information-management technique whose basic unit is the node. Abstractly, a node is a block of information; concretely, it may be a collection of character text or a display region of some size on the screen. The size of a node is determined by practical conditions. For information organization, links join the nodes into a network, i.e., a non-linear structure.
Hypermedia: hypermedia is the combination of hypertext and multimedia in an information-browsing environment. It extends hypertext: besides all the functions of hypertext, it can also handle multimedia and streaming-media information.

26. For multimedia application examples (IVS), see the last chapter ppt

Key point: the intelligence of an "electronic eye" plus an "electronic brain".

Composition:
video capture,
video compression and decompression,
video transmission,
video storage,
video analysis,
video retrieval.

Technical fields: computer vision, intelligent analysis technology, network technology.

2020 real questions and answers:
1. The process steps of JPEG; this year only the purpose of quantization was tested, plus the methods used to encode the DC and AC coefficients (it is recommended to remember all the steps)
2. ADPCM
3. What are the three types of frames in MPEG, what are the characteristics of each, and which has the highest compression rate
4. The two JPEG progressive methods
5. The characteristics of TCP and UDP; which protocol should be chosen for multimedia video transmission, and why?
6. Matrix transformation of the YCbCr color space

Calculation questions:
1. The Nyquist sampling theorem; the sampling rate of PCM.
2. Calculate the data capacity for 1 minute.
3. Is this PCM applicable to all frequencies in nature?

Part Two:
Huffman coding (a worked Huffman example: draw the tree, fill in the table, compute the average code length)

Part Three:
1. The differences between an intelligent monitoring system and a traditional monitoring system, the composition of an intelligent monitoring system, and the technical problems it faces
2. Discuss the knowledge structure of the textbook and what each part covers

2021 real questions and answers:
1. The ideas behind Huffman coding, run-length coding, and arithmetic coding.
2. Description of the layered (hierarchical) transmission algorithm.
3. What are the three types of frames in MPEG, what are their characteristics, and which one has the highest compression rate?
4. The characteristics of TCP and UDP; which protocol should be chosen for multimedia video transmission, and why?
5. The differences between PCM, sub-band coding, and LPC.
6. What methods are used to encode the DC and AC coefficients in JPEG.

1. Nyquist sampling theorem, the sampling rate of PCM.
2. Calculate the data capacity within 1 minute
3. Is this PCM applicable to all frequencies in nature?

1. The differences between an intelligent monitoring system and a traditional monitoring system, the composition of an intelligent monitoring system, and the technical problems it faces
2. Discuss the knowledge structure of the textbook and what each part covers

2022 real questions

  1. The process steps of JPEG; this time only the purpose of quantization was tested, plus the methods used to encode the DC and AC coefficients (it is recommended to remember all the steps)
  2. Matrix transformation of the YCbCr color space
  3. What are the three types of frames in MPEG, what are their characteristics, and which has the highest compression rate
  4. The characteristics of TCP and UDP; which protocol should be chosen for multimedia video transmission, and why?
  5. ADPCM
  6. The two JPEG progressive methods
  7. The transmission order and the actual (display) order of the three kinds of frames
  8. The Nyquist sampling theorem; the sampling rate of PCM. Calculate the data capacity for 1 minute. Is this PCM applicable to all frequencies in nature?
  9. Huffman coding (a worked Huffman example: draw the tree, fill in the table, compute the average code length)
  10. The differences between an intelligent monitoring system and a traditional monitoring system, the composition of an intelligent monitoring system, and the technical problems it faces
  11. Multimodal: basic ideas, principles, typical applications, key technologies
  12. Discuss the knowledge structure of the textbook and what each part covers


Origin blog.csdn.net/weixin_55085530/article/details/127813691