A veteran's guide to Shannon's theorem, the Nyquist theorem, coding and modulation

Engineers always face the question: how much data can a channel carry, or what is the limiting transmission rate on a specified channel? This is the problem of channel capacity. For example, in an xDSL system the transmission medium is an ordinary telephone line with a bandwidth of only about a megahertz, yet data at rates of several or even tens of megabits per second must be carried over it. Can the twisted pair transmit such data reliably? Put another way: at how high a data rate (b/s) can information be reliably transmitted over a physical channel of a given bandwidth (Hz)?

As early as 1924, the AT&T engineer Harry Nyquist realized that in any channel the symbol transmission rate has an upper limit, and he derived a formula for the maximum data rate of a noise-free channel of finite bandwidth: today's Nyquist theorem. Because that theorem only covers the noise-free case, it cannot give the maximum data rate of a channel in a noisy environment. So in 1948, Claude Shannon extended Nyquist's work to channels disturbed by random noise, deriving the maximum data rate under random noise interference: today's Shannon theorem. The two theorems are introduced in turn below.

1. Nyquist Theorem

Nyquist proved that for an ideal channel of bandwidth W Hz, the maximum symbol (signalling) rate is 2W baud; signalling faster than this causes intersymbol interference. If the transmitted signal can take M distinct states, then the maximum data rate (channel capacity) that the W Hz channel can carry is:

C = 2 × W × log2(M)  (bps)

Suppose the signal transmitted over a channel of bandwidth W Hz is binary (only two distinct physical signal states); then the maximum data rate it can carry is 2W bps. For example, if a voice channel with a bandwidth of 3 kHz is used to transmit digital data through a modem, then by the Nyquist theorem the sender can transmit at most 2 × 3000 symbols per second. If the signal has 2 states, each symbol carries 1 bit of information, and the maximum data rate of the voice channel is 6 kbps; if the signal has 4 states, each symbol carries 2 bits, and the maximum data rate is 12 kbps.
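The arithmetic above can be checked with a short sketch; the function name `nyquist_capacity` is my own, not from the text.

```python
import math

def nyquist_capacity(bandwidth_hz, num_states):
    """Nyquist theorem: C = 2 * W * log2(M) for a noiseless channel."""
    return 2 * bandwidth_hz * math.log2(num_states)

# 3 kHz voice channel, 2-state (binary) signalling -> 6 kbps
print(nyquist_capacity(3000, 2))   # 6000.0
# Same channel with 4-state signalling -> 12 kbps
print(nyquist_capacity(3000, 4))   # 12000.0
```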

Therefore, for a given channel bandwidth, the data rate can be increased by increasing the number of distinct signal states M. However, this burdens the receiver: for every symbol received, it no longer merely distinguishes between two possible signal values but must pick one of M. Noise on the transmission medium limits the practical value of M.

2. Shannon's Theorem

Nyquist considered an ideal, noiseless channel, and the Nyquist theorem says that, all else being equal, doubling the channel bandwidth doubles the data rate. For noisy channels, however, the situation deteriorates quickly. Consider the relationship between data rate, noise, and bit error rate. Noise corrupts one or more bits of data; if the data rate is increased, each bit occupies less time, so a given burst of noise affects more bits and the bit error rate rises.

For noisy channels, we hope to improve the receiver's ability to recover data correctly by increasing the signal strength. The parameter that measures channel quality is the signal-to-noise ratio (S/N), the ratio of signal power to noise power at a particular point in the channel. It is usually measured at the receiving end, since that is where the signal is processed and the noise removed. If S is the signal power and N the noise power, the ratio is written S/N. For convenience it is usually quoted as 10 log10(S/N), in decibels (dB). The higher the S/N, the better the channel. For example, an S/N of 1000 corresponds to 30 dB; 100 to 20 dB; 10 to 10 dB.
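The dB conversions in the examples above follow directly from the definition; a tiny sketch (the function name `snr_to_db` is mine):

```python
import math

def snr_to_db(snr_ratio):
    """Convert a linear signal-to-noise power ratio to decibels: 10 * log10(S/N)."""
    return 10 * math.log10(snr_ratio)

print(snr_to_db(1000))  # 30 dB
print(snr_to_db(100))   # 20 dB
print(snr_to_db(10))    # 10 dB
```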

For digital transmission over a noisy channel, the signal-to-noise ratio matters because it sets an upper limit on the achievable data rate: for a channel of bandwidth W Hz and signal-to-noise ratio S/N, the maximum data rate (channel capacity) is:

C = W × log2(1 + S/N)  (bps)

For example, for a voice channel with a bandwidth of 3 kHz and a signal-to-noise ratio of 30 dB (S/N = 1000), no matter how many signal levels are used to send the binary data, the data rate cannot exceed about 30 kbps. Note that Shannon's theorem gives only a theoretical limit; rates achieved in practice are much lower. One reason is that the theorem accounts only for thermal (white) noise, not for factors such as impulse noise.
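Evaluating the Shannon formula for this example shows the limit is just under 30 kbps; an illustrative sketch (the function name `shannon_capacity` is mine):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon theorem: C = W * log2(1 + S/N) for a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# 3 kHz voice channel at 30 dB (S/N = 1000): just under 30 kbps
c = shannon_capacity(3000, 1000)
print(round(c))  # 29902
```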

Shannon's theorem gives the maximum error-free data rate. Shannon also proved that if a channel's actual data rate is below this limit, an appropriate signal coding can, in theory, achieve error-free transmission; unfortunately, he did not show how to find such a code. Even so, Shannon's theorem provides a yardstick for measuring the performance of real communication systems.

3. Coding and Modulation

Having covered the two theorems, let us turn to coding and modulation.

Source and Sink

Source and sink are two technical terms in networking; simply put, they are the sender and receiver of information. The process of information transfer can generally be described as: source → channel → sink. In traditional broadcasting, strict restrictions applied to who could be an information source, usually radio stations, television stations, and similar institutions, in a centralized structure. In a computer network there is no such restriction: any computer on any network can be a source, and likewise any computer can be a sink.

Because of the limitations of the transmission medium and the signal format, the two parties' signals usually cannot be transmitted directly. They must first be processed to suit the characteristics of the medium, so that they can be delivered to the destination correctly.

Modulation refers to the use of analog signals to carry digital or analog data; while encoding refers to the use of digital signals to carry digital or analog data.

At present there are two main types of transmission channel: analog and digital. Analog channels are generally used only to transmit analog signals, and digital channels only digital signals. Sometimes, however, an analog signal must travel over a digital channel, or a digital signal over an analog channel. The transmitted data must then be converted into the type the channel can carry, and this conversion between analog and digital signals is the main subject of coding and modulation. How analog and digital data are sent over a channel in the first place is also an important part of the subject. Below, the modulation and coding of data are introduced in four cases: analog signals over an analog channel, analog signals over a digital channel, digital signals over an analog channel, and digital signals over a digital channel.

1. Analog signals are transmitted using an analog channel

Analog data can sometimes be sent directly over an analog channel, but in network data transmission it is more common to modulate the analog data first and then send it over the analog channel. The purpose of modulation is to shift the analog signal onto a high-frequency carrier for long-distance transmission. The main modulation methods are amplitude modulation (AM), frequency modulation (FM), and phase modulation (PM).
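As a sketch of the simplest of the three, amplitude modulation scales a carrier's amplitude by the message, s(t) = (1 + k·m(t))·cos(2πfc·t). The names and parameter values below are illustrative, not from the text.

```python
import math

def amplitude_modulate(message, carrier_hz, sample_rate_hz, depth=0.5):
    """AM sketch: s = (1 + k * m) * cos(2*pi*fc*t), one output sample per message sample."""
    signal = []
    for n, m in enumerate(message):
        t = n / sample_rate_hz
        signal.append((1 + depth * m) * math.cos(2 * math.pi * carrier_hz * t))
    return signal

# 1 kHz message tone carried on a 10 kHz carrier, sampled at 80 kHz
fs = 80000
message = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(160)]
signal = amplitude_modulate(message, 10000, fs)
print(len(signal))  # 160
```

With a modulation depth of 0.5 the envelope stays within [0.5, 1.5], so the message can be recovered by simple envelope detection.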

2. Analog signals are transmitted using digital channels

To transmit an analog signal over a digital channel, the analog signal must first be converted into a digital signal. This conversion is digitization, which consists mainly of two steps: sampling and quantization. Common methods for encoding analog signals onto digital channels include Pulse Amplitude Modulation (PAM), Pulse Code Modulation (PCM), Differential Pulse Code Modulation (DPCM), and Delta Modulation (DM).
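A minimal sketch of the sampling and quantization steps, using uniform (linear) quantization. Real PCM telephony adds companding (μ-law/A-law), omitted here; the function name and parameter values are illustrative.

```python
import math

def pcm_encode(samples, n_bits):
    """Uniformly quantize samples in [-1, 1) to n_bits-bit codes (linear PCM sketch)."""
    levels = 2 ** n_bits
    codes = []
    for s in samples:
        # Map [-1, 1) onto integer codes 0 .. levels-1, clamping out-of-range values.
        code = int((s + 1) / 2 * levels)
        codes.append(min(levels - 1, max(0, code)))
    return codes

# Sample one period of a 1 kHz tone at the 8 kHz telephone rate, quantize to 8 bits
fs, f = 8000, 1000
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(8)]
print(pcm_encode(samples, 8))  # [128, 218, 255, 218, 128, 37, 0, 37]
```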

3. Digital signals are transmitted using analog channels

Transmitting a digital signal over an analog channel is a modulation process: the digital data represented by the digital signal (binary 0 or 1) is used to change some characteristic of an analog signal, that is, the binary data is modulated onto the analog carrier.

A sine wave is defined by three properties: amplitude, frequency, and phase. Changing any one of them yields a different form of the wave: if the original wave represents a binary 1, the altered wave can represent a binary 0, and vice versa. Varying each of the three properties in this way gives at least three mechanisms for modulating digital data onto an analog signal: Amplitude-Shift Keying (ASK), Frequency-Shift Keying (FSK), and Phase-Shift Keying (PSK). A fourth mechanism combines amplitude and phase changes: Quadrature Amplitude Modulation (QAM). QAM is the most efficient of these and is the technique most often used in modems today.
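A sketch of how the three keying mechanisms map bits onto carrier samples, in their binary forms (names and parameter values are mine, not from the text):

```python
import math

def modulate(bits, scheme, fc=1000, fs=8000, spb=8):
    """Map a bit string onto carrier samples using binary ASK, FSK, or PSK."""
    out = []
    for i, b in enumerate(bits):
        for n in range(spb):
            t = (i * spb + n) / fs
            if scheme == "ASK":       # amplitude carries the bit: 1 -> full, 0 -> off
                out.append((1.0 if b == "1" else 0.0) * math.sin(2 * math.pi * fc * t))
            elif scheme == "FSK":     # frequency carries the bit: 1 -> 2*fc, 0 -> fc
                f = 2 * fc if b == "1" else fc
                out.append(math.sin(2 * math.pi * f * t))
            else:                     # PSK: phase carries the bit: 1 -> pi shift
                phase = math.pi if b == "1" else 0.0
                out.append(math.sin(2 * math.pi * fc * t + phase))
    return out

wave = modulate("10", "ASK")
print(len(wave))  # 16 samples: 8 per bit
```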

4. Digital signals are transmitted using digital channels

To transmit a digital signal over a digital channel, the digital signal must first be encoded; a common example is transferring data from a computer to a printer. The binary 0s and 1s produced by the computer are converted into a series of voltage pulses that can travel over the wire. Coding the source can reduce the data rate and improve information efficiency; coding for the channel can improve the system's resistance to interference.

At present, the common data encoding methods mainly include non-return-to-zero coding, Manchester coding and differential Manchester coding.  

(1) Non-Return-to-Zero (NRZ): binary 0 and 1 are represented by two fixed levels; commonly -5 V represents 1 and +5 V represents 0. Its drawbacks are a DC component, so transformers cannot be used in the transmission path, and the lack of a self-synchronization mechanism, so external clocking is required.

(2) Manchester code: 0 and 1 are represented by voltage transitions, with a transition in the middle of every symbol. A high-to-low transition represents 0 and a low-to-high transition represents 1 (note: some tutorials use the opposite convention, which is equally valid). Because the mid-symbol transition occurs in every bit, the receiver can extract it as a synchronization signal, so this is also called a self-synchronizing code. Its drawback is that it requires twice the transmission bandwidth (the signal rate is twice the data rate).

(3) Differential Manchester code: a transition still occurs in the middle of every symbol, but 0 and 1 are distinguished by whether there is a transition at the start of the symbol: a transition means 0, no transition means 1 (again, some tutorials use the opposite convention, which is equally valid).
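Both Manchester variants can be sketched as mappings from bits to pairs of half-bit levels, following the conventions stated above (0: high-to-low, 1: low-to-high; differential: a start transition means 0). Each bit yields two half-bit levels, which is exactly the doubled signal rate noted above.

```python
def manchester(bits):
    """Manchester code: 0 -> high-then-low (1,0); 1 -> low-then-high (0,1)."""
    out = []
    for b in bits:
        out.extend([0, 1] if b == "1" else [1, 0])
    return out

def diff_manchester(bits, start_level=1):
    """Differential Manchester: transition at the symbol start means 0,
    no transition means 1; every symbol still flips in the middle."""
    level = start_level
    out = []
    for b in bits:
        if b == "0":
            level ^= 1        # transition at symbol start encodes 0
        out.append(level)     # first half of the symbol
        level ^= 1            # mid-symbol transition (always present)
        out.append(level)     # second half of the symbol
    return out

print(manchester("10"))       # [0, 1, 1, 0]
print(diff_manchester("10"))  # [1, 0, 1, 0]
```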


Origin blog.csdn.net/qizu/article/details/130840072