Maximum Likelihood Decoding and Viterbi Convolutional Decoding Algorithms

This column collects the core knowledge points of information theory and coding, organized by topic, and can be used as a reference for teaching or self-study. A markdown version is archived in the [Github repository: https://github.com/timerring/information-theory ], or can be obtained from the public account [AIShareLab] by replying "Information Theory".

Convolutional Decoding

Maximum Likelihood Decoding

If all input information sequences are equally likely, the minimum-error-probability decoder is obtained by comparing the conditional probabilities $P\left(\mathbf{Z} \mid \mathbf{U}^{(m)}\right)$, also called the likelihood functions, where $\mathbf{Z}$ is the received sequence and $\mathbf{U}^{(m)}$ is one of the possible transmitted sequences. The decoder chooses $\mathbf{U}^{\left(m^{\prime}\right)}$ if it satisfies

$$
P\left(\mathbf{Z} \mid \mathbf{U}^{\left(m^{\prime}\right)}\right)=\max _{\text {over all } \mathbf{U}^{(m)}} P\left(\mathbf{Z} \mid \mathbf{U}^{(m)}\right)
$$

For the BSC, this is equivalent to choosing the codeword $\mathbf{U}^{\left(m^{\prime}\right)}$ closest to the received sequence in Hamming distance: from all possible transmitted sequences $\mathbf{U}^{(m)}$, the decoder selects the sequence $\mathbf{U}^{\left(m^{\prime}\right)}$ with the smallest distance to $\mathbf{Z}$.

For a Gaussian channel, the appropriate distance measure is the Euclidean distance.
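As a small illustration (a sketch not taken from the text), the likelihood comparison can be carried out as a distance comparison. For the BSC the metric is the number of bit disagreements; for a Gaussian channel it is the squared Euclidean distance between the received samples and the modulated codeword:

```python
# Minimal sketch of the two distance metrics (illustrative only).

def hamming_distance(z, u):
    """BSC (crossover probability below 1/2): maximizing P(Z|U) is
    equivalent to minimizing the number of bit disagreements."""
    return sum(a != b for a, b in zip(z, u))

def squared_euclidean_distance(z, u):
    """Gaussian channel: maximizing the likelihood of real-valued samples is
    equivalent to minimizing the squared Euclidean distance to the modulated codeword."""
    return sum((a - b) ** 2 for a, b in zip(z, u))

print(hamming_distance([1, 0, 1, 1], [1, 1, 1, 0]))          # -> 2
print(squared_euclidean_distance([0.9, -1.1], [1.0, -1.0]))  # -> about 0.02
```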

Suppose the receiver receives the code sequence 1011010010010010. How should it be decoded?

Calculate the probabilities of all possible paths in the figure below, then compare and select the maximum value.

How many paths are there? At most $2^{8}$. (The exhaustive method must compare $2^{l}$ paths, where $l$ is the length of the information sequence.)
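A brute-force sketch of this exhaustive approach is shown below; `encode` is a placeholder for whatever convolutional encoder is in use (a hypothetical callable, not defined in the text), and the search simply keeps the candidate codeword at minimum Hamming distance from the received sequence:

```python
from itertools import product

# Exhaustive maximum likelihood decoding over a BSC (illustrative sketch):
# enumerate all 2**l information sequences, encode each one, and keep the
# candidate whose codeword is closest to the received sequence.

def ml_decode_exhaustive(received, encode, l):
    best_info, best_dist = None, float("inf")
    for info in product((0, 1), repeat=l):       # 2**l candidate input sequences
        codeword = encode(list(info))
        dist = sum(a != b for a, b in zip(received, codeword))
        if dist < best_dist:                     # keep the closest codeword found so far
            best_info, best_dist = list(info), dist
    return best_info, best_dist
```

The cost grows exponentially with $l$, which is exactly why the Viterbi algorithm below is preferred.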

Convolutional Decoding: The Viterbi Decoding Algorithm

The Viterbi decoding algorithm was proposed by Viterbi in 1967. In essence it is maximum likelihood decoding, but it exploits the special structure of the code trellis and thereby reduces the computational complexity. Compared with exhaustive comparison decoding, its advantage is that the decoder complexity is no longer a function of the number of symbols in the codeword sequence.

The algorithm computes the similarity, or distance, between the received sequence and each path entering a state of the trellis at time t. The Viterbi algorithm removes from consideration those trellis paths that cannot possibly be the maximum likelihood choice: when two paths enter the same state, the path with the better metric is kept, and it is called the survivor path.

This selection is carried out for all states. As the decoder advances deeper into the trellis, it makes its decisions by discarding the least likely paths; rejecting unlikely paths early is what reduces the decoding complexity. Note that choosing the optimal path can be expressed either as choosing the codeword with the largest likelihood metric or as choosing the codeword with the smallest distance.

Assuming a BSC channel, the Hamming distance is a suitable distance metric.

The essence of the Viterbi decoding algorithm can be summarized as three operations, sketched in code after the list below: add, compare, and select.

  • Add: add the branch metric (distance or probability) to the accumulated path metric;
  • Compare: compare the accumulated metrics (distances or probabilities) of the paths merging at a state;
  • Select: keep the path with the smaller distance (higher probability) as the survivor path.
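A minimal sketch of one add-compare-select step for a single trellis state might look like the following (illustrative names; `predecessors` lists, for each branch merging into the state, its previous state, branch output bits, and input bit):

```python
# One add-compare-select (ACS) step for a single trellis state (sketch).
# metrics[s] is the accumulated distance of the survivor ending in state s,
# and paths[s] is the corresponding input-bit sequence.

def acs_step(metrics, paths, predecessors, received_group):
    best_metric, best_path = float("inf"), None
    for prev_state, branch_output, input_bit in predecessors:
        # add: previous accumulated distance plus this branch's Hamming distance
        candidate = metrics[prev_state] + sum(
            a != b for a, b in zip(received_group, branch_output))
        # compare and select: the smaller accumulated distance survives
        if candidate < best_metric:
            best_metric = candidate
            best_path = paths[prev_state] + [input_bit]
    return best_metric, best_path
```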

The Viterbi decoding algorithm operates on the trellis. To decode, first divide the received sequence into groups of n bits, then compute the Hamming distance between each group and the output of each branch at the corresponding stage of the trellis.
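Putting the pieces together, a compact hard-decision Viterbi decoder for a rate-1/n feedforward convolutional code could look like the sketch below (an illustrative sketch, assuming the encoder starts in, and is flushed back to, the all-zero state; each generator is given as a tap list of length K acting on the current and previous input bits):

```python
# Hard-decision Viterbi decoding with a Hamming branch metric (sketch).

def viterbi_decode(received, gens):
    K, n = len(gens[0]), len(gens)          # constraint length, output bits per input bit
    n_states = 1 << (K - 1)
    INF = float("inf")
    metrics = [0] + [INF] * (n_states - 1)  # only the all-zero state is reachable at t = 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), n):
        group = received[t:t + n]           # the n received bits of this trellis stage
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metrics[s] == INF:
                continue
            past = [(s >> (K - 2 - i)) & 1 for i in range(K - 1)]   # shift-register contents
            for u in (0, 1):
                window = [u] + past
                branch = [sum(w * g for w, g in zip(window, gen)) % 2 for gen in gens]
                # add: accumulated distance plus the branch's Hamming distance
                cand = metrics[s] + sum(a != b for a, b in zip(group, branch))
                ns = (u << (K - 2)) | (s >> 1)                      # next state
                # compare and select: keep the better of the paths merging at state ns
                if cand < new_metrics[ns]:
                    new_metrics[ns] = cand
                    new_paths[ns] = paths[s] + [u]
        metrics, paths = new_metrics, new_paths
    return paths[0]                         # terminated code: take the survivor ending in state 0
```

For the example below, with the assumed generators 101 and 111, `viterbi_decode` applied to the received sequence 11 01 10 11 00 10 11 returns 1 0 0 1 1 0 0; the first five bits are the information sequence 10011 and the last two are the tail.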

For the (2,1,3) convolutional code shown in the figure below, the received sequence is 11 01 10 11 00 10 11. Find the decoding result.

Tracing back along the surviving path, the decoding result is 10011.

When the input is 10011 (followed by two zero tail bits that return the encoder to the all-zero state), the encoder output is 11 01 11 11 10 10 11.

Comparing this with the received sequence 11 01 10 11 00 10 11:

Two bits were received in error, and both were corrected during decoding.
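The error count can be checked with a short script, assuming the (2,1,3) encoder in the figure uses the generators 101 and 111 (octal 5 and 7); with these taps the input 10011 plus two zero tail bits reproduces the codeword 11 01 11 11 10 10 11 quoted above:

```python
# Re-encode the decoded input and count the channel errors (illustrative sketch).

def conv_encode(bits, gens):
    K = len(gens[0])
    state = [0] * (K - 1)                  # shift register, most recent input first
    out = []
    for u in bits:
        window = [u] + state
        out.extend(sum(w * g for w, g in zip(window, gen)) % 2 for gen in gens)
        state = [u] + state[:-1]
    return out

G = [[1, 0, 1], [1, 1, 1]]                            # assumed generators
tx = conv_encode([1, 0, 0, 1, 1, 0, 0], G)            # info bits 10011 plus tail 00
rx = [1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1]       # received sequence from the example
print(tx)                                             # 11 01 11 11 10 10 11
print(sum(a != b for a, b in zip(tx, rx)))            # -> 2 bits in error
```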

Distance property of the convolutional code: the free distance $d_{\text{free}}$ is the minimum distance of a path that leaves the all-zero state and returns to the all-zero state.

$$
d_{\text{free}}=5, \qquad t=\left\lfloor\left(d_{\text{free}}-1\right) / 2\right\rfloor=2
$$

so the code can correct up to $t=2$ channel errors, consistent with the two errors corrected in the example above.

