Convolutional coding and Viterbi decoding


Convolutional coding

Convolutional codes are non-block codes that are widely used for forward error correction. In a block code, the code group of n symbols produced by the encoder is completely determined by the k information bits input during that interval, and the supervision (parity) bits in a code group check only the k information bits of that same group. A convolutional code also encodes a k-bit information segment into an n-bit code group, but its supervision symbols depend not only on the current k-bit segment but also on the previous m = N - 1 segments, so the supervision symbols in one code group check N information segments. N is called the encoding constraint degree, and nN the encoding constraint length.
Generally, k and n are small integers with n > k. A convolutional code is denoted (n, k, N), and its code rate is defined as R = k/n.
The encoder consists of three main parts: an Nk-stage shift register, n modulo-2 adders, and a rotary switch. Time is divided into equal slots; in each slot, k bits enter the shift register from the left, and the information stored in the register shifts k bits to the right. Each modulo-2 adder may have a different number of inputs, each connected to the output of some stage of the shift register. The adder outputs are connected to the rotary switch, which rotates once per time slot and outputs n bits.
The general principle block diagram of a convolutional encoder is shown in the figure below.
[Figure: general block diagram of a convolutional encoder]
The block diagram of a (3,1,3) convolutional encoder is shown in the figure below.
[Figure: block diagram of the (3,1,3) convolutional encoder]
According to the above block diagram:
c_i = b_i
d_i = b_i ⊕ b_{i-2}
e_i = b_i ⊕ b_{i-1} ⊕ b_{i-2}
where b_i is the current input information bit, and b_{i-1} and b_{i-2} are the previous two information bits stored in the shift register.
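As a quick check on these equations, the encoder can be sketched in a few lines of Python (the function name and bit-string interface are illustrative choices, not from the text):

```python
def conv_encode(bits):
    """(3,1,3) encoder: for each input bit b_i, output
    c_i = b_i, d_i = b_i XOR b_{i-2}, e_i = b_i XOR b_{i-1} XOR b_{i-2}."""
    b1 = b2 = 0                            # b_{i-1} and b_{i-2}, initially 0
    out = []
    for ch in bits:
        b = int(ch)
        out += [b, b ^ b2, b ^ b1 ^ b2]    # c_i, d_i, e_i
        b1, b2 = b, b1                     # shift the register one step
    return "".join(map(str, out))

print(conv_encode("1101"))                 # 111110010100
```

The output 111 110 010 100 agrees with the encoding of 1101 worked out later in the text.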
The states defined for the different values of M3M2 are listed in the table below.

M3M2    State
00      a
01      b
10      c
11      d

The relationship between the state of the shift register and the input and output symbols is shown in the table below.

Previous state M3M2    Input b_i    M3M2M1    Output c_i d_i e_i    Next state M3M2
a(00)                  0            000       000                   a(00)
a(00)                  1            001       111                   b(01)
b(01)                  0            010       001                   c(10)
b(01)                  1            011       110                   d(11)
c(10)                  0            100       011                   a(00)
c(10)                  1            101       100                   b(01)
d(11)                  0            110       010                   c(10)
d(11)                  1            111       101                   d(11)

It can be seen that from state a the next state can only be a or b; from b, only c or d; from c, only a or b; and from d, only c or d.
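The state-transition table above can be reproduced programmatically. Below is a minimal Python sketch; the step function and state names are chosen for illustration:

```python
def step(m3, m2, b):
    """One step of the (3,1,3) encoder.
    State is (M3, M2) = (b_{i-2}, b_{i-1}); returns (c,d,e) and next state."""
    out = (b, b ^ m3, b ^ m2 ^ m3)         # c_i, d_i, e_i
    return out, (m2, b)                    # next state: M3' = M2, M2' = b_i

names = {(0, 0): "a", (0, 1): "b", (1, 0): "c", (1, 1): "d"}
for state in sorted(names):
    for b in (0, 1):
        out, nxt = step(*state, b)
        print(names[state], b, "".join(map(str, out)), names[nxt])
```

Running this prints the eight rows of the table, one transition per line.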
The geometric representations of convolutional codes include the code tree diagram, the state diagram, and the trellis (grid) diagram.
The code tree diagram corresponding to the above (3,1,3) convolutional encoder is shown in the figure below.
[Figure: code tree diagram of the (3,1,3) convolutional encoder]
In the code tree diagram, if the input information bit is 0, the state moves upward; if the input information bit is 1, the state moves downward.
It can be seen that from the fourth-level branch onward, the upper and lower halves of the code tree are identical. This means that from the fourth input bit onward, the output symbols no longer depend on the first input bit; that is, the constraint degree of this encoder is N = 3. The code tree also makes it easy to read off the encoded output for a given input sequence: for each input bit, take the lower branch if the bit is 1 and the upper branch if it is 0. For example, the input sequence 1101 yields the output sequence 111 110 010 100.
The state diagram corresponding to the above (3,1,3) convolutional encoder is shown in the figure below.
[Figure: state diagram of the (3,1,3) convolutional encoder]
In the state diagram, the dotted line represents the state transition route when the input information bit is 1, and the solid line represents the state transition route when the input information bit is 0.
The 3-bit labels next to the lines are the encoded output bits. The output sequence is also easily obtained from the state diagram: start from state a; if the input bit is 1, read the 3 bits on the dotted transition, and if it is 0, read the 3 bits on the solid transition; then move to the next state and continue.
The trellis diagram corresponding to the above (3,1,3) convolutional encoder is shown in the figure below.
[Figure: trellis diagram of the (3,1,3) convolutional encoder]
In the trellis diagram, a dotted line indicates an input bit of 1 and a solid line an input bit of 0.
It can be seen that from the 4th time slot onward, the trellis pattern exactly repeats that of the 3rd time slot, which again reflects that the constraint degree of this convolutional code is N = 3.


Viterbi decoding

The Viterbi decoding algorithm was proposed by Viterbi in 1967. It is computationally simple and fast, so it has been widely used. Its basic principle is to compare the received signal sequence with all possible transmitted signal sequences and select the sequence with the smallest Hamming distance as the estimate of the transmitted sequence.
In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding characters differ. In other words, it is the number of substitutions required to change one string into the other. For example, the Hamming distance between 0000 and 1111 is 4, and the Hamming distance between 0000 and 0101 is 2.
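This definition translates directly into Python (the function name is illustrative):

```python
def hamming(x, y):
    """Hamming distance between two equal-length strings."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

print(hamming("0000", "1111"))   # 4
print(hamming("0000", "0101"))   # 2
```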
The following example illustrates the Viterbi decoding process. Assume the transmitted information sequence is 1101; the encoded sequence is then 111 110 010 100.
Since this is an (n, k, N) = (3, 1, 3) convolutional code with constraint degree N = 3, we first examine the first nN = 9 received bits, namely 111 110 010. At each level along the path there are 4 states, and each state can be reached by only two paths, so there are 8 arriving paths in total. The Hamming distance between the sequence corresponding to each of these 8 paths and the received sequence 111 110 010 is listed below.

Path    Corresponding sequence    Hamming distance    Survives?
aaaa    000 000 000               6                   no
abca    111 001 011               4                   yes
aaab    000 000 111               7                   no
abcb    111 001 100               5                   yes
aabc    000 111 001               6                   no
abdc    111 110 010               0                   yes
aabd    000 111 110               5                   no
abdd    111 110 101               3                   yes

For each state, the Hamming distances of the two arriving paths are compared and the path with the smaller distance is retained; it is called the surviving path. If the two distances are equal, either path may be retained. This leaves only 4 paths.
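The eight distances in the table can be verified by brute force: encode every 3-bit input prefix and compare the result with the received 111 110 010. A small Python sketch, with helper names chosen for illustration:

```python
from itertools import product

def conv_encode(bits):
    # (3,1,3) encoder equations from the text:
    # c = b, d = b ^ b_{i-2}, e = b ^ b_{i-1} ^ b_{i-2}
    b1 = b2 = 0
    out = []
    for b in bits:
        out += [b, b ^ b2, b ^ b1 ^ b2]
        b1, b2 = b, b1
    return out

received = [int(c) for c in "111110010"]

def distance(prefix):
    """Hamming distance of this input prefix's codeword to the received bits."""
    return sum(r != s for r, s in zip(received, conv_encode(prefix)))

for prefix in product((0, 1), repeat=3):
    print(prefix, distance(prefix))
```

The printed distances match the table; for example, prefix (1, 1, 0), i.e. path abdc, gives distance 0.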
Next, examine the following three bits, 100, and compute the Hamming distances of the 8 possible paths obtained by extending the 4 surviving paths one more level, as shown in the table below.

Path      Distance of surviving path    New branch    Branch distance    Total distance    Survives?
abca+a    4                             aa(000)       1                  5                 no
abdc+a    0                             ca(011)       3                  3                 yes
abca+b    4                             ab(111)       2                  6                 no
abdc+b    0                             cb(100)       0                  0                 yes
abcb+c    5                             bc(001)       2                  7                 no
abdd+c    3                             dc(010)       2                  5                 yes
abcb+d    5                             bd(110)       1                  6                 no
abdd+d    3                             dd(101)       1                  4                 yes

The smallest total Hamming distance in the table is 0, attained by path abdcb, whose corresponding sequence 111 110 010 100 matches the received encoded sequence exactly. The corresponding transmitted information sequence is therefore 1101, and decoding is complete.
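The add-compare-select procedure carried out by hand above can be sketched as a small hard-decision Viterbi decoder in Python. This is an illustrative implementation, not code from the reference; the names and interface are my own:

```python
def viterbi_decode(rx, nbits):
    """Hard-decision Viterbi decoding of the (3,1,3) code in the text.
    rx is the received bit string; nbits information bits are recovered."""
    def step(m3, m2, b):                   # one encoder transition
        return (b, b ^ m3, b ^ m2 ^ m3), (m2, b)

    surv = {(0, 0): (0, [])}               # state -> (path metric, decoded bits)
    for i in range(nbits):
        block = [int(c) for c in rx[3 * i:3 * i + 3]]
        nxt = {}
        for state, (metric, bits) in surv.items():
            for b in (0, 1):
                out, ns = step(*state, b)
                m = metric + sum(o != r for o, r in zip(out, block))
                if ns not in nxt or m < nxt[ns][0]:   # add-compare-select
                    nxt[ns] = (m, bits + [b])
        surv = nxt
    return "".join(map(str, min(surv.values())[1]))

print(viterbi_decode("111110010100", 4))   # 1101
```

At each level the decoder keeps one surviving path per state, exactly as in the tables above, then picks the final survivor with the smallest total metric.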
Even if the received sequence contains a small number of bit errors, decoding can still be completed correctly.
Still in the above example, the sent sequence is 1101, and the encoded sequence is 111 110 010 100. Assume that errors occur in bits 4 and 11 of the received sequence, that is, the received sequence is 111 010 010 110.
The same method is used. First examine the first nN = 9 bits, 111 010 010, and compare the Hamming distance between the sequence corresponding to each of the 8 paths and the received sequence 111 010 010, as listed below.

Path    Corresponding sequence    Hamming distance    Survives?
aaaa    000 000 000               5                   no
abca    111 001 011               3                   yes
aaab    000 000 111               6                   no
abcb    111 001 100               4                   yes
aabc    000 111 001               7                   no
abdc    111 110 010               1                   yes
aabd    000 111 110               6                   no
abdd    111 110 101               4                   yes

Next, examine the following three bits, 110, and compute the Hamming distances of the 8 possible paths obtained by extending the 4 surviving paths one more level, as shown in the table below.

Path      Distance of surviving path    New branch    Branch distance    Total distance    Survives?
abca+a    3                             aa(000)       2                  5                 no
abdc+a    1                             ca(011)       2                  3                 yes
abca+b    3                             ab(111)       1                  4                 no
abdc+b    1                             cb(100)       1                  2                 yes
abcb+c    4                             bc(001)       3                  7                 no
abdd+c    4                             dc(010)       1                  5                 yes
abcb+d    4                             bd(110)       0                  4                 yes
abdd+d    4                             dd(101)       2                  6                 no

The smallest total Hamming distance in the table is 2, and the corresponding path is still abdcb, whose sequence 111 110 010 100 matches the original encoded sequence. The decoded information sequence is therefore 1101: despite two bit errors, decoding is completed correctly.
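As a cross-check, since there are only 2^4 = 16 possible 4-bit messages in this example, the same answer can also be found exhaustively: encode every candidate message and pick the one whose codeword is closest to the received sequence. A short Python sketch (names are illustrative):

```python
from itertools import product

def conv_encode(bits):
    # (3,1,3) encoder equations from the text
    b1 = b2 = 0
    out = []
    for b in bits:
        out += [b, b ^ b2, b ^ b1 ^ b2]
        b1, b2 = b, b1
    return out

rx = [int(c) for c in "111010010110"]     # received sequence, bits 4 and 11 flipped
best = min(product((0, 1), repeat=4),
           key=lambda msg: sum(r != s for r, s in zip(rx, conv_encode(msg))))
decoded = "".join(map(str, best))
print(decoded)                            # 1101
```

This brute-force search grows exponentially with message length; the point of the Viterbi algorithm is to find the same minimum-distance path with effort linear in the sequence length.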


That’s all about convolutional coding and Viterbi decoding!
Reference material for this article:
Principles of Communication, by Fan Changxin and Cao Lina


Origin blog.csdn.net/weixin_42570192/article/details/131124661