Convolution codes - Coding/Decoding Tree codes and Trellis codes for multiple error correction
About This Presentation
In contrast to block codes, in a convolutional coding scheme an information frame is encoded together with the previous m information frames into a single codeword frame, thereby coupling successive codeword frames. Convolution codes are the most important tree codes; they satisfy certain additional linearity and time-invariance properties. The decoding procedure is mainly devoted to correcting errors in the first frame. The effect of these information symbols on subsequent codeword frames can be computed and subtracted from the subsequent codeword frames. Hence, in spite of infinitely long codewords, computations can be arranged so that the effect of earlier, properly decoded frames on the current frame is zero.
Size: 5.32 MB
Language: en
Added: Feb 08, 2016
Slides: 98 pages
Slide Content
CONVOLUTION CODES
• Difference between block codes and convolution codes:
• Block code: a block of n digits generated by the encoder in a particular time unit depends only on the block of k input message digits within that time unit.
  – Used mainly for error detection, as data are transmitted in blocks.
• Convolution code: a block of n digits generated by the encoder in a particular time unit depends not only on the block of k input message digits within that time unit but also on the previous (N−1) blocks of message digits (N > 1).
  – Mainly used for error correction.
TREE CODES and TRELLIS CODES
• Input stream is broken into segments of k₀ symbols each.
• Each segment is an information frame.
• Encoder = memory + logic circuit.
• The memory stores the m most recent information frames at a time, a total of mk₀ information symbols.
• At each input, the logic circuit computes a codeword.
• For each information frame (k₀ symbols), we get a codeword frame (n₀ symbols).
• The same information frame may not generate the same codeword frame. Why?
TREE CODES
[Block diagram: Information Frame → Encoder → Codeword Frame]
• The constraint length of a shift-register encoder is the number of symbols it can store in its memory: ν = mk₀.
TREE CODES - some facts/definitions
• Wordlength of a shift-register encoder: k = (m+1)k₀.
• Blocklength of a shift-register encoder: n = (m+1)n₀.
• (n₀, k₀) tree code rate: R = k₀/n₀ = k/n.
• An (n₀, k₀) tree code that is linear, time invariant and has finite wordlength k = (m+1)k₀ is called an (n, k) convolution code.
• An (n₀, k₀) tree code that is time invariant and has finite wordlength k = (m+1)k₀ is called an (n, k) sliding block code. It need not be linear.
Convolution Encoder
[Diagram: input → shift register → encoded output]
• One input bit is converted into two code bits.
• k₀ = 1, n₀ = 2, blocklength = (m+1)n₀ = 6.
• Constraint length = 2, code rate = 1/2.
Convolutional Encoder - Input and output bits
[Table (spanning two slides): for each combination of incoming bit (0 or 1) and current 2-bit encoder state (00, 01, 10, 11), the corresponding pair of outgoing code bits is listed.]
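The outgoing-bit entries of this table depend on the tap connections shown only in the slide's figure. As a hedged illustration, the Python sketch below prints such a table for a rate-1/2, memory-2 encoder assuming the common generators g₁ = 1 + D + D² and g₂ = 1 + D² (octal 7, 5); if the encoder in the figure uses different connections, its table will differ.

```python
# Hedged sketch: input/state/output table of a rate-1/2, memory-2 encoder.
# Assumption: generators g1 = 1 + D + D^2, g2 = 1 + D^2 (octal 7, 5);
# the actual connections in the slide's figure may differ.

def outgoing_bits(u, s1, s2):
    """Return the two output bits for input u and state (s1, s2),
    where s1 is the most recently stored bit."""
    v1 = u ^ s1 ^ s2   # taps of g1 = 1 + D + D^2
    v2 = u ^ s2        # taps of g2 = 1 + D^2
    return v1, v2

print("u  state  v1 v2")
for u in (0, 1):
    for s1 in (0, 1):
        for s2 in (0, 1):
            v1, v2 = outgoing_bits(u, s1, s2)
            print(f"{u}   {s1}{s2}     {v1}  {v2}")
```

These assumed generators give a free distance of 5, which is consistent with the value quoted for this encoder later in the deck.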
Convolutional Encoder - State Diagram
• The same bit gets coded differently depending upon previous bits.
• Make a state diagram of the encoder.
• A 2-bit encoder has 4 states.
• The l-th minimum distance dₗ* is the smallest Hamming distance between any two initial codeword segments l frames long that are not identical in the initial frame.
• If l = m+1, then the (m+1)-th minimum distance is the minimum distance of the code, d* (also written d_min).
• The code can correct t errors in the first l frames if dₗ* ≥ 2t + 1.
Convolutional encoder - Example
• m = 2, l = 3.
• d₁* = 2, d₂* = 3, d₃* = 5 ... comparing with the all-zeros sequence.
• d* = 5
• t = 2
Convolutional encoder - Free Distance
• The minimum distance between arbitrarily long encoded sequences (possibly infinite).
• The free distance is d_free = maxₗ dₗ*.
• It can be viewed as the minimum weight of a path that deviates from the all-zero path and later merges back into the all-zero path.
• Also d₁* ≤ d₂* ≤ ... ≤ d_free.
Convolutional encoder - Free Length
• The free length n_free is the length of the nonzero segment of the smallest-weight convolution codeword of nonzero weight.
• dₗ* = d_free when l·n₀ = n_free.
Convolutional encoder - Free Length
• Here:
• Free distance = 5
• Free length = 6
Matrix description of Convolution encoder - Example

  G(D) = | D+D²   D²   D+D² |
         | D²     D    D    |

Matrix description of Convolution encoder - Example
• The generator polynomials are:
• g₁₁(D) = D + D²
• g₁₂(D) = D²
• g₁₃(D) = D + D²
• g₂₁(D) = D²
• g₂₂(D) = D
• g₂₃(D) = D
Matrix description of Convolution encoder - Example
• Matrix G₀ has the coefficients of D⁰, i.e. the constants:

  G₀ = | 0 0 0 |
       | 0 0 0 |

• Matrix G₁ has the coefficients of D¹:

  G₁ = | 1 0 1 |
       | 0 1 1 |

• Matrix G₂ has the coefficients of D²:

  G₂ = | 1 1 1 |
       | 1 0 0 |
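These coefficient matrices can be read off mechanically from the generator polynomials. A short Python sketch (with the polynomials above stored as coefficient lists in increasing powers of D) shows the extraction:

```python
# Each generator polynomial g_ij(D) is stored as a coefficient list
# [c0, c1, c2] meaning c0 + c1*D + c2*D^2.
# Rows are indexed by input (k0 = 2), columns by output (n0 = 3).
G_of_D = [
    [[0, 1, 1], [0, 0, 1], [0, 1, 1]],   # g11 = D + D^2, g12 = D^2, g13 = D + D^2
    [[0, 0, 1], [0, 1, 0], [0, 1, 0]],   # g21 = D^2,     g22 = D,   g23 = D
]

m = 2  # encoder memory (highest power of D)

def coefficient_matrix(l):
    """G_l: the k0-by-n0 matrix of coefficients of D^l."""
    return [[poly[l] if l < len(poly) else 0 for poly in row] for row in G_of_D]

for l in range(m + 1):
    print(f"G{l} =", coefficient_matrix(l))
# Expected output:
# G0 = [[0, 0, 0], [0, 0, 0]]
# G1 = [[1, 0, 1], [0, 1, 1]]
# G2 = [[1, 1, 1], [1, 0, 0]]
```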
Matrix description of Convolution encoder - Example
• The generator matrix is only a representation of a convolution coder. It is actually not used to generate codes.

       | 000 101 111 000 000 ... |
       | 000 011 100 000 000 ... |
  G =  | 000 000 101 111 000 ... |
       | 000 000 011 100 000 ... |
       |  .   .   .   .   .      |
Matrix description of Convolutional encoder - General
• The generator matrix is only a representation of a convolution coder. It is actually not used to generate codes.

       | G₀  G₁  ...  Gₘ     0     0    0  ... |
  G =  | 0   G₀  ...  Gₘ₋₁   Gₘ    0    0  ... |
       | 0   0   G₀   ...    Gₘ₋₂  Gₘ₋₁ Gₘ ... |
VITERBI DECODING
• Advantages:
  – Highly satisfactory bit error performance.
  – High speed of operation.
  – Ease of implementation.
  – Low cost.
VITERBI DECODING
• Given: the trellis diagram of a rate-1/3 code.
• Transmitted word: 000 000 000 000 000 000 ...
• Received word: 010 000 100 001 ...
[Trellis diagram: states on the vertical axis, time on the horizontal axis; the trellis continues to infinity.]
VITERBI DECODING
• Procedure:
• Divide the incoming stream into groups of three: 010 000 100 001 ...
• Find the Hamming distances between the first group and the two options at the first node. These are called branch metrics.
• Take the second group and continue with the second node. Find the total Hamming distance, called the path metric.
• At any node where two branches meet, discard the branch with the higher path metric.
• Continue... (a code sketch of this procedure follows).
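The slides give the rate-1/3 trellis only as a figure, so the hedged Python sketch below assumes a memory-2 encoder with generators 1 + D + D², 1 + D + D², 1 + D² (chosen so that the two branches at the first node output 000 and 111, as above); the actual trellis labels may differ. It illustrates the add-compare-select bookkeeping just described.

```python
# Hedged sketch of hard-decision Viterbi decoding for a rate-1/3,
# memory-2 code. Assumption: generators 1+D+D^2, 1+D+D^2, 1+D^2;
# the slides' trellis figure may use different taps.

def branch_output(u, state):
    s1, s2 = state
    return (u ^ s1 ^ s2, u ^ s1 ^ s2, u ^ s2)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def viterbi(received_groups):
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    # path metric and surviving input sequence for each state
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    path = {s: [] for s in states}
    for r in received_groups:
        new_metric, new_path = {}, {}
        for s in states:
            for u in (0, 1):                    # branches leaving state s
                nxt = (u, s[0])                 # shift-register update
                bm = hamming(branch_output(u, s), r)
                cand = metric[s] + bm           # candidate path metric
                if cand < new_metric.get(nxt, float("inf")):
                    new_metric[nxt], new_path[nxt] = cand, path[s] + [u]
        metric, path = new_metric, new_path
    best = min(metric, key=metric.get)          # survivor with smallest metric
    return path[best]

groups = [(0, 1, 0), (0, 0, 0), (1, 0, 0), (0, 0, 1)]  # received 010 000 100 001
print(viterbi(groups))                                  # decoded input bits
```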
VITERBI DECODING
• First group: 010
• d(000, 010) = 1, so BM1 = 1
• d(111, 010) = 2, so BM2 = 2
[Trellis diagram with branch metrics along the time axis.]
VITERBI DECODING
• Second group: 000
• Branch metrics and path metrics (circled) are found.
• At each node, compare the total path metrics.
[Trellis diagram along the time axis.]
VITERBI DECODING
• Third group: 100
• At each node, compare the total path metrics.
• Discard the longer path and retain the survivor.

Convolutional Encoder - Shift-Register Implementation
• The code is generated by combining the outputs of an m-stage shift register through n EX-ORs.
• Number of bits in the message stream = L.
• Number of bits in the output code = n(L + m).
[Encoder diagram: input data stream b1 (MSB in first) feeding an m-stage shift register; the EX-OR outputs v1, v2, v3 are read out by a commutator.]
• Input stream b1 = 10110 (MSB in first).
• L = 5, n = 3, m = 4.
• Code bits = n(L + m) = 27.
• Rate of code = 1/3.
• Code: 111 010 100 110 001 000 011 000 000.
• The code depends on:
  – the number of shift-register stages m,
  – the number of EX-OR summers n,
  – the number of input bits L,
  – the manner in which they are all connected (see the sketch below).
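Since the connections are given only in the figure, the following Python sketch shows the general recipe with placeholder tap vectors (an assumption, not the figure's actual wiring): an m-stage register, n EX-OR summers, and m flushing zeros, giving n(L + m) output bits.

```python
# Hedged sketch of a generic convolutional encoder: m shift-register
# stages, n EX-OR summers, message of length L, output of n*(L+m) bits.
# The tap vectors below are illustrative placeholders, not the
# connections drawn in the slide's figure.

def convolve_encode(message_bits, taps, m):
    """taps: list of n tap vectors, each of length m+1;
    taps[j][0] applies to the current input bit, taps[j][i] to stage i."""
    register = [0] * m
    out = []
    for bit in message_bits + [0] * m:          # append m zeros to flush
        window = [bit] + register
        for tap in taps:                        # one EX-OR summer per output
            out.append(sum(t & w for t, w in zip(tap, window)) % 2)
        register = [bit] + register[:-1]        # shift
    return out

msg = [1, 0, 1, 1, 0]                           # L = 5, MSB in first
taps = [[1, 1, 0, 0, 1],                        # n = 3 summers, m = 4 stages
        [1, 0, 1, 1, 0],
        [1, 1, 1, 0, 1]]
code = convolve_encode(msg, taps, m=4)
print(len(code))                                # 3 * (5 + 4) = 27 code bits
```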
Code Tree
• Number of code words = 2^(RL), where
  – L = length of the information sequence,
  – R = number of input bits at each clock.
• The L+m+1 tree levels are labeled from 0 to L+m.
• The leftmost node A in the tree is called the origin node.
• 2^k branches leave each node in the first L levels for an (n,k,m) code; this part is called the dividing part of the tree.
• The upper branch leaving each node in the dividing part of the tree represents input uₗ = 0 while the lower branch represents input uₗ = 1 (or the opposite convention).
• After L time units, there is only one branch leaving each node.
• This represents input 0 for t = L, L+1, ..., L+m-1, with the encoder returning to the all-zero state (the tail of the tree).
Code Tree
• The 2^(RL) rightmost nodes are called terminal nodes.
• Each branch is labeled with its n output bits.
• Each of the 2^(RL) codewords of length N is represented by a totally distinct path through the tree.
• Each input message bit affects n × m = 3 × 4 = 12 bits of the code.
• At node A, we must check the first 12 bits only.
• There are 16 such unique 12-bit combinations from A.
• Compare the first 12 received bits with these 16 combinations and select the best match (a sketch of this matching step follows).
• If the best match goes up the tree at A, the first bit is decoded as "0".
• If it goes down the tree at A, the first bit is decoded as "1".
• Discard the first 3 bits of the input code and repeat the same at B, comparing the next 12 bits with the next 16 combinations ... and so on.
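A compact way to see the 16-way comparison: enumerate the 2⁴ = 16 possible 4-bit input prefixes, encode each to its 12-bit prefix, and pick the one closest in Hamming distance to the received 12 bits. The Python sketch below does this with the same placeholder taps as the encoder sketch above (an assumption; the real tree uses the figure's connections).

```python
# Hedged sketch of the tree-decoding step at node A: compare the first
# n*m = 12 received bits against the 2^4 = 16 candidate prefixes.
# Encoder taps are illustrative placeholders, as in the encoder sketch.
from itertools import product

def encode_prefix(bits, taps, m):
    register = [0] * m
    out = []
    for bit in bits:
        window = [bit] + register
        for tap in taps:
            out.append(sum(t & w for t, w in zip(tap, window)) % 2)
        register = [bit] + register[:-1]
    return out

taps = [[1, 1, 0, 0, 1], [1, 0, 1, 1, 0], [1, 1, 1, 0, 1]]  # n = 3, m = 4

def decode_first_bit(received12, taps, m=4):
    best = min(product((0, 1), repeat=m),
               key=lambda prefix: sum(a != b for a, b in
                                      zip(encode_prefix(list(prefix), taps, m),
                                          received12)))
    return best[0]   # first information bit of the best-matching prefix

received12 = [1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0]   # example 12 received bits
print(decode_first_bit(received12, taps))
```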
[Code tree figure: each branch is labeled with its 3 output bits; levels 0 through 7 are marked along the bottom.]
• For a BSC with transition probability p, the bit metrics are
  M(rₗ|vₗ) = log₂[P(rₗ|vₗ) / P(rₗ)] − R
           = log₂ P(rₗ|vₗ) − log₂ P(rₗ) − R
• P(rₗ) = 1/2.
• M(rₗ|vₗ) = log₂(2p) − R          if rₗ ≠ vₗ
           = log₂(2(1−p)) − R      if rₗ = vₗ
The Stack Algorithm (ZJ Algorithm)
• Example: if R = 1/3 and p = 0.10, find M(rₗ|vₗ).
  M(rₗ|vₗ) = −2.65 if rₗ ≠ vₗ
           =  0.52 if rₗ = vₗ
• The metric table is shown below.
• The metric table is usually scaled by a positive constant so that the values are approximated by integers. The scaled table is:
  rₗ = vₗ : +1        rₗ ≠ vₗ : −5
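The few lines of Python below reproduce these numbers from the bit-metric formula; the scale factor of 1.9 is an assumption chosen here because it maps the pair onto the −5/+1 integers used in the worked example.

```python
# Fano bit metrics for a BSC with R = 1/3, p = 0.10, plus integer scaling.
import math

R, p = 1 / 3, 0.10
m_mismatch = math.log2(2 * p) - R          # r != v : about -2.655 (quoted as -2.65)
m_match    = math.log2(2 * (1 - p)) - R    # r == v : about  0.515 (quoted as  0.52)
print(m_mismatch, m_match)

scale = 1.9                                # assumed: makes both values round to integers
print(round(scale * m_mismatch), round(scale * m_match))   # -5 and +1
```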
The Stack Algorithm (ZJ Algorithm) - Method
• An ordered list, or stack, of previously examined paths of different lengths is kept in storage.
• Each stack entry contains a path along with its metric.
• Paths are listed in the stack in order of decreasing metric.
• At each decoding step, the top path of the stack is extended by computing the branch metrics of its 2^k succeeding branches and adding these to the metric of the top path to form 2^k new paths, called the successors of the top path.
• The top path is then deleted from the stack, its 2^k successors are inserted, and the stack is rearranged in decreasing order of metric values.
• The algorithm terminates when the top path reaches the end of the tree.
• When the algorithm terminates, the top path in the stack is taken as the decoded path.
• Make a flowchart for the stack algorithm (a code sketch is given below).
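As a companion to the flowchart, here is a hedged Python sketch of the stack procedure for the rate-1/3 example, using the scaled +1/−5 bit metrics. The code-tree branch outputs come from the same assumed memory-2 generators as in the Viterbi sketch, and the tail of the tree is ignored, so the stack contents are illustrative rather than a reproduction of the slides' tables.

```python
# Hedged sketch of the stack (ZJ) algorithm for a rate-1/3, memory-2
# code tree. Branch outputs assume generators 1+D+D^2, 1+D+D^2, 1+D^2;
# bit metrics are the scaled values +1 (match) and -5 (mismatch).

def branch_output(u, state):
    s1, s2 = state
    return (u ^ s1 ^ s2, u ^ s1 ^ s2, u ^ s2)

def branch_metric(out, recv):
    return sum(1 if a == b else -5 for a, b in zip(out, recv))

def stack_decode(received_groups):
    # each stack entry: (path metric, input path, encoder state)
    stack = [(0, [], (0, 0))]
    while True:
        metric, path, state = stack.pop(0)          # top (best) path
        if len(path) == len(received_groups):       # reached end of tree
            return path
        recv = received_groups[len(path)]
        for u in (0, 1):                            # extend by both branches
            bm = branch_metric(branch_output(u, state), recv)
            stack.append((metric + bm, path + [u], (u, state[0])))
        stack.sort(key=lambda e: e[0], reverse=True)  # reorder by metric

r = [(0, 1, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0),
     (1, 0, 0), (1, 0, 1), (0, 1, 1)]               # received sequence from the example
print(stack_decode(r))
```

With these assumed generators, the first two extension steps reproduce the −3/−9 and −6/−12 path metrics worked out in the example slides.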
The Stack Algorithm - more facts
• In the dividing part of the tree, there are 2^k new metrics to be computed at each step.
• In the tail of the tree, only one new metric is computed.
• For (n,1,m) codes, the size of the stack increases by one for each decoding step, except in the tail of the tree, where the size of the stack does not increase at all.
• Since the dividing part of the tree is much longer than the tail (L >> m), the size of the stack is roughly equal to the number of decoding steps when the algorithm terminates.
The Stack Algorithm - Flow Chart
[Flow chart: compute the metrics of the successors of the top path → delete the top path from the stack → insert the successors and reorder the stack according to metric values → repeat until the top path reaches the end of the tree.]
The Stack Algorithm - Example
• Consider the earlier R = 1/3 code tree and apply the stack algorithm. Assume a codeword is transmitted over a BSC with p = 0.10 and the following sequence is received. Using the metric table, show the contents of the stack stepwise and decode the correct word.
• r = (010, 010, 001, 110, 100, 101, 011)
• Take the first n = 3 bits at the first dividing part and compare them with the two branches to find the branch metrics.
• 010 compared with 111 on the upper branch (input 1) gives 2 discrepancies and 1 match.
• From the table, the branch metric = 2×(−5) + 1×(+1) = −9.
• 010 compared with 000 on the lower branch (input 0) gives 1 discrepancy and 2 matches.
• From the table, the branch metric = 1×(−5) + 2×(+1) = −3.
The Stack Algorithm - Example, Steps 1 and 2

  Step 1 stack:        Step 2 stack:
  0  (−3)              00 (−6)
  1  (−9)              1  (−9)
                       01 (−12)
• The step-1 paths and path metrics are arranged in decreasing order in the stack above.
• The top path is extended into two paths, and the 2nd received group 010 is compared with 111 (input 1) and 000 (input 0). The branch metric is added to the previous path metric.
• Upper branch: 1 match and 2 discrepancies.
  Path metric = −3 + (1×(+1) + 2×(−5)) = −12, path (01).
• Lower branch: 2 matches and 1 discrepancy.
  Path metric = −3 + (2×(+1) + 1×(−5)) = −6, path (00).
• The table is modified with the new path metrics in decreasing order.
• The procedure continues.
• A decoding step for the Viterbi algorithm is the compare-and-select operation performed for each state in the trellis diagram beyond time unit m.
• It requires 15 computations to decode the required sequence in Example 1, as compared to only 10 computations in the stack algorithm.
• This advantage of the stack algorithm over Viterbi is more prominent when the fraction of errors is not much greater than the channel transition probability.
• Here 2 out of 21 bits are in error: 2/21 = 0.095 ≈ p = 0.10.
• If the sequence is too noisy, the stack algorithm requires too many computations, compared to, again, 15 for Viterbi.
The Stack Algorithm vs Viterbi Algorithm
• The most important difference between sequential decoding and Viterbi decoding is that the number of computations performed by a sequential decoder is a random variable, while the computational load of the Viterbi algorithm is fixed.
• A very noisy sequence may require a larger number of computations in the stack algorithm than in Viterbi.
• As very noisy sequences occur very rarely, the average number of computations performed by a sequential decoder is normally much less than the fixed number performed by the Viterbi algorithm.
The Stack Algorithm - Limitations
• Limitation 1:
• The decoder traces a random path back and forth through the code tree, jumping from node to node.
• Hence the decoder must have an input buffer to store incoming blocks of the sequence waiting to be processed.
• The speed factor of the decoder is the ratio of its computation speed to the speed of the incoming data, in branches received per second.
• Depending on the speed factor of the decoder, long searches can cause the input buffer to overflow, resulting in loss of data, or erasure.
• The buffer accepts data at a fixed rate of 1/(nT) branches per second, where T is the time interval allotted for each bit.
• Small but non-negligible erasure probabilities are common in sequential decoding.
The Stack Algorithm - Limitations
• The average number of computations and the erasure probability are independent of the encoder memory K.
• Hence codes with large K and large free distance may be used, making undetected errors extremely unlikely.
• The major limitation then is erasures due to buffer overflow.
• This can actually be beneficial!
• Erasures usually occur when the received sequence is very noisy.
• It is more desirable to erase such a frame than to decode it incorrectly.
• Viterbi will always decode, which may result in high errors.
• A sequential decoder with an ARQ system can help by retransmitting erased frames (an advantage).
The Stack Algorithm - Limitations
• Limitation 2: The size of the stack is finite.
• The stack may fill up before decoding is complete.
• Remedy: a typical stack size is 1000 entries.
• The path at the bottom of the stack is pushed out of the stack at the next entry.
• The probability that a path on the bottom of the stack would ever recover to reach the top of the stack and be extended is so small that the loss in performance is negligible.
The Stack Algorithm - Limitations
• Limitation 3: Reordering the stack after each decoding step.
• This can become very time consuming as the number of entries increases.
• It severely limits decoding speed.
• Remedy: the stack-bucket algorithm by Jelinek.
• The contents of the stack are not reordered after each step.
• The range of possible metric values is quantized into a fixed number of intervals.
• Each metric interval is assigned a certain number of locations in storage, called a bucket.
• When a path is extended, it is deleted from storage and each new path is inserted as the top entry in the bucket that includes its metric value.
The Stack Algorithm - Limitations
• No reordering of paths within a bucket is required.
• The top path in the top nonempty bucket is chosen to be extended.
• A computation now involves only finding the correct bucket in which to place each new path, which is independent of the previously extended paths.
• This is faster than reordering an increasingly larger stack.
• Disadvantage: it is not always the best path that gets extended, but only a very good path, i.e. a path in the top nonempty bucket (the best bucket).
• Typically, if the bucket quantization is not too coarse and the received sequence is not too noisy, the best bucket contains the best path.
• Degradation from the original algorithm is very slight, and the method is fast. (A sketch of the bucket bookkeeping follows.)
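A minimal Python sketch of this bucket bookkeeping, assuming an integer path metric and an illustrative bucket width (both assumptions, not values from the slides): new paths are dropped on top of the bucket matching their metric, and the next path to extend is taken from the highest nonempty bucket.

```python
# Hedged sketch of stack-bucket bookkeeping: the metric range is quantized
# into intervals, each interval a bucket; new paths go on top of their
# bucket, and the path extended next is the top of the highest nonempty bucket.

BUCKET_WIDTH = 5          # assumed quantization interval for the integer metric

def bucket_index(metric):
    return metric // BUCKET_WIDTH          # which metric interval the path falls in

def insert(buckets, path, metric):
    buckets.setdefault(bucket_index(metric), []).append((metric, path))

def pop_best(buckets):
    top = max(buckets)                     # highest nonempty bucket
    metric, path = buckets[top].pop()      # top entry; no sorting inside the bucket
    if not buckets[top]:
        del buckets[top]
    return metric, path

buckets = {}
insert(buckets, [0], -3)
insert(buckets, [1], -9)
insert(buckets, [0, 0], -6)
print(pop_best(buckets))   # a path from the best bucket, not necessarily the best path
```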
The FANO Algorithm
• Sacrifices speed with respect to the stack algorithm, but no storage is required.
• It requires more time, as it extends more nodes than the stack algorithm, but stack reordering is absent, which saves time.
• The decoder examines a sequence of nodes in the tree, from the origin node to the terminal nodes.
• The decoder never jumps from node to node, as in the stack algorithm.
• The decoder always moves to an adjacent node:
  – either forward, to one of the 2^k nodes leaving the present node,
  – or backward, to the node leading to the present node.
• The metric of the next node to be examined can be computed by adding (or subtracting) the metric of the connecting branch to (from) the metric of the present node.
The FANO Algorithm
• This eliminates the need to store the metrics of previously examined nodes, as is done in the stack algorithm.
• But some nodes are visited more than once, requiring recomputation of their metric values.
The FANO Algorithm - Method
• The decoder moves forward through the tree as long as the examined metric value continues to increase.
• When the metric value dips below a threshold, the decoder backs up and begins to examine other paths.
• If no path can be found with a metric value above the threshold, the threshold is lowered and the decoder attempts to move forward again with the lower threshold.
• Each time a given node is visited in the forward direction, the threshold is lower than on the previous visit to that node.
• This prevents looping in the algorithm.
• The decoder eventually reaches the end of the tree, terminating the algorithm.
The FANO Algorithm - Detailed Method
• The decoder starts at the origin node with the threshold T = 0 and the metric value M = 0.
• It looks forward to find the best of the 2^k succeeding nodes, the one with the largest metric.
• If the metric of the forward node M_F ≥ T, the decoder moves to this node.
• If the END has not been reached and this node has been examined for the first time, "threshold tightening" is performed:
  – increase T by the largest possible multiple of a threshold increment Δ such that T remains below the current metric (see the sketch below).
• If this node has been traversed before, no threshold tightening is performed.
• The decoder again looks forward to the best succeeding node.
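The tightening rule can be written in one line; the small sketch below (with illustrative numbers) raises T by the largest multiple of Δ that does not push it above the current metric M.

```python
# Hedged sketch of Fano threshold tightening: raise T by the largest
# multiple of the increment DELTA such that T does not exceed the
# current metric M.

def tighten(T, M, delta):
    if M >= T:
        T += delta * ((M - T) // delta)
    return T

print(tighten(T=0, M=11, delta=4))   # 0 + 2*4 = 8  (the next multiple, 12, would exceed 11)
print(tighten(T=-8, M=-3, delta=4))  # -8 + 1*4 = -4
```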
The FANO Algorithm - Detailed Method
• If M_F < T, the decoder then looks backward to the preceding node.
• If the metric of the backward node being examined M_B < T (both < T), T is lowered by Δ and the forward look for the best node is repeated.
• If M_F < T and M_B ≥ T, the decoder moves back to the preceding node, say P.
  – If this backward move was already from the worst of the 2^k nodes succeeding node P, the decoder again looks back to the node preceding node P.
• The number of computations depends upon the threshold increment Δ.
• If Δ is too small, a large number of computations is required.
• Δ cannot be raised indefinitely without affecting the error probability.
• If Δ is too large, T may be lowered below the minimum metric of the maximum-likelihood path along with several other paths, making it possible for any of these to be decoded before the maximum-likelihood path.
• In addition, the number of computations also increases, since more "bad" paths can be followed if T is lowered too much.
The FANO Algorithm - Limitations
• For moderate rates, the Fano algorithm is faster than the stack-bucket algorithm, as it is free from stack control problems.
• For higher rates, the stack-bucket algorithm can be faster, as the Fano algorithm would have an additional computational load.
• As it does not require storage, the Fano algorithm is selected in practical implementations of sequential decoding.