Convolution codes - Coding/Decoding Tree codes and Trellis codes for multiple error correction


About This Presentation

In contrast to block codes, a convolutional coding scheme encodes an information frame together with the previous m information frames into a single codeword frame, thereby coupling successive codeword frames. Convolutional codes are the most important tree codes, satisfying certain additional linearity and time-invariance properties.


Slide Content

CONVOLUTION CODES

• Difference between block codes and convolution codes:

• Block code: the block of n digits generated by the encoder in a particular time unit depends only on the block of k input message digits within that time unit.

  > Used mainly for error detection, as data are transmitted in blocks.

• Convolution code: the block of n digits generated by the encoder in a particular time unit depends not only on the block of k input message digits within that time unit but also on the previous (N−1) blocks of message digits (N > 1).

• Mainly used for error correction.

TREE CODES and TRELLIS CODES

• The input stream is broken into segments of k₀ symbols each.
• Each segment is an information frame.
• Encoder = memory + logic circuit.
• The memory stores the m most recent information frames at a time, a total of mk₀ information symbols.
• At each input, the logic circuit computes a codeword.
• For each information frame (k₀ symbols), we get a codeword frame (n₀ symbols).
• The same information frame may not generate the same codeword frame. Why?

TREE CODES

[Figure: information frames entering the encoder, which outputs codeword frames.]

• The constraint length of a shift register encoder is the number of symbols it can store in its memory:
• v = mk₀

TREE CODES - some facts/definitions

• Wordlength of a shift register encoder: k = (m+1)k₀.
• Blocklength of a shift register encoder: n = (m+1)n₀.
• An (n₀,k₀) code tree has rate = k₀/n₀ = k/n.
• An (n₀,k₀) tree code that is linear, time invariant, and has finite wordlength k = (m+1)k₀ is called an (n,k) convolution code.
• An (n₀,k₀) tree code that is time invariant and has finite wordlength k = (m+1)k₀ is called an (n,k) sliding block code. It need not be linear.

Convolution Encoder

[Figure: input shift register and encoded output.]

• One input bit converts to a two-bit code.
• k₀ = 1, n₀ = 2, blocklength = (m+1)n₀ = 6.
• Constraint length = 2, code rate = 1/2.

Convolutional Encoder - Input and output bits

The state (s1 s2) holds the two previous input bits, s1 the more recent; the outgoing bits follow the trellis example of the next slides.

  Incoming bit   Current state (s1 s2)   Outgoing bits
       0                0 0                  0 0
       1                0 0                  1 1
       0                0 1                  1 1
       1                0 1                  0 0
       0                1 0                  0 1
       1                1 0                  1 0
       0                1 1                  1 0
       1                1 1                  0 1
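The table can be checked mechanically. Below is a minimal Python sketch, assuming the generator polynomials g₁₁(D) = D²+D+1 and g₁₂(D) = D²+1 introduced later in the deck; the order of the two output bits is an assumption chosen to match the trellis example (input 1001101 → 11 01 11 11 10 10 00).

```python
# Sketch: regenerate the input/output table of the (2,1,2) encoder.
# State (s1, s2) holds the two previous input bits, s1 the more recent.
# Output bit order is an assumption matched to the trellis example.
def step(u, s1, s2):
    v1 = u ^ s2            # tap pattern from g(D) = D^2 + 1
    v2 = u ^ s1 ^ s2       # tap pattern from g(D) = D^2 + D + 1
    return (v1, v2), (u, s1)        # output pair, next state

for s1 in (0, 1):
    for s2 in (0, 1):
        for u in (0, 1):
            (v1, v2), (n1, n2) = step(u, s1, s2)
            print(f"in={u} state={s1}{s2} -> out={v1}{v2} next={n1}{n2}")
```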

Convolutional Encoder - State Diagram

• The same bit gets coded differently depending upon the previous bits.
• Make the state diagram of the encoder.
• A 2-bit encoder memory gives 4 states.

Convolutional Encoder - State Diagram

[Figure: state diagram of the 4-state encoder.]

Convolutional Encoder - Trellis codes/Diagram

• The 4 nodes are the 4 states. The code rate is 1/2.
• In the trellis diagram, follow the upper or lower branch to the next node if the input is 0 or 1.
• Traverse to the next state (node) with this input, writing the code on the branch.
• Continue...
• Complete the diagram.

Convolutional Encoder - Trellis Diagram

[Figure: trellis diagram, states vs time.]

Trellis Diagram - Input 1001101
Output - 11 01 11 11 10 10 00


Convolutional Encoder - Polynomial description

• Any information sequence i₀, i₁, i₂, i₃, ... can be expressed in terms of the delay element D as
  i(D) = i₀ + i₁D + i₂D² + i₃D³ + ...

• 10100011 will be 1 + D² + D⁶ + D⁷

Convolutional Encoder - Polynomial description

• Similarly, the encoder can also be expressed as polynomials of D.
• Using the previous encoder, k₀ = 1, n₀ = 2:
  - first bit of code: g₁₁(D) = D² + D + 1
  - second bit of code: g₁₂(D) = D² + 1
  - G(D) = [g₁ⱼ(D)] = [D² + D + 1   D² + 1]
• cⱼ(D) = Σᵢ iᵢ(D) gᵢⱼ(D)
• C(D) = i(D) G(D)
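As a concrete illustration of C(D) = i(D)G(D), here is a short Python sketch that multiplies polynomials over GF(2) and interleaves the two code sequences. The bit order within each frame is an assumption chosen to match the trellis example, and the last two frames come from flushing the (m = 2)-stage memory.

```python
# Sketch: encoding as polynomial multiplication over GF(2).
# A polynomial is a list of coefficients, index = power of D.
def poly_mul_gf2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

i_poly = [1, 0, 0, 1, 1, 0, 1]   # data 1001101 -> i(D) = 1 + D^3 + D^4 + D^6
g11 = [1, 1, 1]                  # g11(D) = 1 + D + D^2
g12 = [1, 0, 1]                  # g12(D) = 1 + D^2
c1 = poly_mul_gf2(i_poly, g11)
c2 = poly_mul_gf2(i_poly, g12)
# Interleave the two code sequences frame by frame.
frames = [f"{a}{b}" for a, b in zip(c2, c1)]
print(" ".join(frames))   # 11 01 11 11 10 10 00 01 11 (last 2 frames: tail)
```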

Convolutional encoder - Example
Find the generator polynomial matrix, the rate of the code, and the trellis diagram.

[Figure: encoder circuit.]

Convolutional encoder - Example

• k₀ = 1, n₀ = 2, rate = 1/2.
• G(D) = [1   D² + 1]
• Called a systematic convolution encoder, as the first k₀ bits of each codeword frame are the same as the data.

Convolutional encoder - Example
Find the generator polynomial matrix, the rate of the code, and the trellis diagram.

[Figure: encoder circuit.]

Convolutional encoder - Example

• k₀ = 2, n₀ = 3, rate = 2/3.

  G(D) = | g₁₁(D)  g₁₂(D)  g₁₃(D) |   =   | 1   0   D² + D + 1 |
         | g₂₁(D)  g₂₂(D)  g₂₃(D) |       | 0   1   0          |

Convolutional encoder - Formal definitions

• Wordlength: k = k₀ · maxᵢ,ⱼ {deg gᵢⱼ(D) + 1}.
• Blocklength: n = n₀ · maxᵢ,ⱼ {deg gᵢⱼ(D) + 1}.
• Constraint length: v = Σᵢ maxⱼ {deg gᵢⱼ(D)}.
• The parity check matrix H(D) is an (n₀−k₀) by n₀ matrix of polynomials which satisfies
  G(D) H(D)ᵀ = 0.
• The syndrome polynomial s(D) is an (n₀−k₀)-component row vector
  s(D) = v(D) H(D)ᵀ.

Convolutional encoder - Formal definitions

• A systematic encoder has G(D) = [I | P(D)],
• where I is the k₀ by k₀ identity matrix and P(D) is the k₀ by (n₀−k₀) matrix of parity check polynomials.
• H(D) = [P(D)ᵀ | I] (over GF(2), −P = P),
• where I is the (n₀−k₀) by (n₀−k₀) identity matrix.
• Also G(D) H(D)ᵀ = 0.

Convolutional encoder - Error correction

• The l-th minimum distance d_l* is the smallest Hamming distance between any two initial codeword segments l frames long that are not identical in the initial frame.
• If l = m+1, then the (m+1)-th minimum distance is the minimum distance of the code, d* or d_min.
• The code can correct t errors in the first l frames if
• d_l* ≥ 2t + 1.

Convolutional encoder - Example

• m = 2, l = 3.
• d₁* = 2, d₂* = 3, d₃* = 5 ... comparing with the all-zeros codeword.
• d* = 5
• t = 2

Convolutional encoder - Free Distance

• Minimum distance between arbitrarily long encoded sequences (possibly infinite).
• Free distance: d_free = maxₗ {d_l*}.
• It is the minimum weight of a path that deviates from the all-zero path and later merges back into the all-zero path.
• Also d₁* ≤ d₂* ≤ ... ≤ d_free.

Convolutional encoder - Free Length

• The free length n_free is the length of the nonzero segment of a smallest-weight convolution codeword of nonzero weight.
• d_l* = d_free once l·n₀ ≥ n_free.

Convolutional encoder - Free Length

[Figure: trellis path of the minimum-weight codeword.]

• Here:
• Free distance = 5
• Free length = 6

Matrix description of Convolution encoder - Example

  G(D) = | D + D²   D²   D + D² |
         | D²       D    D      |

Matrix description of Convolution encoder - Example

• The generator polynomials are:
• g₁₁(D) = D + D²
• g₁₂(D) = D²
• g₁₃(D) = D + D²
• g₂₁(D) = D²
• g₂₂(D) = D
• g₂₃(D) = D

Matrix description of Convolution encoder - Example

• Matrix G₀ has the coefficients of D⁰, i.e. the constants:

  G₀ = | 0 0 0 |
       | 0 0 0 |

• Matrix G₁ has the coefficients of D¹:

  G₁ = | 1 0 1 |
       | 0 1 1 |

• Matrix G₂ has the coefficients of D²:

  G₂ = | 1 1 1 |
       | 1 0 0 |

Matrix description of Convolution encoder - Example

• The generator matrix is only a representation of a convolution coder. It is not actually used to generate codes.

  G = | G₀ G₁ G₂            |
      |    G₀ G₁ G₂         |
      |       G₀ G₁ G₂      |
      |          ...        |

  with the G₀, G₁, G₂ found above substituted as 2-by-3 binary blocks.

Matrix description of Convolutional encoder - General

• The generator matrix is only a representation of a convolution coder. It is not actually used to generate codes.

  G = | G₀ G₁ ... G_m  0    0    0  ... |
      | 0  G₀ G₁  ...  G_m  0    0  ... |
      | 0  0  G₀  G₁   ...  G_m  0  ... |
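The split of G(D) into the constant matrices G₀, ..., G_m can be automated. A minimal sketch, using the 2×3 example above with polynomial entries stored as coefficient lists:

```python
# Sketch: extract G_0 ... G_m from a polynomial generator matrix G(D).
# Each entry of GD is a coefficient list, index = power of D.
GD = [[[0, 1, 1], [0, 0, 1], [0, 1, 1]],   # D+D^2, D^2, D+D^2
      [[0, 0, 1], [0, 1, 0], [0, 1, 0]]]   # D^2,   D,   D

m = max(len(p) - 1 for row in GD for p in row)   # encoder memory
G = [[[p[d] if d < len(p) else 0 for p in row] for row in GD]
     for d in range(m + 1)]                      # G[d] = coefficients of D^d

for d, Gd in enumerate(G):
    print(f"G_{d} =", Gd)
# G_0 = [[0,0,0],[0,0,0]]  G_1 = [[1,0,1],[0,1,1]]  G_2 = [[1,1,1],[1,0,0]]
```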


VITERBI DECODING

• Advantages:
• Highly satisfactory bit error performance.
• High speed of operation.
• Ease of implementation.
• Low cost.

VITERBI DECODING

• Given: the trellis diagram of a rate-1/3 code.
• Transmitted word - 000000000000000000000...
• Received word - 010000100001...

[Figure: trellis diagram, states vs time axis; the all-zero path continues to infinity.]

VITERBI DECODING

• Procedure (a decoder sketch follows this list):
• Divide the incoming stream into groups of three:
  010 000 100 001...
• Find the Hamming distances between the first group and the two options at the first node. These are called Branch Metrics.
• Take the second group and continue with the second node. Find the total Hamming distance, called the Path Metric.
• At any node where two branches meet, discard the branch with the higher Path Metric.
• Continue...
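A minimal hard-decision Viterbi sketch in Python. Since the rate-1/3 trellis of this example survives only as a figure, the sketch assumes the (2,1,2) encoder defined earlier in the deck; the procedure (branch metric, path metric, survivor selection) is exactly the one listed above.

```python
# Sketch: hard-decision Viterbi decoding on the (2,1,2) trellis.
def step(u, s1, s2):                     # same encoder as the earlier sketch
    return (u ^ s2, u ^ s1 ^ s2), (u, s1)

def viterbi(received):
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    pm = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for r in received:
        new_pm = {s: float("inf") for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            for u in (0, 1):
                out, nxt = step(u, *s)
                bm = (out[0] ^ r[0]) + (out[1] ^ r[1])   # branch metric
                if pm[s] + bm < new_pm[nxt]:             # keep the survivor
                    new_pm[nxt] = pm[s] + bm
                    new_paths[nxt] = paths[s] + [u]
        pm, paths = new_pm, new_paths
    return paths[min(pm, key=pm.get)]    # data bits along the best path

# One flipped bit (first pair) is corrected back to 1001101.
r = [(0, 1), (0, 1), (1, 1), (1, 1), (1, 0), (1, 0), (0, 0)]
print(viterbi(r))
```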

VITERBI DECODING
• First group - 010
• d(000,010) = 1, so BM1 = 1
• d(111,010) = 2, so BM2 = 2

[Figure: first trellis stage with branch metrics.]

VITERBI DECODING
• Second group - 000
• Branch Metrics and Path Metrics (circled) are found.
• At each node of the trellis, compare the total path metrics.

[Figure: second trellis stage.]

VITERBI DECODING
• Third group - 100
• At each node of the trellis, compare the total path metrics.
• Discard the longer path and retain the survivor.

[Figure: third trellis stage with survivors.]

VITERBI DECODING

• Continue...

[Figure: completed trellis, states vs time axis.]

Assignments:

• 1) Design a rate-1/2 convolutional encoder with constraint length v = 4 and d* = 6.
  - Construct the state diagram for this encoder.
  - Construct the trellis diagram.
  - What is d_free for this code?
  - Give the generator matrix G.

• 2) For each of the given encoders, construct the trellis diagram. Find k₀, n₀, v, m, d*, and d_free. Find G.

• 3) Write k, n, v, m, and the rate R for this coder.
  - Give the generator polynomial matrix G(D), the generator matrix G, and the parity check matrix H.
  - Find d_free, d*, n_free.
  - Encode 101 001 001 010 000.

CONVOLUTION CODES - using Code Tree

[Figure: I/P data stream b1 (MSB in first) entering a 4-stage shift register; a commutator samples v1, v2, v3; code output b0.]

• M1, M2, M3, M4 are 1-bit storage elements feeding 3 summers as per the design of the code.
• v1, v2, v3 are commutator pins, to be sampled one after the other.
• v1 = s1
• v2 = s1 ⊕ s2 ⊕ s3 ⊕ s4
• v3 = s1 ⊕ s3 ⊕ s4
• The input stream enters the registers one by one, with the MSB going in first.
• As each bit enters, the logic summers give a 3-bit output v1v2v3.
• Enough zeros are appended so that the last bit leaves M4.

[Figure: the same encoder circuit.]

• The code is generated by combining the outputs of the m-stage shift register through n EX-ORs.
• Number of bits in the message stream = L.
• Number of bits in the output code = n(L + m).

[Figure: the same encoder circuit.]

• Input stream = 10110
• L = 5, n = 3, m = 4
• Code bits = n(L + m) = 27
• Rate of code = 1/3
• CODE b0 = 111 010 100 110 001 000 011 000 000
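A minimal sketch of this shift-register encoder in Python; it reproduces the CODE b0 sequence above (names such as encode are illustrative, not from the slides).

```python
# Sketch: 4-register, 3-summer encoder with v1 = s1,
# v2 = s1^s2^s3^s4, v3 = s1^s3^s4; MSB in first, flushed with m zeros.
def encode(bits, m=4):
    s = [0] * m                      # s[0] = s1 ... s[3] = s4
    frames = []
    for u in bits + [0] * m:         # append zeros so the last bit leaves M4
        s = [u] + s[:-1]             # bit enters the shift register
        v1 = s[0]
        v2 = s[0] ^ s[1] ^ s[2] ^ s[3]
        v3 = s[0] ^ s[2] ^ s[3]
        frames.append(f"{v1}{v2}{v3}")
    return " ".join(frames)

print(encode([1, 0, 1, 1, 0]))
# 111 010 100 110 001 000 011 000 000
```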

The code depends on:

• the number of shift registers m,
• the number of summers n,
• the number of input bits L,
• the manner in which all are connected.

Code Tree

• Number of code words = 2^(RL), where
  - L = length of the information sequence,
  - R = number of bits at each clock (the k of an (n,k,m) code).
• L+m+1 tree levels are labeled from 0 to L+m.
• The leftmost node A in the tree is called the origin node.
• 2^k branches leave each node in the first L levels for an (n,k,m) code - called the dividing part of the tree.
• The upper branch leaving each node in the dividing part of the tree represents input u_l = 0 (or 1), while the lower branch represents input u_l = 1 (or 0).
• After L time units, there is only one branch leaving each node.
• This represents input 0 for t = L, L+1, ..., L+m−1, with the encoder returning to the all-zero state (tail of the tree).

Code Tree

• The 2^(kL) rightmost nodes are called terminal nodes.
• Each branch is labeled with its n output bits.
• Each of the 2^(kL) codewords of length N = n(L+m) is represented by a totally distinct path through the tree (enumerated in the sketch below).
• The code tree contains the same information about the code as the trellis diagram or the state diagram.
• The tree is better suited to the operation of sequential decoding.
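Since every codeword is a distinct path, the whole tree can be listed by exhaustive enumeration. A small sketch, reusing the encode function from the encoder sketch above, with L = 3 to keep the listing short (2³ = 8 codewords of 21 bits each):

```python
# Sketch: enumerate all 2^L paths of the code tree; each path is a codeword.
from itertools import product

L = 3
for data in product((0, 1), repeat=L):
    print("".join(map(str, data)), "->", encode(list(data)))
```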

Code Tree

[Figure: code tree with origin node A and nodes B, C; branches labeled 000, 111, 010, 101, ...]

Decoding the convolution code

• EXHAUSTIVE SEARCH METHOD: with no noise

• Start at A.
• Take the first n bits and compare them with the two sets of n bits connected to A.
• If the input matches the n-bit set of the tree for which we have to go up the tree, the first decoded data bit is "0".
• If the input matches the n-bit set of the tree for which we have to go down the tree, the first decoded data bit is "1".
• Go to nodes B, C, etc. and repeat the process to decode the other bits.

Decoding the convolution code

• EXHAUSTIVE SEARCH METHOD: with noise

• Each input message bit affects m × n bits of the code = 4 × 3 = 12 bits.
• At node A, we must check the first 12 bits only.
• There are 16 such unique 12-bit combinations from A.
• Compare the first 12 bits with the 16 combinations and select the best match.
• If the best match takes us up the tree at A, the first bit is decoded to "0".
• If it takes us down the tree at A, the first bit is decoded to "1".
• Discard the first 3 bits of the input code and repeat the same at B, comparing the next 12 bits with the next 16 combinations... and so on.

Decoding the convolution code

• EXHAUSTIVE SEARCH METHOD: with noise

• Discard the first 3 bits of the input code and come to node B.
• Repeat the same at B, comparing the next 12 bits with the next 16 combinations... and so on.

• Disadvantages:
• Although it offers a low probability of error, we have to search m × n code bits over 2^m branches. Lengthy.
• Errors tend to propagate: if an incorrect direction is chosen at a node due to too many errors, there is no coming back.

Decoding the convolution code - SEQUENTIAL DECODING METHOD

• Sequential decoding was first introduced by Wozencraft as the first practical decoding method for convolution codes.
• Later, Fano introduced a new version of sequential decoding, called Fano's algorithm.
• Then another version, called the stack or ZJ algorithm, was discovered independently by Zigangirov and Jelinek.

Wozencraft SEQUENTIAL DECODING METHOD: with noise

• n bits at a time are considered at a node.
• At node A, we move in the direction of fewest discrepancies and decode accordingly.
• The number of discrepancies is noted.
• Move to node B and repeat.
• At every node, compare the number of discrepancies with the already decided maximum permissible errors.
• If the current error count exceeds the decided value at any node, traverse back one node and take the other direction.
• Repeat the process.

[Figure: total number of discrepancies (errors while decoding) at each node vs number of message bits decoded; wrong turns with (2), (4), and (1) discrepancies are retraced until the correct path is found.]

SEQUENTIAL DECODING METHOD

• Advantages:
• Works on smaller code blocks.
• Faster.
• Although trial and error and retracing are involved, it still gives better results in less time.

ASSIGNMENT: find the code tree.

• 3-bit input

[Figure: encoder circuit.]

The Stack Algorithm - ZJ Algorithm

• Let the input data stream length be L, with k bits entering simultaneously at each of the L time units.
• Information sequence length = kL.
• There are 2^(kL) code words of length N = n(L+m) for an (n,k,m) code, where
  - number of memory elements = m,
  - number of summers = n.
• The code tree has L+m+1 time units or levels.
• The code tree is just an expanded version of the trellis diagram, in which every path is totally distinct from every other path.
• The method searches through the nodes of the tree to find the maximum likelihood path by measuring "closeness", called the metric.

The Stack Algorithm - ZJ Algorithm

• Example: a (3,1,2) convolution coder has the generator matrix
• G(D) = [1+D, 1+D², 1+D+D²]
• Let L = 5; n = 3, k = 1, m = 2.
• L+m+1 = 8 tree levels, from 0 to 7.
• The code tree is:

[Figure: example code tree for the (3,1,2) encoder, levels 0 to 7; each branch is labeled with its 3 output bits.]

The Stack Algorithm - ZJ Algorithm

• For a binary-input, Q-ary output DMC, the bit metric is
  M(r_i|v_i) = log₂[P(r_i|v_i) / P(r_i)] − R
• where P(r_i|v_i) is the channel transition probability,
• P(r_i) is the channel output symbol probability,
• R is the code rate.
• The above includes the length of the path in the metric.
• The partial path metric for the first l branches of a path v is
  M([r|v]_l) = Σ_{j=0}^{l-1} M(r_j|v_j) = Σ_{i=0}^{nl-1} M(r_i|v_i)
• where M(r_j|v_j) is the branch metric of the j-th branch and is computed by adding the bit metrics of the n bits on that branch.
• Combining the above two:

The Stack Algorithm - ZJ Algorithm

• M([r|v]_l) = Σ_{i=0}^{nl-1} log₂ P(r_i|v_i) − Σ_{i=0}^{nl-1} log₂ P(r_i) − nlR

• A binary-input, Q-ary output DMC is said to be symmetric if
  P(j|0) = P(Q−1−j|1), j = 0, 1, ..., Q−1.
• e.g. Q = 2: P(0|0) = P(1|1).
• For a symmetric channel with equally likely input symbols, the output symbols are also equally likely. Then
  P(r_i = j) = 1/Q, 0 ≤ j ≤ Q−1; e.g. Q = 2: P(0) = P(1) = 1/2.
• M([r|v]_l) = Σ_{i=0}^{nl-1} log₂ P(r_i|v_i) − nl·log₂(1/Q) − nlR
            = Σ_{i=0}^{nl-1} log₂ P(r_i|v_i) + nl(log₂Q − R)

The Stack Algorithm - ZJ Algorithm

• M([r|v]_l) = Σ_{i=0}^{nl-1} log₂ P(r_i|v_i) + nl(log₂Q − R)
• The first term is the metric of the Viterbi algorithm.
• The second term represents a positive bias (Q ≥ 2, R < 1), which increases linearly with path length.
• Hence longer paths, with their larger bias, are closer to the "end of tree" and more likely to be the maximum likelihood path.
• Called the Fano metric, as it was first introduced by Fano.

The Stack Algorithm - ZJ Algorithm

• Example: for a BSC (Q = 2) with transition probability p, find the Fano metric for the truncated codewords [v]₆ and [v]₁. Also find the metrics if p = 0.1.
• [v]₆ is shown in the code tree by a thick line; 2 bits in error out of 18.
• M([r|v]₆) = 16·log₂(1−p) + 2·log₂p + 18(1 − 1/3)
           = 16·log₂(1−p) + 2·log₂p + 12
• M([r|v]₁) = 2·log₂(1−p) + log₂p + 3(1 − 1/3)
           = 2·log₂(1−p) + log₂p + 2
• If p = 0.1:
• M([r|v]₆) = 2.92 > M([r|v]₁) = −1.63

The Stack Algorithm - ZJ Algorithm

• For a BSC with transition probability p, the bit metrics are
  M(r_i|v_i) = log₂[P(r_i|v_i) / P(r_i)] − R
             = log₂ P(r_i|v_i) − log₂ P(r_i) − R
• P(r_i) = 1/2, so
  M(r_i|v_i) = log₂(2p) − R         if r_i ≠ v_i
             = log₂(2(1−p)) − R     if r_i = v_i

The Stack Algorithm - ZJ Algorithm
• Example: if R = 1/3 and p = 0.10, find M(r_i|v_i).
  M(r_i|v_i) = −2.65 if r_i ≠ v_i
             = +0.52 if r_i = v_i
• The metric table is usually scaled by a positive constant so the values round to integers. The scaled table is
  +1 if r_i = v_i, −5 if r_i ≠ v_i.
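The numbers above follow directly from the formulas. A minimal Python sketch; the scale factor is chosen here so the match metric becomes +1, since the slides do not state which constant was used:

```python
# Sketch: Fano bit metrics for a BSC with R = 1/3, p = 0.10,
# and the scaled integer table used in the stack-decoding example.
from math import log2

R, p = 1 / 3, 0.10
m_match = log2(2 * (1 - p)) - R        # r_i = v_i  -> ~ +0.52
m_mismatch = log2(2 * p) - R           # r_i != v_i -> ~ -2.65
print(round(m_match, 2), round(m_mismatch, 2))

c = 1 / m_match                        # scale so the match metric becomes +1
print(round(c * m_match), round(c * m_mismatch))   # 1 -5
```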

Stack Algorithm - ZJ Algorithm - Method

• An ordered list or stack of previously examined paths of different lengths is kept in storage.
• Each stack entry contains a path along with its metric.
• Paths are listed in the stack in order of decreasing metric.
• At each decoding step, the top path of the stack is extended by computing the branch metrics of its 2^k succeeding branches and adding these to the metric of the top path, to form 2^k new paths, called the successors of the top path.
• The top path is then deleted from the stack, its 2^k successors are inserted, and the stack is rearranged in decreasing order of metric values.
• The algorithm terminates when the top path is at the end of the tree.

The Stack Algorithm

• Step 1: Load the stack with the origin node of the tree, whose metric is taken to be zero.
• Step 2: Compute the metrics of the successors of the top path in the stack.
• Step 3: Delete the top path from the stack.
• Step 4: Insert the new paths in the stack and rearrange the stack in order of decreasing metric values.
• Step 5: If the top path in the stack ends at a terminal node of the tree, stop. Otherwise, return to step 2.

• When the algorithm terminates, the top path in the stack is taken as the decoded path.
• Make a flowchart for the stack algorithm. (A decoder sketch follows.)
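A compact sketch of the stack algorithm for the (3,1,2) code of this example, using the scaled +1/−5 metric. It reproduces the decoding of the received word worked out on the following slides; ties are left to the sort order here rather than resolved by path length.

```python
# Sketch: stack (ZJ) decoding of the (3,1,2) code, G(D) = [1+D, 1+D^2, 1+D+D^2].
L, M = 5, 2                              # data length, encoder memory

def branch(u, path):                     # 3 output bits of one tree branch
    p  = path[-1] if len(path) >= 1 else 0
    pp = path[-2] if len(path) >= 2 else 0
    return (u ^ p, u ^ pp, u ^ p ^ pp)

def metric(out, r):                      # scaled Fano metric of one branch
    return sum(1 if o == b else -5 for o, b in zip(out, r))

def stack_decode(frames):
    stack = [(0, [])]                    # (path metric, path): origin node
    while True:
        pm, path = stack.pop(0)          # extend the top (best) path
        if len(path) == L + M:
            return path[:L]              # terminal node reached: done
        inputs = (0, 1) if len(path) < L else (0,)   # tail: only input 0
        for u in inputs:
            out = branch(u, path)
            stack.append((pm + metric(out, frames[len(path)]), path + [u]))
        stack.sort(key=lambda e: -e[0])  # reorder by decreasing metric

r = [(0,1,0), (0,1,0), (0,0,1), (1,1,0), (1,0,0), (1,0,1), (0,1,1)]
print(stack_decode(r))                   # [1, 1, 1, 0, 1]
```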

The Stack - more facts

• In the dividing part of the tree, there are 2^k new metrics to be computed at step 2.
• In the tail of the tree, only one new metric is computed.
• For (n,1,m) codes, the size of the stack increases by one for each decoding step, except in the tail of the tree, where the stack size does not increase at all.
• Since the dividing part of the tree is much longer than the tail (L >> m), the size of the stack is roughly equal to the number of decoding steps when the algorithm terminates.

The Stack - Flow chart

[Flowchart: compute metrics of successors of top path → delete top path from stack → insert new paths and reorder stack according to metric values → stop when the top path ends at a terminal node.]

The Stack Algorithm - Example

• Consider the earlier R = 1/3 code tree and apply the stack algorithm. Assume a code word is transmitted over a BSC with p = 0.10 and the following sequence is received. Using the metric table, show the contents of the stack stepwise and decode the correct word.

  r = (010, 010, 001, 110, 100, 101, 011)

• Take the first n = 3 bits at the first dividing part and compare them with the two branches to find the branch metrics.
• 010 against 111 on the upper branch (input 1) gives 2 discrepancies and 1 match.
  From the table, branch metric = 2×(−5) + 1×1 = −9.
• 010 against 000 on the lower branch (input 0) gives 1 discrepancy and 2 matches.
  From the table, branch metric = 1×(−5) + 2×1 = −3.

The Stack Algorithm - Example: Steps 1 and 2

  Step 1: 0 (−3); 1 (−9)
  Step 2: 00 (−6); 1 (−9); 01 (−12)

• In step 1, the paths and path metrics are arranged in decreasing order in the stack above.
• The top path is extended into two paths, and the 2nd received group 010 is compared with 111 (input 1) and 000 (input 0). The branch metric is added to the previous path metric.
• Upper branch - 1 match and 2 discrepancies:
  path metric = −3 + (1×1 + 2×(−5)) = −12, path (01).
• Lower branch - 2 matches and 1 discrepancy:
  path metric = −3 + (2×1 + 1×(−5)) = −6, path (00).
• The table is modified with the new path metrics in decreasing order.
• The procedure continues.

The Stack Algorithm - Example

  Step 1:  0 (−3); 1 (−9)
  Step 2:  00 (−6); 1 (−9); 01 (−12)
  Step 3:  000 (−9); 1 (−9); 01 (−12); 001 (−15)
  Step 4:  1 (−9); 0001 (−12); 01 (−12); 001 (−15); 0000 (−18)
  Step 5:  11 (−6); 0001 (−12); 01 (−12); 001 (−15); 0000 (−18); 10 (−24)
  Step 6:  111 (−3); 0001 (−12); 01 (−12); 001 (−15); 0000 (−18); 110 (−21); 10 (−24)
  Step 7:  1110 (0); 0001 (−12); 01 (−12); 001 (−15); 1111 (−18); 0000 (−18); 110 (−21); 10 (−24)
  Step 8:  11101 (+3); 0001 (−12); 01 (−12); 11100 (−15); 001 (−15); 1111 (−18); 0000 (−18); 110 (−21); 10 (−24)
  Step 9:  111010 (+6); 0001 (−12); 01 (−12); 11100 (−15); 001 (−15); 1111 (−18); 0000 (−18); 110 (−21); 10 (−24)
  Step 10: 1110100 (+9); 0001 (−12); 01 (−12); 11100 (−15); 001 (−15); 1111 (−18); 0000 (−18); 110 (−21); 10 (−24)

The Stack Algorithm - Example

• The algorithm terminates after 10 decoding steps, giving the final decoded path and information sequence as
• C = (111, 010, 001, 110, 100, 101, 011)
• u = (11101)
• Ties in the metric values are resolved by placing the longest path on top.
• This sometimes has the effect of slightly reducing the total number of decoding steps.

The Stack Algorithm - Example 2

• Let the received code be r = (110, 110, 110, 111, 010, 101, 101).
• Solution: this sequence terminates after 20 steps and gives
• C = (111, 010, 110, 011, 111, 101, 011)
• u = (11001)

The Stack Algorithm - Example 2

  Step 1:  1 (−3); 0 (−9)
  Step 2:  11 (−6); 0 (−9); 10 (−12)
  Step 3:  110 (−3); 0 (−9); 10 (−12); 111 (−21)
  Step 4:  1100 (−6); 0 (−9); 1101 (−12); 10 (−12); 111 (−21)
  Step 5:  11000 (−9); 0 (−9); 1101 (−12); 10 (−12); 11001 (−15); 111 (−21)
  Step 6:  0 (−9); 1101 (−12); 10 (−12); 11001 (−15); 110000 (−18); 111 (−21)
  Step 8:  11011 (−9); 01 (−12); 10 (−12); 11001 (−15); 110000 (−18); 00 (−18); 111 (−21); 11010 (−27)
  Step 9:  01 (−12); 10 (−12); 11001 (−15); 110110 (−18); 110000 (−18); 00 (−18); 111 (−21); 11010 (−27)
  Step 10: 10 (−12); 11001 (−15); 011 (−15); 110110 (−18); 110000 (−18); 00 (−18); 010 (−21); 111 (−21); 11010 (−27)
  Step 11: 11001 (−15); 101 (−15); 011 (−15); 110110 (−18); 110000 (−18); 00 (−18); 100 (−21); 010 (−21); 111 (−21); 11010 (−27)
  Step 12: 110010 (−12); 101 (−15); 011 (−15); 110110 (−18); 110000 (−18); 00 (−18); 100 (−21); 010 (−21); 111 (−21); 11010 (−27)
  Step 13: 101 (−15); 011 (−15); 110110 (−18); 110000 (−18); 00 (−18); 1100100 (−21); 100 (−21); 010 (−21); 111 (−21); 11010 (−27)

The Stack Algorithm - Example 2 (continued)

  Step 15: 110110 (−18); 110000 (−18); 0110 (−18); 1010 (−18); 00 (−18); 1100100 (−21); 100 (−21); 010 (−21); 111 (−21); 0111 (−24); 1011 (−24); 11010 (−27)
  Step 16: 110000 (−18); 0110 (−18); 1010 (−18); 00 (−18); 1100100 (−21); 100 (−21); 010 (−21); 111 (−21); 0111 (−24); 1011 (−24); 1101100 (−27); 11010 (−27)
  Step 17: 0110 (−18); 1010 (−18); 00 (−18); 1100100 (−21); 100 (−21); 010 (−21); 111 (−21); 0111 (−24); 1011 (−24); 1100000 (−27); 1101100 (−27); 11010 (−27)
  Step 18: 1010 (−18); 00 (−18); 1100100 (−21); 01100 (−21); 100 (−21); 010 (−21); 111 (−21); 0111 (−24); 1011 (−24); 1100000 (−27); 1101100 (−27); 01101 (−27); 11010 (−27)
  Step 19: 00 (−18); 1100100 (−21); 10100 (−21); 01100 (−21); 100 (−21); 010 (−21); 111 (−21); 0111 (−24); 1011 (−24); 1100000 (−27); 1101100 (−27); 10101 (−27); 01101 (−27); 11010 (−27)

The Stack Algorithm vs Viterbi Algorithm

• A decoding step of the Viterbi algorithm is the "compare and select" operation performed for each state in the trellis diagram beyond time unit m.
• It requires 15 computations to decode the sequence in example 1, as compared to only 10 computations for the stack algorithm.
• This advantage of the stack algorithm over Viterbi is most prominent when the fraction of errors is not much greater than the channel transition probability.
• Here 2 out of 21 bits are in error: 2/21 = 0.095 ≈ p = 0.10.
• If the sequence is too noisy, the stack algorithm requires too many computations, compared to the same fixed 15 of Viterbi.

The Stack Algorithm vs Viterbi Algorithm

• The most important difference between sequential decoding and Viterbi decoding is that the number of computations performed by a sequential decoder is a random variable, while the computational load of the Viterbi algorithm is fixed.
• A very noisy sequence may require a larger number of computations with the stack algorithm than with Viterbi.
• As very noisy sequences occur very rarely, the average number of computations performed by a sequential decoder is normally much less than the fixed number performed by the Viterbi algorithm.

The Stack Algorithm - Limitations

• Limitation 1:
• The decoder traces a random path back and forth through the code tree, jumping from node to node.
• Hence the decoder must have an input buffer to store incoming blocks of the sequence waiting to be processed.
• The speed factor of the decoder is the ratio of its computation speed to the speed of the incoming data, in branches received per second.
• Depending on the speed factor of the decoder, long searches can cause the input buffer to overflow, resulting in loss of data, or erasure.
• The buffer accepts data at a fixed rate of 1/(nT) branches per second, where T is the time interval allotted for each bit.
• Erasure probabilities of the order of 10⁻³ are common in sequential decoding.

The Stack Algorithm - Limitations

• The average number of computations and the erasure probability are essentially independent of the encoder memory K.
• Hence codes with large K and large free distance can make undetected errors extremely unlikely.
• The major limitation is then erasures due to buffer overflow.
• This can actually be beneficial!
• Erasures usually occur when the received sequence is very noisy.
• It is more desirable to erase such a frame than to decode it incorrectly.
• Viterbi will always decode, which may result in high errors.
• Sequential decoding with an ARQ system can help by retransmitting erased frames. (Advantage)

The Stack Algorithm - Limitations

• Limitation 2: the size of the stack is finite.
• The stack may fill up before decoding is complete.
• Remedy: a typical stack size is 1000 entries.
• The path at the bottom of the stack is pushed out of the stack at the next entry.
• The probability that a path on the bottom of the stack would ever recover to reach the top of the stack and be extended is so small that the loss in performance is negligible.

The Stack Algorithm - Limitations

• Limitation 3: reordering the stack after each decoding step.
• This can become very time consuming as the number of entries increases.
• It can severely limit the decoding speed.
• Remedy: the stack-bucket algorithm by Jelinek.
• The contents of the stack are not reordered after each step.
• The range of possible metric values is quantized into a fixed number of intervals.
• Each metric interval is assigned a certain number of locations in storage, called a bucket.
• When a path is extended, it is deleted from storage, and each new path is inserted as the top entry in the bucket that includes its metric value.

The Stack Algorithm - Limitations

• No reordering of paths within a bucket is required.
• The top path in the top nonempty bucket is chosen to be extended.
• A computation now involves only finding the correct bucket in which to place each new path.
• This is independent of the number of previously extended paths, and faster than reordering an increasingly large stack.
• Disadvantage: it is not always the best path that gets extended, but only a very good path, i.e. a path in the top nonempty bucket (the best bucket).
• Typically, if the bucket quantization is not too coarse and the received sequence is not too noisy, the best bucket contains the best path.
• The degradation from the original algorithm is very slight, and it is fast.
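A toy sketch of the bucket bookkeeping; the bucket width is an assumed value, as the slides do not fix one:

```python
# Sketch: stack-bucket storage. Metrics are quantized into intervals and a
# new path is pushed on top of its bucket; no global reordering is done.
from collections import defaultdict

WIDTH = 6                                # assumed quantization interval
buckets = defaultdict(list)              # bucket index -> list of (metric, path)

def insert(metric, path):
    buckets[metric // WIDTH].append((metric, path))   # top of its bucket

def pop_best_bucket():
    top = max(buckets)                   # best nonempty bucket
    entry = buckets[top].pop()           # its top path (not necessarily best)
    if not buckets[top]:
        del buckets[top]
    return entry

insert(-3, [0]); insert(-9, [1]); insert(-6, [0, 0])
print(pop_best_bucket())                 # (-6, [0, 0]): a very good path,
                                         # though (-3, [0]) is the best one
```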

The FANO Algorithm

• Sacrifices speed with respect to the stack algorithm, but requires no storage.
• Requires more time, as it extends more nodes than the stack algorithm. But stack reordering is absent, which saves time.
• The decoder examines a sequence of nodes in the tree, from the origin node to the terminal nodes.
• The decoder never jumps from node to node, as in the stack algorithm.
• The decoder always moves to an adjacent node:
  - either forward, to one of the 2^k nodes leaving the present node,
  - or backward, to the node leading to the present node.
• The metric of the next node to be examined can be computed by adding (or subtracting) the metric of the connecting branch to the metric of the present node.

The FANO Algorithm

• This eliminates the need to store the metrics of previously examined nodes, as is done in the stack algorithm.
• But some nodes are visited more than once, requiring re-computation of their metric values.

The FANO Algorithm - Method

• The decoder moves forward through the tree as long as the examined metric value continues to increase.
• When the metric value dips below a threshold, the decoder backs up and begins to examine other paths.
• If no path can be found with metric value above the threshold, the threshold is lowered, and the decoder attempts to move forward again with the lower threshold.
• Each time a given node is visited in the forward direction, the threshold is lower than on the previous visit to that node.
• This prevents looping in the algorithm.
• The decoder eventually reaches the end of the tree, terminating the algorithm.

The FANO Algorithm - Detailed Method

• The decoder starts at the origin node with the threshold T = 0 and the metric value M = 0.
• It looks forward to search for the best of the 2^k succeeding nodes, the one with the largest metric.
• If the metric of the forward node M_F ≥ T, the decoder moves to this node.
• If the END has not been reached and this node has been examined for the first time, then "threshold tightening" is performed:
  - increase T by the largest possible multiple of a threshold increment Δ such that T remains no greater than the current metric.
• If this node has been traversed before, no threshold tightening is performed.
• The decoder again looks forward to the best succeeding node.

The FANO Algorithm - Detailed Method

• If M_F < T, the decoder then looks backward to the preceding node.
• If the metric of the backward node M_B < T as well (both < T), T is lowered by Δ and the forward look for the best node is repeated.
• If M_F < T and M_B ≥ T, the decoder moves back to the preceding node, say P.
• If this backward move was from the worst of the 2^k nodes succeeding node P, the decoder again looks back to the node preceding P.
• If not, the decoder looks forward to the next best of the 2^k nodes succeeding node P and checks whether M_F ≥ T. (A sketch of threshold tightening follows.)
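Threshold tightening is just integer arithmetic. A minimal sketch; the function name is illustrative:

```python
# Sketch: raise T by the largest multiple of the increment delta
# that keeps it no greater than the current metric M_F.
def tighten(T, M_F, delta):
    return T + ((M_F - T) // delta) * delta

print(tighten(0, 7, 3))    # 6
print(tighten(-4, 5, 3))   # 5
```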

The FANO Algorithm - Detailed Method

• If the decoder looks backward from the origin node, the metric of the preceding node is taken to be −∞, and the threshold is simply lowered by Δ.
• Ties in metric values can be resolved arbitrarily without affecting the average decoder performance.

The FANO Algorithm - Flow Chart

[Flowchart: "look forward to best node" and "look forward to next best node" boxes feeding the threshold tests summarized below.]

The FANO Algorithm - Flow Chart (summary)

• Origin: T = 0, M = 0, M_B = −∞.
• Largest M_F ≥ T: the decoder moves to the forward node. If visited for the first time, T is tightened; otherwise there is no change in T.
• M_F < T: the decoder looks backward.
• M_F < T and M_B < T: T = T − Δ, then look forward again.
• M_F < T and M_B ≥ T: the decoder moves to the backward node. If the current node was reached via the worst path, it looks backward again; otherwise it looks forward to the next best node.

The FANO Algorithm - Example - No Error

• Let C = 111 010 001 110 100 101 011
• Let Δ = 1
• Data = 1110100

  Step  Look  M_F  M_B  Node     Metric  T
  0     -     -    -    X        0       0
  1     LFB   +3   -    1        +3      +3
  2     LFB   +6   -    11       +6      +6
  3     LFB   +9   -    111      +9      +9
  4     LFB   +12  -    1110     +12     +12
  5     LFB   +15  -    11101    +15     +15
  6     LFB   +18  -    111010   +18     +18
  7     LFB   +21  -    1110100  +21     Stop

  LFB - look forward to best node
  LFNB - look forward to next best node

The FANO Algorithm - Example - With Error

• Let r = 010 010 001 110 100 101 011. Let Δ = 1.

  Step  Look  M_F  M_B  Node     Metric  T
  0     -     -    -    X        0       0
  1     LFB   −3   −∞   X        0       −1
  2     LFB   −3   −∞   X        0       −2
  3     LFB   −3   −∞   X        0       −3
  4     LFB   −3   -    0        −3      −3
  5     LFB   −6   0    X        0       −3
  6     LFNB  −9   −∞   X        0       −4
  7     LFB   −3   -    0        −3      −4
  8     LFB   −6   0    X        0       −4
  9     LFNB  −9   −∞   X        0       −5
  10    LFB   −3   -    0        −3      −5
  11    LFB   −6   0    X        0       −5
  12    LFNB  −9   −∞   X        0       −6
  13    LFB   −3   -    0        −3      −6
  14    LFB   −6   -    00       −6      −6
  15    LFB   −9   −3   0        −3      −6
  16    LFNB  −12  0    X        0       −6
  17    LFNB  −9   −∞   X        0       −7
  18    LFB   −3   -    0        −3      −7
  19    LFB   −6   -    00       −6      −7
  20    LFB   −9   −3   0        −3      −7
  21    LFNB  −12  0    X        0       −7
  22    LFNB  −9   −∞   X        0       −8
  23    LFB   −3   -    0        −3      −8
  24    LFB   −6   -    00       −6      −8
  25    LFB   −9   −3   0        −3      −8
  26    LFNB  −12  0    X        0       −8
  27    LFNB  −9   −∞   X        0       −9
  28    LFB   −3   -    0        −3      −9
  29    LFB   −6   -    00       −6      −9
  30    LFB   −9   -    000      −9      −9
  31    LFB   −12  −6   00       −6      −9
  32    LFNB  −15  −3   0        −3      −9
  33    LFNB  −12  0    X        0       −9
  34    LFNB  −9   -    1        −9      −9
  35    LFB   −6   -    11       −6      −6
  36    LFB   −3   -    111      −3      −3
  37    LFB   0    -    1110     0       0
  38    LFB   +3   -    11101    +3      +3
  39    LFB   +6   -    111010   +6      +6
  40    LFB   +9   -    1110100  +9      Stop

The FANO Algorithm - Example - With Error

• Let r = 010 010 001 110 100 101 011. Put Δ = 3.

  Step  Look  M_F  M_B  Node     Metric  T
  0     -     -    -    X        0       0
  1     LFB   −3   −∞   X        0       −3
  2     LFB   −3   -    0        −3      −3
  3     LFB   −6   0    X        0       −3
  4     LFNB  −9   −∞   X        0       −6
  5     LFB   −3   -    0        −3      −6
  6     LFB   −6   -    00       −6      −6
  7     LFB   −9   −3   0        −3      −6
  8     LFNB  −12  0    X        0       −6
  9     LFNB  −9   −∞   X        0       −9
  10    LFB   −3   -    0        −3      −9
  11    LFB   −6   -    00       −6      −9
  12    LFB   −9   -    000      −9      −9
  13    LFB   −12  −6   00       −6      −9
  14    LFNB  −15  −3   0        −3      −9
  15    LFNB  −12  0    X        0       −9
  16    LFNB  −9   -    1        −9      −9
  17    LFB   −6   -    11       −6      −6
  18    LFB   −3   -    111      −3      −3
  19    LFB   0    -    1110     0       0
  20    LFB   +3   -    11101    +3      +3
  21    LFB   +6   -    111010   +6      +6
  22    LFB   +9   -    1110100  +9      Stop

• With the larger increment Δ = 3, the same received word is decoded in 22 steps instead of 40.

The FANO Algorithm - Limitations

• The number of computations depends upon the threshold increment Δ.
• If Δ is too small, a large number of computations is required.
• Δ cannot be raised indefinitely without affecting the error probability.
• If Δ is too large, T may be lowered below the minimum metric of the maximum likelihood path along with several other paths, making it possible for any of these to be decoded before the maximum likelihood path.
• In addition, the number of computations also increases, as more "bad" paths can be followed if T is lowered too much.

The FANO Algorithm - Limitations

• For moderate rates, the Fano algorithm is faster than the stack-bucket algorithm, as it is free from stack control problems.
• For higher rates, the stack-bucket algorithm can be faster, as the Fano algorithm would carry additional computational load.
• As it does not require storage, the Fano algorithm is usually selected in practical implementations of sequential decoding.