MSc Data Comms compare different modulation schemes with different values of M.ppt

ElhenshireHosam · 181 slides · Jun 29, 2024

About This Presentation

compare different modulation schemes with different values of M


Slide Content

MSc Data Communications
By R. A. Carrasco
Professor in Mobile Communications
School of Electrical, Electronic and Computing Engineering
University of Newcastle-upon-Tyne
2006

Recommended Text Books
1. "Essentials of Error-Control Coding", Jorge Castiñeira Moreira, Patrick Guy Farrell
2. "Digital Communications", John G. Proakis, Fourth Edition

Goals Of A Digital Communication System
Deliver data from the source to the user in a:
•FAST
•INEXPENSIVE (Efficient)
•RELIABLE WAY

Digital Modulation Schemes
Task: to compare different modulation schemes with different values of M
•Choice of modulation scheme involves trading off
•Bandwidth
•Power
•Complexity
•Define:
•Bandwidth
•Signal-to-noise ratio
•Error probability

Examples: Memoryless Modulation (waveforms are chosen independently – each waveform depends only on m_i)
Source symbols: 0 1 0 1 0 1 0 1 1 0, symbol duration T_s
(a) M = 2, T = T_s: s_1(t) = A and s_2(t) = -A for 0 ≤ t < T (binary antipodal pulses of amplitude ±A)
(b) M = 4, T = 2T_s: sinusoids with 4 different phases (source pairs 01 01 01 01 10)
(c) M = 8, T = 3T_s: 8 different amplitude levels (source triples 010 101 011, i.e. the sequences 010101, 011001)

A crucial question is raised: what is the difference?
•If T is kept constant, the waveforms of scheme (c) require less bandwidth than those of scheme (a), because the pulse duration is longer.
•In the presence of noise, and if the same average signal power is used, it is more difficult to distinguish among the waveforms of (c).
AM/AM = Amplitude Modulation – Amplitude Modulation conversion
AM/PM = Amplitude Modulation – Phase Modulation conversion

[Figure: nonlinear amplifier characteristics – output envelope (curve A) and output phase shift (curve B) versus input envelope]
A: Output envelope (AM/AM conversion)
B: Output phase shift (AM/PM conversion)
Notice:
•Waveforms b have constant envelopes.
•This choice is good for nonlinear radio channels.

TRADE-OFF BETWEEN
BANDWIDTH AND POWER
•In a Power-Limited Environment, use low values of M
•In a Band-Limited Environment, use high values of M
What if both Bandwidth and Power are Limited ?
•Expand Complexity
•DEFINE:
BANDWIDTH
SIGNAL-TO-NOISE RATIO
ERROR PROBABILITY

Performance of Different Modulation
Schemes

DIGITAL MODULATION TRADE-OFFS
SHANNON CAPACITY LIMIT FOR AWGN
C = W log2(1 + S/N)
•S = Signal Power = E/T
•N = Noise Power = 2N0·W
•W = Bandwidth
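As a sketch of this formula (assuming the slide's double-sided noise convention N = 2·N0·W; the numbers below are illustrative, not from the slides):

```python
import math

def shannon_capacity(bandwidth_hz, signal_power, noise_psd):
    """AWGN channel capacity C = W*log2(1 + S/N), with noise power
    N = 2*N0*W (the double-sided PSD convention used on the slide)."""
    noise_power = 2 * noise_psd * bandwidth_hz
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# Hypothetical numbers: W = 1 MHz, S/N = 30/2 = 15.
W = 1e6
C = shannon_capacity(W, signal_power=30.0, noise_psd=1.0 / W)
print(C)  # 4000000.0 bit/s, i.e. W * log2(16)
```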

Define Bandwidth W
[Figure: power density spectrum S(f) in dB versus fT, with bandwidths B1–B5 marked around the carrier]
•Half-power
•Equivalent noise
•Null-to-null
•Fractional power containment
•Bounded power spectral density
Different bandwidth definitions of the power density spectrum of (5.2). B1 is the half-power bandwidth; B2 is the equivalent noise bandwidth; B3 is the null-to-null bandwidth; B4 is the fractional power containment bandwidth at an arbitrary level; B5 is the bounded power spectral density at a level of about -18 dB. Notice that the depicted bandwidths are those around the frequency f0.
In general, W = α/T Hz, where the constant α depends on the modulation scheme and on the bandwidth definition.

DEFINE SNR
Signalling rate: 1/T signals per second, so the bit rate is
R_s = (log2 M) / T bits/sec
– the rate at which the source outputs binary symbols.
•Average Signal Power
P = E/T = (E_b · log2 M) / T
where E is the average signal energy per waveform and E_b is the average energy per bit.
•Signal-to-Noise Ratio
SNR = P / (N0·W) = (E_b / N0) · (R_s / W)
where N0 is the noise power spectral density and R_s/W is the spectral efficiency in bits/sec per Hz of bandwidth.
BPS/Hz Comparison

Digital Modulation Trade-Offs (Comparison among different schemes)
[Figure: bandwidth efficiency R_s/W (log scale from 0.125 to 16 bit/s per Hz) versus E_b/N0 (-2 to 20 dB) at P(e) = 10^-5. The Shannon capacity bound separates the achievable region; above R_s/W = 1 lies the bandwidth-limited region, below it the power-limited region. Curves are shown for PAM (SSB), (coherent) PSK, AM-PM, DCPSK, coherent FSK and incoherent FSK, with M ranging from 2 to 64.]
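The Shannon bound in this plot can be reproduced numerically; a minimal sketch, using the usual one-sided convention that at capacity Eb/N0 = (2^η - 1)/η for spectral efficiency η = R_s/W:

```python
def min_ebn0_db(eta):
    """Minimum Eb/N0 (dB) on the Shannon capacity bound for a
    spectral efficiency of eta = Rs/W bit/s per Hz."""
    import math
    return 10 * math.log10((2 ** eta - 1) / eta)

# eta = 1 gives 0 dB; higher efficiencies demand rapidly more power.
for eta in (0.5, 1, 2, 4, 8):
    print(eta, round(min_ebn0_db(eta), 2))
```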

Encoder for the (3,2) parity-check code (n = 3, k = 2)
The whole word is defined by
x1 = u1, x2 = u2, x3 = u1 + u2
u1 u2 | x1 x2 x3
0 0 | 0 0 0
0 1 | 0 1 1
1 0 | 1 0 1
1 1 | 1 1 0

Encoder for the (3,1) repetition code (n = 3, k = 1)
x1 = u1, x2 = u1, x3 = u1
0 | 0 0 0
1 | 1 1 1
[1], pages 157-179
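The two encoders above can be sketched in a few lines (illustrative helper names):

```python
def parity_encode(u):
    # (3,2) parity-check code: x1 = u1, x2 = u2, x3 = u1 + u2 (mod 2)
    u1, u2 = u
    return [u1, u2, (u1 + u2) % 2]

def repetition_encode(u1):
    # (3,1) repetition code: x1 = x2 = x3 = u1
    return [u1, u1, u1]

print(parity_encode([1, 0]))   # [1, 0, 1]
print(repetition_encode(1))    # [1, 1, 1]
```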

Block Encoders: Hamming Code (7,4)
Notice that only 16 of the 128 sequences of length 7 are used for transmission.
The codeword is defined by
x_i = u_i, i = 1, 2, 3, 4
x5 = u1 + u2 + u3
x6 = u2 + u3 + u4
x7 = u1 + u2 + u4
(7,4) Hamming Code (source symbols u1 u2 u3 u4 → encoded symbols x1 x2 x3 x4 x5 x6 x7):
0000 0000000
0001 0001011
0010 0010110
0011 0011101
0100 0100111
0101 0101100
0110 0110001
0111 0111010
1000 1000101
1001 1001110
1010 1010011
1011 1011000
1100 1100010
1101 1101001
1110 1110100
1111 1111111
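The parity equations can be checked against the codeword table with a short sketch:

```python
def hamming74_encode(u):
    """(7,4) Hamming encoder from the slide:
    x1..x4 = u1..u4, x5 = u1+u2+u3, x6 = u2+u3+u4, x7 = u1+u2+u4 (mod 2)."""
    u1, u2, u3, u4 = u
    return [u1, u2, u3, u4,
            (u1 + u2 + u3) % 2,
            (u2 + u3 + u4) % 2,
            (u1 + u2 + u4) % 2]

# Reproduce a few rows of the codeword table:
print(hamming74_encode([0, 0, 0, 1]))  # [0, 0, 0, 1, 0, 1, 1]
print(hamming74_encode([1, 0, 1, 0]))  # [1, 0, 1, 0, 0, 1, 1]
```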

Convolutionally encoding the sequence 101000...
Rate 1/2, K = 3 (from B. Sklar, Digital Communications, Prentice-Hall, 1988)
The input bits are shifted in one at a time; at each time t_i the two modulo-2 adder outputs x1 x2 are read:
t1: input 1, output 11
t2: input 0, output 10
t3: input 1, output 00
t4: input 0, output 10
t5: input 0, output 11
t6: input 0, output 00
Output sequence: 11 10 00 10 11 00

(7, 5) Convolutional Encoder
c1 = d1 + d2 + d3
c2 = d1 + d3
Constraint length K = 3, Code Rate = 1/2
Time Interval: 1 2 3 4 5 6 7 8
Input:  0  1  1  0  1  0  0  1
Output: 00 11 01 01 00 10 11 11
SW Position: 1 2 at each interval (the output switch alternates between c1 and c2)
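A minimal sketch of this encoder, reproducing the input/output table above:

```python
def conv75_encode(bits):
    """Rate-1/2, K=3 encoder from the slide: c1 = d1+d2+d3 (7 octal),
    c2 = d1+d3 (5 octal), where d1 is the current input bit."""
    d2 = d3 = 0
    out = []
    for d1 in bits:
        out.append(((d1 + d2 + d3) % 2, (d1 + d3) % 2))
        d2, d3 = d1, d2  # shift the register
    return out

pairs = conv75_encode([0, 1, 1, 0, 1, 0, 0, 1])
print(pairs)  # [(0,0), (1,1), (0,1), (0,1), (0,0), (1,0), (1,1), (1,1)]
```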

The Finite State Machine
States: a = 00, b = 01, c = 10, d = 11. Each transition is labelled input/output pair: 1/10, 0/01, 1/01, 1/11, 0/11, 0/00, 1/00, 0/10.
With the coder in state a = 00, a 1 appearing at the input produces 11 at the output, and the system moves to state b = 01.
Example message:
Input: 1010010110
Output: 00101111100001011100
•If in state b, a 1 at the input produces 01 as the output bits. The system then moves to state d (11).
•If a 0 appears at the input while the system is in state b, the bit sequence 10 will appear at the output, and the system will move to state c (10).

Tree Representation (K = 3, Rate 1/2)
[Figure: code tree over four input intervals. An upward transition corresponds to an input 0, a downward transition to an input 1; branches are labelled with the output pairs 00, 11, 10, 01, and the encoder states a, b, c, d repeat from depth 3 on.]

Signal-flow Graph
Split the zero state into a source node X_a and a sink node X_a'. Each branch gain is D raised to the Hamming weight of the branch output: a → b: D^2, b → c: D, b → d: D, d → d: D, d → c: D, c → b: 1, c → a': D^2. The node equations are:
X_b = D^2 X_a + X_c          (1)
X_c = D X_b + D X_d          (2)
X_d = D X_b + D X_d          (3)

Transfer Function T(D)
From equation (3): X_d = (D / (1 - D)) X_b
From equation (2): X_c = D X_b + (D^2 / (1 - D)) X_b = (D / (1 - D)) X_b
From equation (1): X_b = D^2 X_a + (D / (1 - D)) X_b, so X_b = (D^2 (1 - D) / (1 - 2D)) X_a
Therefore,

Transfer Function T(D)
X_a' = D^2 X_c = (D^5 / (1 - 2D)) X_a
T(D) = X_a' / X_a = D^5 / (1 - 2D)
Performing long division gives:
T(D) = D^5 + 2D^6 + 4D^7 + 8D^8 + ...
(1 path at distance 5, 2 paths at distance 6, 4 paths at distance 7, 8 paths at distance 8, ...)
This gives the number of paths in the state diagram with their corresponding distances.
In this case, the minimum distance of the code is 5
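The long division can be verified mechanically; a small sketch (assuming a monic denominator, i.e. den[0] = 1):

```python
def expand_rational(num, den, n_terms):
    """Power-series coefficients of num(D)/den(D) by long division.
    num, den are coefficient lists, lowest degree first; den[0] must be 1."""
    coeffs, rem = [], list(num) + [0] * n_terms
    for k in range(n_terms):
        c = rem[k]
        coeffs.append(c)
        for j, d in enumerate(den):
            if k + j < len(rem):
                rem[k + j] -= c * d
    return coeffs

# T(D) = D^5 / (1 - 2D): coefficients of D^0 .. D^8
print(expand_rational([0, 0, 0, 0, 0, 1], [1, -2], 9))
# [0, 0, 0, 0, 0, 1, 2, 4, 8]: one path at distance 5, two at 6, four at 7, ...
```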

Block Encoders
by Professor R. A. Carrasco
"State Diagram" of Code
The state is (U_{i-1}, U_{i-2}) for input U_i; the four states are S1 = (00), S2 = (01), S3 = (10), S4 = (11), and each transition is labelled with the source symbol (0 or 1) and the three output bits X (000, 001, 010, 011, 100, 101, 110, 111).
u = (11011.....) corresponds to the path s1 s3 s4 s2 s3 s4 through the state diagram and the output sequence is x = (111100010110100)

Tree Diagram
[Figure: code tree for the rate-1/3 encoder, with branch labels 000, 111, 011, 100, 001, 110, 010, 101 and node states S1–S4. The path corresponding to the input sequence 11011 is shown as an example.]

Signal Flow Graph
With X_a = S1, X_b = S3, X_c = S4, X_d = S2 (and the zero state split into source X_a and sink X_a'), the branch gains are D^3 (S1 → S3), D^2 (S2 → S3, S3 → S2, S4 → S4) and D (S3 → S4, S4 → S2, S2 → S1), giving:
1) X_b = D^3 X_a + D^2 X_d
2) X_c = D X_b + D^2 X_c
3) X_d = D^2 X_b + D X_c

Transfer Function T(D)
From equation 2): X_c = (D / (1 - D^2)) X_b
Substituting into 3): X_d = D^2 X_b + (D^2 / (1 - D^2)) X_b = (D^2 (2 - D^2) / (1 - D^2)) X_b
From equation 1): X_b = D^3 X_a + D^2 X_d gives X_b = (D^3 (1 - D^2) / (1 - D^2 - 2D^4 + D^6)) X_a
and, with X_a' = D X_d,
T(D) = X_a' / X_a = D^6 (2 - D^2) / (1 - D^2 - 2D^4 + D^6) = (2D^6 - D^8) / (1 - D^2 - 2D^4 + D^6)

Transfer Function of the State Diagram
Performing long division gives
T(D) = 2D^6 + D^8 + 5D^10 + 5D^12 + ...
(2 paths at distance 6, 1 path at distance 8, 5 paths at distance 10, 5 paths at distance 12, ...)
We have d_free = 6, for the error events
S1 to S3 to S2 to S1, and
S1 to S3 to S4 to S2 to S1
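The same mechanical long division (in the variable x = D^2, assuming a monic denominator) reproduces these path counts:

```python
def expand_rational(num, den, n_terms):
    """Series coefficients of num(x)/den(x); lists lowest-degree first,
    den[0] must be 1."""
    coeffs, rem = [], list(num) + [0] * n_terms
    for k in range(n_terms):
        c = rem[k]
        coeffs.append(c)
        for j, d in enumerate(den):
            if k + j < len(rem):
                rem[k + j] -= c * d
    return coeffs

# In x = D^2: T = (2x^3 - x^4) / (1 - x - 2x^2 + x^3)
print(expand_rational([0, 0, 0, 2, -1], [1, -1, -2, 1], 7))
# [0, 0, 0, 2, 1, 5, 5]: 2 paths at distance 6, 1 at 8, 5 at 10, 5 at 12
```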

Trellis Diagram for Code (Periodic from time 2 on)
[Figure: trellis over times 0–6 for states 00, 10, 01, 11, with branch labels 000, 111, 100, 011, 001, 110, 101, 010; the legend distinguishes input 0 from input 1 branches.]
The minimum distance of a convolutional code truncated at length l = N is d_min = d_c(N), the column distance. The free distance of the convolutional code is
d_free = lim (l → ∞) d_c(l)

Trellis Diagram for the computation of d_free
[Figure: the same trellis over times 0–5, where the branch labels are the Hamming distances (0, 1, 2 or 3) between the encoder outputs and the all-zero sequence.]

Viterbi Algorithm
We want to compute
min over (λ_0, ..., λ_{k-1}) of Σ_{l=0}^{k-1} f_l(λ_l)
where the f_l are functions whose arguments λ_l can take on a finite number of values (say λ_0 in {A, B}, λ_1 in {C, D}).
The simplest situation arises when λ_0, λ_1, ... are "independent" (the value taken on by each one of them does not influence the other variables). Then
min over (λ_0, ..., λ_{k-1}) of Σ_l f_l(λ_l) = Σ_{l=0}^{k-1} [ min over λ_l of f_l(λ_l) ]
[1], pages 181–185

Viterbi Decoding of Convolutional Codes
1. Observe:
P(y|x) = Π_l P(y_l | x_l)    (memoryless channel)
where y is the received sequence of n0-tuples y_l and x the transmitted sequence of n0-tuples of coded digits x_l. We choose the x for which P(y|x) is a maximum.
2. We have, for a binary symmetric channel with crossover probability P (Tx 0 → Rx 0 and 1 → 1 with probability 1-P, 0 → 1 and 1 → 0 with probability P):
ln P(y_l | x_l) = d_l ln(P / (1 - P)) + n0 ln(1 - P)
where d_l is the Hamming distance between x_l and y_l, ln(P/(1-P)) is an irrelevant multiplicative constant, and n0 ln(1-P) an irrelevant additive constant.

Brute force approach:
•Compute all the values of the function and choose the
smallest one.
•We want a sequential algorithm

What if λ_0, λ_1, ... are not independent?
Example: λ_0 in {A, B}, λ_1 in {C, D}, but
λ_0 = A → λ_1 = C or λ_1 = D
λ_0 = B → λ_1 = D
What is the shortest route from Los Angeles to Denver?
Viterbi Algorithm
[Figure: five-day road map. Day 1: Los Angeles to Bishop, Las Vegas or Blythe; Day 2: on to Ely, Cedar City, Page or Williams; Day 3: on to Spanish Forks, Salina or Gallup; Day 4: on to Grand Junction or Durango; Day 5: to Denver. Each road segment is labelled with its mileage (130 to 338 miles).]
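The stage-by-stage idea can be sketched in code; the road data below is hypothetical (the figure's actual mileages are not reproduced here):

```python
def viterbi_shortest(stages):
    """Stage-by-stage shortest path (the Viterbi idea): keep only the best
    cumulative cost to each node of the current stage. `stages` is a list
    of dicts mapping (prev_node, node) -> edge length."""
    costs = {"start": 0}
    for edges in stages:
        new_costs = {}
        for (u, v), w in edges.items():
            cand = costs[u] + w
            if v not in new_costs or cand < new_costs[v]:
                new_costs[v] = cand  # survivor: best way to reach v
        costs = new_costs
    return costs

# Tiny illustrative two-stage map (hypothetical distances):
stages = [
    {("start", "A"): 2, ("start", "B"): 5},
    {("A", "end"): 4, ("B", "end"): 1},
]
print(viterbi_shortest(stages))  # {'end': 6}
```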

Viterbi Algorithm
[Figure: worked example of the Viterbi recursion on a small trellis, stages l = 0 to l = 5, with branch metrics 0–3. Panels (a)–(e) show the survivor metrics accumulated at each stage; at each node only the smallest running sum is kept.]

Conclusion
•We maximise P(y|x) by minimising Σ_l d_l, the Hamming distance between the received sequence and the coded sequence.
•Brute-Force Decoding: compute all the distances between y and all the possible x's; choose the x that gives the minimum distance.
•Problems with Brute-Force Decoding:
- Complexity
- Delay
•The Viterbi algorithm solves the complexity problem (complexity increases only linearly with sequence length).
•The truncated Viterbi algorithm also solves the delay problem.
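A minimal hard-decision Viterbi decoder for the (7,5) code discussed earlier (a sketch; survivor paths are kept as explicit bit lists for clarity rather than via traceback):

```python
def viterbi_decode_75(pairs):
    """Hard-decision Viterbi decoder for the rate-1/2, K=3 code with
    c1 = d1+d2+d3, c2 = d1+d3 (generators 7,5 octal).
    `pairs` is the received sequence of (c1, c2); returns the input bits."""
    def encode_step(bit, state):          # state = (d2, d3)
        d2, d3 = state
        return ((bit + d2 + d3) % 2, (bit + d3) % 2), (bit, d2)

    states = {(0, 0): (0, [])}            # state -> (path metric, decoded bits)
    for rx in pairs:
        nxt = {}
        for state, (metric, bits) in states.items():
            for bit in (0, 1):
                out, new_state = encode_step(bit, state)
                m = metric + (out[0] != rx[0]) + (out[1] != rx[1])
                if new_state not in nxt or m < nxt[new_state][0]:
                    nxt[new_state] = (m, bits + [bit])  # keep the survivor
        states = nxt
    return min(states.values())[1]        # best final metric wins

# Decode the noiseless output of the earlier (7,5) encoder table:
rx = [(0,0), (1,1), (0,1), (0,1), (0,0), (1,0), (1,1), (1,1)]
print(viterbi_decode_75(rx))  # [0, 1, 1, 0, 1, 0, 0, 1]
```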

Trellis-Coded Modulation
A.K.A.
- Ungerboeck Codes
- Amplitude-Redundant Codes
- Modulation
How to increase transmission efficiency (reliability or speed)?
•Use higher-order modulation schemes (8-PSK instead of 4-PSK): same BW, more bit/s per Hz, more power.
•Use coding: less power, less bit/s per Hz, BW expanded.
•Band-limited environment (terrestrial radio communications)
•Power-limited environment (satellite radio communications)
[2], pages 522-532
G. Ungerboeck, "Channel coding with multilevel/phase signals," IEEE Trans. Inform. Theory, vol. IT-28, pp. 55-67, 1982.

Construction of TCM
•The constellation is divided into smaller constellations with larger Euclidean distances between constellation points.
•Construction of the Trellis (Ungerboeck's rules):
1. Parallel transitions are assigned members of the same partition.
2. Adjacent transitions are assigned members of the next larger partition.
3. Signals are used equally often.

Model for TCM
[Figure: the source symbol a_n enters a memory part whose state σ_n selects the constellation; a memoryless mapper then selects the signal from that constellation.]

Some examples of TCM schemes
Consider transmission of 2 bits/signal. We examine TCM schemes using 8-PSK. With uncoded 4-PSK (energy ε') we have d'_min = sqrt(2ε'). We use the octonary (8-PSK) constellation, points 0–7, whose minimum distance is
d'_0 = 2 sqrt(ε') sin(π/8)

•The Free Distance of a convolutional code is the minimum Hamming distance between two encoded sequences.
•d_free is a measure of the separation among encoded sequences: the larger d_free, the better the code (at least for large enough SNR).
•Fact: to compute d_free for a linear convolutional code we may consider the distances with respect to the all-zero sequence.

An "error event": a path that splits from the all-zero (00) sequence and later remerges with it.
A simple algorithm to compute d_free is:
1) Compute d_c(l) for l = 1, 2, ...
2) If the sequence giving d_c(l) merges into the all-zero sequence, store its weight as d_free.

First Key Point
Compare d_free^2 / ε (minimum distance between sequences, energy with coding) against d'_min^2 / ε' (minimum distance for uncoded transmission, energy without coding).
The constellation size must be increased to get the same rate of information:
•We gain distance
•We lose energy
Gain is
γ = (d_free^2 / ε) / (d'_min^2 / ε')

Second Key Point
How to introduce the dependence among signals. We write
x_n = f(a_n, σ_n)
where x_n is the transmitted symbol at time nT, a_n the source symbol at time nT, and σ_n = (a_{n-1}, ..., a_{n-L}) the "state" (previous source symbols). This describes the output as a function of the input symbol plus the encoder state, while
σ_{n+1} = g(a_n, σ_n)
describes the transitions between states.

TCM Example 1
Consider the 8-PSK TCM scheme, which involves the transmission of 2 bits/symbol using an uncoded 4-PSK constellation and the coded 8-PSK constellation (points 0–7) for the TCM scheme. Show that
d_min = sqrt(2ε')  and  d_0 = 2 sqrt(ε') sin(π/8)

TCM Example 1: Solution
We have, from the uncoded 4-PSK constellation:
d_min = 2 sqrt(ε') sin(45°) = 2 sqrt(ε') (sqrt(2)/2) = sqrt(2ε'),  so  d_min^2 = 2ε'
We have, from the coded 8-PSK constellation (adjacent points separated by π/4):
d_0^2 = ε' + ε' - 2ε' cos(π/4) = 2ε' (1 - cos(π/4))
Using
sin^2(π/8) = (1/2) (1 - cos(2π/8))
we get
d_0^2 = (2 sqrt(ε'))^2 sin^2(π/8) = 4ε' sin^2(π/8),  so  d_0 = 2 sqrt(ε') sin(π/8)

TCM Scheme Based on 2-State Trellis
[Figure: two-state trellis; transitions from state 0 use the subset (0, 4, 2, 6) and transitions from state 1 use the subset (1, 3, 7, 5). Input bits a_0, a_1 map uncoded 4-PSK (I) to coded 8-PSK.]
d_free^2 = d^2(0, 2) + d^2(0, 1) = 2ε' + 4ε' sin^2(π/8) = 2.586 ε'
Hence, we get the coding gain
2.586 / 2 = 1.293 ≡ 1.1 dB
Can this performance gain of trellis-coded QPSK be improved? The answer is yes, by going to more trellis states.
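A quick numerical check of this gain (ε' normalised to 1):

```python
import math

eps = 1.0  # normalised signal energy
# d_free^2 = d^2(0,2) + d^2(0,1) = 2 + 4 sin^2(pi/8)
d2_free = 2 * eps + 4 * eps * math.sin(math.pi / 8) ** 2
gain_db = 10 * math.log10(d2_free / (2 * eps))  # uncoded QPSK has d^2 = 2
print(round(d2_free, 3), round(gain_db, 2))  # 2.586 1.12
```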

TCM Example 2
•Draw the set partition of 8-PSK with maximum Euclidean distance between two points.
•By how much is the distance between adjacent signal points increased as a result of partitioning?
d^2(0, 2) = 2ε'  →  d(0, 2) = sqrt(2ε')
d^2(0, 1) = 4ε' sin^2(π/8)  →  d(0, 1) = 2 sqrt(ε') sin(π/8)

TCM Example 2: Solution
d_0 = 2 sqrt(ε') sin(π/8) = 0.765 sqrt(ε')
d_1 = sqrt(2ε') = 1.41 sqrt(ε')
d_2 = 2 sqrt(ε')
[Figure: set partitioning of the 8-PSK constellation (binary labels 000–111). The first split (bit 0/1) gives the QPSK subsets (0, 4, 2, 6) and (1, 5, 3, 7) with distance d_1; the second split gives the antipodal pairs (0, 4), (2, 6), (1, 5), (3, 7) with distance d_2.]
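These three distances follow from the chord length of M-PSK; a small sketch (energy normalised to 1):

```python
import math

def psk_distance(i, j, m=8, energy=1.0):
    """Euclidean distance between points i and j of an M-PSK
    constellation with symbol energy `energy` (chord-length formula)."""
    return 2 * math.sqrt(energy) * abs(math.sin(math.pi * (i - j) / m))

d0 = psk_distance(0, 1)  # full 8-PSK: 2 sin(pi/8)
d1 = psk_distance(0, 2)  # 4-point subsets: sqrt(2)
d2 = psk_distance(0, 4)  # 2-point subsets: 2
print(round(d0, 3), round(d1, 3), round(d2, 3))  # 0.765 1.414 2.0
```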

TCM Scheme Based On A 4-State Trellis
Let us now use a TCM scheme with a more complex structure, in order to increase the coding gain. Take a trellis with four states, whose transitions carry the subsets (0, 4, 2, 6), (1, 5, 3, 7), (2, 6, 4, 0) and (3, 7, 1, 5).
Calculating the Euclidean distance for each error event leaving and returning to s_0 (one such path is s_2 – s_1 – s_2):
d_E^2 = min over error events of Σ_n d^2(s_n, s'_n)
For the parallel transition: d^2(0, 4) = 4ε'.
For the shortest non-parallel error event: d^2(0, 2) + d^2(0, 1) + d^2(0, 2) = (2 + 0.586 + 2) ε' = 4.586 ε'.
And hence
d_free^2 = min(4, 4.586) ε' = 4ε' = 2 d'_min^2

TCM Code Worked Example
[Figure: inputs a_1, a_2; bit a_1 enters a rate-1/2, 4-state convolutional encoder (delay cells S_1, S_2) producing two coded bits, while a_2 passes through uncoded; the output bits c_1–c_4 drive the 8-PSK / 16-QAM mapper.]
Rate 1/2, 4-state Convolutional Code

State Table for TCM Code
Inputs (a1 a2) | Initial State (S1 S2) | Next State (S1' S2') | Outputs (c1 c2 c3)
0 0 | 0 0 | 0 0 | 0 0 0
1 0 | 0 0 | 0 1 | 0 1 0
0 1 | 0 0 | 0 0 | 0 0 1
1 1 | 0 0 | 0 1 | 0 1 1
0 0 | 0 1 | 1 0 | 1 0 0
1 0 | 0 1 | 1 1 | 1 1 0
0 1 | 0 1 | 1 0 | 1 0 1
1 1 | 0 1 | 1 1 | 1 1 1
0 0 | 1 0 | 0 1 | 0 0 0
1 0 | 1 0 | 0 0 | 0 1 0
0 1 | 1 0 | 0 1 | 0 0 1
1 1 | 1 0 | 0 0 | 0 1 1
0 0 | 1 1 | 1 1 | 1 0 0
1 0 | 1 1 | 1 0 | 1 1 0
0 1 | 1 1 | 1 1 | 1 0 1
1 1 | 1 1 | 1 0 | 1 1 1

Trellis Diagram of TCM Code
[Figure: four-state trellis (states 00, 10, 01, 11) with transitions labelled by the 8-PSK subsets 0 4 2 6 / 2 6 4 0 / 1 5 3 7 / 3 7 1 5.]
d_free^2 = d^2(0, 6) + d^2(0, 6) = 2ε' + 2ε' = 4ε'
OR
d_free^2 = d^2(0, 4) = 4ε'

Coding Gain over Uncoded QPSK Modulation
For uncoded QPSK: d_min^2 = 2ε'.
Gain = (d_free^2 / ε) / (d_min^2 / ε') = 4 / 2 = 2, or 3 dB

TCM Problems
[Figures: three TCM encoder block diagrams with inputs a_1, a_2 (and a_3, a_4), convolutional encoder delay cells S_1–S_4, and outputs c_1–c_5 driving 8-PSK / 16-QAM mappers.]
The trellis-coded signal is formed as shown below, by encoding one bit using a rate-1/2 convolutional code, with three additional information bits left uncoded. Perform the set partitioning of a 32-QAM (cross) constellation and indicate the subsets in the partition. By how much is the distance between the adjacent signal points increased as a result of partitioning?
(Inputs a_1–a_4; outputs c_1–c_5.)

TCM and Decoding
•Viterbi Algorithm is used with soft
decisions of the demodulator for maximum
likelihood estimation of the sequence
being transmitted

Turbo Encoding / Decoding
By R. A. Carrasco
Professor in Mobile Communications
School of Electrical, Electronic and Computing Engineering
University of Newcastle-upon-Tyne
[1], pages 209 –215
http://en.wikipedia.org/wiki/Turbo_code

Introduction
•The Turbo Encoder
–Overview
–Component encoders and their construction
–Tail bits
–Interleaving
–Puncturing
•The Turbo Decoder
–Overview
–Scaling

Introduction Cont’d
•Results
–AWGN results
•Performance
•Conclusions

Concatenated Coding and Turbo Codes
Input
data
Outer encoder Inner encoder
channel
Inner decoderOuter decoder
Output
data
Serially concatenated codes
Input
data
encoder
encoder
interleaver
Systematic bits
Parity bits#1
Parity bits#2
Parallel-concatenated (Turbo encoder)
•Convolutional codes
Non-systematic convolutional codes (NSC)
–Have no feedback paths;
–They act like a finite impulse response (FIR) digital filter;
–NSC codes do not lend themselves to parallel concatenation;
–At high SNR the BER performance of a classical NSC code is better than that of systematic convolutional codes of the same constraint length.

The Turbo Encoder
•Recursive systematic convolutional encoders in parallel concatenation, separated by a pseudo-random interleaver.
•The second systematic stream is the interleaved version of the first; since the interleaving process is known at the decoder, this stream is surplus to our needs and is not transmitted.
[Figure: input d_k feeds Component Encoder #1 directly and Component Encoder #2 through the interleaver; outputs are the systematic bits s_k = d_k and the parity streams p_k1, p_k2.]

Component Encoders
•[23; 33]_8 RSC component encoder: 16-state trellis representation (four delay cells).
•[7; 5]_8 RSC component encoder: 4-state trellis representation (two delay cells).
[Figure: each encoder outputs the systematic bit s_k = d_k and a parity bit p_k.]
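A sketch of the [7;5]_8 RSC component encoder, assuming the common convention of feedback polynomial 1+D+D^2 (7 octal) and feedforward polynomial 1+D^2 (5 octal); the slide does not spell out the tap assignment:

```python
def rsc75_encode(bits):
    """[7;5]_8 recursive systematic convolutional encoder (a sketch).
    Feedback 1+D+D^2, feedforward 1+D^2; returns (systematic, parity)."""
    r1 = r2 = 0
    sys_bits, par_bits = [], []
    for d in bits:
        a = d ^ r1 ^ r2      # recursive (feedback) bit entering the register
        p = a ^ r2           # parity from feedforward taps 1 + D^2
        sys_bits.append(d)
        par_bits.append(p)
        r1, r2 = a, r1       # shift the register
    return sys_bits, par_bits

# Impulse response is infinite (IIR), unlike a non-systematic encoder:
s, p = rsc75_encode([1, 0, 0, 0, 0])
print(s, p)  # [1, 0, 0, 0, 0] [1, 1, 1, 0, 1]
```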

Systematic convolutional codes
–Recursive Systematic Convolutional (RSC) codes can
be generated from NSC codes by connecting the output
of the encoder directly to the input;
–At low SNR the BER performance of an RSC code is
better than the NSC.
The operation of the Turbo encoder is as follows:
1.The input data sequence is applied directly to encoder 1
and the interleaved version of the same input data
sequence is applied to encoder 2.
2.The systematic bits (i.e. the original message bits) and
the two parity check bit streams (generated by the two
encoders) are multiplexed together to form the output of
the encoders.

Turbo code interleavers
The novelty of the parallel-concatenated turbo encoder lies in
the use of RSC codes and the introduction of an
interleaver between the two encoders;
•The interleaver ensures that two permutations of the same input data are encoded to produce two different parity sequences;
•The effect of the interleaver is to tie together errors that are
easily made in one half of the turbo encoder to errors that
are exceptionally unlikely to occur in the other half;
•This ensures robust performance in the event that the
channel characteristics are not known and is the reason
why turbo codes perform better than traditional codes.

•The choice of interleaver is therefore the key to the performance of a turbo coding system;
•Turbo code performance can be analysed in terms of the
Hamming distance between the code words;
•If the applied input sequence happens to terminate one of the encoders, it is unlikely that, once interleaved, the sequence will terminate the other, leading to a large Hamming distance in at least one of the two encoders;
•A Pseudo-random interleaver is a good choice.
Turbo code interleavers(Cont’d)

Interleaving
•Shannon states that large frame length
random codes can achieve channel
capacity
•By their very nature, random codes are
impossible to decode
•Pseudo-random interleavers make turbo
codes appear random while maintaining a
decodable structure

Interleaving cont’d
•Primary use
–To increase average codeword weight
–Altering bit position does not alter data-word weight
but can increase codeword weight
–Thus a low weight convolutional output from encoder
#1 does not mean a low-weight turbo output
DATAWORD CODEWORD
CODEWORD
WEIGHT
01100 0011100001 4
01010 0011011001 5
10010 1101011100 6

Interleavers
•An interleaver takes a given sequence of symbols and permutes their positions, arranging them in a different temporal order;
•The basic goal of an interleaver is to randomise the data sequence as a protection against burst errors;
•In general, data interleavers can be classified into block, convolutional, random and linear interleavers;
•Block interleaver: data are first written in row format in a permutation matrix, and then read in column format;
•A pseudo-random interleaver is a variation of a block interleaver in which data are stored in a register and read out at pseudo-random positions;
•Convolutional interleavers are characterised by a shift of the data, usually applied in a fixed and cumulative way.

Example: Block interleaver
•Data sequence: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Read in (interleave), row by row:
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
Read out, column by column, and transmit over the channel:
1 5 9 13 2 6 10 14 3 7 11 15 4 8 12 16
At the receiver, read in (de-interleave), row by row:
1 5 9 13
2 6 10 14
3 7 11 15
4 8 12 16
Read out, column by column:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
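The example above can be sketched directly (hypothetical helper names):

```python
def block_interleave(data, rows, cols):
    """Write row-wise into a rows x cols matrix, read column-wise."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(data, rows, cols):
    # The inverse: interleaving with swapped dimensions undoes the permutation.
    return block_interleave(data, cols, rows)

seq = list(range(1, 17))
tx = block_interleave(seq, 4, 4)
print(tx)                                   # [1, 5, 9, 13, 2, 6, 10, 14, ...]
print(block_deinterleave(tx, 4, 4) == seq)  # True
```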

Example: Pseudo-random interleaver
•Data sequence: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Read in (interleave), row by row:
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
Read out (by a random position pattern) and transmit over the channel:
1 6 11 15 2 5 9 13 4 7 12 14 3 8 16 10
At the receiver, read in (de-interleave):
1 6 11 15
2 5 9 13
4 7 12 14
3 8 16 10
Read out (by the inverse of the random position pattern):
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Convolutional interleaver
[Figure: a bank of parallel delay lines with increasing lengths L, 2L, ..., (N-1)/L on the transmit side, followed by the channel and the complementary delays (N-1)/L, ..., 2L, L on the receive side.]

Continued…
Input: …, x_0, x_1, x_2, x_3, x_4, x_5, x_6, …
Interleave (branch delays of 0, 1, 2 and 3 units D); the branch registers hold:
x_0
x_1 x_-3
x_2 x_-2 x_-6
x_3 x_-1 x_-5 x_-9
Output transmitted over the channel: …, x_0, x_-3, x_-6, x_-9, x_4, x_1, x_-2, x_-5, x_8, x_5, x_2, x_-1, …
De-interleave (branch delays of 3, 2, 1 and 0 units D); the branch registers hold:
x_12 x_8 x_4 x_0
x_9 x_5 x_1
x_6 x_2
x_3
Output: …, x_0, x_1, x_2, x_3, x_4, x_5, x_6, …
Each D corresponds to a delay of 4 symbols in this example.

Puncturing
•High-rate codes are usually generated by a procedure known as puncturing;
•A change in the code rate to 1/2 could be achieved by puncturing the two parity sequences prior to the multiplexer. One bit might be deleted from each parity output in turn, such that one parity bit remains for each data bit.

Puncturing
•Used to reduce the code rate
–Omits certain output bits according to a pre-arranged method
–The standard method reduces the turbo codeword from rate k/n = 1/3 to rate k/n = 1/2
The puncturer takes the systematic stream s_1,1 s_1,2 s_1,3 s_1,4 and the parity streams p_1,1 p_1,2 p_1,3 p_1,4 and p_2,1 p_2,2 p_2,3 p_2,4, and outputs
s_1,1 p_1,1 s_1,2 p_2,2 s_1,3 p_1,3 s_1,4 p_2,4
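The standard puncturing pattern above, as a sketch (string placeholders stand in for the actual bits):

```python
def puncture(s_bits, p1, p2):
    """Rate 1/3 -> 1/2: keep every systematic bit and alternate parity
    bits (p1 at even positions, p2 at odd), as in the slide's pattern."""
    out = []
    for k, s in enumerate(s_bits):
        out.append(s)
        out.append(p1[k] if k % 2 == 0 else p2[k])
    return out

s_bits = ["s1", "s2", "s3", "s4"]
p1 = ["p1_1", "p1_2", "p1_3", "p1_4"]
p2 = ["p2_1", "p2_2", "p2_3", "p2_4"]
print(puncture(s_bits, p1, p2))
# ['s1', 'p1_1', 's2', 'p2_2', 's3', 'p1_3', 's4', 'p2_4']
```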

Tail Bits
•Tail bits are added to the dataword such that the first component encoder's codeword terminates at the all-zero state
–A look-up table is the most common method
[Figure: trellis of states S_0–S_3 showing the data bits followed by the tail bits driving the encoder back to state S_0; solid branches = 1, dashed branches = 0.]

Turbo decoding
•A key component of iterative (Turbo) decoding is the soft-in, soft-out (SISO) decoder.
[Figure: hard-decision receiver – matched filter (S+H), binary quantization, error-control decoding producing hard decisions; soft-decision receiver – matched filter (S+H), 8-level (3-bit) quantization, combined soft-decision error-control decoding.]
[1], pages 239-244

Hard decision vs. Soft decision
1. Soft (multi-level) decisions;
2. Hard (two-level) decisions.
Each soft decision (000, 001, …, 111 on the quantizer scale from digital 0 to digital 1) contains not only information about the most likely transmitted symbol –
000 to 011 indicating a likely 0
100 to 111 indicating a likely 1
– but also information about the confidence or likelihood which can be placed on this decision.

The log-likelihood ratio
•It is based on the binary random variable u (with modulo-2 addition), which is -1 for logic 0 and +1 for logic 1;
•L(u) is the log-likelihood ratio for the binary random variable and is defined as:
L(u) = ln [ P(u = +1) / P(u = -1) ]
This is described as the 'soft' value of the binary random variable u. The sign of the value is the hard decision, while the magnitude represents the reliability of this decision.
•As L(u) increases towards +∞, the probability that u = +1 also increases.
•As L(u) increases towards -∞, the probability that u = -1 increases.
•The conditional log-likelihood ratio L(u|y) is defined as:
L(u|y) = ln [ P(u = +1 | y) / P(u = -1 | y) ]
The log-likelihood ratio

The log-likelihood ratio
The information u is mapped to the encoded bits x. These encoded bits are received by the decoder as y, all with the time index k. From this, the log-likelihood ratio for the system is:
L(x_k | y_k) = ln [ P(x_k = +1 | y_k) / P(x_k = -1 | y_k) ]
From Bayes' theorem, this is equivalent to:
L(x_k | y_k) = ln [ P(y_k | x_k = +1) P(x_k = +1) / ( P(y_k | x_k = -1) P(x_k = -1) ) ]
             = ln [ P(y_k | x_k = +1) / P(y_k | x_k = -1) ] + ln [ P(x_k = +1) / P(x_k = -1) ]

•Assuming the 'channel' to be flat fading with Gaussian noise, the Gaussian pdf is
G(x) = (1 / sqrt(2π σ^2)) exp( -(x - q)^2 / (2σ^2) )
with q representing the mean and σ^2 the variance, showing that:
L(y_k | x_k) = ln [ P(y_k | x_k = +1) / P(y_k | x_k = -1) ]
             = ln [ exp( -(E_b/N_0)(y_k - a)^2 ) / exp( -(E_b/N_0)(y_k + a)^2 ) ]
             = 4 (E_b/N_0) a y_k
The log-likelihood ratio
P(y_k | x_k) ∝ exp( -(E_b/N_0)(y_k - a x_k)^2 )
where E_b/N_0 represents the signal-to-noise ratio per bit and a the fading amplitude (a = 1 for a non-fading Gaussian channel).
From the equation
L(u) = ln [ P(u = +1) / P(u = -1) ]
we obtain
L(x_k | y_k) = ln [ P(y_k | x_k = +1) / P(y_k | x_k = -1) ] + ln [ P(x_k = +1) / P(x_k = -1) ] = L_c(y_k | x_k) + L(x_k)

The log-likelihood ratio
From Bayes' theorem, P(x, y) = P(x|y) P(y) = P(y|x) P(x), so
L(x | y) = ln [ P(x = +1 | y) / P(x = -1 | y) ]
         = ln [ ( P(y | x = +1) P(x = +1) / P(y) ) / ( P(y | x = -1) P(x = -1) / P(y) ) ]
         = ln [ P(y | x = +1) P(x = +1) / ( P(y | x = -1) P(x = -1) ) ]

The log-likelihood ratio
The log-likelihood ratio of x_k given y_k is:
L(x_k | y_k) = L_c y_k + L(x_k)
where L_c = 4 (E_b/N_0) a is the channel reliability. Therefore L(x_k | y_k) is the weighted received value.

Turbo Decoding: Scaling by Channel Reliability
•Channel Reliability L_c = 4 × (E_b/N_0) × channel amplitude
–Channel amplitude for AWGN = 1
–Channel amplitude for fading varies
[Figure: the corrupted, received codeword is multiplied by 4·E_b/N_0·A to give the scaled, corrupted, received codeword.]
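Scaling by the channel reliability is a one-line operation; a sketch with illustrative received values:

```python
def channel_llr(y, ebn0, amplitude=1.0):
    """Scale received soft values by the channel reliability
    Lc = 4 * (Eb/N0) * a (a = 1 for AWGN, fading amplitude otherwise)."""
    lc = 4 * ebn0 * amplitude
    return [lc * yk for yk in y]

received = [0.9, -1.1, 0.2]              # noisy BPSK samples (illustrative)
print(channel_llr(received, ebn0=0.5))   # Lc = 2 -> [1.8, -2.2, 0.4]
```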

Performance of Turbo Codes
[Figure: BER (10^0 down to 10^-6) versus E_b/N_0 (-4 to 10 dB) for uncoded transmission and for turbo codes, with the Shannon limit marked.]
At a bit error rate of 10^-5, the turbo code is less than 0.5 dB from Shannon's theoretical limit.

Block diagram of Turbo Decoder
[Figure: Decoder Stage 1 operates on the noisy systematic bits u and the noisy parity-check bits ε_1; its output is interleaved and passed to Decoder Stage 2, which also receives the noisy parity-check bits ε_2. Stage 2's output is de-interleaved and fed back to Stage 1; after the final iteration the de-interleaved output is hard-limited to give the decoded bits.]
The figure shows the basic structure of the turbo decoder. It operates on noisy versions of the systematic bits and the two noisy versions of the parity bits in two decoding stages to produce an estimate of the original message bits.
[Second view: two BCJR blocks linked by the interleaver (I) and de-interleaver (D); at each stage the 'input' is subtracted from the 'output' so that only the extrinsic information Ĩ_1(x), Ĩ_2(x) is exchanged; Ĩ_2(x) is set to 0 initially, and a hard limiter produces the estimate x̂.]

Turbo Decoding
•The BCJR algorithm is a soft-input, soft-output decoding algorithm with two recursions, one forward and the other backward.
•At stage 1, the BCJR algorithm uses the extrinsic information Ĩ_2(x) added to the input (u). At the output of the decoder the 'input' is subtracted from the 'output' and only the information generated by the 1st decoder is passed on as Ĩ_1(x). For the first 'run', Ĩ_2(x) is set to zero as there is no 'prior' information.

Turbo Decoding
•At stage 2, the BCJR algorithm uses the extrinsic information Ĩ_1(x) added to the input (u). The input is then interleaved so that the data sequence matches the previously interleaved parity (ε_2). The decoder output is then de-interleaved and the decoder's 'input' is subtracted so that only decoder 2's information is passed on as Ĩ_2(x). After this loop has repeated many times, the output of the 2nd decoder is hard-limited to form the output data.

Turbo Decoding
The first decoding stage uses the BCJR algorithm to produce a soft estimate of the systematic bit x_J, expressed as the log-likelihood ratio:
I_1(x_J) = ln [ P(x_J = 1 | u, ε_1, Ĩ_2(x)) / P(x_J = 0 | u, ε_1, Ĩ_2(x)) ],   J = 1, 2, 3, ...
where u is the set of noisy systematic bits and ε_1 is the set of noisy parity-check bits generated by encoder 1.

Turbo Decoding
Ĩ_2(x) is the extrinsic information about the set of message bits x derived from the second decoding stage and fed back to the first stage.
The total log-likelihood ratio at the output of the first decoding stage is therefore:
I_1(x) = Σ_{J=1}^{K} I_1(x_J)

Turbo Decoding
The extrinsic information about the message bits derived from the first decoding stage is:
Ĩ_1(x) = I_1(x) - Ĩ_2(x)
where Ĩ_2(x) is to be defined.
[Figure: a SISO decoder takes soft inputs (intrinsic information, raw data, other information) and produces soft outputs.]
At the output of the SISO decoder, the 'input' is subtracted from the 'output' and only the reliability information generated by the decoder is passed on as extrinsic information to the next decoder.

The extrinsic information fed back to the first
decoding stage is therefore:

Ĩ2(x) = I2(x) − Ĩ1(x)

where Ĩ1(x) is as defined before and I2(x) is the log-
likelihood ratio computed by the second stage:

I2(xJ) = ln [ P(xJ = 1 | u, ε2, Ĩ1(x)) / P(xJ = 0 | u, ε2, Ĩ1(x)) ],   J = 1, 2, …
Turbo Decoding

An estimate of the message bits x is computed by
hard-limiting the log-likelihood ratio at the output
of the second stage, as shown by:

x̂ = sgn(I2(x))

We set Ĩ2(x) = 0 on the first iteration of the algorithm.
Turbo Decoding
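The extrinsic subtraction and the final hard limiter above can be sketched in a few lines. This is an illustrative Python fragment only: the helper names and the LLR values are invented, and no BCJR recursion is shown.

```python
def hard_limit(llrs):
    # x_hat = sgn(I2(x)): with I(x) = ln(P(x=1)/P(x=0)),
    # a positive LLR decides bit 1 and a negative LLR decides bit 0
    return [1 if llr > 0 else 0 for llr in llrs]

def extrinsic(output_llr, input_llr):
    # subtract the decoder's 'input' from its 'output' so that only the
    # information generated by this decoder is passed to the other stage
    return [o - i for o, i in zip(output_llr, input_llr)]

# invented total LLRs from one decoding stage and the prior fed into it
total = [2.0, -1.0, 0.5]
prior = [0.5, 0.5, -0.25]
print(extrinsic(total, prior))   # [1.5, -1.5, 0.75]
print(hard_limit(total))         # [1, 0, 1]
```

With the convention I(x) = ln(P(x=1)/P(x=0)) used on these slides, a positive LLR hard-limits to bit 1.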

Turbo Decoding: Serial to Parallel
Conversion and Erasure Insertion
•Received, corrupted codeword is returned to
original three bit streams
•Erasures are replaced with a ‘null’
[Figure: Serial to Parallel — the received stream (p1,1 s1,1 p2,2 s1,2 p1,3 s1,3 p2,4 s1,4) is split back into the systematic stream s1,1 … s1,4 and the two punctured parity streams (p1,1, p1,3 and p2,2, p2,4), with nulls (0) inserted where parity bits were punctured.]

Results over AWGN
[Figure: BER versus SNR (0.5–3 dB) for the punctured [7;5] code with 512 data bits over AWGN, comparing Log-MAP and SOVA decoding; BER axis from 1e+0 down to 1e-6.]

Questions
•What is the MAP algorithm first of all? Who found it?
•I have heard about the Viterbi algorithm and ML sequence estimation for decoding
coded sequences. What is the essential difference between these two methods?
•But I haven’t heard about the MAP algorithm until recently (even though it was
discovered in 1974). Why?
•What are SISO (Soft-Input-Soft-Output) algorithms first of all?
•Well! I am quite comfortable with the basics of SISO algorithms. But tell me one thing.
Why should a decoder output soft values? I presume there is no need for it to do that.
•How does the MAP algorithm work?
•Well then! Explain MAP as an algorithm. (Some flow-charts or steps will do).
•Are there any simplified versions of the MAP algorithm? (The standard one involves a
lot of multiplication and log business and requires a number of clock cycles to
execute.)
•Is there any demo source code available for the MAP algorithm?
•References

Problem 1
•Let rc1 = p/q1 and rc2 = p/q2 be the code rates of RSC encoders 1 and 2 in
the turbo encoder of figure 1. Determine the code rate of the turbo
code.
•The turbo encoder of figure 1 involves the use of two RSC encoders.
(i) Generalise this encoder to encompass a total of M interleavers.
(ii) Construct the block diagram of the turbo decoder that exploits the M
sets of parity-check bits generated by such a generalisation.

[Figure 1: Turbo encoder — the message p is transmitted directly and also fed to RSC encoders ENC1 and ENC2, producing the parity-check outputs q1 and q2.]

Problem 2
Consider the following generator matrices for rate-½
turbo codes:

4-state encoder: g(D) = [1, (1 + D²)/(1 + D + D²)]

8-state encoder: g(D) = [1, (1 + D + D² + D³)/(1 + D² + D³)]

(i) Construct the block diagram for each one of these RSC
encoders.
(ii) Construct the parity-check equation associated with each
encoder.

Problem 3
•Explain the principle of Non-systematic convolutional
codes (NSC) and Recursive systematic convolutional
codes (RSC) and make comparisons between the two
•Describe the operation of the turbo encoder
•Explain how important the interleaver process is to the
performance of a turbo coding system

Problem 4
•Describe the meaning of Hard decision and soft decision
for the turbo decoder process
•Discuss the log-likelihood ratio principle for turbo
decoding system
•Describe the iterative turbo decoding process
•Explain the operation of the soft-input-soft-output (SISO)
decoder

School of Electrical, Electronics and
Computer Engineering
Low Density Parity Check Codes: An
Overview
By R.A. Carrasco
Professor in Mobile Communications
University of Newcastle-upon-Tyne
[1], pages 277–287
http://en.wikipedia.org/wiki/LDPC

•Parity check codes
•What are LDPC Codes?
•Introduction and Background
•Message Passing Algorithm
•LDPC Decoding Process
–Sum-Product Algorithm
–Example of rate 1/3 LDPC (2,3) code
•Construction of LDPC codes
–Protograph Method
–Finite Geometries
–Combinatorial design
•Results of LDPC codes constructed using BIBD design
Outline

•A binary parity check code is a block code: i.e., a collection of
binary vectors of fixed length n.
•The symbols in the code satisfy m parity check equations of the
form:
– xa ⊕ xb ⊕ xc ⊕ … ⊕ xz = 0
– where ⊕ means modulo-2 addition and
xa, xb, xc, …, xz are the code symbols in the equation.
•Each codeword of length n can contain (n − m) = k information
digits and m check digits.
Parity Check Code

For a code word of the form c1, c2, c3, c4, c5, c6, c7, the equations are:
c1 ⊕ c2 ⊕ c3 ⊕ c5 = 0
c1 ⊕ c2 ⊕ c4 ⊕ c6 = 0
c1 ⊕ c3 ⊕ c4 ⊕ c7 = 0
The parity check matrix for this code is then:
1 1 1 0 1 0 0
1 1 0 1 0 1 0
1 0 1 1 0 0 1
Note that c1 is contained in all three equations while c2 is contained in
only the first two equations.
Example: Hamming Code with
n=7, k=4, and m=3
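The three parity-check equations can be verified mechanically. The Python sketch below (helper name `syndrome` is my own) computes the syndrome of a candidate word against the H above; a zero syndrome means every equation is satisfied.

```python
# parity check matrix of the (7,4) Hamming code above
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def syndrome(word):
    # one modulo-2 sum per parity-check equation (row of H)
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

print(syndrome([1, 0, 0, 1, 1, 0, 0]))  # [0, 0, 0] -> valid codeword
print(syndrome([1, 1, 0, 1, 1, 0, 0]))  # nonzero -> a parity equation is violated
```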

What are Low Density Parity Check
Codes?
•The percentage of 1's in the parity check matrix for a LDPC
code is low.
•A regular LDPC code has the property that:
– every code digit is contained in the same number of
equations,
– each equation contains the same number of code
symbols.
•An irregular LDPC code relaxes these conditions.
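The regularity condition is easy to test programmatically. A minimal sketch (the helper name `is_regular` is my own), applied to the Hamming H from the previous slide, shows that that matrix is not regular in this sense:

```python
def is_regular(H):
    # regular: every row has the same weight and every column has the same weight
    row_degrees = {sum(row) for row in H}
    col_degrees = {sum(col) for col in zip(*H)}
    return len(row_degrees) == 1 and len(col_degrees) == 1

H_hamming = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]
print(is_regular(H_hamming))  # False: c1 is in three equations, c2 in only two
```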

c3 ⊕ c6 ⊕ c7 ⊕ c8 = 0
c1 ⊕ c2 ⊕ c5 ⊕ c12 = 0
c4 ⊕ c9 ⊕ c10 ⊕ c11 = 0
c2 ⊕ c6 ⊕ c7 ⊕ c10 = 0
c1 ⊕ c3 ⊕ c8 ⊕ c11 = 0
c4 ⊕ c5 ⊕ c9 ⊕ c12 = 0
c1 ⊕ c4 ⊕ c5 ⊕ c7 = 0
c6 ⊕ c8 ⊕ c11 ⊕ c12 = 0
c2 ⊕ c3 ⊕ c9 ⊕ c10 = 0
Equations for Simple LDPC Code with
n=12 and m=9

The Parity Check Matrix for the
LDPC Code
c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12
0  0  1  0  0  1  1  1  0  0   0   0     c3 ⊕ c6 ⊕ c7 ⊕ c8 = 0
1  1  0  0  1  0  0  0  0  0   0   1     c1 ⊕ c2 ⊕ c5 ⊕ c12 = 0
0  0  0  1  0  0  0  0  1  1   1   0     c4 ⊕ c9 ⊕ c10 ⊕ c11 = 0
0  1  0  0  0  1  1  0  0  1   0   0     c2 ⊕ c6 ⊕ c7 ⊕ c10 = 0
1  0  1  0  0  0  0  1  0  0   1   0     c1 ⊕ c3 ⊕ c8 ⊕ c11 = 0
0  0  0  1  1  0  0  0  1  0   0   1     c4 ⊕ c5 ⊕ c9 ⊕ c12 = 0
1  0  0  1  1  0  1  0  0  0   0   0     c1 ⊕ c4 ⊕ c5 ⊕ c7 = 0
0  0  0  0  0  1  0  1  0  0   1   1     c6 ⊕ c8 ⊕ c11 ⊕ c12 = 0
0  1  1  0  0  0  0  0  1  1   0   0     c2 ⊕ c3 ⊕ c9 ⊕ c10 = 0

Introduction
- LDPC codes were originally invented by Robert Gallager in the early
1960s but were largely ignored until they were rediscovered in the mid-
1990s by MacKay.
- Defined in terms of a parity check matrix that has a small number of non-
zero entries in each column
- Randomly distributed non-zero entries
– Regular LDPC codes
– Irregular LDPC codes
- Sum-Product Algorithm used for decoding
- Linear block code with sparse (small fraction of ones) parity-check matrix
- Have a natural representation in terms of a bipartite graph
- Simple and efficient iterative decoding

Introduction
Low Density Parity Check (LDPC) codes are a class of linear block codes
characterized by sparse parity check matrices (H).
Review of parity check matrices:
–For an (n,k) code, H is an (n−k, n) matrix of ones and zeros.
–A codeword c is valid if cHᵀ = s = 0
–Each row of H specifies a parity check equation. The code bits in positions where the row
is one must sum (modulo-2) to zero.
–In an LDPC code, only a few bits (~4 to 6) participate in each parity check equation.
–From the parity check matrix we obtain the generator matrix G, which is used to generate
the LDPC codeword.
–G·Hᵀ = 0
–The parity check matrix is arranged in systematic form as H = [Im | P]
–Generator matrix G = [Pᵀ | Ik]
–The code can be expressed as c = x·G
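The relations H = [Im | P], G = [Pᵀ | Ik] and G·Hᵀ = 0 can be checked mechanically. Below is a minimal Python sketch, using an arbitrary 3×3 parity block P of my own choosing (all helper names invented):

```python
# build H = [I_m | P] and G = [P^T | I_k] from an example parity block P
P = [[1, 0, 1],
     [1, 1, 1],
     [0, 1, 1]]
m, k = len(P), len(P[0])

def identity(n):
    return [[1 if r == c else 0 for c in range(n)] for r in range(n)]

H = [identity(m)[r] + P[r] for r in range(m)]    # m x (m+k) parity check matrix
Pt = [list(col) for col in zip(*P)]              # transpose of P
G = [Pt[r] + identity(k)[r] for r in range(k)]   # k x (m+k) generator matrix

def matmul_mod2(A, B):
    # modulo-2 matrix product A.B
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in Bt] for row in A]

Ht = [list(row) for row in zip(*H)]
print(matmul_mod2(G, Ht))            # all-zero k x m matrix: G.H^T = 0
print(matmul_mod2([[1, 0, 1]], G))   # codeword c = x.G for message x = [1 0 1]
```

Because G·Hᵀ = 0, every codeword c = x·G automatically satisfies cHᵀ = 0.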

•Representations Of LDPC Codes
(Soft) Message passing:
Variable nodes communicate to
check nodes their reliability (log-
likelihoods)
Check nodes decide which
variables are not reliable and
"suppress" their inputs
Number of edges in graph =
density of H
Sparse = small complexity
Parity check matrix:
1 0 1 0 0 1 1 0 1 0 0 1
0 1 1 1 1 0 1 0 0 0 0 1
0 1 0 1 0 1 0 1 1 1 0 0
1 1 1 0 0 0 0 1 1 0 1 0
0 0 0 1 1 0 1 0 0 1 1 1
1 0 0 0 1 1 0 1 0 1 1 0
Low Density Parity Check Codes

Parity Check Matrix to Tanner Graph
1 0 1 0 0 1 1 0 1 0 0 1
0 1 1 1 1 0 1 0 0 0 0 1
0 1 0 1 0 1 0 1 1 1 0 0
1 1 1 0 0 0 0 1 1 0 1 0
0 0 0 1 1 0 1 0 0 1 1 1
1 0 0 0 1 1 0 1 0 1 1 0

LDPC Codes
•Bipartite graph with connections defined by matrix H
•r′: variable nodes
– corrupted codeword
•s: check nodes
– syndrome, must be all zero for the decoder to claim no error
•Given the syndromes and the statistics of r′, the LDPC decoder solves the equation
r′Hᵀ = s
in an iterative manner

Construction of LDPC codes
•Random LDPC codes
–MacKay Construction
•Computer Generated random Construction
•Structured LDPC codes
–Well defined and structured code
–Algebraic and Combinatoric construction
–Encoding advantage over random LDPC codes
–Performs equally well as random codes
–Examples
•Vandermonde-matrix (Array codes)
•Finite Geometry
•Balanced Incomplete Block Design
•Other Methods (e.g. Ramanujan Graphs)

Protograph Construction of LDPC codes by J.
Thorpe
•A protograph can be any Tanner graph with a relatively small
number of nodes.
•The protograph serves as a blueprint for constructing LDPC
codes of arbitrary size whose performance can be predicted
by analysing the protograph.
J. Thorpe, "Low-Density Parity-Check (LDPC) Codes Constructed from Protographs," IPN Progress Report 42-154,
2003.

Protograph Construction (Continued)

LDPC Decoding (Message Passing
Algorithm)
•Decoding is accomplished by passing messages along the lines of
the graph.
•The messages on the lines that connect to the i-th variable node, ri,
are estimates of Pr[ri = 1] (or some equivalent information).
•Each variable node is furnished an initial estimate of the probability
from the soft output of the channel.
•The variable node broadcasts this initial estimate to the check
nodes on the lines connected to that variable node.
•But each check node must make new estimates for the bits involved
in that parity equation and send these new estimates (on the lines)
back to the variable nodes.

Message Passing Algorithm or Sum-
Product Algorithm
while (stop criteria not met)
{
- All variable nodes pass messages to
corresponding check nodes
- All check nodes pass messages to
corresponding variable nodes
}
Stop criteria:
•Satisfying the equation r′Hᵀ = 0
•Maximum number of iterations reached

LDPC Decoding Process
Check Node Processing: check node m collects the incoming
messages z_mn′ from the variable nodes n′ ∈ N(m) attached to it
and returns to variable node n the extrinsic message

L_mn = 2 tanh⁻¹ [ ∏ (n′ ∈ N(m)\n) tanh( z_mn′ / 2 ) ]

Variable Node Processing: variable node n combines the channel
value with the messages from the check nodes M(n) attached to it

z_mn = F_n + Σ (m′ ∈ M(n)\m) L_m′n

z_n = F_n + Σ (m ∈ M(n)) L_mn    (used for the hard decision)

where F_n is the channel reliability value.

Sum-Product Algorithm
Step #1: Initialisation
LLR of the (soft) received signal y_i, for AWGN:

L_ij = R_i = 4 y_i (Eb/No),  where j represents a check node and i a variable node

Step #2: Check to Variable Node
Extrinsic message from check node j to bit node i:

E_ji = ln [ (1 + ∏ (i′ ∈ B_j, i′ ≠ i) tanh(L_ji′ / 2)) / (1 − ∏ (i′ ∈ B_j, i′ ≠ i) tanh(L_ji′ / 2)) ]

where B_j represents the set of column locations of the bits in the j-th parity-check
equation and ln is the natural logarithm.
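Step #2 in isolation, as a hedged Python sketch (the function name is my own); the argument is the list of variable-to-check LLRs of the *other* bits in the parity-check equation:

```python
import math

def check_to_bit(L_others):
    # E_ji = ln((1 + prod tanh(L/2)) / (1 - prod tanh(L/2)))
    prod = 1.0
    for L in L_others:
        prod *= math.tanh(L / 2)
    return math.log((1 + prod) / (1 - prod))

# illustrative LLR values for the other bits of one check equation
print(round(check_to_bit([2.5, 5.0]), 2))   # 2.42
```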

Sum-Product Algorithm (Continued)
Step #3: Codeword test or Parity Check
The combined LLR is the sum of the extrinsic LLRs and the
original LLR calculated in step #1:

L_i = Σ (j ∈ A_i) E_ji + R_i

where A_i is the set of row locations of the parity-check equations
which check on the i-th bit of the code.
A hard decision is made for each bit:

z_i = 1 if L_i ≤ 0,  z_i = 0 if L_i > 0

Sum-Product Algorithm (Continued)
Condition to stop further iterations:
If r = [r1, r2, …, rn] is a valid code word then
•it satisfies H·rᵀ = 0, or
•the maximum number of iterations has been reached.
Step #4: Variable to Check Node
Variable node i sends an LLR message to check node j
without using the information from check node j:

L_ij = Σ (j′ ∈ A_i, j′ ≠ j) E_j′i + R_i

Return to step #2.

Example:
The code word sent is [0 0 1 0 1 1] through an AWGN channel with Eb/No
= 1.25, and the vector [-0.1 0.5 -0.8 1.0 -0.7 0.5] is received.
Parity Check Matrix:

H =
1 1 0 1 0 0
0 1 1 0 1 0
1 0 0 0 1 1
0 0 1 1 0 1

Iteration 1
Step # 1:  R_i = 4 y_i (Eb/No)

R = [-0.5  2.5  -4.0  5.0  -3.5  2.5]

The hard decision on R is [1 0 1 0 1 0]. After the hard decision we find that 2 bits are in
error, so we need to apply LDPC decoding to correct the errors.
Example (Continued)
[Figure: Tanner graph of H — the 4 check nodes connected to the 6 variable nodes.]

Initialisation

L_j,i =
-0.5   2.5    0    5.0    0     0
 0     2.5  -4.0    0   -3.5    0
-0.5    0     0     0   -3.5   2.5
 0      0   -4.0   5.0    0    2.5
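The full iteration can be coded directly from Steps #1–#4. The sketch below is my own illustrative implementation (all function and variable names invented); run on the H and received vector of this example, it recovers the transmitted word [0 0 1 0 1 1] within a few iterations.

```python
import math

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

y = [-0.1, 0.5, -0.8, 1.0, -0.7, 0.5]
EbNo = 1.25
R = [4 * yi * EbNo for yi in y]           # Step #1: R_i = 4 y_i Eb/No

def sum_product(H, R, max_iter=10):
    m, n = len(H), len(H[0])
    # L[j][i]: variable-to-check messages, initialised to the channel LLRs
    L = [[R[i] for i in range(n)] for j in range(m)]
    for _ in range(max_iter):
        # Step #2: check-to-variable (extrinsic) messages E_ji
        E = [[0.0] * n for _ in range(m)]
        for j in range(m):
            for i in range(n):
                if H[j][i]:
                    prod = 1.0
                    for i2 in range(n):
                        if H[j][i2] and i2 != i:
                            prod *= math.tanh(L[j][i2] / 2)
                    E[j][i] = math.log((1 + prod) / (1 - prod))
        # Step #3: combined LLRs and hard decision (LLR <= 0 -> bit 1)
        Ltot = [R[i] + sum(E[j][i] for j in range(m) if H[j][i]) for i in range(n)]
        z = [1 if l <= 0 else 0 for l in Ltot]
        if all(sum(H[j][i] * z[i] for i in range(n)) % 2 == 0 for j in range(m)):
            return z                      # syndrome H.z^T = 0: valid codeword
        # Step #4: variable-to-check messages exclude the target check
        for j in range(m):
            for i in range(n):
                if H[j][i]:
                    L[j][i] = R[i] + sum(E[j2][i] for j2 in range(m)
                                         if H[j2][i] and j2 != j)
    return z

print(sum_product(H, R))  # [0, 0, 1, 0, 1, 1] -> both channel errors corrected
```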

Questions
1. Suppose we have a codeword c as follows:
c = c1 c2 c3 c4 c5 c6
where each ci is either 0 or 1, and the codeword satisfies three parity-
check equations:
c1 ⊕ c2 ⊕ c5 = 0
c1 ⊕ c3 ⊕ c6 = 0
c1 ⊕ c2 ⊕ c4 ⊕ c6 = 0
a) Determine the parity check matrix H by using the above equations
b) Show the systematic form of H by applying Gauss-Jordan
elimination
c) Determine the generator matrix G from H and prove G * Hᵀ = 0
d) Find the dimensions of H and G
e) State whether the matrix is regular or irregular

Questions
2. The parity check matrix H of an LDPC code is given below:

H =
1 0 1 0 0 1 1 0 1 0 0 1
0 1 1 1 1 0 1 0 0 0 0 1
0 1 0 1 0 1 0 1 1 1 0 0
1 1 1 0 0 0 0 1 1 0 1 0
0 0 0 1 1 0 1 0 0 1 1 1
1 0 0 0 1 1 0 1 0 1 1 0

a) Determine the degree of the rows and columns
b) State whether the LDPC code is regular or irregular
c) Determine the rate of the LDPC code
d) Draw the Tanner graph representation of this LDPC code
e) What would the code rate be if we made the number of rows equal to the number of columns?
f) Write down the parity check equations of the LDPC code

Questions
3. Consider the parity check matrix H generated in question 1,
a) Determine the message bits length k, parity bits length m, and codeword length n
b) Use the generator matrix G obtained in question 1 to generate all possible codewords c.
4.
a) What is the difference between regular and irregular LDPC codes?
b) What is the importance of cycles in the parity check matrix?
c) Identify the cycles of length 4 in the following Tanner graph.
[Figure: Tanner graph — check nodes above, variable nodes below.]

Solutions
Question 1 a)

H =
1 1 0 0 1 0
1 0 1 0 0 1
1 1 0 1 0 1     (A)

Question 1 b)
Applying Gaussian elimination first.
Modulo-2 addition of R1 and R2 in equation (A):
1 1 0 0 1 0
0 1 1 0 1 1
1 1 0 1 0 1
Modulo-2 addition of R1 and R3 in equation (A):
1 1 0 0 1 0
0 1 1 0 1 1
0 0 0 1 1 1
Swap C3 with C4 to obtain the diagonal of 1's in the first 3 columns:
1 1 0 0 1 0
0 1 0 1 1 1
0 0 1 0 1 1
This gives the desired diagonal; the 1 in position (row 1, column 2) still
needs to be eliminated to achieve the identity matrix I.

Solutions
Now we need an identity matrix of 3 × 3 dimensions. As you can see, the first 3 columns and
rows can become an identity matrix if we somehow eliminate the 1 in the position (row = 1,
column = 2). To do that we apply Jordan elimination to find the parity check matrix in
systematic form. Modulo-2 addition of R2 into R1 gives:

Hsys =
1 0 0 1 0 1
0 1 0 1 1 1
0 0 1 0 1 1

It is now in systematic form and represented as
H = [I | P]

Solutions
The generator matrix can be obtained by using H in the systematic form obtained in b):
G = [Pᵀ | I]

G =
1 1 0 1 0 0
0 1 1 0 1 0
1 1 1 0 0 1

To prove G * Hᵀ = 0:

1 1 0 1 0 0       1 0 0       0 0 0
0 1 1 0 1 0   *   0 1 0   =   0 0 0
1 1 1 0 0 1       0 0 1       0 0 0
                  1 1 0
                  0 1 1
                  1 1 1

Solutions
d) The dimension of H is (3 × 6) and G is also (3 × 6).
e) The matrix is irregular because the numbers of 1's in the rows and columns are not all
equal: e.g. the number of 1's in the 1st row is 3 while the 3rd row has four 1's. Similarly, the
number of 1's in the 1st column is 3 while the 2nd column has two 1's.
Question 2
a) The parity check matrix H contains 6 ones in each row and 3 ones in each column. The
degree of the rows is the number of 1's in the rows, which is 6 in this case; similarly, the
degree of the columns is the number of 1's in the columns, hence in this case 3.
b) Regular LDPC, because the number of ones in each row and column is the same.
c) Rate = 1 − m/n = 1 − 6/12 = ½

Solutions
d) The Tanner graph is obtained by connecting the check and variable nodes as follows.
[Figure: Tanner graph for the 6 × 12 parity check matrix.]
e) If we make the number of rows equal to the number of columns then m = n, so the code
rate is 1 − m/n = 0. All bits are parity-check bits and the code carries no information bits.
f) The parity check equations of the LDPC code are:
c1 ⊕ c3 ⊕ c6 ⊕ c7 ⊕ c9 ⊕ c12 = 0
c2 ⊕ c3 ⊕ c4 ⊕ c5 ⊕ c7 ⊕ c12 = 0
c2 ⊕ c4 ⊕ c6 ⊕ c8 ⊕ c9 ⊕ c10 = 0
c1 ⊕ c2 ⊕ c3 ⊕ c8 ⊕ c9 ⊕ c11 = 0
c4 ⊕ c5 ⊕ c7 ⊕ c10 ⊕ c11 ⊕ c12 = 0
c1 ⊕ c5 ⊕ c6 ⊕ c8 ⊕ c10 ⊕ c11 = 0

Solutions
Question 3
a)
message bits length k = 6 − 3 = 3
parity bits length m = 3
codeword length n = 6
b) Since the information or message is 3 bits long, the information bits have 8 possibilities,
as shown in the table below:
x0 x1 x2
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1

Solutions
The codeword is generated by multiplying the information bits with the generator matrix as
follows:
c = x G
The table below shows the code words generated by using G from question 1.
x    c
000  000000
001  111001
010  011010
011  100011
100  110100
101  001101
110  101110
111  010111

Solutions
Question 4
a)
A regular LDPC code has a constant number of 1's in the rows and columns of the
parity check matrix.
An irregular LDPC code has a variable number of 1's in the rows and columns of the
parity check matrix.
b)
A cycle in a Tanner graph is a sequence of connected vertices that starts and ends at the same
vertex, with every other vertex participating only once in the cycle. The length of the
cycle is the number of edges it contains, and the girth of a graph is the size of the smallest
cycle. A good LDPC code should have a large girth so as to avoid short cycles in the Tanner
graph, since they introduce an error floor. Avoiding short cycles has been shown to be
effective in combating the error floor in LDPC codes. Hence the design criteria of an LDPC
code should be such that most of the short cycles in the code are removed.

Solutions
c) Cycles of length 4 are identified as follows:
[Figure: Tanner graph with the length-4 cycles highlighted.]

Security in Mobile Systems
By Prof R A Carrasco
School of Electrical, Electronic and Computing Engineering
University of Newcastle-upon-Tyne

Security in Mobile Systems
•Air Interface Encryption
-Provides security to the Air Interface
-Mobile Station to Base Station
•End-to-end Encryption
-Provides security to the whole
communication path
-Mobile Station to Mobile Station

Air Interface Encryption Protocols
•Symmetric Key
-Use Challenge Response Protocols for
authentication and key agreement
•Asymmetric Key
-Use exchange and verification of
‘Certificates’ for authentication and key
agreement
Where it is used

Challenge Response Protocol
GSM
* Only authenticates the Mobile Station
* A3, A8 algorithms are used
TETRA
* Authenticates both the Mobile Station and the
Network
* TA11, TA12, TA21, TA22 algorithms are used
3G
* Authentication and Key Agreement (AKA)

3G
•Advantages
- Simpler than public key techniques
- Less processing power required in the handset
•Disadvantages
- The network has to maintain a database of the secret
keys of all the mobile stations it supports
- The secret key is never changed in normal
operation
- Secret keys have to be shared with the MS

Challenge-Response Protocol
1. MS sends its identity to the BS
2. BS sends the received MS identity to the AC
3. AC gets the corresponding key K from its database
4. AC generates a random number called a challenge
5. By hashing K and the challenge, the AC computes a
signed response
6. It also generates a session authentication key by
hashing K and the challenge (with a different hashing
function)

Challenge-Response Protocol
7. AC sends the challenge, response and session
key to the BS
8. BS sends the challenge to the MS
9. MS computes the response and the session
authentication key
10. MS sends the response to the BS
11. If the two responses received by the BS from the AC
and the MS are equal, the MS is authentic
12. Now the MS and BS use the session key to
encrypt the communication data between them
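The flow of steps 1–12 can be mimicked in a short sketch. The code below is illustrative only: it uses HMAC-SHA256 from Python's standard library in place of the network's real (unspecified) hashing functions, and all names and data are invented.

```python
import hashlib
import hmac
import secrets

# AC's database of subscriber secret keys (invented example data)
DATABASE = {"ms-001": b"shared-secret-key"}

def signed_response(key, challenge):
    # step 5: response = hash of K and the challenge
    return hmac.new(key, b"response:" + challenge, hashlib.sha256).digest()

def session_key(key, challenge):
    # step 6: a different hashing function yields the session key
    return hmac.new(key, b"session:" + challenge, hashlib.sha256).digest()

# steps 1-4: MS identifies itself, AC looks up K and generates a challenge
identity = "ms-001"
k = DATABASE[identity]
challenge = secrets.token_bytes(16)

# steps 5-10: AC and MS independently compute the response from their copies of K
expected = signed_response(k, challenge)
ms_response = signed_response(k, challenge)   # MS uses its own copy of K

# steps 11-12: the responses are compared; equality authenticates the MS
authenticated = hmac.compare_digest(expected, ms_response)
print(authenticated)  # True: the MS holds the right key, session key now shared
```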

Challenge-Response Protocol
[Figure: message flow — MS sends Identity (i) to the BS, which forwards it to the AC; the AC looks up key K in its database of identity/key pairs (I1:K1, I2:K2, …), then returns the challenge, the response and the session key Ks to the BS; the BS challenges the MS, which returns its response.]

Challenge response protocol in GSM
1) MS sends its IMSI (International Mobile
Subscriber Identity) to the VLR
2) VLR sends the IMSI to the AC via the HLR
3) AC looks up in the database and gets the
authentication key Ki using the IMSI
4) The authentication centre generates RAND
5) It combines Ki with RAND to produce SRES
using the A3 algorithm
6) It combines Ki with RAND to produce Kc using the A8
algorithm

Challenge response protocol in GSM
7) The AC provides the HLR with a set of (RAND, SRES, Kc)
triplets
8) HLR sends one set to the VLR to authenticate the MS
9) VLR sends RAND to the MS
10) MS computes SRES and Kc using Ki and RAND
11) MS sends SRES to the VLR
12) VLR compares the two SRES's received from the HLR
and the MS. If they are equal, the MS is authenticated
SRES = A3(Ki, RAND) and Kc = A8(Ki, RAND)

TETRA Protocol
Protocol flow
1) MS sends its TMSI (TETRA Mobile Subscriber Identity) to
the BS (normally a temporary identity)
2) BS sends the TMSI to the AC
3) AC looks up in the database and gets the authentication
key K using the TMSI
4) AC generates an 80-bit random seed (RS)
5) AC computes KS (session authentication key, 128 bits)
using K and RS
6) AC sends KS and RS to the BS
7) BS generates an 80-bit random challenge called RAND1
and computes a 32-bit expected response called XRES1

TETRA Protocol
8. BS sends RAND1 and RS to the MS
9. MS computes KS using K and RS
10. Then MS computes RES1 using KS and
RAND1
11. MS sends RES1 to the BS
12. BS compares RES1 and XRES1. If they
are equal, the MS is authenticated.
13. BS sends the result 'R1' of the
comparison to the MS

Authentication of the user
Protocol Flow
1) A random number called RS is chosen by the AC
1a) AC uses K and RS to generate the session key
(KS), using the TA11 algorithm
2) AC sends the KS and RS to the base station
3) BS generates a random number called RAND1
4) BS computes the expected response (XRES1) and
the Derived Cipher Key (DCK1) using TA12

Authentication of the user
5) BS sends RS and RAND1 to the MS
6) MS, using its own key K and RS, computes KS
(session key) using TA11, and also computes
RES1 and DCK1 using TA12
7) MS sends RES1 to the BS
8) BS compares XRES1 with RES1

Comparison
•Challenge response Protocol
•Advantages
Simpler than public key techniques
Less processing power required in the
handset
•Disadvantages
Network has to maintain a Database of
secret keys of all the Mobile stations
supported by it

Comparison
•Public key Protocol
Advantages
•The network doesn't have to share secret keys
with the MS
•The network doesn't have to maintain a database of
the mobiles' secret keys
Disadvantages
•Requires high processing power in the mobile
handsets to carry out the complex computations
in real time

Hybrid Protocol
•Combines the challenge-response
protocol with a Public key scheme.
•Here the AC also acts as the Certification
authority
•AC has the public key PAC and private
key SAC for a public key scheme.
•MS also has the public key pi and private
key si for a public key scheme.

End to End Encryption Requirements
•Authentication
•Key Management
•Encryption synchronisation for multimedia
(e.g. video)

Secret key Methods
Advantages
•Less complex compared to public key
methods
•Less processing power required for
implementation
•Higher encryption rates than the public key
techniques
Disadvantages
•Difficult to manage keys

Public key Methods
Advantages
•Easy to manage keys
•Capable of providing Digital Signatures
Disadvantages
•More complex and time consuming
computations
•Not suitable for bulk encryption of user
data

Combined Secret-key and Public-key
Systems.
•Encryption and Decryption of User Data
(Private key technique)
•Session key Distribution (Public key
technique)
•Authentication (Public key technique)

Possible Implementation
•Combined RSA and DES
•Encryption and Decryption of Data (DES)
•Session key distribution (RSA)
•Authentication (RSA and MD5 Hash
Function)
•Combined Rabin’s Modular Square Root
(RMSR) and SAFER

One Way Hashing Functions
•A one-way hash function, H(M), operates on an
arbitrary-length pre-image M and returns a fixed-
length value h:
h = H(M), where h is of length m.
The pre-image should contain some kind of binary
representation of the length of the entire
message. This technique overcomes a potential
security problem resulting from messages with
different lengths possibly hashing to the same
value (MD-strengthening).

Characteristics of hash functions
•Given M, it is easy to compute h.
•Given h, it is hard to compute M such that
H(M) = h.
•Given M, it is hard to find another
message M′ such that H(M′) = H(M).
•MD5 Hashing Algorithm:
Variable-length message → MD5 → 128-bit message digest
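These properties are easy to observe with Python's standard hashlib: MD5 always returns a 128-bit (16-byte) digest regardless of the pre-image length, and any change to M changes h.

```python
import hashlib

# fixed-length output: h = H(M) is 128 bits whatever the length of M
short = hashlib.md5(b"M").digest()
long_ = hashlib.md5(b"M" * 100000).digest()
print(len(short), len(long_))   # 16 16

# a small change in the pre-image yields a completely different h
print(hashlib.md5(b"message").hexdigest() ==
      hashlib.md5(b"messagf").hexdigest())   # False
```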

Questions
1) Describe the Challenge Response Protocol for authentication and
key agreement.
2) Describe the Challenge Response Protocol for GSM.
3) Describe the TETRA Protocol for authentication and key
agreement.
4) Describe the authentication of the user in mobile communications
and networking.
5) Describe the end-to-end encryption requirements, secret key
methods, public key methods and possible implementations.
6) Explain the principle of public/private key encryption. How can
such encryption schemes be used to authenticate a message and
check integrity?
7) Describe different types of data encryption standards.