Information Theory and Coding
(CSE2013)
Presented By:
Dr. Neeraj Kumar Misra
Associate Professor
Department of SENSE,
VIT-AP University
Google Scholar: https://scholar.google.co.in/citations?user=_V5Af5kAAAAJ&hl=en
Research Gate profile: https://www.researchgate.net/profile/Neeraj_Kumar_Misra
ORCID iD: https://orcid.org/0000-0002-7907-0276
Agenda
Quiz
Introduction to Information Theory
What is Information?
Properties of Information
Information Rate
Entropy
Numerical problems
Q1. An event has two possible outcomes with probability P1 = 1/2 and P2 = 1/64 . The rate
of information with 16 outcomes per second is:
a. 38/4 bit/sec
b. 38/64 bit/sec
c. 38/2 bit/sec
d. 38/32 bit/sec
Q2. For a system having 16 distinct symbols, maximum entropy is obtained when the
probabilities are
a. 1/8
b. 1/4
c. 1/3
d. 1/16
Q3. An analog signal band-limited to 10 kHz is quantized into 8 levels with probabilities 1/4, 1/4, 1/5, 1/5,
1/10, 1/10, 1/20, and 1/20. Find the entropy and the rate of information.
a. 2.84 bits/message, 56800 bits/sec
b. 3.42 bits/message, 6.823 bits/sec
c. 1.324 bits/message, 2.768 bits/sec
d. 4.567 bits/message, 8.653 bits/sec
Quiz
Q. If the source emits m messages [m1, m2, m3, ...] with probabilities
[p1, p2, p3, ...], calculate the entropy when all the messages are equiprobable.
Q. A source produces 4 symbols with probabilities 1/2, 1/4, 1/8 and 1/8. For this source
a practical coding scheme has an average code-word length of 2 bits/symbol. Calculate
the efficiency of the code.
Numerical Problem
Information Theory and Coding
Course Objective
To define and apply the basic concepts of information theory (entropy, channel capacity etc.)
To study different types of channels in communication
To learn the principles and applications of information theory in communication systems
To study various data compression methods and describe the most common such methods
To understand the theoretical framework upon which error-control codes are built
Course Outcomes
quantify the notion of information in a mathematically sound way
explain the significance of this quantitative measure of information in communication
systems
calculate entropy, joint entropy, relative entropy, conditional entropy, and channel capacity of a
system
differentiate between lossy and lossless compression techniques
decide an efficient data compression scheme for a given information source
explain the impact of feedback and/or many senders or receivers on the communication
systems
Module No. 1 Introduction 6 Hours
Introduction to Information Theory, Information rate and entropy, Measure of information, Properties of entropy of a
binary memoryless source.
Module No. 2 Entropy, Relative Entropy, and Mutual Information 6 Hours
Joint entropy, Conditional entropy, Relative entropy, Mutual information, Discrete memoryless channels - BSC, BEC,
noise-free channel, Channel with independent I/O, Cascaded channels.
Module No. 3 Lossless Compression 7 Hours
Channel capacity, Shannon limit, Source Coding, Shannon Fano coding, Shannon Fano Elias coding, Huffman coding,
Minimum variance Huffman coding, Adaptive Huffman coding, Arithmetic coding.
Module No. 4 Communication and Channel Capacity 8 Hours
Dictionary coding – LZ77, LZ78, LZW, Channel coding, Channel coding theorem for DMC,
Block codes – Hamming weight, Hamming distance, Minimum distance decoding – Single
parity codes, Hamming codes.
Module No. 5 Encoding–Decoding Schemes 8 Hours
Repetition codes – Linear block codes, Cyclic codes – Syndrome calculation, Encoder and
decoder – CRC, Convolutional codes – code tree, trellis, state diagram – encoding, decoding.
Module No. 6 Special Topics in ITC 8 Hours
Sequential search and Viterbi algorithm – Principle of Turbo coding, Interleaved
convolutional codes, Special topics in information theory and coding.
Recommended Books
Text Books
1. R. Togneri, C. J. S. deSilva, “Fundamentals of Information Theory and Coding Design”, 1e,
CRC Press, Imprint: Taylor and Francis, 2003.
2. R. Bose, “Information Theory, Coding and Cryptography”, 3e paperback, Tata McGraw
Hill, 2016.
3. T. M. Cover, J. A. Thomas, “Elements of Information Theory”, 2e, Wiley, 2008.
References
1. R. J. McEliece, “The Theory of Information and Coding”, Cambridge University Press,
2004.
2. S. Roman, “Coding and Information Theory”, Springer, 1997.
Claude Shannon
Father of Digital
Communications
“Claude Shannon's creation in the 1940's of
the subject of information theory is arguably
one of the great intellectual achievements of
the twentieth century”
Bell Labs
Computing and Mathematical
Sciences Research
Fundamentals of Information Theory
Information Theory
and
Coding
Q1. In a binary system, '0' occurs with probability 1/4 and '1' occurs with probability 3/4.
Calculate the amount of information conveyed by each bit.
Ans. I1 = 2 bits, I2 = 0.415 bits
Q2. If there are M equally likely and independent messages, prove that the amount of
information carried by each message is I = N bits, where M = 2^N and N is an integer.
Q3. Prove the following statement:
If the receiver knows the message being transmitted, then the amount of information carried is
zero.
Q4. If I1 is the information carried by message m1 and I2 is the information carried by
message m2, prove that the amount of information carried compositely by m1 and
m2 is I(1,2) = I1 + I2.
Numerical Problem
Coding Theory
Q. Comment on the information content of the following messages:
1. Tomorrow the sun will rise from the east.
2. It will snow in Amaravati this winter.
3. The phone will ring in the next one hour.
Ans. 1. The first statement does not carry any information, since it is certain that the sun always
rises from the east. The probability of occurrence of the first event is high (certain), hence it
carries negligible information:
Ik = log(1/pk) = log(1/1) = 0
2. In the winter season snowfall in Amaravati is very rare. Hence the probability of
occurrence of this event is very small, so it carries a large amount of information.
3. The third statement predicts a phone ring within a span of one hour. It does
not mention the exact time, only the one-hour span, hence it carries moderate
information.
Numerical Problem
Q. A source emits one of 4 possible symbols X0 to X3 during each
signalling interval. The symbols occur with the probabilities given in the
table.
Find the amount of information gained by observing the source
emitting each of these symbols, and also find the entropy of the source.
Ans. I0 = 1.322 bits, I1 = 1.737 bits, I2 = 2.322 bits, I3 = 3.322 bits
H(x) = 1.84 bits/message symbol
Symbol Probability
X0 P0=0.4
X1 P1=0.3
X2 P2=0.2
X3 P3=0.1
Q. An analog signal band-limited to B Hz is sampled at the Nyquist rate. The
samples are quantized into 4 levels. Each level represents a message, so there
are 4 messages. The probabilities of occurrence of these 4 levels are
P1 = P4 = 1/8 and P2 = P3 = 3/8. Find the information rate of the source.
Ans. R = 3.6B bits/sec
Numerical Problem
Q1. The entropy of sure event is
a. Zero
b. One
c. Infinity
d. None
Q2. The entropy of impossible event is
a. Zero
b. One
c. Infinity
d. None
Q3. Entropy is maximum when probability
a. P = 0
b. P = 1
c. P = ∞
d. P = 1/2
Q4. How to define the information rate
a. R = r/H
b. R = H/r
c. R = r. H
d. None
Quiz
Q5. If the bandwidth is B then what is the Nyquist rate
a. B/2
b. B
c. B/4
d. 2B
Q6. Calculate the information when P=1/4
a. 4 bits
b. 3 bits
c. 1 bits
d. 2 bits
Fill in the blanks
Q1. Entropy means _____ degree of randomness _____
Q2. Entropy is maximum when probability _____ p = 1/2 _____
Q3. Unit of information _____ bits/symbol _____
Q4. If the probability of occurrence is less, then the information is _____ more _____
Measure of Information
Quick Revision
Entropy Concept
Entropy means degree of randomness: a higher level of disorder gives a higher
value of entropy.
0<P<1 means log (1/p) > 0
Then H(x) > 0
The entropy of sure (P = 1) or impossible (P = 0) event is zero
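These properties can be checked numerically. The sketch below (Python, not part of the original slides; the probability values are illustrative assumptions) evaluates H for a few distributions:

```python
# Minimal sketch (illustrative, not from the slides): entropy of a discrete source,
# H = sum of p * log2(1/p) in bits/symbol; terms with p = 0 contribute nothing.
import math

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([1.0]))        # sure event -> 0.0
print(entropy([0.5, 0.5]))   # equally likely binary source -> 1.0 (maximum)
print(entropy([0.9, 0.1]))   # skewed source -> about 0.469
```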
Q. A source transmits two independent messages with probabilities P
and (1-P) respectively. Prove that the entropy is maximum when both
messages are equally likely. Plot the entropy of the source versus
probability (0 < P < 1).
Numerical Problem
Numerical Problem
Q. For a discrete memoryless source there are 3 symbols with probabilities
P1 = α and P2 = P3. Determine the entropy of the source and sketch its
variation for different values of α.
Q. Find the entropy of a source in nats/symbol and Hartleys/symbol for a
source that emits one out of four symbols A, B, C, D in a statistically
independent sequence with probabilities 1/2, 1/4, 1/8, and 1/8.
Ans. H(s) = 1.2135 nats/symbol, 0.5268 Hartleys/symbol
Q1. A binary random variable X takes the value +2 or -2. The probability P(X = +2) = α.
The value of α for which the entropy of X is maximum is
a. 0
b. 1
c. 2
d. 0.5
Q2. Discrete source S1 has 4 equi-probable symbols while discrete source S2 has 16 equi-
probable symbols. When the entropy of these two source is compared the entropy of:
a.S1 is greater than S2
b. S1 is less than S2 -
c. S1 is equal to S2
d. Depends on rate of symbols/second
Quiz 3
Q3. If all the messages are equally likely, then the entropy will be
a. Minimum
b. Zero
c. Maximum
d. None
Q4. In a binary source 0s occur three times as often as 1s. What is the information
contained in the 1s?
a. 0.415 bit
b. 0.333 bit
c. 3 bit
d. 2 bit
Fill in the blanks
Q5. What is the unit of entropy? ------- bits/symbol
Q6. What is the unit of information rate? ------- bits/sec
Extension of zero order source
Two symbols S1 and S2 with probabilities P1 and P2
Second order extension of the source:
Order of extension = 2, so 2^2 = 4 possible combinations
S1 S1 → P1 P1 = P1^2
S1 S2 → P1 P2
S2 S1 → P2 P1
S2 S2 → P2 P2 = P2^2
Numerical Problem
Q. Consider a discrete memoryless source with alphabet S = {S0, S1, S2}
with source statistics {0.7, 0.15, 0.15}.
a) Calculate the entropy of the source
b) Calculate the entropy of the 2nd order extension of the source
Q. A source generates information with probabilities P1 = 0.1, P2 = 0.2, P3 = 0.3
and P4 = 0.4. Find the entropy of the system. What percentage of the maximum
possible information is being generated by the source?
Q1. An event has two possible outcomes with probability P1 = 1/2 and P2 = 1/64. The
rate of information with 16 outcomes per second is
A. 24/3
B. 64/6
C. 38/4
D. None
Q2. Entropy is always
a. Negative
b. Fractional
c. Positive
d. None
Q3. How to define the efficiency of the channel
a. η = H(s) x |H(s)|max
b. η = H(s) + |H(s)|max
c. η = H(s) - |H(s)|max
d. η = H(s) / |H(s)|max
Q4. How to define the redundancy (R) of the channel
a. R = 1 + η
b. R = 1 / η
c. R = 1 x η
d. R = 1 - η
Numerical Problem
Q. Consider a system emitting three symbols A, B and C with respective
probabilities 0.7, 0.15 and 0.15. Calculate the efficiency and redundancy.
Ans. Efficiency = 74.52%, Redundancy = 25.48%
Q. Prove that the entropy of the nth order extension of a source is represented in the
form H(S^n) = n · H(S)
Numerical Problem
Q. Consider a telegraph source having two elements, dot and dash. The dot
duration is 0.2 sec and the dash duration is 3 times the dot duration. The
probability of the dot occurring is twice that of the dash, and the time between
symbols is 0.2 second. Assume 1200 symbols. Calculate the information rate of
the telegraph source.
Ans. R = 1.7218 bits/sec
Q3. An image uses 512x512 picture elements. Each picture element
can take any of 8 distinguishable intensity levels. Calculate the
maximum number of bits needed.
Ans. 786432 bits
Numerical Problem
Q1. When the base of the logarithm is 2, then the unit of measure of information is
a) Bits
b) Bytes
c) Nats
d) None of the mentioned
Q2. When probability of error during transmission is 0.5, it indicates that
a) Channel is very noisy
b) No information is received
c) Channel is very noisy & No information is received
d) None of the mentioned
Q3. An event has two possible outcomes with probability P1 = 1/2 & P2 = 1/64. The
rate of information with 16 outcomes per second is:
a) (38/4) bits/sec
b) (38/64) bits/sec
c) (38/2) bits/sec
d) (38/32) bits/sec
Quiz 6
Q4. When the base of the logarithm is e, the unit of measure of information is
a) Bits
b) Bytes
c) Nats
d) None of the mentioned
Q5. A binary source emits an independent sequence of 0's and 1's with probabilities p
and (1-p) respectively. For which value of p is the entropy maximum?
a) P=0
b) P=1
c) P=0.5
d) P=Infinity
Q6. An image uses 512x512 picture elements. Each picture element can take any of
4 distinguishable intensity levels. Calculate the maximum number of bits needed.
a) 512x512x3
b) 512x512x4
c) 512x512x2
d) 512x512x1
Joint Probability P(x, y)
Probability that x and y occur simultaneously,
where x and y represent random variables.
Conditional Probability P(x/y)
P(x/y) represents the conditional probability (probability of x when y is known)
P(x, y) = P(x/y) · P(y)
or
P(x, y) = P(y/x) · P(x)
If x and y are independent,
then P(x, y) = P(x) · P(y)
Average uncertainty of channel input: H(X); average uncertainty of channel output: H(Y)
Joint Entropy
H(X, Y) is the joint entropy of the channel input and output
Conditional Entropy
H(Y/X) is the average uncertainty of the channel output given that X was transmitted
H(X/Y) is the average uncertainty of the channel input given that Y was received
Important Property
Mutual Information
I(X; Y) = uncertainty about the channel input that is
resolved by observing the channel output
Discrete Memory-less Channel
Note
A discrete memoryless channel means the output depends only on the current input, not
on previous inputs.
Channel matrix represent the transition probability.
Channel Matrix
P(1/0) → probability that 0 was transmitted and 1 was received
P(0/1) → probability that 1 was transmitted and 0 was received
P(1/1) → probability that 1 was transmitted and 1 was received
P(0/0) → probability that 0 was transmitted and 0 was received
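As a rough illustration of how the channel matrix is used (not from the slides; the input probabilities and channel values below are assumed), the output probabilities follow from P(y) = Σx P(x)·P(y/x) and the joint probabilities from P(x, y) = P(x)·P(y/x):

```python
# Minimal sketch with assumed numbers: output probabilities from the channel matrix,
# and the joint probability matrix P(x, y) = P(x) * P(y|x).
import numpy as np

p_x = np.array([0.5, 0.5])                 # assumed input probabilities P(x1), P(x2)
p_y_given_x = np.array([[0.9, 0.1],        # assumed channel matrix, rows = inputs
                        [0.2, 0.8]])       # e.g. P(y2|x1) = 0.1

p_y = p_x @ p_y_given_x                    # output probabilities P(y1), P(y2)
p_xy = np.diag(p_x) @ p_y_given_x          # joint probability matrix P(x, y)
print(p_y)     # [0.55 0.45]
print(p_xy)    # [[0.45 0.05]
               #  [0.1  0.4 ]]
```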
Q1. Which is the correct relationship between Joint entropy and conditional
entropy?
A) H(X, Y) = H(X/Y) + H(Y)
B) H(X, Y) = H(Y/X) + H(X)
C) Both a and b
D) None
Q2. Which of the following is NOT a valid expression for the mutual information?
A) I(X;Y) = H(X) - H(X/Y)
B) I(X;Y) = I(Y;X)
C) I(X;Y) = H(Y) - H(X/Y)
D) I(X;Y) = H(X) + H(Y) - H(X,Y)
Quiz
Q3. For which value(s) of p is the binary entropy function H(p) maximized?
A) 0
B) 0.5
C) 1
D) 1.2
Q4. When the event are independent then which relations is correct?
A) P(x, y)= P(x) + P(y)
B) P(x, y)= P(x) -P(y)
C) P(x, y)= P(x) / P(y)
D) P(x, y)= P(x). P(y)
Q5. How to represent the conditional probability when 1 was transmitted and 0 received
A) P(1/1)
B) P(1/0)
C) P(0/0)
D) P(0/1)
Q6. A heavy snowfall in Amaravati means
A) Information is more
B) Information is less
C) Information is equal
D) None
Q7. 1 cm=10^-2 m means
A) Information is more
B) Information is less
C) Information is equal
D) None
Channel Matrix, Noise Matrix or Probability Transition Matrix
Lossless Channel
Important Points
Only one non-zero element in each column
In a lossless channel no source information is lost in transmission
Each output symbol has received some information
Deterministic Channel
Only one non-zero element in each row; this element must be unity
Only one output symbol will be received for a given source symbol
Everything is determined in advance
Noiseless Channel
It has both properties:
Lossless
+
Deterministic channel
Binary Symmetric Channel
This channel has probability distribution
Calculation of Output Probability
Properties
Q Given a binary channel shown below
a. Find the channel matrix of the channel.
b. Find the P(y1) and P(y2) when P(x1)=P(x2)=0.5
c. Find the joint probability P(x1,y2) and P(x2,y1) when P(x1)=P(x2)=0.5
Quick Revision
A discrete memoryless channel means the output depends only on the current
input, not on previous inputs.
The channel matrix represents the transition probabilities.
P(0/1) represents the probability that 1 was transmitted and 0 was received.
P(Y/X) represents the conditional probability matrix or channel matrix.
The joint probability describes the complete channel as a whole.
In a lossless channel there is only one non-zero element in each column.
In a deterministic channel there is one non-zero element in each row, and that element
must be unity.
A noiseless channel satisfies both the lossless and the deterministic channel
properties.
Q1. In Lossless channel the important features are
a. Non-zero elements in each column
b. Non-zero elements in each row
c. Non-zero elements in both column and row
d. None
Q2. In Deterministic Channel the important features are
a. Non-zero elements in each column
b. Non-zero elements in each row
c. Non-zero elements in both column and row
d. None
Q3. In Noiseless Channel the important features are
a. Non-zero elements in each column
b. Non-zero elements in each row
c. Non-zero elements in both column and row
d. None
Quiz
Q4. Which property is correct for mutual information
a. I(X;Y)=I(Y;X)
b. I(X;Y)>0
c. I(X;Y) = H(Y) - H(Y/X)
d. I(X;Y) = H(X) + H(Y) - H(X, Y)
e. All of these
Q5. Channel matrix represent the
a. Transition probability
b. Steady state probability
c. Conditional probability
d. None
Fill in the blank
1. Unit of Entropy_____________
2. Unit of Information rate_________
Calculation of Output Probability
Properties
Q. A discrete source transmits messages {x0, x1, x2} with probabilities {0.3, 0.4, 0.3}. The
source is connected to the channel given in the figure.
Calculate H(x), H(y), H(x,y), H(x/y), and H(y/x)
Numerical Problem
Hint: Useful formula to solve this question
Q. A Channel has the following channel matrix
P(y/x) =
a) Draw the channel diagram
b) If the source has equally likely outputs, compute the probabilities
associated with the channel outputs for P = 0.2
Cascaded Channel based Numerical Problem
Q. Two binary symmetric channels are connected in cascade as shown in the
figure.
a) Find the channel matrix of the resultant channel
b) Find P(z1) and P(z2) if P(x1)=0.6 and P(x2)=0.4
Cascaded Channel based Numerical Problem
Q1. Which relationship is correct
a. P(x, y)= P(y/x). P(y)
b. P(x, y)= P(x/y). P(x)
c. P(x, y)= P(y/x). P(z)
d. P(x, y)= P(y/x). P(x)
Q2. Which relationship is correct for cascaded channel
a. P(z/x) = P (x/y). P(z/y)
b. P(z/x) = P (y/x). P(y/z)
c. P(z/x) = P (y/x). P(z/y)
d. None
Q3. An event has two possible outcomes with probability P1=1/2 and P2=1/64. The
rate of information with 16 outcomes per second is
a. 19/2
b. 25/3
c. 35/6
d. None
Quiz
Cascaded Channel
Mutual Information for Channel 1: I(x, y) = H(y) -H(y/x)
Mutual Information for Channel 2: I(y, z) = H(z) -H(z/y)
Mutual Information for Cascaded Channel: I(x, z) = H(z) -H(z/x)
For cascaded channel I(x, z) < I (x, y)
Cascaded channels based Numerical Problems
Q. Two channels are cascaded as shown below
Given P(x1)=P(x2)=0.5
Show that I(x, z) < I(x, y)
Channel Capacity (Cs)
The maximum mutual information is called the channel capacity (Cs).
Channel capacity indicates the utilization factor of the channel.
Cs = max { I(x; y) } bits/symbol -------(1)
where max { I(x; y) } represents the maximum value of the mutual information.
Channel capacity (Cs) indicates the maximum number of bits the channel is capable of
transmitting per symbol.
C = r x Cs bits/sec
where C represents the channel capacity in bits/sec and r represents the symbol rate.
Channel capacity: Cs = max { I(x; y) } bits/symbol; C = r x Cs bits/sec
For a symmetrical and uniform channel: C = (log S - H) bits/sec,
where S represents the number of output symbols
Channel efficiency: η = I(x; y) / C
Redundancy = 1 - η
The Capacity of Special Channels
1) Lossless Channel: For a lossless channel the conditional entropy H(x/y) = 0
I(x; y) = H(x) - H(x/y) = H(x) - 0 = H(x)
Cs = max { I(x; y) } → maximum of the mutual information
Cs = max { H(x) }
Cs = log m
where m represents the number of messages (input symbols)
2. Deterministic Channel: Only one non-zero element in each row
Conditional entropy: H(y/x) = 0
Information: I = log(1/p) = log(1/1) = 0
Entropy: H = I · P = 0 · P = 0
Mutual information: I(x; y) = H(y) - H(y/x) = H(y)
Channel capacity: Cs = max I(x; y) = max H(y)
Cs = log n
where n represents the number of different output symbols
3. Noiseless Channel
H(x) = H(y), both entropies are the same
H(y/x) = H(x/y) = 0
Cs = max { I(x; y) }
Cs = log m = log n
4. Binary Symmetric Channel
Cs = 1 + p log p + (1 - p) log (1 - p)
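A small sketch (not from the slides) of the BSC capacity formula above, with p the crossover probability:

```python
# Minimal sketch: Cs = 1 + p*log2(p) + (1-p)*log2(1-p) bits/symbol for the BSC.
import math

def bsc_capacity(p):
    if p in (0.0, 1.0):
        return 1.0                                 # error-free (or fully inverted) channel
    return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

print(bsc_capacity(0.5))   # 0.0 -- a useless channel
print(bsc_capacity(0.1))   # about 0.531 bit/symbol
```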
Numerical
Q2. For the channel matrix given below compute the channel
capacity, given rs = 1000 symbols/sec
Useful Hint
1. Find the entropy H
2. Use the formula C = (log S - H) bits/sec
Q1. What is the value of Information, Entropy and channel capacity, in the
deterministic channel.
a) 0, 1, log(n)
b) 1, 0 , log(n)
c) 1, 1, log(n)
d) 0, 0, log(n)
Q2. For symmetrical and uniform channel, how to define the capacity
a) C = (log S + H) bit/sec
b) C = (H + log S) bit/sec
c) C = (H -log S) bit/sec
d) C = (log S -H) bit/sec
Quiz
Q3. Channel capacity (Cs) indicates the _________ number of bits the channel is capable of
transmitting through the channel per symbol
a) Lowest
b) Equal
c) Maximum
d) None
Q4. Calculate the Entropy for symmetrical and uniform channel, when
P(y/x) =
a) 5/8
b) 6/8
c) 7/8
d) None
Q5. For the binary symmetric channel, how to define the capacity
a) Cs= 1+ P log p + (1+P) log (1+p)
b) Cs= 1+ P log p + (1+P) log (1-p)
c) Cs= 1+ P log p + (1-P) log (1+p)
d) Cs= 1+ P log p + (1-P) log (1-p)
Numerical
Q2. Determine the capacity of the channel shown below
Useful Hint
1. Find the channel matrix P(y/x)
2. Find the Entropy H
3. Use formula C = log S -H
Numerical
Q6. Two noisy channels are cascaded; their channel matrices are given below.
With P(x1)=P(x2)=1/2, find the overall mutual information I(X;Z)
Channel Diagram Of Cascaded Channel
Equivalent channel diagram
Shannon Hartley Law
For error free transmission Cs > r
where Cs is the channel capacity
and r is the bit rate
Relationship between capacity, bandwidth and signal-to-noise ratio:
C = B log2(1 + S/N) bits/sec
where S/N = signal-to-noise ratio (as a power ratio)
and B = bandwidth in Hz
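As a sketch of how the law is applied (Python, not from the slides; the 20 dB figure is taken from Q1 below, and converting dB to a power ratio first is the assumed usage):

```python
# Minimal sketch of the Shannon-Hartley law C = B * log2(1 + S/N).
import math

def shannon_capacity(bandwidth_hz, snr_ratio):
    return bandwidth_hz * math.log2(1 + snr_ratio)

snr = 10 ** (20 / 10)                  # 20 dB -> power ratio of 100
print(shannon_capacity(3000, snr))     # about 19,975 bits/sec for B = 3000 Hz
```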
Q1. Assuming that a channel has bandwidth B = 3000 Hz and a typical signal-
to-noise ratio (SNR) of 20 dB, determine the maximum data rate that can
be achieved.
Q2. A channel has a bandwidth of 8 kHz and a signal-to-noise ratio of 31.
For the same channel capacity, if the SNR is increased to 61, find the
new bandwidth of the channel.
Q3. In a communication system the S/N ratio at the input of the receiver is 15.
Determine the channel capacity if the bandwidth is
a) B = 1 kHz
b) B = 1 MHz
c) B = 1 GHz
Q1. A communication channel with AWGN operating at SNR >> 1
and bandwidth B has capacity C1. If the SNR is doubled
keeping B constant, find the resulting capacity C2 in terms of
C1 and B.
Numerical
Fact
1. If there is more uncertainty about the message, the information carried is also more.
2. I know that after finishing my lecture I will go to my cabin, so the amount of information
carried by that message is zero.
3. Hi → I1
How are You → I2
Hi How are You → I1 + I2
4. P (Sun rises in the east) > P (Sun rises in the west)
I (Sun rises in the east) < I (Sun rises in the west)
Q. Comment on the information content of the following messages:
1. Tomorrow the sun will rise from the east.
2. It will snow in Amaravati this winter.
3. The phone will ring in the next one hour.
Numerical Problem
Average Information (Entropy)
H = Σ Pk log2(1/Pk) bits/symbol
1. Entropy is zero if the event is sure or impossible:
H = 0 if P = 1 or P = 0
2. Upper bound on entropy: Hmax = log M
Information Rate
R = r · H information bits/second
R = (r in messages/sec) x (H in information bits/message)
Unit of information rate → information bits/second
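A quick sketch (not from the slides) combining the two formulas, using the four-level quantizer probabilities from the earlier numerical and an assumed B = 1 kHz, so that r = 2B = 2000 messages/sec:

```python
# Minimal sketch: H in bits/message, then R = r * H in bits/second.
import math

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

probs = [1/8, 3/8, 3/8, 1/8]      # the four quantizer levels from the earlier example
r = 2 * 1000                       # Nyquist-rate messages/sec for an assumed B = 1 kHz
H = entropy(probs)                 # about 1.81 bits/message
print(r * H)                       # about 3623 bits/sec, i.e. roughly 3.6B bits/sec
```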
Extension of Discrete Memoryless Source
H(X^n) = n H(X)
where n is the number of successive symbols in one group or block
Communication Channel
Communication channel has three main parts
1.Transmitter
2.Receiver
3.Channel
If the transmitted data is discrete, the channel is known as a discrete channel.
Binary Erasure Channel (BEC)
• Most commonly used channel in digital communication.
• This channel provides 100% data recovery: it always ensures that the
correct message is delivered.
• If a symbol is received in error it is marked as an erasure and discarded, and a
retransmission is requested from the transmitter over a reverse channel until
the correct symbol is received at the output.
Numerical Problem
Q. In the binary erasure channel (BEC), input message probability is x and (1-
x). Find the capacity of the channel.
Useful Hint
1. Find P(B/A)
2. Find H(B/A)
3. Find H(A)
4. Find P(A, B) = P(B/A). P(A)
5. From P(A,B) matrix find P(B) matrix
6. Find P(A/B) = P(A, B) / P(B)
7. From the P(A/B) matrix find H(A/B) using the expression H(A/B) = Σ P(A,B) log(1/P(A/B))
8. Put H(A/B) value in channel capacity equation as mentioned below
9. C = max [H(A) – H(A/B)]. r
Q. In a binary erasure channel P=0.1, P(x1)=0.6 and P(x2)=0.4.
Determine
a) mutual information,
b) channel capacity,
c) channel efficiency and
d) redundancy.
Q1.
Q2. In a communication system, a given rate of information transmission requires channel
bandwidth B and signal-to-noise ratio SNR. If the channel bandwidth is doubled for the same rate of
information, then the new signal-to-noise ratio will be
Quiz
Q3 The channel capacity under the Gaussian noise environment for a discrete memory-
less channel with a bandwidth of 4 MHz and SNR of 31 is
1. 19.816 Mbps
2. 4 Mbps
3. 8 kbps
4. 4 kbps
Q4 The Shannon’s Theorem sets limit on the
1. Highest frequency that may be sent over channel
2. Maximum capacity of a channel with a given noise level
3. Maximum number coding levels in a channel
4. Maximum number of quantizing levels in a channel
Quiz
Q. An analog signal having 4 kHz bandwidth is sampled at 1.25 times the Nyquist rate and
each sample is quantized into one of 8 equally likely levels. Assume that the
successive samples are statistically independent.
a. What is the information rate of this source?
b. Can the output of this source be transmitted without error over an AWGN channel
with a bandwidth of 10 kHz and an SNR of 20 dB?
c. Find the SNR required for error-free transmission of part (a).
d. Find the bandwidth required for an AWGN channel for error-free transmission of the
output of this source if the SNR is 20 dB.
Hint
1. Nyquist rate fs = 2fm
2. Sampling rate r = 1.25 x fs
3. If all messages are equally likely then H(x) = log m
4. R = r · H(x)
5. Use the channel capacity formula C = B log2(1 + S/N) bits/sec
6. For error-free communication C > R
Source Coding
A conversion of the output of a DMS into a sequence
of binary symbols is known as source coding.
The device that performs this conversion is known as the source encoder.
Aim
To minimize the average bit rate required for the representation of the source by
reducing the redundancy and increasing the efficiency of the information source.
Terminology
1. Code length: The length of a codeword is the number of binary digits in
the codeword.
2. Code efficiency:
When η approaches unity the code is said to be efficient
Code efficiency η = Lmin / L = H(s) / L
3. Code redundancy:
Code redundancy = 1 - η
Average code length L = Σ P(xi) · li
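A short sketch (not from the slides) of these definitions, using the two-symbol DMS from the next slide:

```python
# Minimal sketch: average code length L = sum(P(xi)*li), efficiency eta = H(S)/L,
# redundancy = 1 - eta.
import math

probs   = [0.9, 0.1]       # P(x1), P(x2) from the example that follows
lengths = [1, 1]           # codewords 0 and 1

H = sum(p * math.log2(1 / p) for p in probs)
L = sum(p * l for p, l in zip(probs, lengths))
eta = H / L
print(H, L, eta, 1 - eta)  # H ~ 0.469, L = 1, efficiency ~ 46.9%, redundancy ~ 53.1%
```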
Q. Consider a DMS with two symbols X1 and X2, with P(X1) = 0.9 and
P(X2) = 0.1. Symbols X1 and X2 are encoded as in the table. Find the efficiency
and redundancy of this code.
Q. A second-order extension of the DMS, denoted by X^2, is formed by taking
the source symbols two at a time. The coding of this extension is shown in the
table. Find the efficiency and redundancy of this extension code.
Xi        P(Xi)   Code
a1        0.9     0
a2        0.1     1
a1a1=x1   0.81    0
a1a2=x2   0.09    10
a2a1=x3   0.09    110
a2a2=x4   0.01    111
Source coding Theorem
Or
Shannon’s First Theorem
The source coding theorem states that for a DMS X with entropy H(x) the
average codeword length L per symbol is bounded by
L ≥ H(x)
If Lmin = H(x) then η = H(x)/L
As it establishes error-free encoding, it is called the noiseless coding theorem.
Data Compaction: Variable length source coding
algorithm (Entropy Coding)
Aim of Data Compaction:
• Efficiency in terms of the average number of bits per symbol.
• The original data can be reconstructed without loss of any information.
• Remove redundancy from the signal and increase efficiency prior to
transmission.
Techniques for data compaction
Three techniques:
1. Prefix coding or instantaneous coding
2. Shannon-Fano coding
3. Huffman coding
Steps in Shannon-Fano coding
1. Arrange the messages in the order of decreasing probability.
2. Divide the messages into two groups of almost equal total probability.
3. The messages in the first group are assigned the bit '0' and the messages in the
second group are assigned the bit '1'.
4. The procedure is repeated until no further division is possible (see the sketch below).
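A rough Python sketch of these four steps (not part of the slides; the split point here minimises the probability imbalance, so the lower branches may differ slightly from a hand-worked division). The symbol set is taken from the numerical that follows:

```python
# Minimal Shannon-Fano sketch: sort by decreasing probability, split into two
# nearly equal-probability groups, prefix with 0/1, and recurse.
def shannon_fano(items):
    """items: list of (symbol, probability); returns {symbol: codeword}."""
    if len(items) == 1:
        return {items[0][0]: ""}
    items = sorted(items, key=lambda x: x[1], reverse=True)   # step 1
    total, running, split, best_diff = sum(p for _, p in items), 0.0, 1, float("inf")
    for i in range(1, len(items)):                            # step 2: best split point
        running += items[i - 1][1]
        diff = abs(2 * running - total)
        if diff < best_diff:
            best_diff, split = diff, i
    codes = {}
    for prefix, group in (("0", items[:split]), ("1", items[split:])):   # steps 3-4
        for sym, code in shannon_fano(group).items():
            codes[sym] = prefix + code
    return codes

print(shannon_fano([("A", 0.30), ("B", 0.25), ("C", 0.20),
                    ("D", 0.12), ("E", 0.08), ("F", 0.05)]))
```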
Q. Encode using Shannon-Fano coding
Symbols A B C D E F
Probability 0.30 0.25 0.20 0.12 0.08 0.05
Numerical
Q. Encode using Shannon-Fano coding
Symbols A B C D E F
Probability 1/3 1/3 1/6 1/12 1/24 1/24
Numerical
Q2. A discrete memoryless source X has six symbols x1, x2, x3, x4, x5, x6
with
P(x1)=0.30,
P(x2)=0.25,
P(x3)=0.20,
P(x4)=0.12,
P(x5)=0.08 and
P(x6)=0.05.
Use the Huffman code to find the code length and efficiency.
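For cross-checking this kind of question, a small Huffman sketch using Python's heapq (not part of the slides; exact codewords depend on tie-breaking, but the code lengths and efficiency do not):

```python
# Minimal Huffman sketch: repeatedly merge the two least probable nodes,
# prefixing one subtree's codes with 0 and the other's with 1.
import heapq, itertools, math

def huffman(probs):
    """probs: {symbol: probability} -> {symbol: codeword}."""
    counter = itertools.count()          # tie-breaker so dicts are never compared
    heap = [[p, next(counter), {s: ""}] for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # two least probable nodes
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [p1 + p2, next(counter), merged])
    return heap[0][2]

probs = {"x1": 0.30, "x2": 0.25, "x3": 0.20, "x4": 0.12, "x5": 0.08, "x6": 0.05}
codes = huffman(probs)
L = sum(probs[s] * len(c) for s, c in codes.items())
H = sum(p * math.log2(1 / p) for p in probs.values())
print(codes)
print(L, H / L)     # L ~ 2.38 bits/symbol, efficiency ~ 99%
```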
Numerical
Q1. In Shannon-Fano coding, all the messages are arranged in the order of
a. Increasing Probability
b. Equal Probability
c. Decreasing probability
d. None
Q2. In Shannon-Fano coding, the messages are divided into
a. Three almost equal probable groups
b. Four almost equal probable groups
c. Two almost equal probable groups
d. None
Q3. In Shannon Fano Coding
a. First group is assigned the bit ‘1’ and second group are assigned the bit ‘0’.
b. First group is assigned the bit ‘1’ and the second group are assigned the bit ‘1’.
c. First group is assigned the bit ‘0’ and second group are assigned the bit ‘1’.
d. None
Quiz
Q1. Encode using Shannon Fano coding and find the efficiency.
Symbols A B C D E F
Probability 0.30 0.25 0.15 0.12 0.10 0.08
Numerical Question based on Shannon Fano Coding
Symbols A B C D E
Probability 0.4 0.19 0.16 0.15 0.10
Q2. Encode using Shannon Fano coding and find the efficiency.
Solution of Q1
Symbol   Probability   Division bits   Code   Length
A        0.30          0 0             00     2
B        0.25          0 1             01     2
C        0.15          1 0 0           100    3
D        0.12          1 0 1           101    3
E        0.10          1 1 0           110    3
F        0.08          1 1 1           111    3
Solution of Q2
Symbol   Probability   Division bits   Code
A        0.4           0 0             00
B        0.19          0 1             01
C        0.16          1 0             10
D        0.15          1 1 0           110
E        0.10          1 1 1           111
Numerical
Q3. Encode using Shannon Fano coding and find the efficiency.
Symbols       A        B         C        D         E        F      G       H
Probability   0.0625   0.03125   0.0625   0.03125   0.0625   0.50   0.125   0.125
Solution of Q3
Probability   Division bits   Code
0.50          0               0
0.125         1 0 0           100
0.125         1 0 1           101
0.0625        1 1 0 0         1100
0.0625        1 1 0 1         1101
0.0625        1 1 1 0         1110
0.03125       1 1 1 1 0       11110
0.03125       1 1 1 1 1       11111
Q1. Calculate the value of H(X^2) if H(X) = 1.5
a. 1 bits/symbols
b. 2 bits/symbols
c. 3 bits/symbols
d. None
Q2. Huffman coding technique is adopted for constructing the source code with ________
redundancy.
a.Maximum
b.Constant
c.Minimum
d.Unpredictable
Quiz
Q3. Which type of channel does not represent any correlation between input and output
symbols?
a.Noiseless Channel
b.Lossless Channel
c.Useless Channel
d.Deterministic Channel
Q4. On which factor the channel capacity depends on the communication system
a. bandwidth
b. Signal to noise ratio
c. Both a and b
d. None
Q1. A discrete memoryless source X has five symbols x1, x2, x3, x4, x5 with
P(x1)=0.4, P(x2)=0.19, P(x3)=0.16, P(x4)=0.15 and P(x5)=0.1. Use the
Huffman code to find the code length and efficiency.
Numerical
Numerical
Q. A DMS X has five symbols X1, X2, X3, X4, X5 with P(X1)=0.4,
P(X2)=0.19, P(X3)=0.16, P(X4)=0.15 and P(X5)=0.1.
a) Construct a Shannon-Fano code for X and calculate the
efficiency of the code.
b) Repeat for the Huffman code and compare the results.
Numerical
Q. Compare the Huffman coding and Shannon-Fano coding algorithms for data
compression. For a discrete memoryless source X with six symbols x1, x2, x3, x4, x5 and x6,
find the compact code for every symbol if the probability distribution is as follows:
P(x1)=0.3, P(x2)=0.25, P(x3)=0.2, P(x4)=0.12, P(x5)=0.08, P(x6)=0.05
Calculate
a. Entropy of the source
b. Average length of the code
c. Compare the efficiency and redundancy of both codings
Numerical
Q. An information source produces a sequence of independent symbols having the following
probabilities. Using Huffman encoding, find the efficiency.
Symbol        S0    S1    S2    S3    S4    S5    S6
Probability   1/3   1/27  1/3   1/9   1/9   1/27  1/27
Q1. For designing a communication system, which among the following
parameters should be maximum?
A. Transmission rate
B. Received signal-to-noise ratio
C. Error probability
D. Bandwidth requirement
Options:
A. A & B
B. C & D
C. A & C
D. B & D
Quiz
Main points
1. A drawback of the Huffman code is that it requires knowledge of a
probabilistic model of the source.
2. To overcome these practical limitations, Lempel-Ziv coding is helpful.
3. LZ coding is used for lossless data compression.
4. LZ coding parses the source data stream into segments that are the
shortest subsequences not encountered previously.
LZ (Lempel-Ziv Coding)
Numerical
Q. Encode the message AABABBBABAABABBBABBABB using LZ coding
Sol.
Assumption: A is denoted by 0 and B is denoted by 1
Parsed phrases and their dictionary positions:
Position:  1   2    3    4   5    6     7   8     9
Phrase:    A   AB   ABB  B   ABA  ABAB  BB  ABBA  BB
Codes (previous-phrase position, new symbol):
φA  1B  2B  φB  2A  5B  4B  3A  7
Binary code: 0 1, 1 10, 1 00, 1 10, 0 101, 1 100, 1 011, 0 0111
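A minimal LZ78-style parser sketch (not from the slides) that reproduces the phrase list above; it emits (previous-phrase index, new symbol) pairs, with index 0 standing for the empty phrase φ:

```python
# Minimal sketch of LZ78-style parsing: split the stream into the shortest
# substrings not seen before and emit (dictionary index of the prefix, new symbol).
def lz78_parse(message):
    dictionary, phrases, output = {}, [], []
    i = 0
    while i < len(message):
        prefix_index, j = 0, i
        while j < len(message) and message[i:j + 1] in dictionary:
            prefix_index = dictionary[message[i:j + 1]]
            j += 1
        phrase = message[i:j + 1]                       # shortest unseen substring
        dictionary[phrase] = len(dictionary) + 1
        phrases.append(phrase)
        output.append((prefix_index, phrase[-1] if j < len(message) else ""))
        i = j + 1
    return phrases, output

print(lz78_parse("AABABBBABAABABBBABBABB"))
# phrases: A, AB, ABB, B, ABA, ABAB, BB, ABBA, BB
# output:  (0,A) (1,B) (2,B) (0,B) (2,A) (5,B) (4,B) (3,A) (7,"")
```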
Q1. Expansion of LZ coding is _________
a) Lossy b) Lossless
c) Lempel-Ziv-Welch d) Lempel-Ziv
Q2. Expansion of LZW coding is ________
a) Lossy b) Lossless
c) Lempel-Ziv d) Lempel-Ziv-Welch
Q3. Dolby AC stands for_______
a) Dolby allocation coder b) Dolby acoustic coder
c) Dolby adaptive coder d) Dolby MPEG coder
Q4. Applications of Huffman Coding
(a) Text compression (b) Audio compression
(c) Lossless image compression (d)All of the above
Quiz
Q5. Which conveys more information?
(a) High probability event
(b) Low probability event
(c) High & Low probability event
(d) None of the mentioned
Q6. Full form of GIF
(a) Graphics Interchange Form
(b) Graphics Inter Format
(c) Graphics Interchange Format
(d) Graphics Interact Format
Encoded output   Index   Entry
-                1       a
-                2       b
-                3       c
1                4       ab
2                5       ba
4                6       abb
5                7       bab
2                8       bc
3                9       ca
4                10      aba
6                11      abba
1                -       -
Encoded message:
1, 2, 4, 5, 2, 3, 4, 6, 1
Message:
a b ab ba b c ab abb a  (= ababbabcababba)
Decode (same initial dictionary):
Index   Entry
1       a
2       b
3       c
4       ab
5       ba
6       abb
Received code:  1  2  4  5  2  3  4  6  1
Decoded:        a  b  ab ba b  c  ab abb a
Q. Decode this code using LZW coding; the initial dictionary is given:
3, 1, 4, 6, 8, 4, 2, 1, 2, 5, 10, 6, 11, 13, 6
Initial dictionary:
Index   Entry
1       a
2       b
3       r
4       t
Dictionary built while decoding:
5       ra
6       at
7       ta
8       ata
9       atat
10      tb
11      ba
12      ab
13      br
Received code:  3  1  4  6  8   4  2  1  2  5   10  6   11  13  6
Decoded:        r  a  t  at ata t  b  a  b  ra  tb  at  ba  br  at
Quiz
Q.1 The channel capacity is measured in terms of:
a. bits per channel
b. number of input channels connected
c. calls per channel
d. number of output channels connected
Q2. The channel capacity of a noise-free channel having m symbols is given by:
a. m^2
b. 2m
c. log(m)
d. m
Q3. An ideal power-limited communication channel with additive white Gaussian noise has a 4 kHz bandwidth
and a signal-to-noise ratio of 255. The channel capacity is:
a. 8 kilo bits / sec
b. 9.63 kilo bits / sec
c. 16 kilo bits / sec
d. 32 kilo bits / sec
Salient Features of LZW Coding
Uses the greedy approach
Most popular lossless compression
Reduces the size of files containing repetitive data
Fast and simple
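A small LZW encoder sketch (not from the slides); starting the working dictionary from {a, b, c}, as the worked aabaacabcabcb examples in these slides do, reproduces their encoded output:

```python
# Minimal sketch of LZW encoding: keep extending the longest matched string by
# one symbol; when the extension is new, emit the index of the match and add the
# extension to the dictionary.
def lzw_encode(message, initial_symbols):
    dictionary = {s: i + 1 for i, s in enumerate(initial_symbols)}   # indices start at 1
    output, current = [], ""
    for ch in message:
        if current + ch in dictionary:
            current += ch                          # keep growing the match
        else:
            output.append(dictionary[current])     # emit index of the longest match
            dictionary[current + ch] = len(dictionary) + 1
            current = ch
    output.append(dictionary[current])             # flush the final match
    return output, dictionary

codes, table = lzw_encode("aabaacabcabcb", ["a", "b", "c"])
print(codes)    # [1, 1, 2, 4, 3, 5, 8, 2, 3, 2]
```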
Q. Decode this code using LZW coding; the initial dictionary is given:
3, 1, 4, 6, 8, 4, 2, 1, 2, 5, 10, 6, 11, 13, 6
Index   Entry
1       a
2       b
3       r
4       t
(Dictionary entries 5-13 are built up during decoding.)
Q. Encode this message using LZW coding; the initial dictionary is given as
wabbabwabbabwabbab wabbab woo bwoo b woo
Dictionary Coding: LZW Coding
1 b
2 a
3 b
4 o
5 w
1 b
2 a
3 b
4 o
5 w
6 wa
7 ab
8 bb
9 ba
10 ab
11 bw
12 wab
13 bba
14 abw
15 wabb
16 bab
17 bwa
18 abb
19 babw
20 wo
21 oo
22 ob
23 bwo
24 oob
25 bwoo
Encoded message
wabbabwabbabwabbab wabbab woo bwoo b woo
Q1. Decode 1 1 2 4 3 5 8 2 3 2 using LZW coding; the initial dictionary is given as
Index   Entry
1       a
2       b
3       c
Dictionary built while decoding:
4       aa
5       ab
6       ba
7       aac
8       ca
Received code:  1  1  2  4   3  5   8   2  3  2
Decoded:        a  a  b  aa  c  ab  ca  b  c  b
Q. Encode this message using LZW coding if the initial dictionary
is given as
aabaacabcabcb
Index   Entry
1       a
2       b
3       c
4       d
Encoded output   Index   Entry
-                1       a
-                2       b
-                3       c
1                4       aa
1                5       ab
2                6       ba
4                7       aac
3                8       ca
5                9       abc
8                10      cab
2                11      bc
3                12      cb
2                13      b
Encoded message: 1, 1, 2, 4, 3, 5, 8, 2, 3, 2
Numerical
Q. The output of an information source consists of 200 symbols, 40 of
which have a probability of 1/32 and the remaining a probability of 1/256.
The source emits 3000 symbols/sec. Assuming that the symbols are
chosen independently, find the average information rate.
Hamming Weight and Hamming Distance
Hamming weight: The Hamming weight of a codeword is equal to the
number of non-zero elements in the codeword.
Hamming distance: The Hamming distance between two codewords is the number of places in
which the codewords differ. The Hamming distance between two codewords
C1 and C2 is denoted by d(C1, C2).
Lemma: Relationship between Hamming distance and Hamming weight
d(C1, C2) = w(C1 - C2) = w(C2 - C1)   (modulo-2 subtraction, i.e. XOR)
Proof:
Let the codewords be A = {0010, 1110}
w(0010) = w(C1) = 1
w(1110) = w(C2) = 3
w(C2 - C1) = w(1100) = 2
Hamming distance:
0 0 1 0
1 1 1 0   → the words differ in 2 places
d(C1, C2) = 2
Hence proved: d(C1, C2) = w(C1 - C2)
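These two definitions are easy to check in code; a minimal sketch (not from the slides) using the same pair of codewords:

```python
# Minimal sketch: Hamming weight and Hamming distance for binary codewords given as strings.
def hamming_weight(codeword):
    return codeword.count("1")

def hamming_distance(c1, c2):
    return sum(b1 != b2 for b1, b2 in zip(c1, c2))

# d(C1, C2) equals the weight of the modulo-2 sum (XOR) of the two codewords:
c1, c2 = "0010", "1110"
xor = "".join("1" if a != b else "0" for a, b in zip(c1, c2))
print(hamming_weight(c1), hamming_weight(c2))          # 1 3
print(hamming_distance(c1, c2), hamming_weight(xor))   # 2 2
```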
Q1. Find the hamming weight of the given code word 11000110
A. 1
B. 2
C. 3
D. 4
Q3. Find the Hamming distance between the code words
'work' and 'walk',
'point' and 'paint',
'lane' and 'vale'
A. 3, 1, 2
B. 2, 1, 2
C. 1, 1, 1
D. 3, 2, 1
Quiz
Kraft Inequality
A necessary and sufficient condition for the existence of an instantaneous binary
code is
K = Σ 2^(-ni) ≤ 1
The Kraft inequality guarantees the existence of an instantaneously decodable code
with codeword lengths ni that satisfy the inequality.
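A small sketch (not from the slides) applying this test to the four codes of the next slide, with the lengths read off the listed codewords:

```python
# Minimal sketch of the Kraft inequality check K = sum(2**-ni) <= 1.
def kraft(lengths):
    return sum(2 ** -n for n in lengths)

codes = {"A": [2, 2, 2, 2], "B": [1, 2, 2, 3], "C": [1, 2, 3, 3], "D": [1, 3, 3, 3]}
for name, lengths in codes.items():
    K = kraft(lengths)
    print(name, K, "satisfied" if K <= 1 else "violated")
# A: 1.0, B: 1.125 (violated), C: 1.0, D: 0.875
```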
Q. Consider a DMS with symbols xi, i = 1, 2, 3, 4. The table below lists 4 possible
binary codes. Show that
a) All codes except code B satisfy the Kraft inequality
b) Codes A and D are uniquely decodable but codes B and C are not uniquely
decodable
Numerical Based on Kraft Inequality
Symbol xi Code A Code B Code C Code D
X1 00 0 0 0
X2 01 10 11 100
X3 10 11 100 110
X4 11 110 110 111
Code B: Inequality not satisfied, so it is not uniquely decodable.
Suppose the message 110 is received.
It decodes as either x4 (110) or x3 x1 (11, 0), so there is confusion while decoding.
Codes A and D are prefix codes; they are uniquely decodable.
Code C: Inequality satisfied but not uniquely decodable.
Suppose the message 0110110 is received.
It can be decoded as
x1 x2 x1 x2 x1 = (0 11 0 11 0)
x1 x4 x4 = (0 110 110)
x1 x4 x2 x1 = (0 110 11 0)
x1 x2 x1 x4 = (0 11 0 110)
Hamming Code
Aim: Error detection, error correction, and encoding and decoding
Main Points:
In this coding parity bits are used to detect and correct errors.
Parity bits are extra bits mixed with the message bits.
A Hamming code is used to detect and correct single-bit errors.
Parity bit positions are decided by 2^n, where n = 0, 1, 2, 3, ...
For the (7,4) Hamming code the parity bit positions are
2^0 = 1, 2^1 = 2, 2^2 = 4, ...
Parity bit values are decided as:
P1 → check 1 bit and skip 1 bit (positions 1, 3, 5, 7, 9, ...)
P2 → check 2 bits and skip 2 bits (positions 2, 3, 6, 7, ...)
P3 (at position 4) → check 4 bits and skip 4 bits (positions 4, 5, 6, 7) (12, 13, 14, 15) (20, 21, 22, 23)
Position:  7  6  5  4  3  2  1
Bit:       D7 D6 D5 P4 D3 P2 P1
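A minimal (7,4) encoder sketch built from the rules above (not from the slides; even parity is assumed, and the way the four message bits of 1001 are placed onto D3, D5, D6, D7 is an assumed convention):

```python
# Minimal sketch of (7,4) Hamming encoding with even parity:
# P1 covers positions 1,3,5,7; P2 covers 2,3,6,7; P4 covers 4,5,6,7.
def hamming74_encode(d3, d5, d6, d7):
    p1 = d3 ^ d5 ^ d7
    p2 = d3 ^ d6 ^ d7
    p4 = d5 ^ d6 ^ d7
    # codeword listed from position 7 down to position 1: D7 D6 D5 P4 D3 P2 P1
    return [d7, d6, d5, p4, d3, p2, p1]

# One (assumed) way to place the message 1001: D3=1, D5=0, D6=0, D7=1
print(hamming74_encode(1, 0, 0, 1))   # [1, 0, 0, 1, 1, 0, 0]
```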
Q. Let the transmitted message be 1001. Using the Hamming code, find out:
A. Encode the message and transmit it
B. Introduce an error in the 6th bit position and find the error
C. If the received codeword has an error in the 6th bit position, correct the error
Numerical
Hamming codes are used for the purpose of error detection and correction.
Hamming codes are used for channel encoding and decoding.
Hamming code are known as linear-error correcting codes.
Hamming code encodes four bits of data into seven bits by adding three
parity bits.
Hamming codes are a set of error-correction codes that can be used to detect
and correct single-bit errors that occur when a bit stream is sent through a
channel.
Quick Revision
Q1. Hamming code is capable of
1. Only detects single bit error
2. Only corrects single bit error
3. Detects and corrects single bit error
4. None of the above
Q2. The position of a parity bit is decided by
a. 2^(2n)
b. 2^(n-1)
c. 2^n
d. None of the above
Quiz
Q3. How to decide the parity P1 in (7,4) Hamming code
a. Check 1-bit and skip 1-bit
b. Check 2-bit and skip 2-bit
c. Check 3-bit and skip 3-bit
d. None
Q4. How to decide the parity P2 in (7,4) Hamming code
a. Check 1-bit and skip 1-bit
b. Check 2-bit and skip 2-bit
c. Check 3-bit and skip 3-bit
d. None
Q5. How to decide the parity P3 in (7,4) Hamming code
a. Check 1-bit and skip 1-bit
b. Check 2-bit and skip 2-bit
c. Check 4-bit and skip 4-bit
d. None
Q6. At the receiver, the parity bits in the received message are
A. Redundant bits, as they do contain useful information
B. Redundant bits, as they do not contain any information
C. Non-redundant bits, as they do not contain any information
D. None of these
Q7. Find the hamming weight of the given code word 11000110
A.1
B. 2
C.3
D.4
Important Terminology in Hamming Codes
For an (n, k) Hamming code, where n is the total length of the codeword:
Parity bits: q = n - k
Number of message bits: k = n - q = 2^q - q - 1
Block length: n = 2^q - 1
Rate: R = k/n = (n - q)/n = 1 - q/n = 1 - q/(2^q - 1)
Numerical related to Hamming Code
Q. A 7-bit Hamming code is received as 1011011. Assume even
parity and state whether the received code is correct or wrong; if
wrong, locate the bit in error.
Error Detection and Correction Capabilities of Hamming Codes
Error detection capability follows from
Dmin ≥ s + 1
where s represents the number of detectable errors.
For example, if Dmin = 3 then 3 ≥ s + 1, so s ≤ 2: the code can detect up to 2 errors.
Error correction capability follows from
Dmin ≥ 2t + 1
where t represents the number of correctable errors.
For example, if Dmin = 3 then 3 ≥ 2t + 1, so t ≤ 1: the code can correct up to 1 error.
Lemma: Prove that Dmin ≥ 3 if a Hamming code detects double errors and corrects
single errors.
Proof: For detecting double (2) errors: Dmin ≥ s + 1
Dmin ≥ 2 + 1
Dmin ≥ 3
For correcting up to one (1) error: Dmin ≥ 2t + 1
Dmin ≥ 2(1) + 1
Dmin ≥ 3
Q1. What is the correct expression for error detection in Hamming code
a. Dmin ≥ 2s+1
b. Dmin ≥ s-1
c. Dmin ≥ s+1
d. None
Q2. What is the correct expression for error correction in Hamming code
a. Dmin ≥ 2t+1
b. Dmin ≥ t-1
c. Dmin ≥ t+1
d. None
Quiz
Q3. What is the correct expression for Rate (R) in Hamming Code
a. k/2n
b. k/3n
c. k/n
d. None
Q4. What is the correct expression for parity bit (q) in Hamming code
a. q= n + k
b. q= n – 2k
c. q= n – k
d. None
Block diagram of Communication System
Parity Check Matrix (H)
Aim
The parity check matrix (H) is used at the receiver side for channel decoding.
The parity check matrix (H) is used for detection and correction of errors.
In (n, k) Hamming codes, the parity check matrix [H] is defined as:
H = [P^T : I]
where P represents the parity matrix
and I represents the identity matrix
Generator Matrix (G)
Aim
The generator matrix generates the parity bits and mixes them with the message.
In (n, k) Hamming codes, the generator matrix [G] is defined as:
G = [I : P]
where P represents the parity matrix
and I represents the identity matrix
Numerical related to Hamming Code
Q. A generator matrix of size 3x6 = k x n is given.
Find out all possible code vectors.
Hint
1. Use the formula C = D·G, where D is the data word and G is the generator matrix
2. G is the generator matrix, G = [Ik : P] of size k x n
3. The number of data word combinations is 2^k
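A short sketch of C = D·G over GF(2) (not from the slides; the (6,3) generator matrix below is an illustrative example, not the one on the slide):

```python
# Minimal sketch: all 2^k codewords of a systematic linear block code via
# modulo-2 matrix multiplication.
import numpy as np
from itertools import product

P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(3, dtype=int), P])          # G = [I_k : P]

for d in product([0, 1], repeat=3):               # all 2^k = 8 data words
    c = (np.array(d) @ G) % 2                     # C = D.G over GF(2)
    print(d, c)
```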
Numerical Problem
Q1. The main purpose of the generator matrix is
a. Error correction
b. Error detection
c. To generate the parity bits and include them with the message bits
Q2. Parity check matrix is used for
a. Detection and correction of errors
b. Used at receiver side
c. Both a and b
d. None
Q3. Which expression is correct for the block length?
a. Block length n = 2^q + 1
b. Block length n = 2^q - 1
c. Block length n = 2^q x 1
d. None
Quiz
Presented By:
Dr. Neeraj Kumar Misra
Associate Professor
Department of SENSE,
VIT-AP University
Google Scholar: https://scholar.google.co.in/citations?user=_V5Af5kAAAAJ&hl=en
Research Gate profile: https://www.researchgate.net/profile/Neeraj_Kumar_Misra
ORCID iD: https://orcid.org/0000-0002-7907-0276
Information Theory and Coding
06/10/2022
Numerical
Q. For a (5, 2) linear block code the generator matrix is in the form [Ik : P], where the P matrix is
P = (2 x 3 parity matrix given on the slide)
Find out
(a) Generator matrix
(b) Parity check matrix
(c) All possible code vectors
(d) The minimum Hamming distance
Hint
1. Use the formula G = [Ik : P]
2. Use the parity check matrix formula H = [P^T : I(n-k)]
3. Use the formula C = D·G, where D is the data word and G is the generator matrix
4. For the total number of data words use the formula 2^k
5. Count the number of 1s in each codeword, then find the minimum weight
Numerical
Q. For a (6, 3) code the generator matrix G is
G = (3 x 6 generator matrix given on the slide)
Find out
(a) All corresponding code vectors
(b) The minimum Hamming distance
(c) Verify that this code is a single-error-correcting code
(d) The parity check matrix
(e) Determine the transmitted codeword if the received codeword is 100011
Hint
1. Use the formula G = [Ik : P]
2. Use the formula C = D·G, where D is the data word and G is the generator matrix
3. Count the number of 1s in each codeword, then find the minimum distance dmin
4. For error correction Dmin ≥ 2t + 1
5. Use the parity check matrix formula H = [P^T : I]
6. Find the syndrome S = r · [H^T]
Q1. Which expression is correct for the syndrome?
a. S = r / [H^T]
b. S = r + [H^T]
c. S = r · [H^T]
d. None
Q2. Which expression is correct for error correction
a. Dmin > t+1
b. Dmin > t-1
c. Dmin > 2t+1
d. None
Quiz
Q3. Which one of the following sets of gates is best suited for parity checking and parity
generation?
a. AND, OR, NOT gates
b. EX-NOR or EX-OR gates
c. NAND gates
d. NOR gates
Q4. The coding efficiency is given by
a. 1-Redundancy
b. 1 + Redundancy
c. 1/Redundancy
d. none
Q. If the Hamming distance of a code is 5, then the maximum number of errors that can be corrected is
a. 4
b. 3
c. 2
d. 1
Q. To guarantee the detection of up to 5 errors in all cases, the minimum Hamming
distance in a block code must be _______
a. 5
b. 6
c. 11
d. none of the above
Q. To guarantee correction of up to 5 errors in all cases, the minimum
Hamming distance in a block code must be ________
A. 5
B. 6
C. 11
D. noneoftheabove
Q. The _____ of errors is more difficult than the ______
A. correction; detection
B. detection; correction
C. creation; correction
D. creation; detection
Numerical
Q. A parity check matrix is given as
H = (matrix given on the slide)
Suppose that the received codeword is 110110; what will be the decoded
received word?
a. 110010
b. 100110
c. 010110
d. 110111
Numerical
Q. For a linear (6, 3) code whose parity matrix P^T is given,
find out how many errors can be detected and corrected.
Cyclic Codes
Cyclic codes satisfy two basic properties:
a. Linearity
b. Cyclic property
Q. Check the given codeword is cyclic or not
C={0000, 0101, 1010, 1111}
Q. Check the given codeword is cyclic or not
C={0000, 0110, 1001, 1111}
Sol. The given code is not cyclic
Q. The codeword is given as C = 10111; find the code polynomial.
Cyclic Codes for Non-Systematic Codewords
C(x) = m(x) · g(x) .................. (1)
where C(x) is the codeword polynomial
m(x) is the message polynomial
g(x) is the generator polynomial
Format of a systematic codeword: c = [message, parity]
If the message and parity bits are not in this order, the codeword is non-systematic.
Numerical Question based on Cyclic Codes
Q. The codeword is C = 10111; find the code polynomial.
Q. Construct the non-systematic cyclic code (7,4) using the generator
polynomial g(x) = x^3 + x^2 + 1 with message (1010).
Q. If the generator polynomial is given as g(x) = x^3 + x^2 + 1,
find out
(a) the cyclic encoder
(b) the codeword if the message is (1110)
Q. Construct the non-systematic cyclic code (7,4) using the generator
polynomial g(x) = x^3 + x^2 + 1 with message (1010).
Quiz
Q1. Check whether the given code is cyclic or not:
c = {0000, 1100, 1001, 1111}
a. Yes
b. No
c. Cannot determine
d. None
Q2. Convert the given message bits m = (0110) into polynomial form
a. x^3 + x
b. x^3 + x + 1
c. x^2 + x
d. None
Numerical Question based on Cyclic Codes
Q. If the generator polynomial is given as g(x) = x^3 + x^2 + 1,
find out
(a) the cyclic encoder
(b) the codeword if the message is (1110)
Sol. The cyclic encoder is designed with flip-flops and EX-OR gates.
Cyclic Code for Systematic Codewords
In a systematic cyclic code the codeword is
C = [message, parity]
C(x) = x^(n-k) m(x) + p(x)
where p(x) = Rem[ x^(n-k) m(x) / g(x) ]
m(x) is the message polynomial
g(x) is the generator polynomial
C(x) is the codeword polynomial
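A minimal sketch of this construction (not from the slides), with bits listed highest power first and using g(x) = x^3 + x^2 + 1 and the message 1010 from the example that follows:

```python
# Minimal sketch: parity bits of a systematic cyclic code are the remainder of
# x^(n-k) * m(x) divided by g(x), computed here by modulo-2 long division on bit lists.
def mod2_remainder(dividend_bits, divisor_bits):
    bits = list(dividend_bits)
    for i in range(len(bits) - len(divisor_bits) + 1):
        if bits[i] == 1:
            for j, d in enumerate(divisor_bits):
                bits[i + j] ^= d
    return bits[-(len(divisor_bits) - 1):]           # n-k parity bits

m = [1, 0, 1, 0]                                      # message, highest power first
g = [1, 1, 0, 1]                                      # g(x) = x^3 + x^2 + 1
parity = mod2_remainder(m + [0, 0, 0], g)             # append n-k = 3 zeros
print(m + parity)                                     # systematic codeword [message, parity] -> 1010 001
```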
Q. Construct the systematic cyclic code (7,4) using the generator
polynomial g(x) = x^3 + x^2 + 1 with message (1010).
Cyclic Code: Generator Matrix
For a (7,4) cyclic code with G = [Ik : P], the rows of the parity matrix P are obtained from the remainders
1st row of the parity matrix: Rem[ x^6 / g(x) ]
2nd row of the parity matrix: Rem[ x^5 / g(x) ]
3rd row of the parity matrix: Rem[ x^4 / g(x) ]
4th row of the parity matrix: Rem[ x^3 / g(x) ]
Q. If the generator polynomial of the cyclic code (7,4) is given by g(x) = x^3 + x + 1,
construct the generator matrix.
Hint
Use the generator matrix format G = [Ik : P],
then find the rows of the parity matrix P from the remainders listed above.
Quiz
Q1. The parity check matrix (H) is represented in the form
1. H = [I : P^T]
2. H = [P^T : I]
3. H = [-P^T : I]
4. None
Q2. Code vector is defined in this form
A. C=D.G
B. C=D/G
C. C=D+G
D. C=D-G
Q3. Find the minimum Hamming distance for the following codewords
11001100
01010101
10101000
A. 4
B. 3
C. 2
D. 1
Q. In a (7,4) cyclic code, g(x) = x^3 + x + 1. Find the codeword if m = 1010
Types of Codes
Block codes: n = number of bits in a codeword, k = number of message bits,
n - k = redundant (parity) bits.
Convolution codes: the coding operation is a discrete-time convolution of the input
sequence; the encoder accepts the message bits continuously and generates the
encoded sequence continuously.
Linear codes: if two codewords of a linear code are added by modulo-2 arithmetic,
the result is a third codeword of the code.
Nonlinear codes: the modulo-2 sum of two codewords does not necessarily produce a
third codeword.
Error Correction Capabilities
Errors detected/corrected                         Distance requirement
Detect up to s errors per word                    Dmin ≥ s + 1
Correct up to t errors per word                   Dmin ≥ 2t + 1
Correct up to t errors and detect s > t
errors per word                                   Dmin ≥ t + s + 1
Useful formulas
Q. An error control code has the following parity check matrix
H = (matrix given on the slide)
a) Determine the generator matrix G
b) Find the codeword that begins with 101
c) Decode the received codeword 110110. Comment on the error
detection and correction capability of this code.
d) What is the relationship between G and H? Verify the same.
Numerical
In a convolution encoder:
n = number of encoded output bits
k = number of message bits
K = constraint length
For the convolution block shown:
k = 1 (only one message bit m)
n = 2 (encoder outputs x1 and x2)
K = 3 (three shift registers)
Basic Terminology in Convolution Codes
1. Code rate r = k/n = number of message bits / number of encoded output bits
2. Code dimension (n, k) = (2, 1)
Viterbi Algorithm (Maximum-Likelihood Decoding)
Rules for Viterbi decoding:
m0, m1 represent the previous shift register contents
m2 represents the current shift register content
In the tree/trellis diagram construction:
0 means go to the lower branch (to the current state)
1 means go to the upper branch (to the equivalent state)
Path selection in Viterbi decoding:
Higher Hamming weights (path metrics) are neglected; only the lowest Hamming weight is kept.
Select the path whose Hamming weight is smaller and terminate the path
whose Hamming weight is larger.
Q. Using the trellis/tree diagram of the 3-bit shift register, if the received
codeword is 01 10 00 00 00, find the error in the received codeword
using the Viterbi algorithm.