NOISE CANCELLATION USING LMS ALGORITHM

PONDICHERRY UNIVERSITY
DEPARTMENT OF ELECTRONICS ENGINEERING

SUBMITTED TO: PROF. DR. K. ANUSUDHA, DEPT. OF ELECTRONICS ENGINEERING
SUBMITTED BY: AWANISH KUMAR, M.TECH (ECE) - 1st Year, 21304006

CONTENTS
• OBJECTIVE
• INTRODUCTION
• ADAPTIVE FILTER
• BLOCK DIAGRAM
• LEAST MEAN SQUARE - LMS
• ADVANTAGES AND DISADVANTAGES
• MATLAB CODE
• CONCLUSION
ADAPTIVE NOISE CANCELLATION
➢ Adaptive noise cancellation is the approach used for estimating a desired
signal d(n) from a noise-corrupted observation.
x(n) = d(n) + v1(n)
➢ Usually the method uses a primary input containing the corrupted signal
and a reference input containing noise correlated in some unknown way
with the primary noise.
➢ The reference input can be filtered to form an estimate of the noise v1(n), which is then subtracted from the primary input to obtain the signal estimate d̂(n).
➢ As the measurement system is a black box, no reference signal that is
correlated with the noise is available.
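As an illustration, this signal model can be generated in MATLAB. The specific choices below (a 50 Hz sinusoid for d(n), filtered white noise for v1(n), and the raw noise g as the reference) are assumptions for demonstration only, not part of the original slides:

fs = 1000;                          % sampling frequency in Hz (assumed)
t  = (0:fs-1)/fs;                   % one second of samples
d  = 2*sin(2*pi*50*t);              % desired signal d(n): a 50 Hz sinusoid
g  = randn(1, numel(t));            % white noise driving both noise channels
v1 = filter([1 0.5], 1, g);         % primary noise v1(n): unknown filtering of g
v2 = g;                             % reference input, correlated with v1(n)
x  = d + v1;                        % primary input x(n) = d(n) + v1(n)
% An adaptive filter shapes v2 into an estimate of v1(n); subtracting that
% estimate from x(n) yields the signal estimate d_hat(n).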
ADAPTIVE FILTER
An adaptive filter is composed of two parts: the digital filter and the adaptive algorithm.
• A digital filter with adjustable coefficients wn(z), and an adaptive algorithm which is used to adjust or modify the coefficients of the filter.
• The adaptive filter can be a finite impulse response (FIR) filter or an infinite impulse response (IIR) filter.
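The FIR/IIR distinction can be illustrated with MATLAB's filter function; the coefficient values here are arbitrary illustrative choices:

b = [0.2 0.3 0.2];             % feedforward (numerator) coefficients
a = [1 -0.5];                  % feedback (denominator) coefficients, IIR only
u = randn(1, 100);             % example input
y_fir = filter(b, 1, u);       % FIR: output depends on a finite window of past inputs
y_iir = filter(b, a, u);       % IIR: output also depends on past outputs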
ALGORITHMS FOR ADAPTIVE EQUALIZATION
• There are three different types of adaptive filtering algorithms:
➢ Zero forcing (ZF)
➢ Least mean square (LMS)
➢ Recursive least squares (RLS)
• Recursive least squares is an adaptive filter algorithm that recursively finds the coefficients that minimize a weighted linear least-squares cost function relating to the input signals.
• This approach differs from the least mean-square algorithm, which aims to reduce the mean-square error. A minimal sketch of the RLS update follows.
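This is a textbook-style sketch of the exponentially weighted RLS recursion; the filter length, forgetting factor, and example signals are illustrative assumptions, not values from the slides:

M      = 8;                    % filter length (assumed)
lambda = 0.99;                 % forgetting factor (assumed)
P      = 1e3*eye(M);           % inverse correlation matrix estimate, initialized large
w      = zeros(M, 1);          % coefficient vector
u      = randn(1, 500);        % example input signal
d      = filter(0.25*ones(1,4), 1, u);   % example desired response
for n = M:numel(u)
    un = u(n:-1:n-M+1)';               % current input vector
    k  = (P*un)/(lambda + un'*P*un);   % gain vector
    e  = d(n) - w'*un;                 % a priori estimation error
    w  = w + k*e;                      % update minimizing the weighted LS cost
    P  = (P - k*(un'*P))/lambda;       % recursive update of the inverse correlation matrix
end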
Least Mean Square - LMS
• The LMS algorithm, in general, consists of two basic processes:
1. Filtering process, which involves computing the output y(n) of a linear filter in response to the input signal and generating an estimation error by comparing this output with a desired response, as follows:
y(n) = w(n)ᵀ x(n)
e(n) = d(n) - y(n)
where y(n) is the filter output and d(n) is the desired response at time n.
2. Adaptive process, which involves the automatic adjustment of the filter parameters in accordance with the estimation error.
[Figure: structure of the adaptive filter with the LMS algorithm]
➢ LMS algorithm equation (weight update):
w(n+1) = w(n) + 2μ e(n) x(n)
➢ where w(n) is the estimate of the weight vector at time n and x(n) is the input signal vector.
➢ e(n) is the filter error and μ is the step size, which determines the filter's convergence rate and overall behavior.
➢ One of the difficulties in the design and implementation of the LMS adaptive filter is the selection of the step size μ. This parameter must lie in a specific range so that the LMS algorithm converges.
➢ The LMS algorithm aims to reduce the mean-square error.
RATE OF CONVERGENCE
• The convergence characteristics of the LMS adaptive algorithm depend on two factors: the step size μ and the eigenvalue spread of the autocorrelation matrix Rx.
• The step size μ must lie in a specific range,
0 < μ < 1/λmax
where λmax is the largest eigenvalue of the autocorrelation matrix Rx.
• A large value of the step size μ will lead to faster convergence but may be less stable around the minimum value. The convergence of the algorithm is inversely proportional to the eigenvalue spread of the correlation matrix.
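As a sketch of how this bound might be checked for a given input (assuming MATLAB's Signal Processing Toolbox for xcorr, and a white-noise example input), one can estimate Rx and its largest eigenvalue:

M  = 25;                        % filter length (assumed)
u  = randn(1, 1000);            % example input signal (assumed white noise)
r  = xcorr(u, M-1, 'biased');   % autocorrelation estimates for lags -(M-1)..(M-1)
Rx = toeplitz(r(M:end));        % M-by-M autocorrelation matrix Rx from lags 0..M-1
lambda_max = max(eig(Rx));      % largest eigenvalue of Rx
mu_max = 1/lambda_max;          % upper bound on mu for the update w(n+1) = w(n) + 2*mu*e(n)*x(n)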
ADVANTAGES AND DISADVANTAGES
➢ LMS is simple and can be easily applied, whereas RLS has increased complexity and computational cost.
➢ LMS takes longer to converge, whereas RLS converges faster.
➢ LMS has a larger steady-state error with respect to the unknown system.
➢ The objective is to minimize the current mean-square error between the desired signal and the output.
MATLAB CODE

clc;
clear;
close all;

% Generate the desired signal
t = 0.001:0.001:1;
D = 2*sin(2*pi*50*t);

% Generate the signal corrupted with noise
n = numel(D);
A = D(1:n) + 0.9*randn(1,n);

M  = 25;            % adaptive filter length
Wi = zeros(1,M);    % filter weights
E  = [];            % error signal
mu = 0.002;         % step size

% LMS adaptation loop
for i = M:n
    E(i) = D(i) - Wi*A(i:-1:i-M+1)';      % estimation error
    Wi = Wi + 2*mu*E(i)*A(i:-1:i-M+1);    % weight update
end

% Estimation of the signal with the adapted weights
Est = zeros(n,1);
for i = M:n
    j = A(i:-1:i-M+1);
    Est(i) = Wi*j';
end
Err = Est' - D;     % computing the error signal

% Display of signals
figure(1)
subplot(4,1,1);
plot(D);
title('Desired signal');
subplot(4,1,2);
plot(A);
title('Signal corrupted with noise');
subplot(4,1,3);
plot(Est);
title('LMS - Estimated signal');
subplot(4,1,4);
plot(Err);
title('Error signal');
OUTPUT - LMS ESTIMATED SIGNAL
[Figure: four stacked plots - desired signal, signal corrupted with noise, LMS-estimated signal, and error signal]
CONCLUSION
The LMS algorithm is simple and has less computational complexity compared with the RLS algorithm, but the RLS algorithm has a faster convergence rate than the LMS algorithm, at the cost of higher computational complexity. The forgetting factor plays a role in the RLS algorithm similar to that of the step-size parameter in the LMS algorithm. The RLS algorithm performs better than the LMS algorithm at low signal-to-noise ratios.