CONTENTS
What Is Noise And Noise Cancellation?
Adaptive Filters
Basic Adaptive Filter
Applications Of Adaptive Filters
Problem Statement
Various Adaptive Algorithms For Noise Cancellation
LMS Algorithm
NLMS Algorithm
RLS Algorithm
Affine Projection Algorithm
SNRI Table
Outputs Comparison
Conclusion
References
WHAT IS NOISE AND NOISE CANCELLATION?
Noise consists of unwanted waveforms that can interfere with communication. Noise can be internal or external to the system.
Sound noise: interferes with normal hearing
Colored noise
Impulsive noise
White noise (AWGN, additive white Gaussian noise)
NOISE CANCELLATION: a method to reduce or cancel out the undesirable components of a signal.
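As a rough illustration of the noise types just listed, here is a minimal NumPy sketch (the moving-average length and spike probability are arbitrary illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

white = rng.normal(0.0, 1.0, n)                       # AWGN: flat spectrum
colored = np.convolve(white, np.ones(8) / 8, "same")  # low-pass filtered white noise
impulsive = np.zeros(n)
spikes = rng.random(n) < 0.01                         # rare, large-amplitude spikes
impulsive[spikes] = rng.normal(0.0, 10.0, spikes.sum())
```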
ADAPTIVE FILTERS
An adaptive filter is a digital filter that self-adjusts its tap weights according to an optimization algorithm driven by an error signal.
Basic Adaptive Filter
It contains four signals:
Reference signal x(n)
Primary input (desired) signal d(n)
Filter output signal y(n)
Error signal e(n)
A minimal loop tying these four signals together is sketched below.
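A sketch of how these four signals interact, assuming NumPy and a pluggable weight-update rule (adaptive_filter and its parameter names are illustrative, not from the slides):

```python
import numpy as np

def adaptive_filter(x, d, taps, update):
    """x: reference input x(n); d: primary input d(n).
    Returns the filter output y(n) and the error e(n)."""
    w = np.zeros(taps)                 # filter tap weights
    y = np.zeros_like(d)
    e = np.zeros_like(d)
    for n in range(taps, len(x)):
        xn = x[n - taps:n][::-1]       # most recent samples, newest first
        y[n] = w @ xn                  # filter output signal y(n)
        e[n] = d[n] - y[n]             # error signal e(n)
        w = update(w, xn, e[n])        # algorithm-specific weight update
    return y, e
```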
VARIOUS ADAPTIVE ALGORITHMS FOR NOISE CANCELLATION
Properties of an ideal algorithm:
Practical to implement
Adapts the coefficients quickly to minimize the error
Provides the desired performance
The algorithms considered here are:
Least Mean Squares (LMS) algorithm
Normalized Least Mean Squares (NLMS) algorithm
Recursive Least Squares (RLS) algorithm
Affine Projection Algorithm (APA)
PROBLEM STATEMENT
We take a random input signal of N samples as the reference signal and add random (additive white Gaussian) noise to it. The problem is to extract the input signal from the noisy output signal by eliminating the noise.
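A sketch of this setup under the stated assumptions (a sinusoid stands in for the input signal; in a real canceller the reference x(n) would be a correlated measurement of the noise rather than the noise itself):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
t = np.arange(N)

clean = np.sin(2 * np.pi * t / 50)   # input signal to be recovered
noise = rng.normal(0.0, 0.5, N)      # additive white Gaussian noise
d = clean + noise                    # primary input d(n): signal + noise
x = noise                            # reference input x(n)
```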
LMS ALGORITHM:
Adaptively adjusts the filter tap weights w(n) according to:
w(n+1) = w(n) + µ e(n) x(n)
Acts as negative feedback to minimize the error signal.
Robust in nature, but slow to converge and sensitive to the choice of the step-size parameter µ.
Requires a number of iterations equal to the dimensionality of the input.
LMS ADAPTIVE FILTER BLOCK DIAGRAM
LMS ALGORITHM STEPS: Each iteration of LMS involves three steps:
Filter output: y(n) = w^T(n) x(n)
Estimation error: e(n) = d(n) - y(n)
Tap-weight adaptation: w(n+1) = w(n) + µ e(n) x(n)
STABILITY: The condition for stability is 0 < µ < 2/λ_max, where λ_max is the largest eigenvalue of the input autocorrelation matrix.
Larger values of the step size:
Increase the adaptation rate (faster adaptation)
Increase the residual mean-squared error
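Plugged into the adaptive_filter loop sketched earlier, the three steps above reduce to a one-line weight update (mu = 0.01 is an arbitrary choice assumed to lie inside the stability bound):

```python
# LMS tap-weight update: w(n+1) = w(n) + mu * e(n) * x(n)
def lms_update(w, xn, en, mu=0.01):
    return w + mu * en * xn

# Usage with the earlier sketches:
# y, e = adaptive_filter(x, d, taps=16, update=lms_update)
```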
ADVANTAGES AND DISADVANTAGES:
Advantages: simple to implement, low computational cost (O(N) per sample), robust.
Disadvantages: slow convergence; performance is sensitive to the step size and to the eigenvalue spread of the input.
OUTPUT: Mean Square Error For LMS (figure)
VARIATION OF MSE WITH RESPECT TO µ:
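A sketch of how such a curve could be produced, reusing adaptive_filter and the signals x, d, clean from the earlier sketches (the step-size values are illustrative):

```python
import numpy as np

for mu in (0.001, 0.01, 0.05, 0.1):
    _, e = adaptive_filter(x, d, taps=16,
                           update=lambda w, xn, en, mu=mu: w + mu * en * xn)
    mse = np.mean((e[-500:] - clean[-500:]) ** 2)   # steady-state MSE vs clean signal
    print(f"mu={mu}: MSE={mse:.4f}")
```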
NLMS Algorithm:
In structural terms, the NLMS filter is exactly the same as the standard LMS filter. From one iteration to the next, the weights of the adaptive filter should change in a minimal manner.
NLMS Continued…
One drawback of LMS is the selection of the step-size parameter µ. To overcome this difficulty, we can use the NLMS (Normalized Least Mean Square) algorithm, in which the step-size parameter is normalized. The NLMS algorithm is therefore a time-varying step-size algorithm, calculating the convergence factor as:
µ(n) = α / (c + ||x(n)||²)
NLMS PARAMETERS:
||x(n)||² is the squared Euclidean norm of the input vector x(n).
α is the adaptation constant, which optimizes the convergence rate of the algorithm; its range is 0 < α < 2.
c is a small constant for normalization (it prevents division by zero when the input power is small), with c < 1.
The updated filter weight is:
w(n+1) = w(n) + µ(n) e(n) x(n)
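The earlier adaptive_filter loop accepts NLMS by swapping in the normalized update (the alpha and c defaults are illustrative):

```python
# NLMS update with time-varying step size mu(n) = alpha / (c + ||x(n)||^2)
def nlms_update(w, xn, en, alpha=0.5, c=1e-6):
    mu_n = alpha / (c + xn @ xn)   # normalize by instantaneous input power
    return w + mu_n * en * xn
```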
ADVANTAGES AND DISADVANTAGES:
Advantages: the normalized step size makes convergence faster and less sensitive to the input signal power than LMS.
Disadvantages: slightly higher computational cost per sample (the input norm must be computed); convergence is still slower than RLS.
OUTPUT: Mean Square Error For NLMS (figure)
RLS ALGORITHM
Recursively finds the filter coefficients that minimize a weighted linear least-squares cost function relating to the input signals. The filter tap-weight vector is updated using:
ŵ(n) = ŵ(n-1) + k(n) ξ*(n)
where k(n) is the gain vector and ξ(n) is the a priori estimation error, both defined below.
Continued…
Whitens the input data by using the inverse correlation matrix of the data. The cost function C(n) to be minimized is:
C(n) = Σ (i=1 to n) β(n,i) |e(i)|², where e(i) = d(i) - w^H(n) u(i)
β(n,i) is the weighting factor, 0 < β(n,i) <= 1, i = 1, 2, …, n
β(n,i) = λ^(n-i), where λ is the forgetting factor
Continued…
REGULARIZATION: With regularization, the cost function becomes
C(n) = Σ (i=1 to n) λ^(n-i) |e(i)|² + δ λ^n ||w(n)||²
i.e. the sum of weighted error squares plus a regularizing term, where δ is the regularization parameter.
Continued…
Let Φ(n) be the correlation matrix of the input u(i):
Φ(n) = Σ (i=1 to n) λ^(n-i) u(i) u^H(i) + δ λ^n I
The cross-correlation vector z(n) satisfies the normal equation
Φ(n) ŵ(n) = z(n), n = 1, 2, …
Using the matrix inversion lemma, we can find the inverse of the correlation matrix; let P(n) = Φ^(-1)(n).
The weight update is expressed in terms of the gain vector k(n):
k(n) = P(n) u(n) = Φ^(-1)(n) u(n)
Continued…
The tap-weight vector is ŵ(n) = Φ^(-1)(n) z(n).
From the above equations, we summarize the RLS algorithm as:
π(n) = P(n-1) u(n)
k(n) = π(n) / (λ + u^H(n) π(n))
ξ(n) = d(n) - ŵ^H(n-1) u(n)
ŵ(n) = ŵ(n-1) + k(n) ξ*(n)
P(n) = λ^(-1) P(n-1) - λ^(-1) k(n) u^H(n) P(n-1)
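A self-contained sketch of this recursion for real-valued signals (so the Hermitian transpose becomes an ordinary transpose); the lam and delta defaults are illustrative:

```python
import numpy as np

def rls(x, d, taps, lam=0.99, delta=0.01):
    w = np.zeros(taps)
    P = np.eye(taps) / delta                # P(0) = (1/delta) * I
    e = np.zeros_like(d)
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]
        pi = P @ u                          # pi(n) = P(n-1) u(n)
        k = pi / (lam + u @ pi)             # gain vector k(n)
        e[n] = d[n] - w @ u                 # a priori error xi(n)
        w = w + k * e[n]                    # tap-weight update
        P = (P - np.outer(k, u @ P)) / lam  # update of P(n) = inv(Phi(n))
    return w, e
```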
ADVANTAGES AND DISADVANTAGES OF RLS
Advantages: very fast convergence; insensitive to the eigenvalue spread of the input correlation matrix.
Disadvantages: high computational complexity (O(N²) per sample); can suffer from numerical instability; the forgetting factor λ must be chosen carefully.
MEAN SQUARE ERROR FOR RLS (figure)
AFFINE PROJECTION ALGORITHM
A generalization of the well-known normalized least mean square (NLMS) adaptive filtering algorithm.
Faster convergence than NLMS, at the cost of increased computational complexity.
Convergence improves as the projection order N increases.
Faster tracking capability than NLMS.
Better steady-state mean square error (MSE) performance than LMS and NLMS.
APA MATHEMATICAL IMPLEMENTATION…
A(n) = input data matrix [N×N]
A^H(n) = Hermitian transpose of the input data matrix [N×N]
d(n) = desired response [N×1]
The error is computed as:
e(n) = d(n) - A(n) ŵ(n)
The updated tap-weight vector is:
ŵ(n+1) = ŵ(n) + µ A^H(n) (A(n) A^H(n))^(-1) e(n)
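A sketch of this update for real-valued signals, with the projection order written as K to avoid clashing with the filter length (the mu, K, and eps defaults are illustrative; eps regularizes the matrix inverse):

```python
import numpy as np

def apa(x, d, taps, K=4, mu=0.5, eps=1e-4):
    w = np.zeros(taps)
    e_hist = np.zeros_like(d)
    for n in range(taps + K, len(x)):
        # A(n): K stacked regressors u(n), u(n-1), ..., u(n-K+1)
        A = np.array([x[n - k - taps:n - k][::-1] for k in range(K)])
        dv = np.array([d[n - k] for k in range(K)])   # matching desired responses
        err = dv - A @ w                              # e(n) = d(n) - A(n) w(n)
        w = w + mu * A.T @ np.linalg.solve(A @ A.T + eps * np.eye(K), err)
        e_hist[n] = err[0]
    return w, e_hist
```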
CONVERGENCE & STABILITY OF APA
The learning curve of an APA consists of a sum of exponential terms.
It converges at a rate faster than that of an NLMS filter.
As more delayed versions of the tap-input vector are used (higher projection order), the rate of convergence improves, but so does the computational complexity.
APA is less stable than the LMS and NLMS algorithms, but more stable than the RLS algorithm.
OUTPUT: MEAN SQUARE ERROR FOR APA (figure)
SNRI TABLE:
Signal-to-noise ratio improvement (SNRI) = Final SNR - Original SNR

ALGORITHM   SNRI (dB)
LMS         13.69
NLMS        18.009
APA         20.39
RLS         29.09
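A sketch of how such SNRI figures could be computed, reusing clean, d, and a filter's error output e from the earlier sketches (after convergence, the error signal e(n) approximates the clean signal):

```python
import numpy as np

def snr_db(signal, residual):
    return 10 * np.log10(np.sum(signal ** 2) / np.sum(residual ** 2))

original_snr = snr_db(clean, d - clean)   # SNR of the noisy input
final_snr = snr_db(clean, e - clean)      # SNR after adaptive filtering
print(f"SNRI = {final_snr - original_snr:.2f} dB")
```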
COMPARISON OF CONVERGENCE FOR DIFFERENT ALGORITHMS:
COMPARISON OF MSE FOR DIFFERENT ALGORITHMS:
COMPARISON OF LMS, NLMS, APA AND RLS (figure)
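A sketch reproducing this comparison with the functions defined in the earlier sketches, measuring steady-state MSE against the clean signal:

```python
import numpy as np

taps = 16
_, e_lms = adaptive_filter(x, d, taps, lms_update)
_, e_nlms = adaptive_filter(x, d, taps, nlms_update)
_, e_apa = apa(x, d, taps)
_, e_rls = rls(x, d, taps)

for name, err in [("LMS", e_lms), ("NLMS", e_nlms), ("APA", e_apa), ("RLS", e_rls)]:
    mse = np.mean((err[-500:] - clean[-500:]) ** 2)
    print(f"{name}: steady-state MSE = {mse:.4f}")
```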
CONCLUSION
We studied the behavior of the LMS, NLMS, APA and RLS algorithms by implementing them in an adaptive filter for noise cancellation.
LMS is the simplest and easiest to implement, but it converges at the slowest rate.
NLMS has a normalized step size, making it converge faster than LMS, though its complexity increases along with the convergence rate.
APA is an improved version of NLMS with a still higher convergence rate.
Continued…
RLS is the fastest-converging algorithm and has the maximum computational complexity, but it cancels the most noise by minimizing the error at the fastest rate.
We therefore trade computational complexity against convergence rate to obtain the most noise-free signal.
On this basis, RLS is the best algorithm, as it converges faster than the other three.
REFERENCES
Simon Haykin, Adaptive Filter Theory, 3rd edition, Pearson Education Asia, LPE.
John G. Proakis, Adaptive Signal Processing, 3rd edition, Prentice Hall of India.
B. Widrow et al., "Adaptive noise cancelling: principles and applications", Proceedings of the IEEE, vol. 63, pp. 1692-1716, 1975.
S. A. Hadei and M. Lotfizad, "A Family of Adaptive Filter Algorithms in Noise Cancellation for Speech Enhancement".
Steven L. Gay and Sanjeev Tavathia, "The Fast Affine Projection Algorithm", Acoustics Research Department, AT&T Bell Laboratories.
Sundar G. Sankaran and A. A. (Louis) Beex, "Convergence Behavior of Affine Projection Algorithms".