Numerical Solution of Eigenvalue Problems Presented by: Nashat Al-Ghrairi Subject: Mathematics Supervisor: Assoc. Prof. Murat Altekin Date: 23 December 2015
Eigenvalue problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis. Eigenvalues are also important in analyzing numerical methods. The theory and algorithms apply to complex matrices as well as real matrices; with complex matrices, we use the conjugate transpose, A^H, instead of the usual transpose, A^T.
Formulation A matrix expands or shrinks any vector lying in the direction of an eigenvector by a scalar factor; the expansion or contraction factor is given by the corresponding eigenvalue. Eigenvalues and eigenvectors decompose the complicated behavior of a general linear transformation into simpler actions.
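The scaling action described above can be checked numerically. The following sketch (Python/NumPy, using a hypothetical diagonal matrix not taken from the slides) verifies that A maps each eigenvector to the corresponding eigenvalue times that eigenvector:

```python
import numpy as np

# Hypothetical 2x2 example (not from the slides): A stretches one
# eigendirection by 2 and shrinks the other by 0.5.
A = np.array([[2.0, 0.0],
              [0.0, 0.5]])

eigenvalues, eigenvectors = np.linalg.eig(A)
for lam, v in zip(eigenvalues, eigenvectors.T):
    # The defining relation: A v = lambda v for each eigenpair.
    assert np.allclose(A @ v, lam * v)
    print(lam, v)
```

Any vector along an eigenvector is simply rescaled by the eigenvalue, which is exactly the expansion/contraction behavior described above.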
Properties of eigenvalue problems The properties of an eigenvalue problem affect the choice of algorithm and software: Are all eigenvalues needed, or only a few? Are only eigenvalues needed, or are the corresponding eigenvectors also needed? Is the matrix real or complex? Is the matrix relatively small and dense, or large and sparse? Does the matrix have any special properties, such as symmetry, or is it a general matrix? The condition of an eigenvalue problem is the sensitivity of its eigenvalues and eigenvectors to changes in the matrix. The conditioning of an eigenvalue problem is not the same as the conditioning of the solution of a linear system with the same matrix. Different eigenvalues and eigenvectors are not necessarily equally sensitive to perturbations of the matrix.
Methods for eigenvalue problems Linear Algebra and Eigenvalues Orthogonal Matrices and Similarity Transformations The Power Method Householder’s Method The QR Algorithm Singular Value Decomposition Survey of Methods and Software
Power method The Power method is an iterative technique used to determine the dominant eigenvalue of a matrix, that is, the eigenvalue with the largest magnitude. By modifying the method slightly, it can also be used to determine other eigenvalues. One useful feature of the Power method is that it produces not only an eigenvalue, but also an associated eigenvector. In fact, the Power method is often applied to find an eigenvector for an eigenvalue that has been determined by some other means.
Outline of power method Iterative solutions: Highest eigenvalue: Power Method. Lowest eigenvalue: Inverse Power Method. Other eigenvalues: eigenvalue substitution.
Power method To apply the Power method, we assume that the n × n matrix A has n eigenvalues λ1, λ2, . . . , λn with an associated collection of linearly independent eigenvectors {v(1), v(2), v(3), . . . , v(n)}. Moreover, we assume that A has precisely one eigenvalue, λ1, that is largest in magnitude, so that |λ1| > |λ2| ≥ |λ3| ≥ · · · ≥ |λn| ≥ 0. If x is any vector in Rn, the fact that {v(1), v(2), v(3), . . . , v(n)} is linearly independent implies that constants β1, β2, . . . , βn exist with x = β1 v(1) + β2 v(2) + · · · + βn v(n).
Power Method formulas Target: y = Ax = λx. Start with the all-ones vector x(0) = [1, 1, . . . , 1]^T. y(1) = A x(0) (iteration number = 1); α1 = element in y(1) with the highest absolute value; x(1) = y(1)/α1. y(2) = A x(1) (iteration number = 2); α2 = element in y(2) with the highest absolute value; x(2) = y(2)/α2. . . . y(k) = A x(k−1); as k grows, αk → λ1 and x(k) ≈ v(1).
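The iteration above can be sketched in Python/NumPy. This is a minimal implementation of the scheme described on this slide (scale by the largest-magnitude entry, so that α converges to the dominant eigenvalue); the test matrix is a hypothetical example, not from the slides:

```python
import numpy as np

def power_method(A, num_iters=100, tol=1e-10):
    """Power method with the slides' scaling: y = A x, alpha = entry of
    y with the largest absolute value, x = y / alpha.  alpha converges
    to the dominant eigenvalue, x to an associated eigenvector."""
    x = np.ones(A.shape[0])                 # start with the all-ones vector
    alpha = 0.0
    for _ in range(num_iters):
        y = A @ x
        alpha_new = y[np.argmax(np.abs(y))]  # dominant entry (sign preserved)
        x = y / alpha_new
        if abs(alpha_new - alpha) < tol:     # alpha has stabilized
            break
        alpha = alpha_new
    return alpha_new, x

# Hypothetical symmetric test matrix with eigenvalues 4 and 2.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
print(lam)   # ≈ 4.0, the dominant eigenvalue
```

Because the scaling keeps the sign of the dominant entry, the returned α carries the sign of λ1, not just its magnitude.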
Power Method: Dominant Eigenvalue Proof Assume x = β1 v1 + β2 v2 + · · · + βn vn, where v1, . . . , vn are linearly independent eigenvectors, so that A vi = λi vi. Then A x = β1 λ1 v1 + β2 λ2 v2 + · · · + βn λn vn. After k iterations: A^k x = β1 λ1^k v1 + · · · + βn λn^k vn = λ1^k [ β1 v1 + β2 (λ2/λ1)^k v2 + · · · + βn (λn/λ1)^k vn ]. If |λ1| is considerably higher than |λ2|, . . . , |λn|, the ratio terms tend to zero and A^k x → λ1^k β1 v1.
Inverse Power Method: Smallest Absolute Eigenvalue Target: y = A^(−1) x = Bx, with B = A^(−1); each eigenvalue λ of A corresponds to the eigenvalue 1/λ of B. At iteration k: x(k) → the dominant eigenvector of B, and the dominant α of B is equivalent to the smallest absolute λ of A. Use an LU factorization to solve A y(k) = x(k−1) for y(k) instead of forming A^(−1). Find the dominant element in y(k) as αk. Keep on iterating; then the least λ = 1/α.
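A minimal sketch of this scheme, assuming SciPy's `lu_factor`/`lu_solve` for the LU step the slide calls for (factor A once, then reuse the factors each iteration); the test matrix is an illustrative assumption, not the slides' example:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method(A, num_iters=100, tol=1e-10):
    """Inverse power method: the power method applied to B = A^-1.
    A is LU-factored once; each iteration solves A y = x rather than
    forming A^-1.  The dominant alpha of B converges to 1/lambda_min,
    so the smallest-magnitude eigenvalue of A is 1/alpha."""
    lu_piv = lu_factor(A)            # factor A = LU once, reuse every iteration
    x = np.ones(A.shape[0])
    alpha = 0.0
    for _ in range(num_iters):
        y = lu_solve(lu_piv, x)      # y = A^-1 x without computing A^-1
        alpha_new = y[np.argmax(np.abs(y))]
        x = y / alpha_new
        if abs(alpha_new - alpha) < tol:
            break
        alpha = alpha_new
    return 1.0 / alpha_new, x        # least-magnitude eigenvalue of A

# Hypothetical test matrix with eigenvalues (7 ± sqrt(5))/2 ≈ 4.618, 2.382.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam, v = inverse_power_method(A)
print(lam)   # ≈ 2.382, the smallest eigenvalue
```

Factoring A once and back-substituting each iteration costs O(n^2) per step after the initial O(n^3) factorization, which is why the slide recommends LU rather than an explicit inverse.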
Example on Inverse Power Method: lowest eigenvalue Find the smallest eigenvalue of the given matrix A, where det(A) = 13. (The matrix entries and worked iterations appeared on the slide.)
Example on Highest Eigenvalue (in magnitude, sign ignored)
Summary The power method can be used to compute the dominant (real) eigenvalue and a corresponding eigenvector. Variants of the power method can compute the smallest eigenvalue or the eigenvalue closest to a given number (shift). General projection methods consist of approximating the eigenvectors of a matrix with vectors belonging to a subspace of approximants whose dimension is smaller than the dimension of the matrix. The subspace iteration method is a generalization of the power method that computes a given number of dominant eigenvalues and their corresponding eigenvectors.
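Subspace iteration, mentioned in the summary, can be sketched as a block version of the power method: iterate on several vectors at once and re-orthonormalize with QR each step. This is an illustrative sketch (orthogonal iteration), with a hypothetical diagonal test matrix and a fixed iteration count as assumptions:

```python
import numpy as np

def subspace_iteration(A, p, num_iters=200):
    """Orthogonal (subspace) iteration: a block power method that
    tracks a p-dimensional subspace.  Iterate Q <- orth(A Q); the
    eigenvalues of the projected matrix Q^T A Q then approximate the
    p dominant eigenvalues of A."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))  # random orthonormal start
    for _ in range(num_iters):
        Z = A @ Q                  # block power step
        Q, _ = np.linalg.qr(Z)     # re-orthonormalize the block
    # Small p x p projected problem yields the dominant eigenvalues.
    return np.linalg.eigvals(Q.T @ A @ Q)

# Hypothetical symmetric matrix with known eigenvalues 1, 2, 3, 4.
A = np.diag([1.0, 2.0, 3.0, 4.0])
print(sorted(subspace_iteration(A, 2).real))   # ≈ [3.0, 4.0]
```

With p = 1 this reduces to the power method; larger p recovers the stated generalization to several dominant eigenpairs at once.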