This presentation covers the mathematics of optimization techniques, a fundamental topic for engineering.
Size: 5.01 MB
Language: en
Added: Jun 19, 2024
Slides: 21 pages
Slide Content
Newton’s Method for Multivariable Optimization
Course Name: Optimization Techniques
Department of Mathematics, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore
Outline: Motivation, Algorithm, Example, Remarks
Motivation
Newton’s method uses second-order derivatives to create search directions, which allows faster convergence to the minimum point. Considering the first three terms in the Taylor series expansion of a multivariable function about the current point $x^{(k)}$,

$f(x^{(k)} + d) \approx f(x^{(k)}) + \nabla f(x^{(k)})^T d + \frac{1}{2} d^T \nabla^2 f(x^{(k)}) d$,

it can be shown that the first-order optimality condition will be satisfied if the search direction

$d^{(k)} = -[\nabla^2 f(x^{(k)})]^{-1} \nabla f(x^{(k)})$

is used.
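For completeness, here is the one-line derivation behind that claim (standard material, reconstructed rather than copied from the slides): setting the gradient of the quadratic model with respect to $d$ to zero gives

$\nabla f(x^{(k)}) + \nabla^2 f(x^{(k)}) \, d = 0 \quad \Rightarrow \quad d^{(k)} = -[\nabla^2 f(x^{(k)})]^{-1} \nabla f(x^{(k)})$.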
Motivation
If $\nabla^2 f(x^{(k)})$ is positive definite, the direction $d^{(k)}$ must be a descent direction. If $\nabla^2 f(x^{(k)})$ is not positive definite, the direction may or may not be descent, depending on whether the quantity $\nabla f(x^{(k)})^T [\nabla^2 f(x^{(k)})]^{-1} \nabla f(x^{(k)})$ is positive or not.
Motivation
Thus, the above search direction may not always guarantee a decrease in the function value in the vicinity of the current point. However, the second-order optimality condition requires that $\nabla^2 f(x^*)$ be positive definite at the minimum point $x^*$. It can therefore be assumed that $\nabla^2 f(x)$ is positive definite in the vicinity of the minimum point, so the above search direction becomes a descent direction near the minimum point.
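The descent claim reduces to one line of linear algebra (standard, stated here for clarity): the slope of $f$ along $d^{(k)}$ is

$\nabla f(x^{(k)})^T d^{(k)} = -\nabla f(x^{(k)})^T [\nabla^2 f(x^{(k)})]^{-1} \nabla f(x^{(k)})$,

which is negative, hence descent, precisely when the inverse Hessian is positive definite and $\nabla f(x^{(k)}) \neq 0$.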
Algorithm
STEP 1: Choose a maximum number of iterations $M$ to be performed, an initial point $x^{(0)}$, termination parameters $\epsilon_1$ and $\epsilon_2$, and set $k = 0$.
STEP 2: Calculate $\nabla f(x^{(k)})$, the first derivative at the point $x^{(k)}$.
STEP 3: If $\|\nabla f(x^{(k)})\| \le \epsilon_1$, Terminate; Else if $k \ge M$, Terminate; Else go to STEP 4.
STEP 4: Perform a unidirectional search to find $\alpha^{(k)}$ using $\epsilon_2$ such that $f(x^{(k+1)}) = f(x^{(k)} - \alpha^{(k)} [\nabla^2 f(x^{(k)})]^{-1} \nabla f(x^{(k)}))$ is minimum.
STEP 5: Is $\|x^{(k+1)} - x^{(k)}\| / \|x^{(k)}\| \le \epsilon_2$? If yes, Terminate; Else set $k = k + 1$ and go to STEP 2. (A runnable sketch of Steps 1-5 follows.)
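The five steps translate directly into code. Below is a minimal Python sketch; the function name newton_method, the backtracking line search standing in for the unspecified unidirectional search, and the default tolerances are illustrative assumptions rather than details from the slides.

import numpy as np

def newton_method(f, grad, hess, x0, eps1=1e-6, eps2=1e-6, max_iter=100):
    # STEP 1: initial point, termination parameters, and k = 0.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):              # k >= M in STEP 3 ends the loop
        g = grad(x)                        # STEP 2: gradient at x(k)
        if np.linalg.norm(g) <= eps1:      # STEP 3: gradient small enough
            return x
        d = np.linalg.solve(hess(x), -g)   # Newton direction -[H]^(-1) g
        # STEP 4: unidirectional search for alpha(k); simple backtracking
        # stands in for whatever line-search method the course uses.
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) >= fx and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        # STEP 5: terminate when the relative change in x is below eps2.
        if np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-12) <= eps2:
            return x_new
        x = x_new                          # set k = k + 1, go to STEP 2
    return x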
Example
Using Newton’s method, minimize the given quadratic function by taking the supplied starting (initial) point $x^{(0)}$. Solution: compute the gradient $\nabla f(x^{(0)})$ and the Hessian $\nabla^2 f(x^{(0)})$, then form the Newton direction $d^{(0)} = -[\nabla^2 f(x^{(0)})]^{-1} \nabla f(x^{(0)})$ and apply STEP 4. (A numerical illustration follows.)
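Because the slide’s actual function and starting point were not captured in this text, the run below uses a hypothetical convex quadratic with its minimum at $(1, 1)$, purely to illustrate the behaviour; the matrix A, the vector b, and the starting point are invented for the demonstration.

import numpy as np

# Hypothetical stand-in for the slide's quadratic: f(x) = 0.5 x^T A x + b^T x.
A = np.array([[2.0, -2.0],
              [-2.0, 4.0]])   # positive definite, so the Newton step descends
b = np.array([0.0, -2.0])

f = lambda x: 0.5 * x @ A @ x + b @ x
grad = lambda x: A @ x + b
hess = lambda x: A            # constant Hessian for a quadratic

print(newton_method(f, grad, hess, x0=[0.0, 0.0]))   # -> approx. [1. 1.]

The full Newton step ($\alpha = 1$) is accepted immediately, so the minimizer is reached in a single iteration, matching Figure 1 below.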
Example
Figure 1. Minimization of a quadratic function in one step.
Example
Hence, the minimum point is obtained in a single Newton iteration, as Figure 1 illustrates.
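This one-step behaviour is a general property of quadratics, not a coincidence of the example (a standard fact, added for completeness): for $f(x) = \frac{1}{2} x^T A x + b^T x + c$ with $A$ positive definite,

$\nabla f(x) = A x + b, \qquad \nabla^2 f(x) = A,$

so the Newton step with $\alpha = 1$ from any starting point gives

$x^{(1)} = x^{(0)} - A^{-1}(A x^{(0)} + b) = -A^{-1} b = x^*$,

which is exactly the minimizer.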
Remarks
This method is suitable and efficient when the initial point is close to the optimum point. Since the function value is not guaranteed to decrease at every iteration, an occasional restart of the algorithm from a different point is often necessary.
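The restart advice can be mechanized. Here is a minimal sketch built on the newton_method sketch above; the wrapper name, the number of restarts, and the sampling box are illustrative assumptions.

import numpy as np

def newton_with_restarts(f, grad, hess, x0, n_restarts=5, box=5.0, seed=0):
    # Keep the best of the initial run plus a few random restarts.
    rng = np.random.default_rng(seed)
    best = newton_method(f, grad, hess, x0)
    for _ in range(n_restarts):
        trial = rng.uniform(-box, box, size=len(best))  # assumed search box
        x = newton_method(f, grad, hess, trial)
        if f(x) < f(best):          # retain the lowest function value found
            best = x
    return best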