Optimization for Researchers
A Lecture Note
Rupaj Kumar Nayak
https://sites.google.com/a/iiit-bh.ac.in/r-k-nayak/
International Institute of Information Technology Bhubaneswar
Rupaj Kumar Nayak, 2015. Lecture Note on Optimization.
Outline
Introduction
Constructing a model
Determining the problem type
Selecting the software
Types of Optimization problems
Some definitions
Optimization Taxonomy
Algorithms by Problem type
Unconstrained Optimization
Constrained Optimization
Some Algorithms
What is Optimization?
Optimization is an important tool in making decisions and in
analyzing physical systems.
In mathematical terms, an optimization problem is the problem of
finding the best solution from among the set of all feasible solutions.
Mathematical Programming Problem
The general mathematical programming (MP) problem is as follows:

$$\min f(x) \qquad (1)$$
$$\text{subject to } g_i(x) \le 0, \quad i = 1, 2, \ldots, m,$$
$$x \ge 0,$$

where $f(x)$ and $g_i(x)$, $i = 1, 2, \ldots, m$, are real-valued functions of $x \in \mathbb{R}^n$.

Let us classify MP (1):
Linear and nonlinear
Constrained and unconstrained
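Problem (1) can be made concrete with a small numerical instance (all numbers below are illustrative, not from the note):

```python
# An instance of problem (1):
#   minimize   f(x) = (x1 - 1)^2 + (x2 - 2)^2
#   subject to g1(x) = x1 + x2 - 2 <= 0  and  x >= 0.

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):
    # One inequality constraint, written so that feasibility means g_i(x) <= 0.
    return [x[0] + x[1] - 2.0]

def is_feasible(x, tol=1e-9):
    # Feasible iff every g_i(x) <= 0 and every component x_j >= 0.
    return all(gi <= tol for gi in g(x)) and all(xj >= -tol for xj in x)

# The unconstrained minimizer (1, 2) violates g_1; the point (0.5, 1.5)
# lies on the constraint boundary and is feasible.
print(is_feasible([1.0, 2.0]))   # False
print(is_feasible([0.5, 1.5]))   # True
```

Here (0.5, 1.5) is in fact the constrained minimizer: it is the projection of (1, 2) onto the boundary of the feasible set.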
Steps of Optimization Process
The first step in the optimization process is constructing an appropriate model. Modeling is the process of identifying and expressing in mathematical terms the objective, the variables, and the constraints of the problem.
An objective is a quantitative measure of the performance of the system that we want to minimize or maximize. In manufacturing, we may want to maximize the profits or minimize the cost of production, whereas in fitting experimental data to a model, we may want to minimize the total deviation of the observed data from the predicted data.

The variables or the unknowns are the components of the system for which we want to find values. In manufacturing, the variables may be the amount of each resource consumed or the time spent on each activity, whereas in data fitting, the variables would be the parameters of the model.

The constraints are the functions that describe the relationships among the variables and that define the allowable values for the variables. In manufacturing, for example, the amount of a resource consumed cannot exceed the available amount.
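The three ingredients can be seen together in a toy product-mix model (all data hypothetical): the variables are production quantities, the objective is total profit, and the constraint is resource availability. At this scale a brute-force enumeration is enough:

```python
# A toy product-mix model: choose integer quantities (x1, x2) of two products
# to maximize profit subject to a single resource constraint.

profit = (3.0, 5.0)      # objective data: profit per unit of each product
usage = (2.0, 1.0)       # constraint data: resource consumed per unit
capacity = 10.0          # available amount of the resource

best, best_x = float("-inf"), None
for x1 in range(11):
    for x2 in range(11):
        if usage[0] * x1 + usage[1] * x2 <= capacity:   # the constraint
            value = profit[0] * x1 + profit[1] * x2     # the objective
            if value > best:
                best, best_x = value, (x1, x2)

print(best_x, best)  # prints (0, 10) 50.0
```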
Determining the Problem Type
The second step in the optimization process is determining in which category of optimization your model belongs. The page "Types of Optimization Problems" provides some guidance to help you classify your optimization model.
Software
The third step in the optimization process is selecting software appropriate for the type of optimization problem that you are solving. Optimization software comes in two related but very different kinds of packages:

Solver software is concerned with finding a solution to a specific instance of an optimization model. The solver takes an instance of a model as input, applies one or more solution methods, and returns the results.

Modeling software formulates optimization models and analyzes their solutions. A modeling system takes as input a description of an optimization problem in symbolic form and allows the solution output to be viewed in similar terms; conversion to the forms required by the algorithm(s) is done internally.
Commercial vs. Open Source Solvers
Commercial solvers are developed with considerable effort and, while usually more robust and reliable, they are often quite expensive. Some commercial systems are available for free under reasonable conditions for educational purposes and academic research. Many offer free size-limited student (or demo) versions for experimentation with small problem instances.

Open source solvers make their source code freely available under one of the standard open source licenses; many of these are available through the COIN-OR repository (www.coin-or.org). Many of the open-source solvers are also available as precompiled binaries for the more popular platforms.
Continuous vs. Discrete Optimization
Some models only make sense if the variables take on values from a
discrete set, often a subset of integers, whereas other models contain
variables that can take on any real value. Models with discrete variables
are discrete optimization problems; models with continuous variables are
continuous optimization problems.
Continuous optimization problems are generally easier to solve than discrete optimization problems. However, improvements in algorithms coupled with advancements in computing technology have dramatically increased the size and complexity of discrete optimization problems that can be solved efficiently.
Unconstrained Optimization vs. Constrained Optimization
Constrained optimization problems can be further classified according to the nature of the constraints (e.g., linear, nonlinear, convex) and the smoothness of the functions (e.g., differentiable or non-differentiable).
None, One or Many Objectives
Most optimization problems have a single objective function.
There are interesting cases in which optimization problems have no objective function or multiple objective functions. Feasibility problems are problems in which the goal is to find values for the variables that satisfy the constraints of a model, with no particular objective to optimize.

Multi-objective optimization problems arise in many fields when optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. For example, developing a new component might involve minimizing weight while maximizing strength, or choosing a portfolio might involve maximizing the expected return while minimizing the risk. In practice, problems with multiple objectives are often reformulated as single-objective problems, either by forming a weighted combination of the different objectives or by replacing some of the objectives by constraints.
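The weighted-combination reformulation can be sketched in a few lines; the weight and strength functions below are hypothetical stand-ins for the component-design example:

```python
# Two conflicting objectives combined into one scalarized objective.

def weight(x):
    # Objective 1: minimize weight, which grows with thickness x.
    return x

def neg_strength(x):
    # Objective 2: maximize strength, i.e. minimize its negative.
    # Strength 2x - x^2 peaks at x = 1 and then degrades.
    return -(2.0 * x - x * x)

def scalarized(x, w1=0.5, w2=0.5):
    # The weighted combination of the two objectives.
    return w1 * weight(x) + w2 * neg_strength(x)

# Crude 1-D grid search over x in [0, 2] on the scalarized objective.
xs = [i / 1000.0 for i in range(2001)]
best_x = min(xs, key=scalarized)
print(best_x)  # prints 0.5
```

Changing the weights w1 and w2 traces out different trade-off points, which is exactly why the reformulated problem is only one point on the multi-objective Pareto front.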
Deterministic Optimization vs. Stochastic Optimization
In deterministic optimization, the data for the given problem are known accurately. Sometimes, however, the data cannot be known accurately, for a variety of reasons. The first reason is simple measurement error. The second and more fundamental reason is that some data represent information about the future (e.g., product demand or price for a future time period) and simply cannot be known with certainty.

In optimization under uncertainty, or stochastic optimization, the uncertainty is incorporated into the model. Robust optimization techniques can be used when the parameters are known only within certain bounds; the goal is to find a solution that is feasible for all data and optimal in some sense. Stochastic programming models take advantage of the fact that probability distributions governing the data are known or can be estimated; the goal is to find a policy that is feasible for all (or almost all) possible data instances and optimizes the expected performance of the model.
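A minimal sketch of the stochastic-programming idea, with a made-up demand distribution: the unknown data are replaced by samples, and the average performance over those samples is optimized (the sample-average approximation).

```python
import random

# Hypothetical model: pick a production level x to minimize the expected
# mismatch cost E[(x - D)^2], where demand D is uncertain.
random.seed(0)
demand_samples = [random.gauss(10.0, 2.0) for _ in range(10000)]

def sample_average_cost(x):
    # Average cost over the sampled demand scenarios.
    return sum((x - d) ** 2 for d in demand_samples) / len(demand_samples)

# For this quadratic cost, the sample-average minimizer is the sample mean.
x_star = sum(demand_samples) / len(demand_samples)
print(x_star)  # close to the true expected demand, 10.0
```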
Terminology
Definitions
Feasible solution
Increasing/Decreasing function
Unimodal/Multimodal functions
Stationary point
Saddle point
Point of inflexion
Global and local minimum
Convexity
Positive semidefinite matrix
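The last two definitions are linked: a twice-differentiable function is convex exactly when its Hessian is positive semidefinite everywhere. For a symmetric 2x2 matrix the check can be done by hand; the snippet below is a sketch for that case only, not a general routine:

```python
# [[a, b], [b, c]] is positive semidefinite iff both diagonal entries are
# nonnegative and the determinant a*c - b^2 is nonnegative.

def is_psd_2x2(a, b, c):
    return a >= 0 and c >= 0 and a * c - b * b >= 0

print(is_psd_2x2(2.0, 1.0, 2.0))   # True:  eigenvalues are 1 and 3
print(is_psd_2x2(1.0, 2.0, 1.0))   # False: determinant is -3
```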
Optimization Taxonomy
Unconstrained Optimization
Line Search Methods
Trust region methods
Quasi-Newton Method
Conjugate Gradient Method
Nonlinear Simplex method
Constrained Optimization
Bound Constrained Optimization
1. Newton Methods
2. Gradient Projection Methods

Linear Programming
1. Simplex Method
2. Interior Point Methods

Quadratic Programming
1. Algorithms for Quadratic Programming

Nonlinear Programming
1. Augmented Lagrangian Methods
2. Sequential Quadratic Programming
3. Reduced Gradient Methods
4. Feasible Sequential Quadratic Programming

Semi-infinite Programming
1. Central Cutting Plane Methods
2. Discretization Methods
3. KKT Reduction Methods
4. SQP Reduction Methods
Unconstrained optimization problems
Let $x \in \mathbb{R}^n$ be a real vector with $n \ge 1$ components and let $f : \mathbb{R}^n \to \mathbb{R}$ be a smooth function. Then the unconstrained optimization problem is
$$\min_{x} f(x).$$
Unconstrained optimization problems arise directly in some applications, but they also arise indirectly from reformulations of constrained optimization problems. Often it is practical to replace the constraints of an optimization problem with penalty terms in the objective function and to solve the problem as an unconstrained problem.
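The penalty reformulation just mentioned can be sketched in one dimension; the problem and the penalty weights below are illustrative:

```python
# Original constrained problem: minimize x^2 subject to x >= 1.
# Penalty reformulation: minimize x^2 + mu * max(0, 1 - x)^2, unconstrained.

def f(x):
    return x * x

def penalized(x, mu):
    violation = max(0.0, 1.0 - x)      # how far x falls below the bound
    return f(x) + mu * violation ** 2  # penalized objective

def grid_min(obj, lo=-2.0, hi=2.0, steps=4000):
    # Crude unconstrained minimization by grid search.
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(xs, key=obj)

# As the penalty weight mu grows, the unconstrained minimizer approaches
# the constrained minimizer x = 1 (it is mu / (1 + mu) for this problem).
for mu in (1.0, 10.0, 1000.0):
    print(mu, grid_min(lambda x: penalized(x, mu)))
```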
Algorithms
An important aspect of continuous optimization (constrained and
unconstrained) is whether the functions are smooth, by which we mean
that the second derivatives exist and are continuous. There has been
extensive study and development of algorithms for the unconstrained
optimization of smooth functions. At a high level, algorithms for
unconstrained minimization follow this general structure:
Choose a starting point $x_0$.
Beginning at $x_0$, generate a sequence of iterates $\{x_k\}_{k=0}^{\infty}$ with non-increasing function value $f(x_k)$, until a solution point with sufficient accuracy is found or until no further progress can be made. To generate the next iterate $x_{k+1}$, the algorithm uses information about the function at $x_k$ and possibly earlier iterates.
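This loop can be instantiated with the simplest update rule, a fixed-step gradient descent on a one-variable quadratic; the function, step size, and tolerance below are illustrative choices, not part of the note:

```python
# Minimize f(x) = (x - 3)^2 by the generic iterate loop above.

def grad(x):
    return 2.0 * (x - 3.0)   # f'(x)

x = 0.0                      # starting point x_0
step = 0.1
for k in range(200):
    g = grad(x)
    if abs(g) < 1e-8:        # solution found with sufficient accuracy
        break
    x = x - step * g         # next iterate uses information at x_k

print(round(x, 6))  # prints 3.0
```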
Newton's method
Newton's method forms a quadratic model of the objective function around the current iterate. The model function is defined by
$$q_k(s) = f(x_k) + \nabla f(x_k)^T s + \tfrac{1}{2}\, s^T \nabla^2 f(x_k)\, s.$$
In the basic Newton method, the next iterate is obtained from the minimizer of $q_k(s)$. When the Hessian matrix $\nabla^2 f(x_k)$ is positive definite, the quadratic model has a unique minimizer that can be obtained by solving the symmetric $n \times n$ linear system
$$\nabla^2 f(x_k)\, s = -\nabla f(x_k).$$
The next iterate is then $x_{k+1} = x_k + s_k$. Convergence is guaranteed if the starting point is sufficiently close to a local minimizer $x^*$ at which the Hessian is positive definite.
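In one dimension the n x n system collapses to f''(x_k) s = -f'(x_k). A sketch on the example f(x) = x - ln x, whose minimizer is x* = 1 (this function is our choice, not the note's):

```python
# Basic Newton iteration in 1-D for f(x) = x - ln(x).

def fprime(x):
    return 1.0 - 1.0 / x     # f'(x); zero exactly at x = 1

def fsecond(x):
    return 1.0 / (x * x)     # f''(x); positive for x > 0, so q_k has a
                             # unique minimizer at every iterate

x = 0.5                      # close enough to x* = 1 for convergence
for k in range(20):
    s = -fprime(x) / fsecond(x)   # solve f''(x_k) s = -f'(x_k)
    x = x + s                     # x_{k+1} = x_k + s_k

print(x)
```

The error here obeys e_{k+1} = e_k^2, the quadratic convergence typical of Newton's method near a minimizer with positive definite Hessian.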
Newton's Method (cont.)
In most circumstances, however, the basic Newton method has to be modified to achieve convergence. There are two fundamental strategies for moving from $x_k$ to $x_{k+1}$: line search and trust region. Most algorithms follow one of these two strategies.

The line-search method modifies the search direction to obtain another downhill, or descent, direction for $f$. It then tries different step lengths along this direction until it finds a step that not only decreases $f$ but also achieves at least a small fraction of this direction's potential.

The trust-region methods use the original quadratic model function, but they constrain the new iterate to stay in a local neighborhood of the current iterate. To find the step, it is necessary to minimize the quadratic function subject to staying in this neighborhood, which is generally ellipsoidal in shape.

Line-search and trust-region techniques are suitable if the number of variables $n$ is not too large, because the cost per iteration is of order $n^3$. Codes for problems with a large number of variables tend to use truncated Newton methods, which usually settle for an approximate minimizer of the quadratic model.
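A backtracking line search makes the "small fraction of this direction's potential" test concrete; this is the standard Armijo condition, and the function and constants below are illustrative:

```python
# One backtracking line-search step on f(x) = x^2 with descent direction
# d = -f'(x).

def f(x):
    return x * x

def fprime(x):
    return 2.0 * x

def backtracking_step(x, c=1e-4, shrink=0.5):
    d = -fprime(x)               # a descent direction for f
    t = 1.0                      # try the full step first
    # Accept t once f decreases by at least a small fraction of the
    # directional derivative: f(x + t*d) <= f(x) + c * t * f'(x) * d.
    while f(x + t * d) > f(x) + c * t * fprime(x) * d:
        t *= shrink              # otherwise shorten the step
    return x + t * d

x = 5.0
for _ in range(50):
    x = backtracking_step(x)
print(x)  # prints 0.0, the minimizer
```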