Optimization Introduction power point presentation
muskaanbhayana9, May 13, 2025, 49 slides
EE/Econ 458
Introduction to Optimization
J. McCalley
Electricity markets and tools
Day-ahead: SCUC and SCED. Real-time: SCED.
BOTH LOOK LIKE THIS:
Minimize f(x)
subject to
h(x) = c
g(x) ≤ b
SCUC: x contains discrete & continuous variables.
SCED: x contains only continuous variables.
Optimization Terminology
Minimize f(x)
subject to
h(x) = c
g(x) ≥ b
f(x): objective function
x: decision variables
h(x) = c: equality constraint
g(x) ≥ b: inequality constraint
x*: solution
Such a problem is called an optimization problem, a mathematical program,
or a mathematical programming problem.
Classification of Optimization Problems
(from http://www.neos-guide.org/NEOS/index.php/Optimization_Tree)
Continuous Optimization
  Unconstrained Optimization
  Bound Constrained Optimization
  Derivative-Free Optimization
  Global Optimization
  Linear Programming
  Network Flow Problems
  Nondifferentiable Optimization
  Nonlinear Programming
  Optimization of Dynamic Systems
  Quadratically Constrained Quadratic Programming
  Quadratic Programming
  Second-Order Cone Programming
  Semidefinite Programming
  Semi-infinite Programming
Discrete and Integer Optimization
  Combinatorial Optimization
    Traveling Salesman Problem
  Integer Programming
    Mixed Integer Linear Programming
    Mixed Integer Nonlinear Programming
Optimization Under Uncertainty
  Robust Optimization
  Stochastic Programming
  Simulation/Noisy Optimization
  Stochastic Algorithms
Complementarity Constraints and Variational Inequalities
  Complementarity Constraints
  Game Theory
  Linear Complementarity Problems
  Mathematical Programs with Complementarity Constraints
  Nonlinear Complementarity Problems
Systems of Equations
  Data Fitting/Robust Estimation
  Nonlinear Equations
  Nonlinear Least Squares
Systems of Inequalities
Multiobjective Optimization
Convex functions
Definition #1: A function f(x) is convex on an interval if its second
derivative is positive on that interval.
Example: f(x) = x² is convex since f'(x) = 2x and f''(x) = 2 > 0.
Convex functions
The second-derivative test is sufficient but not necessary.
Definition #2: A function f(x) is convex if a line drawn between any two
points on the function remains on or above the function in the interval
between the two points.
www.ebyte.it/library/docs/math09/AConvexInequality.html
Convex functions
Definition #2: A function f(x) is convex if a line drawn between any two
points on the function remains on or above the function in the interval
between the two points.
Is a linear function convex? The answer is "yes", since a line drawn
between any two points on the function remains on the function.
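Definition #2 can be checked numerically. The sketch below is in Python rather than the MATLAB used later in these slides, and the helper name is my own: it samples pairs of points, draws the chord between them, and reports non-convexity if the function ever rises above a chord.

```python
import numpy as np

def is_convex_by_chords(f, lo, hi, n_pts=40, tol=1e-9):
    """Numerically test Definition #2: every chord drawn between two sample
    points must lie on or above the function between those points."""
    xs = np.linspace(lo, hi, n_pts)
    for a in xs:
        for b in xs:
            for t in np.linspace(0.0, 1.0, 11):
                x = t*a + (1 - t)*b              # point between a and b
                chord = t*f(a) + (1 - t)*f(b)    # height of the chord there
                if f(x) > chord + tol:           # function pokes above the chord
                    return False
    return True

print(is_convex_by_chords(lambda x: x**2, -2, 2))                          # -> True
print(is_convex_by_chords(lambda x: 2*x + 1, -2, 2))                       # -> True (a line)
print(is_convex_by_chords(lambda x: x**3 - 17*x**2 + 80*x - 100, 0, 10))   # -> False
```

Note that the linear function passes, consistent with the slide's answer: for a line, the chord lies exactly on the function.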
Convex Sets
Definition #3: A set C is convex if a line segment between any two points
in C lies in C.
Ex: Which of the sets below are convex?
The set on the left is convex. The set on the right is not.
Convex Sets
Definition #3: A set C is convex if a line segment between any two points
in C lies in C.
S. Boyd and L. Vandenberghe, "Convex Optimization," Cambridge University Press, 2004.
Global vs. local optima
Example: Solve the following: Minimize f(x) = x².
Solution: f'(x) = 2x = 0  ⇒  x* = 0.
This solution is a local optimum. It is also the global optimum.
Example: Solve the following: Minimize f(x) = x³ − 17x² + 80x − 100.
Solution: f'(x) = 3x² − 34x + 80 = 0.
Solving the above results in x = 3.33 and x = 8.
Issue #1: Which is the best solution?
Issue #2: Is the best solution the global solution?
Global vs. local optima
Example: Solve the following: Minimize f(x) = x³ − 17x² + 80x − 100.
Solution: f'(x) = 3x² − 34x + 80 = 0. Solving results in x = 3.33, x = 8.
Issue #1: Which is the best solution?  x = 8.
Issue #2: Is the best solution the global solution?
No! The problem is unbounded: f(x) → −∞ as x → −∞.
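The two stationary points of the cubic above can be classified with the second derivative; a short Python sketch (the slides use MATLAB, so this is an equivalent, not the slides' own code):

```python
import numpy as np

# f(x) = x^3 - 17x^2 + 80x - 100, the second example above.
f = np.poly1d([1, -17, 80, -100])
df = f.deriv()     # f'(x)  = 3x^2 - 34x + 80
d2f = df.deriv()   # f''(x) = 6x - 34

for x in sorted(np.roots(df)):
    kind = "local min" if d2f(x) > 0 else "local max"
    print(f"x = {x:.2f}, f(x) = {f(x):.2f}  ({kind})")

# The local min at x = 8 beats the local max at x = 3.33, but the problem is
# unbounded below: f decreases without limit as x -> -infinity.
print(f(-10.0) < f(8.0))   # -> True
```

This reproduces the slide's two answers: x = 8 is the better stationary point, yet no global minimum exists.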
Convexity & global vs. local optima
When minimizing a function, if we want to be sure that we can get a global
solution via differentiation, we need to impose some requirements on our
objective function. We will also need to impose some requirements on the
feasible set S (the set of possible values the solution x* may take):
Min f(x)  subject to  h(x) = c, g(x) ≥ b
or, equivalently,
min f(x)  subject to  x ∈ S
Definition: If f(x) is a convex function, and if S is a convex set, then
the above problem is a convex programming problem.
Definition: If f(x) is not a convex function, or if S is not a convex set,
then the above problem is a non-convex programming problem.
Convex vs. nonconvex programming problems
The desirable quality of a convex programming problem is that any locally
optimal solution is also a globally optimal solution. If we have a method
of finding a locally optimal solution, that method also finds the global
optimum for us.
The undesirable quality of a non-convex programming problem is that a
method which finds a locally optimal solution does not necessarily find
the globally optimal solution.
MATHEMATICAL PROGRAMMING: Convex / Non-convex.
We address convex programming problems in addressing linear programming.
We will also, later, address a special form of non-convex programming
problems called integer programs.
A convex programming problem
(1) Two variables with one equality constraint:
    min f(x1, x2)  s.t.  h(x1, x2) = c
(2) Multi-variable with one equality constraint:
    min f(x)  s.t.  h(x) = c
(3) Multi-variable with multiple equality constraints:
    min f(x)  s.t.  h(x) = c
We focus on the first one, but conclusions we derive will also apply to
the other two. The benefit of focusing on the first one is that we can
visualize it.
Contour maps
Definition: A contour map is a 2-dimensional plane, i.e., a coordinate
system in 2 variables, say x1, x2, that illustrates curves (contours) of
constant functional value f(x1, x2).
Example: Draw the contour map for f(x1, x2) = x1² + x2².
[X,Y] = meshgrid(-2.0:.2:2.0,-2.0:.2:2.0);
Z = X.^2+Y.^2;
[c,h] = contour(X,Y,Z);
clabel(c,h);
grid;
xlabel('x1');
ylabel('x2');
Contour maps and 3-D illustrations
Example: Draw the 3-D surface for f(x1, x2) = x1² + x2².
[X,Y] = meshgrid(-2.0:.2:2.0,-2.0:.2:2.0);
Z = X.^2+Y.^2;
surfc(X,Y,Z)
xlabel('x1')
ylabel('x2')
zlabel('f(x1,x2)')
Height is f(x). Each contour of fixed value f is the projection onto the
x1-x2 plane of a horizontal slice made of the 3-D figure at a value f
above the x1-x2 plane.
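For readers without MATLAB, a Python/matplotlib sketch of the same two plots (contour map plus 3-D surface of f(x1, x2) = x1² + x2²); the Agg backend and output filename are my own choices:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")   # headless backend; drop this line for interactive use
import matplotlib.pyplot as plt

# Same grid as the MATLAB snippets: x1, x2 from -2 to 2 in steps of 0.2.
pts = np.linspace(-2.0, 2.0, 21)
X, Y = np.meshgrid(pts, pts)
Z = X**2 + Y**2   # f(x1, x2) = x1^2 + x2^2

fig = plt.figure(figsize=(10, 4))
ax1 = fig.add_subplot(1, 2, 1)
cs = ax1.contour(X, Y, Z)   # contour map
ax1.clabel(cs)
ax1.grid(True)
ax1.set_xlabel("x1"); ax1.set_ylabel("x2")

ax2 = fig.add_subplot(1, 2, 2, projection="3d")
ax2.plot_surface(X, Y, Z)   # surface whose horizontal slices give the contours
ax2.set_xlabel("x1"); ax2.set_ylabel("x2"); ax2.set_zlabel("f(x1,x2)")

fig.savefig("paraboloid.png")
print(Z.max())   # -> 8.0 (at the corners x1 = x2 = +/-2)
```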
Solving a convex program: graphical analysis
Example: Solve this convex program:
min f(x1, x2) = x1² + x2²
s.t. h(x1, x2) = x1 + x2 = 6
Superimpose the constraint, x2 = 6 − x1, on top of the contour plot for
f(x1, x2):
1. f(x1, x2) must be minimized, and so we would like the solution to be as
close to the origin as possible;
2. The solution must be on the thick line in the right-hand corner of the
plot, since this line represents the equality constraint.
A straight line is a convex set because a line segment between any two
points on it remains on it.
Solving a convex program: graphical analysis
Solution: x* = (x1*, x2*) = (3, 3), with f(x1*, x2*) = 18.
Any contour f < 18 does not intersect the equality constraint;
any contour f > 18 intersects the equality constraint at two points.
The contour f = 18 and the equality constraint just touch each other at
the point x*.
"Just touch": the two curves are tangent to one another at the solution point.
Solving a convex program: graphical analysis
The two curves are tangent to one another at the solution point. Therefore
the normal (gradient) vectors of the two curves, at the solution (tangent)
point, are parallel.
"Parallel" means that the two vectors have the same direction. We do not
know that they have the same magnitude. To account for this, we equate
with a "multiplier" λ:
∇f(x1*, x2*) = λ∇h(x1*, x2*)
This means the following two vectors are parallel:
∇f(x1, x2)|x* = [2x1*; 2x2*]
∇h(x1, x2)|x* = [1; 1]    (from h(x1, x2) = x1 + x2 = 6)
Solving a convex program: graphical analysis
∇f(x1*, x2*) = λ∇h(x1*, x2*)
Moving everything to the left:
∇f(x1*, x2*) − λ∇h(x1*, x2*) = 0
Alternately:
∇f(x1*, x2*) − λ∇(h(x1*, x2*) − c) = 0
Performing the gradient operation (taking derivatives with respect to x1
and x2):
∂f(x1, x2)/∂x1 − λ ∂(h(x1, x2) − c)/∂x1 = 0
∂f(x1, x2)/∂x2 − λ ∂(h(x1, x2) − c)/∂x2 = 0     (evaluated at x*)
In this problem, we already know the solution, but what if we did not?
Then could we use the above equations to find the solution?
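The tangency claim can be verified numerically. A small Python sketch, assuming the example constraint x1 + x2 = 6 with objective x1² + x2²: at the tangency point the 2-D cross product of the two gradients vanishes, and the scale factor between them is the multiplier λ.

```python
import numpy as np

# f(x1,x2) = x1^2 + x2^2; h(x1,x2) = x1 + x2, with h = 6 (the example constraint).
grad_f = lambda x: np.array([2*x[0], 2*x[1]])
grad_h = lambda x: np.array([1.0, 1.0])

x_star = np.array([3.0, 3.0])   # tangency point, on the contour f = 18
gf, gh = grad_f(x_star), grad_h(x_star)

# Two vectors are parallel when their 2-D cross product is zero;
# the scale factor between them is the multiplier lambda.
cross = gf[0]*gh[1] - gf[1]*gh[0]
lam = gf[0] / gh[0]
print(cross, lam)   # -> 0.0 6.0
```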
Solving a convex program: analytical analysis
∂f(x1, x2)/∂x1 − λ ∂(h(x1, x2) − c)/∂x1 = 0
∂f(x1, x2)/∂x2 − λ ∂(h(x1, x2) − c)/∂x2 = 0
In this problem, we already know the solution, but what if we did not?
Then could we use the above equations to find the solution?
NO! Because we only have 2 equations, yet 3 unknowns: x1, x2, λ.
So we need another equation. Where do we get that equation?
Recall our equality constraint: h(x1, x2) − c = 0. This must be satisfied!
Therefore:
∂f(x1, x2)/∂x1 − λ ∂(h(x1, x2) − c)/∂x1 = 0
∂f(x1, x2)/∂x2 − λ ∂(h(x1, x2) − c)/∂x2 = 0
h(x1, x2) − c = 0
Three equations, three unknowns; we can solve.
Solving a convex program: analytical analysis
Observation: The three equations are simply the partial derivatives of the
function
F(x1, x2, λ) = f(x1, x2) − λ(h(x1, x2) − c).
This is obviously true for the first two equations, but it is not so
obviously true for the last one. To see it, observe that
∂F/∂λ = −(h(x1, x2) − c) = 0  ⇔  h(x1, x2) − c = 0.
Formal approach to solving our problem
Define the Lagrangian function:
L(x1, x2, λ) = f(x1, x2) − λ(h(x1, x2) − c)
In a convex programming problem, the "first-order conditions" for finding
the solution are given by
∇L(x1, x2, λ) = 0
OR:
∂L(x1, x2, λ)/∂x1 = 0
∂L(x1, x2, λ)/∂x2 = 0
∂L(x1, x2, λ)/∂λ = 0
Or more compactly:
∇x L(x, λ) = 0
∇λ L(x, λ) = 0
where we have used x = (x1, x2).
Applying to our example
Define the Lagrangian function:
L(x1, x2, λ) = f(x1, x2) − λ(h(x1, x2) − c) = x1² + x2² − λ(x1 + x2 − 6)
Setting ∇L(x1, x2, λ) = 0:
∂L/∂x1 = 2x1 − λ = 0
∂L/∂x2 = 2x2 − λ = 0
∂L/∂λ = −(x1 + x2 − 6) = 0
A set of 3 linear equations and 3 unknowns; we can write it in the form
Ax = b.
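The Ax = b step can be sketched in Python (the slides solve it in MATLAB); this assumes the example above, with objective x1² + x2² and constraint x1 + x2 = 6:

```python
import numpy as np

# First-order conditions of L = x1^2 + x2^2 - lam*(x1 + x2 - 6),
# written as A @ [x1, x2, lam] = b:
A = np.array([[2.0, 0.0, -1.0],   # dL/dx1 = 2*x1 - lam = 0
              [0.0, 2.0, -1.0],   # dL/dx2 = 2*x2 - lam = 0
              [1.0, 1.0,  0.0]])  # dL/dlam: x1 + x2 = 6
b = np.array([0.0, 0.0, 6.0])

x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)   # x1 = x2 = 3, lam = 6
```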
Now, let’s go back to our example with a
nonlinear equality constraint.
Example with nonlinear equality
Example: Solve this program:
min f(x1, x2) = x1² + x2²
s.t. h(x1, x2) = 2x1x2 = 3
This problem is non-convex because the feasible set is not convex: a line
segment connecting two points in the set does not remain in the set.
(See the "notes" of this slide.)
Superimpose the constraint, x2 = 3/(2x1), on top of the contour plot for
f(x1, x2):
1. f(x1, x2) must be minimized, and so we would like the solution to be as
close to the origin as possible;
2. The solution must be on the thick curve in the right-hand corner of the
plot, since this curve represents the equality constraint.
Example with nonlinear equality
Solution: x* = (x1*, x2*) ≈ (1.25, 1.25), with f(x1*, x2*) = 3.
Any contour f < 3 does not intersect the equality constraint;
any contour f > 3 intersects the equality constraint at two points.
The contour f = 3 and the equality constraint just touch each other at the
point x*.
"Just touch": the two curves are tangent to one another at the solution point.
Example with nonlinear equality
The two curves are tangent to one another at the solution point. Therefore
the normal (gradient) vectors of the two curves, at the solution (tangent)
point, are parallel.
"Parallel" means that the two vectors have the same direction. We do not
know that they have the same magnitude. To account for this, we equate
with a "multiplier" λ:
∇f(x1*, x2*) = λ∇h(x1*, x2*)
This means the following two vectors are parallel:
∇f(x1, x2)|x* = [2x1*; 2x2*]
∇h(x1, x2)|x* = [2x2*; 2x1*]    (from h(x1, x2) = 2x1x2 = 3)
Example with nonlinear equality
This gives us the following two equations:
∂f(x1, x2)/∂x1 − λ ∂(h(x1, x2) − c)/∂x1 = 0
∂f(x1, x2)/∂x2 − λ ∂(h(x1, x2) − c)/∂x2 = 0
And we add the equality constraint to give 3 equations, 3 unknowns:
∂f(x1, x2)/∂x1 − λ ∂(h(x1, x2) − c)/∂x1 = 0
∂f(x1, x2)/∂x2 − λ ∂(h(x1, x2) − c)/∂x2 = 0
h(x1, x2) − c = 0
Three equations, three unknowns; we can solve.
Example with nonlinear equality
Define the Lagrangian function:
L(x1, x2, λ) = f(x1, x2) − λ(h(x1, x2) − c) = x1² + x2² − λ(2x1x2 − 3)
Setting ∇L(x1, x2, λ) = 0:
∂L/∂x1 = 2x1 − 2λx2 = 0
∂L/∂x2 = 2x2 − 2λx1 = 0
∂L/∂λ = −(2x1x2 − 3) = 0
You can solve this algebraically to obtain
x1 = x2 = √(3/2) = 1.2247  or  x1 = x2 = −√(3/2) = −1.2247,
and f = 3 in both cases.
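Because the constraint is nonlinear, the first-order conditions are no longer a linear system; a Python sketch using a generic root-finder (scipy, not the slides' MATLAB) recovers both solutions from nearby starting points:

```python
from scipy.optimize import fsolve

# First-order conditions of L = x1^2 + x2^2 - lam*(2*x1*x2 - 3):
def foc(v):
    x1, x2, lam = v
    return [2*x1 - 2*lam*x2,    # dL/dx1 = 0
            2*x2 - 2*lam*x1,    # dL/dx2 = 0
            -(2*x1*x2 - 3)]     # dL/dlam = 0 (the equality constraint)

# Start near each of the two solutions; fsolve refines them numerically.
for start in ([1.2, 1.2, 1.0], [-1.2, -1.2, 1.0]):
    x1, x2, lam = fsolve(foc, start)
    print(f"x1 = {x1:.4f}, x2 = {x2:.4f}, f = {x1**2 + x2**2:.4f}")
```

Both runs converge to ±√(3/2) ≈ ±1.2247 with f = 3, matching the algebraic solution.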
Example with nonlinear equality
Our approach worked in this case, i.e., we found a local optimal point
that was also a global optimal point; but because it was not a convex
programming problem, we had no guarantee that this would happen.
The conditions we established, below, are called first-order conditions:
∇x L(x, λ) = 0
∇λ L(x, λ) = 0
For convex programming problems, they are first-order sufficient
conditions for the global optimal point.
For nonconvex programming problems, they are first-order necessary
conditions for the global optimal point.
Multiple equality constraints
We assume that f and h are continuously differentiable.
min f(x)  s.t.  h(x) = c
L(x, λ) = f(x) − λ1(h1(x) − c1) − λ2(h2(x) − c2) − ... − λm(hm(x) − cm)
First-order necessary conditions that (x*, λ*) solves the above:
∇x L(x*, λ*) = 0
∇λ L(x*, λ*) = 0
Multiple equality & 1 inequality constraint
We assume that f, h, and g are continuously differentiable.
min f(x)  s.t.  h(x) = c,  g(x) ≥ b
Solution approach:
• Ignore the inequality constraint and solve the problem
(this is just a problem with multiple equality constraints).
• If the inequality constraint is satisfied, then the problem is solved.
• If the inequality constraint is violated, then the inequality constraint
must be binding, i.e., the inequality constraint is enforced with
equality: g(x) = b.
Let's look at this new problem where the inequality is binding.
Multiple equality & 1 inequality constraint
We assume that f, h, and g are continuously differentiable.
min f(x)  s.t.  h(x) = c,  g(x) = b  (the inequality, now binding)
L(x, λ, μ) = f(x) − λ1(h1(x) − c1) − ... − λm(hm(x) − cm) − μ(g(x) − b)
First-order necessary conditions that (x*, λ*, μ*) solves the above:
∇x L(x*, λ*, μ*) = 0
∇λ L(x*, λ*, μ*) = 0
∇μ L(x*, λ*, μ*) = 0
We were able to write down this solution only after we knew the inequality
constraint was binding. Can we generalize this approach?
Multiple equality & 1 inequality constraint
L(x, λ, μ) = f(x) − λ1(h1(x) − c1) − ... − λm(hm(x) − cm) − μ(g(x) − b)
If the inequality is not binding, then apply the first-order necessary
conditions ignoring it:
μ = 0
g(x) − b ≠ 0 (since it is not binding!)
If the inequality is binding, then apply the first-order necessary
conditions treating the inequality constraint as an equality constraint:
μ ≠ 0
g(x) − b = 0 (since it is binding!)
Either way:
μ(g(x) − b) = 0
This relation encodes our solution procedure! It can be used to generalize
our necessary conditions.
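The procedure above can be sketched in Python on a hypothetical one-variable problem of my own, min (x − 5)² s.t. x ≥ b (it is not from the slides): solve ignoring the inequality, then enforce it with equality only if violated. Complementarity μ(g(x) − b) = 0 holds on both branches.

```python
# Hypothetical example: min (x - 5)^2  s.t.  x >= b.
def solve_with_one_inequality(b):
    # Step 1: ignore the inequality and solve: d/dx (x-5)^2 = 0  ->  x = 5.
    x, mu = 5.0, 0.0
    # Step 2: if x >= b is violated, the constraint must be binding.
    if x < b:
        x = b              # enforce g(x) = b
        mu = 2 * (x - 5)   # stationarity of (x-5)^2 - mu*(x - b): 2(x-5) - mu = 0
    # Either way, complementarity mu*(x - b) = 0 holds.
    assert mu * (x - b) == 0
    return x, mu

print(solve_with_one_inequality(2.0))   # not binding -> (5.0, 0.0)
print(solve_with_one_inequality(6.0))   # binding     -> (6.0, 2.0)
```

Note the binding case returns μ = 2 ≥ 0, consistent with the nonnegativity requirement introduced on the next slide.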
Multiple equality & multiple inequality constraints
We assume that f, h, and g are continuously differentiable.
min f(x)  s.t.  h(x) = c,  g(x) ≥ b
L(x, λ, μ) = f(x) − λ1(h1(x) − c1) − ... − λm(hm(x) − cm)
             − μ1(g1(x) − b1) − ... − μn(gn(x) − bn)
First-order necessary conditions that (x*, λ*, μ*) solves the above:
∇x L(x*, λ*, μ*) = 0
h(x*) = c
g(x*) ≥ b
μk*(gk(x*) − bk) = 0, k = 1, ..., n
(complementarity condition: inactive constraints have a zero multiplier)
μk* ≥ 0, k = 1, ..., n
(nonnegativity on inequality multipliers)
These conditions are also referred to as the Karush-Kuhn-Tucker (KKT)
conditions.
An additional requirement
We assume that f, h, and g are continuously differentiable.
min f(x)  s.t.  h(x) = c,  g(x) ≥ b
For the KKT conditions to be guaranteed to identify a local optimum, we
need the Kuhn-Tucker constraint qualification (even under convexity).
This condition imposes a certain restriction on the constraint functions.
Its purpose is to rule out certain irregularities on the boundary of the
feasible set that would invalidate the Kuhn-Tucker conditions should the
optimal solution occur there.
We will not try to tackle this idea, but know this:
If the feasible region is a convex set formed by linear constraints only,
then the constraint qualification will be met, and the Kuhn-Tucker
conditions will always hold at an optimal solution.
Economic dispatch calculation (EDC)
Generator unit cost function:
COSTi = Ci(Pi) = ai Pi² + bi Pi + ci
where
COSTi = production cost
Pi = production power
Unit capacity limits:
Pi,min ≤ Pi ≤ Pi,max
where
Pi,min = minimum generation level
Pi,max = maximum generation level
(Notation: in the slides, a double underline means lower bound and a
double overline means upper bound; here we write Pi,min and Pi,max.)
Power balance (no transmission representation):
Σi=1..n Pi = PD + PLOSS + Ptie = PT
General EDC problem statement:
min f(P) = Σi=1..n Ci(Pi)
subject to
Σi=1..n Pi = PT
Pi,min ≤ Pi ≤ Pi,max,  i = 1, ..., n
Two unit system, Lagrangian function:
L(P1, P2, λ, μ1,min, μ1,max, μ2,min, μ2,max) =
    C1(P1) + C2(P2) − λ(P1 + P2 − PT)
    − μ1,min(P1 − P1,min) − μ1,max(P1,max − P1)
    − μ2,min(P2 − P2,min) − μ2,max(P2,max − P2)
Two unit system, KKT conditions:
∂L/∂P1 = dC1/dP1 − λ − μ1,min + μ1,max = 0
∂L/∂P2 = dC2/dP2 − λ − μ2,min + μ2,max = 0
∂L/∂λ = 0:  P1 + P2 − PT = 0
Complementarity: μ1,min(P1 − P1,min) = 0, μ1,max(P1,max − P1) = 0,
μ2,min(P2 − P2,min) = 0, μ2,max(P2,max − P2) = 0
Nonnegativity: μk,min ≥ 0, μk,max ≥ 0, k = 1, 2
                           Unit 1     Unit 2
Generation specifications:
  Minimum generation       200 MW     100 MW
  Maximum generation       380 MW     200 MW
Cost curve coefficients:
  Quadratic term           0.016      0.019
  Linear term              2.187      2.407
  Constant term            120.312    74.074
Lagrangian function (PT = 400 MW):
L(P1, P2, λ, μ1,min, μ1,max, μ2,min, μ2,max) =
    0.016 P1² + 2.187 P1 + 120.312
  + 0.019 P2² + 2.407 P2 + 74.074
  − λ(P1 + P2 − 400)
  − μ1,min(P1 − 200) − μ1,max(380 − P1)
  − μ2,min(P2 − 100) − μ2,max(200 − P2)
KKT conditions:
∂L/∂P1 = 0:  0.032 P1 + 2.187 − λ − μ1,min + μ1,max = 0
∂L/∂P2 = 0:  0.038 P2 + 2.407 − λ − μ2,min + μ2,max = 0
∂L/∂λ = 0:   P1 + P2 − 400 = 0
Feasibility:  P1 − 200 ≥ 0, 380 − P1 ≥ 0, P2 − 100 ≥ 0, 200 − P2 ≥ 0
Assume all inequality constraints are non-binding. This means that
μi,min = 0 and μi,max = 0, i = 1, ..., n,
and the KKT conditions become:
0.032 P1 + 2.187 − λ = 0
0.038 P2 + 2.407 − λ = 0
P1 + P2 − 400 = 0
Rewrite them as:
0.032 P1 − λ = −2.187
0.038 P2 − λ = −2.407
P1 + P2 = 400
And it is easy to see how to put them into the matrix form Ax = b for
solution in MATLAB.
What is λ = $9.24/MWhr?
It is the system "incremental cost":
• the cost if the system provides an additional MW over the next hour;
• the cost of "increasing" the RHS of the equality constraint by 1 MW for
an hour.
We can verify this.
Verification of the meaning of lambda:
• Compute total cost/hr for Pd = 400 MW.
• Compute total cost/hr for Pd = 401 MW.
• Find the difference in total cost/hr for the two demands.
If our interpretation of lambda is correct, this difference should be $9.24.
Get cost/hr for each unit (P1 = 220.29 MW, P2 = 179.71 MW):
C1(P1) = 0.016(220.29)² + 2.187(220.29) + 120.312 = $1378.53/hr
C2(P2) = 0.019(179.71)² + 2.407(179.71) + 74.074 = $1120.25/hr
Total cost/hr is C1 + C2:
CT = C1(P1) + C2(P2) = 1378.53 + 1120.25 = $2498.78/hr
Now solve the EDC for Pd = 401 MW to get P1, P2:
[ 0.032   0      −1 ] [P1 ]   [ −2.187 ]
[ 0       0.038  −1 ] [P2 ] = [ −2.407 ]
[ 1       1       0 ] [ λ ]   [  401   ]
⇒ P1 = 220.83, P2 = 180.17, λ = 9.25
Get cost/hr for each unit:
C1(P1) = 0.016(220.83)² + 2.187(220.83) + 120.312 = $1383.52/hr
C2(P2) = 0.019(180.17)² + 2.407(180.17) + 74.074 = $1124.51/hr
Total cost/hr is C1 + C2:
CT = C1(P1) + C2(P2) = 1383.52 + 1124.51 = $2508.03/hr
Total cost/hr changed by 2508.03 − 2498.78 = 9.25 $/hr, which is in
agreement with our interpretation of lambda.
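The whole verification can be reproduced end-to-end in Python (a sketch; the slides do this by hand and in MATLAB). Using unrounded intermediate values the difference comes out near $9.24/hr, essentially λ; the slides' $9.25 reflects rounding of the intermediate costs.

```python
import numpy as np

# Cost curves from the slides.
C1 = lambda P: 0.016*P**2 + 2.187*P + 120.312
C2 = lambda P: 0.019*P**2 + 2.407*P + 74.074

def dispatch(Pd):
    """Solve the non-binding KKT equations for demand Pd; returns P1, P2, lam."""
    A = np.array([[0.032, 0.0,  -1.0],
                  [0.0,   0.038, -1.0],
                  [1.0,   1.0,    0.0]])
    return np.linalg.solve(A, np.array([-2.187, -2.407, float(Pd)]))

P1a, P2a, lam = dispatch(400)
P1b, P2b, _ = dispatch(401)
total_400 = C1(P1a) + C2(P2a)
total_401 = C1(P1b) + C2(P2b)
print(f"lam at 400 MW: {lam:.2f} $/MWhr")
print(f"cost increase 400 -> 401 MW: {total_401 - total_400:.2f} $/hr")
```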