ch02.ppt Nicholson Microeconomics (Economics)

FarhanPramudya4 78 views 110 slides Sep 04, 2024

About This Presentation

Microeconomics PPT


Slide Content

1
Chapter 2
THE MATHEMATICS OF
OPTIMIZATION
Copyright ©2005 by South-Western, a division of Thomson Learning. All rights reserved.

2
The Mathematics of Optimization
•Many economic theories begin with the
assumption that an economic agent is
seeking to find the optimal value of some
function
–consumers seek to maximize utility
–firms seek to maximize profit
•This chapter introduces the mathematics
common to these problems

3
Maximization of a Function of
One Variable
•Simple example: Manager of a firm
wishes to maximize profits
π = f(q)
[Figure: π = f(q) plotted against quantity; maximum profits of π* occur at q*]

4
Maximization of a Function of
One Variable
•The manager will likely try to vary q to see
where the maximum profit occurs
–an increase from q1 to q2 leads to a rise in π
[Figure: π = f(q) against quantity; moving from q1 to q2 raises profit from π1 to π2, so Δπ/Δq > 0]

5
Maximization of a Function of
One Variable
•If output is increased beyond q*, profit will
decline
–an increase from q* to q3 leads to a drop in π
[Figure: π = f(q) against quantity; moving from q* to q3 lowers profit from π* to π3, so Δπ/Δq < 0]

6
Derivatives
•The derivative of π = f(q) is the limit of
Δπ/Δq for very small changes in q
dπ/dq = df/dq = lim(h→0) [f(q1 + h) - f(q1)]/h
•The value of this ratio depends on the
value of q1

7
Value of a Derivative at a Point
•The evaluation of the derivative at the
point q = q1 can be denoted
dπ/dq |q=q1
•In our previous example,
dπ/dq |q=q1 > 0    dπ/dq |q=q3 < 0    dπ/dq |q=q* = 0

8
First Order Condition for a
Maximum
•For a function of one variable to attain
its maximum value at some point, the
derivative at that point must be zero
df/dq |q=q* = 0

9
Second Order Conditions
•The first order condition (dπ/dq) is a
necessary condition for a maximum, but
it is not a sufficient condition
[Figure: if the profit function were U-shaped, the first order condition would still result in q* being chosen, and π would be minimized]

10
Second Order Conditions
•This must mean that, in order for q* to
be the optimum,
dπ/dq > 0 for q < q*  and
dπ/dq < 0 for q > q*
•Therefore, at q*, dπ/dq must be
decreasing

11
Second Derivatives
•The derivative of a derivative is called a
second derivative
•The second derivative can be denoted
by
d²π/dq²  or  d²f/dq²  or  f''(q)

12
Second Order Condition
•The second order condition to represent
a (local) maximum is
d²π/dq² |q=q* = f''(q*) < 0

13
Rules for Finding Derivatives
1. If b is a constant, then db/dx = 0
2. If b is a constant, then d[b·f(x)]/dx = b·f'(x)
3. If b is a constant, then d(x^b)/dx = b·x^(b-1)
4. d(ln x)/dx = 1/x

14
Rules for Finding Derivatives
5. d(a^x)/dx = a^x ln a  for any constant a
–a special case of this rule is d(e^x)/dx = e^x

15
Rules for Finding Derivatives
•Suppose that f(x) and g(x) are two
functions of x and f'(x) and g'(x) exist
•Then
6. d[f(x) + g(x)]/dx = f'(x) + g'(x)
7. d[f(x)·g(x)]/dx = f(x)·g'(x) + f'(x)·g(x)

16
Rules for Finding Derivatives
8. d[f(x)/g(x)]/dx = [f'(x)·g(x) - f(x)·g'(x)] / [g(x)]²
provided that g(x) ≠ 0

17
Rules for Finding Derivatives
•If y = f(x) and x = g(z) and if both f'(x)
and g'(z) exist, then:
9. dy/dz = (dy/dx)·(dx/dz) = (df/dx)·(dg/dz)
•This is called the chain rule. The chain
rule allows us to study how one variable
(z) affects another variable (y) through
its influence on some intermediate
variable (x)

18
Rules for Finding Derivatives
•Some examples of the chain rule include
10. d(e^(ax))/dx = [d(e^(ax))/d(ax)]·[d(ax)/dx] = e^(ax)·a = a·e^(ax)
11. d[ln(ax)]/dx = {d[ln(ax)]/d(ax)}·[d(ax)/dx] = (1/ax)·a = 1/x
12. d[ln(x²)]/dx = {d[ln(x²)]/d(x²)}·[d(x²)/dx] = (1/x²)·2x = 2/x

19
Example of Profit Maximization
•Suppose that the relationship between
profit and output is
π = 1,000q - 5q²
•The first order condition for a maximum is
dπ/dq = 1,000 - 10q = 0
q* = 100
•Since the second derivative is always
-10, q = 100 is a global maximum
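The arithmetic on this slide is easy to check numerically. The sketch below (Python, not part of the original slides) approximates dπ/dq with a central difference and confirms that the derivative vanishes at q* = 100:

```python
# Numeric check of the example: pi = 1,000q - 5q^2 has its
# first-order condition dpi/dq = 1,000 - 10q = 0 at q* = 100.

def profit(q):
    """Profit as a function of output q."""
    return 1000 * q - 5 * q ** 2

def d_profit(q, h=1e-4):
    """Central-difference approximation of dpi/dq."""
    return (profit(q + h) - profit(q - h)) / (2 * h)

q_star = 100
print(abs(d_profit(q_star)) < 1e-3)    # True: the FOC holds at q*
print(profit(q_star))                  # 50000: the maximum profit
print(profit(q_star) > profit(99))     # True: nearby outputs do worse
```

Because the second derivative is the constant -10, the critical point is a global maximum, which the neighboring-point comparison reflects.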

20
Functions of Several Variables
•Most goals of economic agents depend
on several variables
–trade-offs must be made
•The dependence of one variable (y) on
a series of other variables (x1,x2,…,xn) is
denoted by
y = f(x1,x2,…,xn)

21
Partial Derivatives
•The partial derivative of y with respect to x1
is denoted by
∂y/∂x1  or  ∂f/∂x1  or  fx1  or  f1
•It is understood that in calculating the
partial derivative, all of the other x’s are
held constant

22
Partial Derivatives
•A more formal definition of the partial
derivative is
∂f/∂x1 |x2,…,xn = lim(h→0) [f(x1 + h, x2,…, xn) - f(x1, x2,…, xn)]/h

23
Calculating Partial Derivatives
1. If y = f(x1,x2) = ax1² + bx1x2 + cx2², then
∂f/∂x1 = f1 = 2ax1 + bx2  and
∂f/∂x2 = f2 = bx1 + 2cx2
2. If y = f(x1,x2) = e^(ax1 + bx2), then
∂f/∂x1 = f1 = a·e^(ax1 + bx2)  and
∂f/∂x2 = f2 = b·e^(ax1 + bx2)

24
Calculating Partial Derivatives
3. If y = f(x1,x2) = a ln x1 + b ln x2, then
∂f/∂x1 = f1 = a/x1  and
∂f/∂x2 = f2 = b/x2
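These worked examples can be verified with finite differences. The sketch below (Python, not from the slides; the constants A, B and the evaluation point are arbitrary illustrative choices) checks example 3, holding the other argument constant as the ceteris paribus assumption requires:

```python
import math

# Check of example 3: for y = a*ln(x1) + b*ln(x2),
# the partials are f1 = a/x1 and f2 = b/x2.

A, B = 2.0, 3.0          # illustrative constants

def f(x1, x2):
    return A * math.log(x1) + B * math.log(x2)

def partial(g, args, i, h=1e-6):
    """Central-difference partial derivative of g in argument i,
    holding all other arguments constant."""
    up = list(args); up[i] += h
    dn = list(args); dn[i] -= h
    return (g(*up) - g(*dn)) / (2 * h)

x1, x2 = 4.0, 5.0
print(abs(partial(f, (x1, x2), 0) - A / x1) < 1e-6)   # True: f1 = a/x1
print(abs(partial(f, (x1, x2), 1) - B / x2) < 1e-6)   # True: f2 = b/x2
```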

25
Partial Derivatives
•Partial derivatives are the mathematical
expression of the ceteris paribus
assumption
–show how changes in one variable affect
some outcome when other influences are
held constant

26
Partial Derivatives
•We must be concerned with how
variables are measured
–if q represents the quantity of gasoline
demanded (measured in billions of gallons)
and p represents the price in dollars per
gallon, then ∂q/∂p will measure the change
in demand (in billions of gallons per year)
for a dollar per gallon change in price

27
Elasticity
•Elasticities measure the proportional
effect of a change in one variable on
another
–unit free
•The elasticity of y with respect to x is
ey,x = (Δy/y)/(Δx/x) = (Δy/Δx)·(x/y) = (∂y/∂x)·(x/y)

28
Elasticity and Functional Form
•Suppose that
y = a + bx + other terms
•In this case,
ey,x = (∂y/∂x)·(x/y) = b·(x/y) = b·x/(a + bx + …)
•ey,x is not constant
–it is important to note the point at which the
elasticity is to be computed

29
Elasticity and Functional Form
•Suppose that
y = ax^b
•In this case,
ey,x = (∂y/∂x)·(x/y) = a·b·x^(b-1)·(x/ax^b) = b

30
Elasticity and Functional Form
•Suppose that
ln y = ln a + b ln x
•In this case,
ey,x = (∂y/∂x)·(x/y) = ∂ln y/∂ln x = b
•Elasticities can be calculated through
logarithmic differentiation
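The constant-elasticity result of slide 29 can be illustrated numerically. In this sketch (Python, not from the slides; a = 3 and b = 0.7 are arbitrary parameters), the measured elasticity of y = ax^b is approximately b at every point:

```python
# For y = a*x^b, e_{y,x} = (dy/dx)*(x/y) should equal b everywhere.

A, B = 3.0, 0.7          # illustrative parameters

def y(x):
    return A * x ** B

def elasticity(g, x, h=1e-6):
    """Numeric elasticity (dy/dx)*(x/y) via a central difference."""
    dydx = (g(x + h) - g(x - h)) / (2 * h)
    return dydx * x / g(x)

for x in (0.5, 1.0, 10.0):
    print(abs(elasticity(y, x) - B) < 1e-4)   # True at every x: unit free
```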

31
Second-Order Partial Derivatives
•The partial derivative of a partial
derivative is called a second-order
partial derivative
∂(∂f/∂xi)/∂xj = ∂²f/∂xj∂xi = fij

32
Young’s Theorem
•Under general conditions, the order in
which partial differentiation is conducted
to evaluate second-order partial
derivatives does not matter
fij = fji

33
Use of Second-Order Partials
•Second-order partials play an important
role in many economic theories
•One of the most important is a variable's
own second-order partial, fii
–shows how the marginal influence of xi on
y (∂y/∂xi) changes as the value of xi increases
–a value of fii < 0 indicates diminishing
marginal effectiveness

34
Total Differential
•Suppose that y = f(x1,x2,…,xn)
•If all x's are varied by a small amount,
the total effect on y will be
dy = (∂f/∂x1)dx1 + (∂f/∂x2)dx2 + … + (∂f/∂xn)dxn
dy = f1dx1 + f2dx2 + … + fndxn

35
First-Order Condition for a
Maximum (or Minimum)
•A necessary condition for a maximum (or
minimum) of the function f(x1,x2,…,xn) is that
dy = 0 for any combination of small changes
in the x's
•The only way for this to be true is if
f1 = f2 = … = fn = 0
• A point where this condition holds is
called a critical point

36
Finding a Maximum
•Suppose that y is a function of x1 and x2
y = -(x1 - 1)² - (x2 - 2)² + 10
y = -x1² + 2x1 - x2² + 4x2 + 5
•First-order conditions imply that
∂y/∂x1 = -2x1 + 2 = 0
∂y/∂x2 = -2x2 + 4 = 0
OR
x1* = 1
x2* = 2
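A simple gradient ascent (Python, not part of the slides; the step size and starting point are arbitrary) converges to the critical point x1* = 1, x2* = 2 found above, where y = 10:

```python
# Gradient ascent on y = -(x1 - 1)^2 - (x2 - 2)^2 + 10.

def grad(x1, x2):
    # The partials from the first-order conditions:
    # dy/dx1 = -2x1 + 2, dy/dx2 = -2x2 + 4
    return -2 * x1 + 2, -2 * x2 + 4

x1, x2 = 0.0, 0.0                 # arbitrary starting point
for _ in range(200):
    g1, g2 = grad(x1, x2)
    x1, x2 = x1 + 0.1 * g1, x2 + 0.1 * g2

y_max = -(x1 - 1) ** 2 - (x2 - 2) ** 2 + 10
print(round(x1, 6), round(x2, 6), round(y_max, 6))   # 1.0 2.0 10.0
```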

37
Production Possibility Frontier
•Earlier example: 2x² + y² = 225
•Can be rewritten: f(x,y) = 2x² + y² - 225 = 0
•Because fx = 4x and fy = 2y, the opportunity
cost trade-off between x and y is
dy/dx = -fx/fy = -4x/2y = -2x/y
38
Implicit Function Theorem
•It may not always be possible to solve
implicit functions of the form g(x,y)=0 for
unique explicit functions of the form y = f(x)
–mathematicians have derived the necessary
conditions
–in many economic applications, these
conditions are the same as the second-order
conditions for a maximum (or minimum)

39
The Envelope Theorem
•The envelope theorem concerns how the
optimal value for a particular function
changes when a parameter of the function
changes
•This is easiest to see by using an example

40
The Envelope Theorem
•Suppose that y is a function of x
y = -x² + ax
•For different values of a, this function
represents a family of inverted parabolas
•If a is assigned a specific value, then y
becomes a function of x only and the value
of x that maximizes y can be calculated

41
The Envelope Theorem
Optimal Values of x and y for Alternative Values of a

Value of a    Value of x*    Value of y*
0             0              0
1             1/2            1/4
2             1              1
3             3/2            9/4
4             2              4
5             5/2            25/4
6             3              9

42
The Envelope Theorem
[Figure: y* plotted against a. As a increases, the maximal value for y (y*) increases; the relationship between a and y* is quadratic]

43
The Envelope Theorem
•Suppose we are interested in how y*
changes as a changes
•There are two ways we can do this
–calculate the slope of y* directly
–hold x constant at its optimal value and
calculate ∂y/∂a directly

44
The Envelope Theorem
•To calculate the slope of the function, we
must solve for the optimal value of x for
any value of a
dy/dx = -2x + a = 0
x* = a/2
•Substituting, we get
y* = -(x*)² + a(x*) = -(a/2)² + a(a/2)
y* = -a²/4 + a²/2 = a²/4

45
The Envelope Theorem
•Therefore,
dy*/da = 2a/4 = a/2 = x*
•But, we can save time by using the
envelope theorem
–for small changes in a, dy*/da can be
computed by holding x at x* and calculating
∂y/∂a directly from y

46
The Envelope Theorem
y/ a = x
•Holding x = x*
y/ a = x* = a/2
•This is the same result found earlier
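The agreement between the two routes can be confirmed numerically. This sketch (Python, not from the slides; a = 3 is an arbitrary evaluation point) differentiates the optimized value y*(a) = a²/4 and compares it with x* = a/2:

```python
# Envelope theorem check for y = -x^2 + a*x.

def y_star(a):
    x_opt = a / 2                 # from dy/dx = -2x + a = 0
    return -x_opt ** 2 + a * x_opt

a = 3.0
h = 1e-6
dy_da = (y_star(a + h) - y_star(a - h)) / (2 * h)
print(abs(dy_da - a / 2) < 1e-4)   # True: dy*/da = x* = a/2
```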

47
The Envelope Theorem
•The envelope theorem states that the
change in the optimal value of a function
with respect to a parameter of that function
can be found by partially differentiating the
objective function while holding x (or
several x’s) at its optimal value
dy*/da = ∂y/∂a {x = x*(a)}

48
The Envelope Theorem
•The envelope theorem can be extended to
the case where y is a function of several
variables
y = f(x1,…,xn,a)
•Finding an optimal value for y would consist
of solving n first-order equations
∂y/∂xi = 0 (i = 1,…,n)

49
The Envelope Theorem
•Optimal values for these x's would be
determined that are a function of a
x1* = x1*(a)
x2* = x2*(a)
.
.
.
xn* = xn*(a)

50
The Envelope Theorem
•Substituting into the original objective
function yields an expression for the optimal
value of y (y*)
y* = f[x1*(a), x2*(a),…,xn*(a),a]
•Differentiating yields
dy*/da = (∂f/∂x1)(dx1*/da) + (∂f/∂x2)(dx2*/da) + … + (∂f/∂xn)(dxn*/da) + ∂f/∂a

51
The Envelope Theorem
•Because of first-order conditions, all terms
except ∂f/∂a are equal to zero if the x's are
at their optimal values
•Therefore,
dy*/da = ∂f/∂a {x = x*(a)}

52
Constrained Maximization
•What if all values for the x’s are not
feasible?
–the values of x may all have to be positive
–a consumer’s choices are limited by the
amount of purchasing power available
•One method used to solve constrained
maximization problems is the Lagrangian
multiplier method

53
Lagrangian Multiplier Method
•Suppose that we wish to find the values
of x1, x2,…, xn that maximize
y = f(x1, x2,…, xn)
subject to a constraint that permits only
certain values of the x's to be used
g(x1, x2,…, xn) = 0

54
Lagrangian Multiplier Method
•The Lagrangian multiplier method starts
with setting up the expression
L = f(x1, x2,…, xn) + λg(x1, x2,…, xn)
where λ is an additional variable called
a Lagrangian multiplier
•When the constraint holds, L = f
because g(x1, x2,…, xn) = 0

55
Lagrangian Multiplier Method
•First-Order Conditions
L/x
1
= f
1
+ g
1
= 0
L/x
2
= f
2
+ g
2
= 0
.
L/x
n
= f
n
+ g
n
= 0
.
.
L/ = g(x
1
, x
2
,…, x
n
) = 0

56
Lagrangian Multiplier Method
•The first-order conditions can generally
be solved for x1, x2,…, xn and λ
•The solution will have two properties:
–the x’s will obey the constraint
–these x’s will make the value of L (and
therefore f) as large as possible

57
Lagrangian Multiplier Method
•The Lagrangian multiplier (λ) has an
important economic interpretation
•The first-order conditions imply that
f1/(-g1) = f2/(-g2) = … = fn/(-gn) = λ
–the numerators above measure the marginal
benefit that one more unit of xi will have for the
function f
–the denominators reflect the added burden on
the constraint of using more xi

58
Lagrangian Multiplier Method
•At the optimal choices for the x's, the ratio
of the marginal benefit of increasing xi to
the marginal cost of increasing xi should be
the same for every x
•λ is the common cost-benefit ratio for all of
the x's
λ = (marginal benefit of xi) / (marginal cost of xi)

59
Lagrangian Multiplier Method
•If the constraint was relaxed slightly, it
would not matter which x is changed
•The Lagrangian multiplier provides a
measure of how the relaxation in the
constraint will affect the value of y
•λ provides a "shadow price" to the
constraint

60
Lagrangian Multiplier Method
•A high value of λ indicates that y could
be increased substantially by relaxing
the constraint
–each x has a high cost-benefit ratio
•A low value of λ indicates that there is
not much to be gained by relaxing the
constraint
•λ = 0 implies that the constraint is not
binding

61
Duality
•Any constrained maximization problem
has associated with it a dual problem in
constrained minimization that focuses
attention on the constraints in the
original problem

62
Duality
•Individuals maximize utility subject to a
budget constraint
–dual problem: individuals minimize the
expenditure needed to achieve a given level
of utility
•Firms minimize the cost of inputs to
produce a given level of output
–dual problem: firms maximize output for a
given cost of inputs purchased

63
Constrained Maximization
•Suppose a farmer had a certain length of
fence (P) and wished to enclose the largest
possible rectangular shape
•Let x be the length of one side
•Let y be the length of the other side
•Problem: choose x and y so as to maximize
the area (A = x·y) subject to the constraint
that the perimeter is fixed at P = 2x + 2y

64
Constrained Maximization
•Setting up the Lagrangian multiplier
L = x·y + λ(P - 2x - 2y)
•The first-order conditions for a maximum
are
∂L/∂x = y - 2λ = 0
∂L/∂y = x - 2λ = 0
∂L/∂λ = P - 2x - 2y = 0

65
Constrained Maximization
•Since y/2 = x/2 = λ, x must be equal to y
–the field should be square
–x and y should be chosen so that the ratio of
marginal benefits to marginal costs is the same
•Since x = y and y = 2λ, we can use the
constraint to show that
x = y = P/4
λ = P/8
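The fence example and the shadow-price interpretation of λ can be sketched numerically (Python, not part of the slides; P = 400 is an arbitrary perimeter):

```python
# With perimeter P, the optimal field is square (x = y = P/4), so the
# maximum area is (P/4)^2; lambda = P/8 is the marginal value of fence.

def max_area(P):
    side = P / 4
    return side * side

P = 400.0
h = 1e-4
shadow_price = (max_area(P + h) - max_area(P - h)) / (2 * h)
print(max_area(P))                       # 10000.0
print(abs(shadow_price - P / 8) < 1e-3)  # True: lambda = P/8 = 50
```

The finite difference on the maximized area recovers exactly the multiplier's "extra area per extra yard of fence" reading discussed on the next slide.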

66
Constrained Maximization
•Interpretation of the Lagrangian multiplier
–if the farmer wanted to know how much
more area could be enclosed by adding an
extra yard of fence, λ suggests that he could
find out by dividing the present perimeter (P)
by 8
–thus, the Lagrangian multiplier provides
information about the implicit value of the
constraint

67
Constrained Maximization
•Dual problem: choose x and y to minimize
the amount of fence required to surround
the field
minimize P = 2x + 2y subject to A = x·y
•Setting up the Lagrangian:
L^D = 2x + 2y + λ^D(A - xy)

68
Constrained Maximization
•First-order conditions:
∂L^D/∂x = 2 - λ^D·y = 0
∂L^D/∂y = 2 - λ^D·x = 0
∂L^D/∂λ^D = A - x·y = 0
•Solving, we get
x = y = A^(1/2)
•The Lagrangian multiplier is λ^D = 2·A^(-1/2)

69
Envelope Theorem &
Constrained Maximization
•Suppose that we want to maximize
y = f(x1,…,xn;a)
subject to the constraint
g(x1,…,xn;a) = 0
•One way to solve would be to set up the
Lagrangian expression and solve the first-
order conditions

70
Envelope Theorem &
Constrained Maximization
•Alternatively, it can be shown that
dy*/da = ∂L/∂a (x1*,…,xn*;a)
•The change in the maximal value of y that
results when a changes can be found by
partially differentiating L and evaluating
the partial derivative at the optimal point

71
Inequality Constraints
•In some economic problems the
constraints need not hold exactly
•For example, suppose we seek to
maximize y = f(x1,x2) subject to
g(x1,x2) ≥ 0,
x1 ≥ 0, and
x2 ≥ 0

72
Inequality Constraints
•One way to solve this problem is to
introduce three new variables (a, b, and
c) that convert the inequalities into
equalities
•To ensure that the inequalities continue
to hold, we will square these new
variables so that their values are
nonnegative

73
Inequality Constraints
g(x1,x2) - a² = 0;
x1 - b² = 0; and
x2 - c² = 0
•Any solution that obeys these three
equality constraints will also obey the
inequality constraints

74
Inequality Constraints
•We can set up the Lagrangian
L = f(x1,x2) + λ1[g(x1,x2) - a²] + λ2[x1 - b²] + λ3[x2 - c²]
•This will lead to eight first-order
conditions

75
Inequality Constraints
L/x
1 = f
1 + 
1g
1 + 
2 = 0
L/x
2 = f
1 + 
1g
2 + 
3 = 0
L/a = -2a
1 = 0
L/b = -2b
2
= 0
L/c = -2c
3
= 0
L/
1
= g(x
1
,x
2
) - a
2
= 0
L/
2
= x
1
- b
2
= 0
L/
3
= x
2
- c
2
= 0

76
Inequality Constraints
•According to the third condition, either a
or λ1 = 0
–if a = 0, the constraint g(x1,x2) holds exactly
–if λ1 = 0, the availability of some slackness
in the constraint implies that its value to the
objective function is 0
•Similar complementary slackness
relationships also hold for x1 and x2

77
Inequality Constraints
•These results are sometimes called
Kuhn-Tucker conditions
–they show that solutions to optimization
problems involving inequality constraints
will differ from similar problems involving
equality constraints in rather simple ways
–we cannot go wrong by working primarily
with constraints involving equalities

78
Second Order Conditions -
Functions of One Variable
•Let y = f(x)
•A necessary condition for a maximum is
that
dy/dx = f ’(x) = 0
•To ensure that the point is a maximum, y
must be decreasing for movements away
from it

79
Second Order Conditions -
Functions of One Variable
•The total differential measures the change
in y
dy = f ’(x) dx
•To be at a maximum, dy must be
decreasing for small increases in x
•To see the changes in dy, we must use
the second derivative of y

80
Second Order Conditions -
Functions of One Variable
d²y = d[f'(x)dx] = [f''(x)dx]dx = f''(x)dx²
•Note that d²y < 0 implies that f''(x)dx² < 0
•Since dx² must be positive, f''(x) < 0
•This means that the function f must have a
concave shape at the critical point

81
Second Order Conditions -
Functions of Two Variables
•Suppose that y = f(x1, x2)
•First order conditions for a maximum are
∂y/∂x1 = f1 = 0
∂y/∂x2 = f2 = 0
•To ensure that the point is a maximum, y
must diminish for movements in any direction
away from the critical point

82
Second Order Conditions -
Functions of Two Variables
•The slope in the x1 direction (f1) must be
diminishing at the critical point
•The slope in the x2 direction (f2) must be
diminishing at the critical point
•But, conditions must also be placed on the
cross-partial derivative (f12 = f21) to ensure that
dy is decreasing for all movements through the
critical point

83
Second Order Conditions -
Functions of Two Variables
•The total differential of y is given by
dy = f1dx1 + f2dx2
•The differential of that function is
d²y = (f11dx1 + f12dx2)dx1 + (f21dx1 + f22dx2)dx2
d²y = f11dx1² + f12dx2dx1 + f21dx1dx2 + f22dx2²
•By Young's theorem, f12 = f21 and
d²y = f11dx1² + 2f12dx1dx2 + f22dx2²

84
Second Order Conditions -
Functions of Two Variables
d²y = f11dx1² + 2f12dx1dx2 + f22dx2²
•For this equation to be unambiguously negative
for any change in the x's, f11 and f22 must be
negative
•If dx2 = 0, then d²y = f11dx1²
–for d²y < 0, f11 < 0
•If dx1 = 0, then d²y = f22dx2²
–for d²y < 0, f22 < 0

85
Second Order Conditions -
Functions of Two Variables
d²y = f11dx1² + 2f12dx1dx2 + f22dx2²
•If neither dx1 nor dx2 is zero, then d²y will be
unambiguously negative only if
f11·f22 - f12² > 0
–the second partial derivatives (f11 and f22) must be
sufficiently negative so that they outweigh any
possible perverse effects from the cross-partial
derivatives (f12 = f21)
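These conditions can be checked for the earlier example y = -(x1 - 1)² - (x2 - 2)² + 10. The sketch below (Python, not from the slides) estimates the second-order partials by finite differences at the critical point (1, 2) and verifies f11 < 0, f22 < 0, and f11·f22 - f12² > 0:

```python
# Second-order condition check at the critical point (1, 2).

def f(x1, x2):
    return -(x1 - 1) ** 2 - (x2 - 2) ** 2 + 10

h = 1e-4
x1, x2 = 1.0, 2.0
f11 = (f(x1 + h, x2) - 2 * f(x1, x2) + f(x1 - h, x2)) / h ** 2
f22 = (f(x1, x2 + h) - 2 * f(x1, x2) + f(x1, x2 - h)) / h ** 2
f12 = (f(x1 + h, x2 + h) - f(x1 + h, x2 - h)
       - f(x1 - h, x2 + h) + f(x1 - h, x2 - h)) / (4 * h ** 2)

# The true second partials are f11 = f22 = -2 and f12 = 0.
print(f11 < 0 and f22 < 0 and f11 * f22 - f12 ** 2 > 0)   # True
```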

86
Constrained Maximization
•Suppose we want to choose x1 and x2 to
maximize
y = f(x1, x2)
•subject to the linear constraint
c - b1x1 - b2x2 = 0
•We can set up the Lagrangian
L = f(x1, x2) + λ(c - b1x1 - b2x2)

87
Constrained Maximization
•The first-order conditions are
f1 - λb1 = 0
f2 - λb2 = 0
c - b1x1 - b2x2 = 0
•To ensure we have a maximum, we must
use the "second" total differential
d²y = f11dx1² + 2f12dx1dx2 + f22dx2²

88
Constrained Maximization
•Only the values of x1 and x2 that satisfy the
constraint can be considered valid
alternatives to the critical point
•Thus, we must calculate the total
differential of the constraint
-b1dx1 - b2dx2 = 0
dx2 = -(b1/b2)dx1
•These are the allowable relative changes in
x1 and x2

89
Constrained Maximization
•Because the first-order conditions imply
that f1/f2 = b1/b2, we can substitute and get
dx2 = -(f1/f2)dx1
•Since
d²y = f11dx1² + 2f12dx1dx2 + f22dx2²
we can substitute for dx2 and get
d²y = f11dx1² - 2f12(f1/f2)dx1² + f22(f1²/f2²)dx1²

90
Constrained Maximization
•Combining terms and rearranging
d²y = [f11f2² - 2f12f1f2 + f22f1²]·[dx1²/f2²]
•Therefore, for d²y < 0, it must be true that
f11f2² - 2f12f1f2 + f22f1² < 0
•This equation characterizes a set of
functions termed quasi-concave functions
–any two points within the set can be joined by
a line contained completely in the set

91
Concave and Quasi-
Concave Functions
•The differences between concave and
quasi-concave functions can be
illustrated with the function
y = f(x1,x2) = (x1·x2)^k
where the x’s take on only positive
values and k can take on a variety of
positive values

92
Concave and Quasi-
Concave Functions
•No matter what value k takes, this
function is quasi-concave
•Whether or not the function is concave
depends on the value of k
–if k < 0.5, the function is concave
–if k > 0.5, the function is convex

93
Homogeneous Functions
•A function f(x1,x2,…xn) is said to be
homogeneous of degree k if
f(tx1,tx2,…txn) = t^k·f(x1,x2,…xn)
–when a function is homogeneous of degree
one, a doubling of all of its arguments
doubles the value of the function itself
–when a function is homogeneous of degree
zero, a doubling of all of its arguments leaves
the value of the function unchanged

94
Homogeneous Functions
•If a function is homogeneous of degree
k, the partial derivatives of the function
will be homogeneous of degree k-1

95
Euler’s Theorem
•If we differentiate the definition for
homogeneity with respect to the
proportionality factor t, we get
k·t^(k-1)·f(x1,…,xn) = x1·f1(tx1,…,txn) + … + xn·fn(tx1,…,txn)
•This relationship is called Euler’s theorem

96
Euler’s Theorem
•Euler’s theorem shows that, for
homogeneous functions, there is a
definite relationship between the
values of the function and the values of
its partial derivatives

97
Homothetic Functions
•A homothetic function is one that is
formed by taking a monotonic
transformation of a homogeneous
function
–they do not possess the homogeneity
properties of their underlying functions

98
Homothetic Functions
•For both homogeneous and homothetic
functions, the implicit trade-offs among
the variables in the function depend
only on the ratios of those variables, not
on their absolute values

99
Homothetic Functions
•Suppose we are examining the simple,
two variable implicit function f(x,y) = 0
•The implicit trade-off between x and y for
a two-variable function is
dy/dx = -fx/fy
•If we assume f is homogeneous of degree
k, its partial derivatives will be
homogeneous of degree k-1

100
Homothetic Functions
•The implicit trade-off between x and y is
dy/dx = -fx/fy = -t^(k-1)·fx(tx,ty)/[t^(k-1)·fy(tx,ty)] = -fx(tx,ty)/fy(tx,ty)
•If t = 1/y, and F is a monotonic transformation of f,
dy/dx = -F'·fx(x/y,1)/[F'·fy(x/y,1)] = -fx(x/y,1)/fy(x/y,1)

101
Homothetic Functions
•The trade-off is unaffected by the
monotonic transformation and remains
a function only of the ratio x to y

102
Important Points to Note:
•Using mathematics provides a
convenient, short-hand way for
economists to develop their models
–implications of various economic
assumptions can be studied in a
simplified setting through the use of such
mathematical tools

103
Important Points to Note:
•Derivatives are often used in economics
because economists are interested in
how marginal changes in one variable
affect another
–partial derivatives incorporate the ceteris
paribus assumption used in most economic
models

104
Important Points to Note:
•The mathematics of optimization is an
important tool for the development of
models that assume that economic
agents rationally pursue some goal
–the first-order condition for a maximum
requires that all partial derivatives equal
zero

105
Important Points to Note:
•Most economic optimization
problems involve constraints on the
choices that agents can make
–the first-order conditions for a
maximum suggest that each activity be
operated at a level at which the ratio of
the marginal benefit of the activity to its
marginal cost is the same for all activities

106
Important Points to Note:
•The Lagrangian multiplier is used to
help solve constrained maximization
problems
–the Lagrangian multiplier can be
interpreted as the implicit value (shadow
price) of the constraint

107
Important Points to Note:
•The implicit function theorem illustrates
the dependence of the choices that
result from an optimization problem on
the parameters of that problem

108
Important Points to Note:
•The envelope theorem examines
how optimal choices will change as
the problem’s parameters change
•Some optimization problems may
involve constraints that are
inequalities rather than equalities

109
Important Points to Note:
•First-order conditions are necessary
but not sufficient for ensuring a
maximum or minimum
–second-order conditions that describe
the curvature of the function must be
checked

110
Important Points to Note:
•Certain types of functions occur in
many economic problems
–quasi-concave functions obey the
second-order conditions of constrained
maximum or minimum problems when
the constraints are linear
–homothetic functions have the property
that implicit trade-offs among the
variables depend only on the ratios of
these variables