Bivariate optimization#

WARNING
This section of the lecture notes is still under construction. It will be ready before the lecture.
Maximizers and minimizers#
Consider \(f \colon I \to \mathbb{R}\) where \(I \subset \mathbb{R}^2\)
The set \(\mathbb{R}^2\) is all \((x_1, x_2)\) pairs
Definition
A point \((x_1^*, x_2^*) \in I\) is called a maximizer of \(f\) on \(I\) if
\[ f(x_1^*, x_2^*) \geq f(x_1, x_2) \quad \text{for all} \quad (x_1, x_2) \in I \]
Definition
A point \((x_1^*, x_2^*) \in I\) is called a minimizer of \(f\) on \(I\) if
\[ f(x_1^*, x_2^*) \leq f(x_1, x_2) \quad \text{for all} \quad (x_1, x_2) \in I \]
When they exist, the partial derivatives of \(f\) at \((x_1, x_2) \in I\) are
\[ f_1(x_1, x_2) := \frac{\partial f}{\partial x_1}(x_1, x_2) \quad \text{and} \quad f_2(x_1, x_2) := \frac{\partial f}{\partial x_2}(x_1, x_2) \]
Example
When \(f(k, \ell) = k^\alpha \ell^\beta\),
\[ f_1(k, \ell) = \alpha k^{\alpha - 1} \ell^{\beta} \quad \text{and} \quad f_2(k, \ell) = \beta k^{\alpha} \ell^{\beta - 1} \]
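These partials are easy to verify symbolically. A minimal sketch using SymPy (not part of the lecture code; the symbol names are illustrative):

```python
# Symbolic check of the Cobb-Douglas partials (illustrative sketch)
import sympy as sp

k, l, alpha, beta = sp.symbols('k ell alpha beta', positive=True)
f = k**alpha * l**beta

print(sp.diff(f, k))  # alpha*k**alpha*ell**beta/k, i.e. alpha*k**(alpha-1)*ell**beta
print(sp.diff(f, l))  # beta*k**alpha*ell**beta/ell, i.e. beta*k**alpha*ell**(beta-1)
```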
Definition
An interior point \((x_1, x_2) \in I\) is called stationary for \(f\) if
\[ f_1(x_1, x_2) = f_2(x_1, x_2) = 0 \]
Fact
Let \(f \colon I \to \mathbb{R}\) be a continuously differentiable function. If \((x_1^*, x_2^*)\) is either
an interior maximizer of \(f\) on \(I\), or
an interior minimizer of \(f\) on \(I\),
then \((x_1^*, x_2^*)\) is a stationary point of \(f\)
Usage, for maximization:
Compute partials
Set partials to zero to find \(S =\) all stationary points
Evaluate \(f\) at the candidates in \(S\) and on the boundary of \(I\)
Select point \((x^*_1, x_2^*)\) yielding highest value
Example
Let \(f(x_1, x_2) = x_1^2 + x_2^2\) with \(I = \{(x_1, x_2) : x_1 + x_2 \leq 1\}\). Setting
\[ f_1(x_1, x_2) = 2 x_1 = 0 \quad \text{and} \quad f_2(x_1, x_2) = 2 x_2 = 0 \]
gives the unique stationary point \((0, 0)\), at which \(f(0, 0) = 0\)
On the boundary we have \(x_1 + x_2 = 1\), so
\[ f(x_1, x_2) = f(x_1, 1 - x_1) = x_1^2 + (1 - x_1)^2 \]
Exercise: Show that the right hand side is \(> 0\) for any \(x_1\)
Hence the minimizer is \((x_1^*, x_2^*) = (0, 0)\)
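The four-step recipe can also be carried out symbolically for this example. A minimal sketch, assuming SymPy is available:

```python
# The four-step recipe applied to this example (SymPy sketch)
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = x1**2 + x2**2

# steps 1-2: compute partials and solve for the stationary points
S = sp.solve([sp.diff(f, x1), sp.diff(f, x2)], [x1, x2], dict=True)
print(S)  # [{x1: 0, x2: 0}] -- the unique interior candidate

# step 3: restrict f to the boundary x1 + x2 = 1
g = sp.expand(f.subs(x2, 1 - x1))     # 2*x1**2 - 2*x1 + 1
crit = sp.solve(sp.diff(g, x1), x1)   # [1/2]
print(g.subs(x1, crit[0]))            # 1/2, strictly above f(0, 0) = 0

# step 4: select -- every boundary value exceeds 0, so (0, 0) wins
```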
Nasty secrets#
Solving for \((x_1, x_2)\) such that \(f_1(x_1, x_2) = 0\) and \(f_2(x_1, x_2) = 0\) can be hard
System of nonlinear equations
Might have no analytical solution
Set of solutions can be a continuum
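When no analytical solution exists, the usual fallback is a numerical root finder, which needs a starting guess and finds at most one root per guess. A minimal sketch using scipy.optimize.root on a system made up purely for illustration:

```python
# Numerical root finding for a first order condition system with
# no tidy analytical solution (the system is made up for this sketch)
import numpy as np
from scipy.optimize import root

def focs(z):
    x1, x2 = z
    return [np.cos(x1) - x2,   # f1(x1, x2) = 0
            x1 - x2**3]        # f2(x1, x2) = 0

sol = root(focs, x0=[0.5, 0.5])  # requires a starting guess...
print(sol.x, sol.success)
# ...and returns at most one root per guess, with no way to tell
# whether other stationary points (or a continuum) exist
```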
Example
(Don't) try to find all stationary points of
Also:
Boundary is often a continuum, not just two points
Things get even harder in higher dimensions
On the other hand:
Most classroom examples are chosen to avoid these problems
Life is still pretty easy if we have concavity / convexity
Clever tricks have been found for certain kinds of problems
Second order partials#
Let \(f \colon I \to \mathbb{R}\) and, when they exist, denote the second order partial derivatives by
\[ f_{11}(x_1, x_2) := \frac{\partial^2 f}{\partial x_1^2}(x_1, x_2), \qquad f_{12}(x_1, x_2) := \frac{\partial^2 f}{\partial x_1 \partial x_2}(x_1, x_2) \]
\[ f_{21}(x_1, x_2) := \frac{\partial^2 f}{\partial x_2 \partial x_1}(x_1, x_2), \qquad f_{22}(x_1, x_2) := \frac{\partial^2 f}{\partial x_2^2}(x_1, x_2) \]
Example: Cobb-Douglas technology with linear costs
If \(\pi(k, \ell) = p k^{\alpha} \ell^{\beta} - w \ell - r k\) then
\[ \pi_{11}(k, \ell) = p \alpha (\alpha - 1) k^{\alpha - 2} \ell^{\beta}, \qquad \pi_{22}(k, \ell) = p \beta (\beta - 1) k^{\alpha} \ell^{\beta - 2} \]
\[ \pi_{12}(k, \ell) = p \alpha \beta k^{\alpha - 1} \ell^{\beta - 1}, \qquad \pi_{21}(k, \ell) = p \alpha \beta k^{\alpha - 1} \ell^{\beta - 1} \]
Fact
If \(f \colon I \to \mathbb{R}\) is twice continuously differentiable at \((x_1, x_2)\), then
\[ f_{12}(x_1, x_2) = f_{21}(x_1, x_2) \]
Exercise: Confirm this equality for the Cobb-Douglas example above.
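One way to carry out this check symbolically (a sketch assuming SymPy; the symbol names are illustrative):

```python
# Checking pi_12 = pi_21 for the Cobb-Douglas profit function
import sympy as sp

k, l, p, w, r, alpha, beta = sp.symbols('k ell p w r alpha beta', positive=True)
profit = p * k**alpha * l**beta - w*l - r*k

f12 = sp.diff(profit, k, l)    # differentiate in k, then ell
f21 = sp.diff(profit, l, k)    # differentiate in ell, then k
print(sp.simplify(f12 - f21))  # 0, as the fact asserts
```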
Shape conditions in 2D#
Let \(I\) be an "open" set (only interior points; formalities next week)
Let \(f \colon I \to \mathbb{R}\) be twice continuously differentiable
Fact
The function \(f\) is strictly concave on \(I\) if, for any \((x_1, x_2) \in I\)
\(f_{11}(x_1, x_2) < 0\)
\(f_{11}(x_1, x_2) \, f_{22}(x_1, x_2) > f_{12}(x_1, x_2)^2\)
Fact
The function \(f\) is strictly convex on \(I\) if, for any \((x_1, x_2) \in I\)
\(f_{11}(x_1, x_2) > 0\)
\(f_{11}(x_1, x_2) \, f_{22}(x_1, x_2) > f_{12}(x_1, x_2)^2\)
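Both shape tests can be read off a symbolic Hessian. A minimal sketch, using SymPy and a candidate function chosen purely for this check:

```python
# Reading the shape conditions off a symbolic Hessian
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = -(x1**2 + x1*x2 + x2**2)     # illustrative candidate function

H = sp.hessian(f, (x1, x2))
f11 = H[0, 0]                    # -2 < 0
det = sp.simplify(H.det())       # f11*f22 - f12**2 = 3 > 0
print(f11, det)                  # both tests pass: strictly concave
```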
When is stationarity sufficient?
Fact
If \(f\) is differentiable and strictly concave on \(I\), then any stationary point of \(f\) is also the unique maximizer of \(f\) on \(I\)
Fact
If \(f\) is differentiable and strictly convex on \(I\), then any stationary point of \(f\) is also the unique minimizer of \(f\) on \(I\)

Fig. 53 Maximizer of a concave function#

Fig. 54 Minimizer of a convex function#
Example: unconstrained maximization of quadratic utility
\[ u(x_1, x_2) = - (x_1 - b_1)^2 - (x_2 - b_2)^2 \]
Intuitively the solution is \(x_1^* = b_1\) and \(x_2^* = b_2\)
The analysis above leads to the same conclusion
First let's check the first order conditions (F.O.C.)
\[ u_1(x_1, x_2) = -2 (x_1 - b_1) = 0 \quad \implies \quad x_1^* = b_1 \]
\[ u_2(x_1, x_2) = -2 (x_2 - b_2) = 0 \quad \implies \quad x_2^* = b_2 \]
How about (strict) concavity?
Sufficient condition is
\(u_{11}(x_1, x_2) < 0\)
\(u_{11}(x_1, x_2)u_{22}(x_1, x_2) > u_{12}(x_1, x_2)^2\)
We have
\(u_{11}(x_1, x_2) = -2\)
\(u_{11}(x_1, x_2)u_{22}(x_1, x_2) = 4 > 0 = u_{12}(x_1, x_2)^2\)
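Both the F.O.C. and the concavity conditions can be confirmed in a few lines. A sketch assuming SymPy:

```python
# F.O.C. and concavity check for quadratic utility (SymPy sketch)
import sympy as sp

x1, x2, b1, b2 = sp.symbols('x1 x2 b1 b2', real=True)
u = -(x1 - b1)**2 - (x2 - b2)**2

print(sp.solve([sp.diff(u, x1), sp.diff(u, x2)], [x1, x2]))
# {x1: b1, x2: b2} -- the unique stationary point

H = sp.hessian(u, (x1, x2))
print(H[0, 0], sp.simplify(H.det()))  # -2 and 4: strictly concave,
                                      # so (b1, b2) is the maximizer
```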
Example: Profit maximization with two inputs
\[ \pi(k, \ell) = p k^{\alpha} \ell^{\beta} - w \ell - r k \]
where \(\alpha, \beta, p, w, r\) are all positive and \(\alpha + \beta < 1\)
Derivatives:
\(\pi_1(k, \ell) = p \alpha k^{\alpha-1} \ell^{\beta} - r\)
\(\pi_2(k, \ell) = p \beta k^{\alpha} \ell^{\beta-1} - w\)
\(\pi_{11}(k, \ell) = p \alpha(\alpha-1) k^{\alpha-2} \ell^{\beta}\)
\(\pi_{22}(k, \ell) = p \beta(\beta-1) k^{\alpha} \ell^{\beta-2}\)
\(\pi_{12}(k, \ell) = p \alpha \beta k^{\alpha-1} \ell^{\beta-1}\)
First order conditions: set
\[ \pi_1(k, \ell) = 0 \quad \text{and} \quad \pi_2(k, \ell) = 0 \]
and solve simultaneously for \(k, \ell\) to get
\[ k^* = \left[ p \left( \frac{\alpha}{r} \right)^{1 - \beta} \left( \frac{\beta}{w} \right)^{\beta} \right]^{1/(1 - \alpha - \beta)} \quad \text{and} \quad \ell^* = \left[ p \left( \frac{\alpha}{r} \right)^{\alpha} \left( \frac{\beta}{w} \right)^{1 - \alpha} \right]^{1/(1 - \alpha - \beta)} \]
Exercise: Verify
Now we check second order conditions, hoping for strict concavity
What we need: for any \(k, \ell > 0\)
\(\pi_{11}(k, \ell) < 0\)
\(\pi_{11}(k, \ell) \, \pi_{22}(k, \ell) > \pi_{12}(k, \ell)^2\)
Exercise: Show both inequalities satisfied when \(\alpha + \beta < 1\)
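As a numerical cross-check, the closed-form solution can be compared against a direct numerical maximization of \(\pi\). A sketch using SciPy, with the parameter values from the figures below:

```python
# Closed-form optimum vs. direct numerical maximization of profit
import numpy as np
from scipy.optimize import minimize

p, r, w, alpha, beta = 5.0, 2.0, 2.0, 0.4, 0.5  # values from the figures

# the closed-form solution derived from the first order conditions
power = 1 / (1 - alpha - beta)
k_star = (p * (alpha/r)**(1-beta) * (beta/w)**beta)**power
l_star = (p * (alpha/r)**alpha * (beta/w)**(1-alpha))**power

def neg_profit(z):
    k, l = z
    return -(p * k**alpha * l**beta - w*l - r*k)

res = minimize(neg_profit, x0=[1.0, 1.0], bounds=[(1e-6, None)] * 2)
print((k_star, l_star))  # approx (3.05, 3.81)
print(res.x)             # should agree up to solver tolerance
```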

Fig. 55 Profit function when \(p=5\), \(r=w=2\), \(\alpha=0.4\), \(\beta=0.5\)#

Fig. 56 Optimal choice, \(p=5\), \(r=w=2\), \(\alpha=0.4\), \(\beta=0.5\)#