📖 Bivariate optimization#


References and additional materials
_images/comingsoon.png

WARNING

This section of the lecture notes is still under construction. It will be ready before the lecture.

Maximizers and minimizers#

Consider \(f \colon I \to \mathbb{R}\) where \(I \subset \mathbb{R}^2\)

The set \(\mathbb{R}^2\) is the set of all pairs \((x_1, x_2)\) of real numbers

Definition

A point \((x_1^*, x_2^*) \in I\) is called a maximizer of \(f\) on \(I\) if

\[ f(x_1^*, x_2^*) \geq f(x_1, x_2) \quad \text{for all} \quad (x_1, x_2) \in I \]

Definition

A point \((x_1^*, x_2^*) \in I\) is called a minimizer of \(f\) on \(I\) if

\[ f(x_1^*, x_2^*) \leq f(x_1, x_2) \quad \text{for all} \quad (x_1, x_2) \in I \]

When they exist, the partial derivatives at \((x_1, x_2) \in I\) are

\[\begin{split} f_1(x_1, x_2) = \frac{\partial}{\partial x_1} f(x_1, x_2) \\ f_2(x_1, x_2) = \frac{\partial}{\partial x_2} f(x_1, x_2) \end{split}\]

Example

When \(f(k, \ell) = k^\alpha \ell^\beta\),

\[ f_1(k, \ell) = \frac{\partial}{\partial k} f(k, \ell) = \frac{\partial}{\partial k} k^\alpha \ell^\beta = \alpha k^{\alpha-1} \ell^\beta \]
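Partial derivatives like this one are easy to verify symbolically. Here is a minimal sketch using sympy (assuming it is available; the symbol names are only illustrative):

```python
import sympy as sp

# symbols for the Cobb-Douglas example above
k, l, alpha, beta = sp.symbols('k ell alpha beta', positive=True)
f = k**alpha * l**beta

# partial derivative with respect to k
f1 = sp.diff(f, k)
print(sp.simplify(f1))   # alpha*ell**beta*k**(alpha - 1), up to ordering
```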

Definition

An interior point \((x_1, x_2) \in I\) is called stationary for \(f\) if

\[ f_1(x_1, x_2) = f_2(x_1, x_2) = 0 \]

Fact

Let \(f \colon I \to \mathbb{R}\) be a continuously differentiable function. If \((x_1^*, x_2^*)\) is either

  • an interior maximizer of \(f\) on \(I\), or

  • an interior minimizer of \(f\) on \(I\),

then \((x_1^*, x_2^*)\) is a stationary point of \(f\)

Usage, for maximization:

  1. Compute partials

  2. Set partials to zero to find \(S =\) all stationary points

  3. Evaluate \(f\) at the candidates in \(S\) and on the boundary of \(I\)

  4. Select point \((x^*_1, x_2^*)\) yielding highest value
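The first two steps of this recipe can often be mechanized with a computer algebra system. A minimal sketch using sympy; the objective here is the one from the example that follows:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = x1**2 + 4 * x2**2            # objective from the example below

# step 1: compute partials; step 2: solve the stationarity conditions
S = sp.solve([sp.diff(f, x1), sp.diff(f, x2)], [x1, x2], dict=True)
print(S)                         # [{x1: 0, x2: 0}]
```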

Example

\[ f(x_1, x_2) = x_1^2 + 4 x_2^2 \rightarrow \min \quad \mathrm{s.t.} \quad x_1 + x_2 \leq 1 \]

Setting

\[ f_1(x_1, x_2) = 2 x_1 = 0 \quad \text{and} \quad f_2(x_1, x_2) = 8 x_2 = 0 \]

gives the unique stationary point \((0, 0)\), at which \(f(0, 0) = 0\)

On the boundary we have \(x_1 + x_2 = 1\), so

\[ f(x_1, x_2) = f(x_1, 1 - x_1) = x_1^2 + 4 (1 - x_1)^2 \]

Exercise: Show right hand side \(> 0\) for any \(x_1\)

Hence minimizer is \((x_1^*, x_2^*) = (0, 0)\)
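The boundary step can also be checked numerically. A sketch using scipy (assuming it is installed) that minimizes the restricted function \(x_1 \mapsto x_1^2 + 4(1 - x_1)^2\):

```python
from scipy.optimize import minimize_scalar

# f restricted to the boundary x1 + x2 = 1
g = lambda x1: x1**2 + 4 * (1 - x1)**2

res = minimize_scalar(g)
print(res.x, res.fun)   # approx 0.8 and 0.8: the boundary minimum is strictly positive
```

Since the smallest value on the boundary (\(\approx 0.8\)) exceeds \(f(0, 0) = 0\), the interior stationary point wins.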

Nasty secrets#

Solving for \((x_1, x_2)\) such that \(f_1(x_1, x_2) = 0\) and \(f_2(x_1, x_2) = 0\) can be hard

  • System of nonlinear equations

  • Might have no analytical solution

  • Set of solutions can be a continuum

Example

(Don’t) try to find all stationary points of

\[ f(x_1, x_2) = \frac{\cos(x_1^2 + x_2^2) + x_1^2 + x_1}{2 + \exp(-x_1^2) + \sin^2(x_2)} \]
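To see the difficulty in practice, one can hand the first order conditions to a numerical root finder: different starting guesses may land on different stationary points, with no guarantee of finding them all. A sketch using scipy (the starting points are arbitrary):

```python
import numpy as np
from scipy.optimize import approx_fprime, root

def f(z):
    x1, x2 = z
    return (np.cos(x1**2 + x2**2) + x1**2 + x1) / (2 + np.exp(-x1**2) + np.sin(x2)**2)

grad = lambda z: approx_fprime(z, f, 1e-8)   # numerical gradient

# different starting guesses may converge to different stationary points
for start in ([0.5, 0.5], [-1.0, 2.0], [2.0, -2.0]):
    sol = root(grad, start)
    if sol.success:
        print(np.round(sol.x, 4))
```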

Also:

  • Boundary is often a continuum, not just two points

  • Things get even harder in higher dimensions

On the other hand:

  • Most classroom examples are chosen to avoid these problems

  • Life is still pretty easy if we have concavity / convexity

  • Clever tricks have been found for certain kinds of problems

Second Order Partials#

Let \(f \colon I \to \mathbb{R}\) and, when they exist, denote

\[ f_{11}(x_1, x_2) = \frac{\partial^2}{\partial x_1^2} f(x_1, x_2) \]
\[ f_{12}(x_1, x_2) = \frac{\partial^2}{\partial x_1 \partial x_2} f(x_1, x_2) \]
\[ f_{21}(x_1, x_2) = \frac{\partial^2}{\partial x_2 \partial x_1} f(x_1, x_2) \]
\[ f_{22}(x_1, x_2) = \frac{\partial^2}{\partial x_2^2} f(x_1, x_2) \]

Example: Cobb-Douglas technology with linear costs

If \(\pi(k, \ell) = p k^{\alpha} \ell^{\beta} - w \ell - r k\) then

\[ \pi_{11}(k, \ell) = p \alpha(\alpha-1) k^{\alpha-2} \ell^{\beta} \]
\[ \pi_{12}(k, \ell) = p \alpha\beta k^{\alpha-1} \ell^{\beta-1} \]
\[ \pi_{21}(k, \ell) = p \alpha\beta k^{\alpha-1} \ell^{\beta-1} \]
\[ \pi_{22}(k, \ell) = p \beta(\beta-1) k^{\alpha} \ell^{\beta-2} \]

Fact

If \(f \colon I \to \mathbb{R}\) is twice continuously differentiable at \((x_1, x_2)\), then

\[ f_{12}(x_1, x_2) = f_{21}(x_1, x_2) \]

Exercise: Confirm the results in the example above.
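The symmetry of cross partials for the profit function above is also easy to check symbolically. A sketch using sympy:

```python
import sympy as sp

k, l, p, w, r, alpha, beta = sp.symbols('k ell p w r alpha beta', positive=True)
pi = p * k**alpha * l**beta - w * l - r * k

pi_12 = sp.diff(pi, k, l)           # differentiate in k, then in ell
pi_21 = sp.diff(pi, l, k)           # differentiate in ell, then in k
print(sp.simplify(pi_12 - pi_21))   # 0
```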

Shape conditions in 2D#

Let \(I\) be an "open" set (only interior points – formalities next week)

Let \(f \colon I \to \mathbb{R}\) be twice continuously differentiable

Fact

The function \(f\) is strictly concave on \(I\) if, for any \((x_1, x_2) \in I\)

  1. \(f_{11}(x_1, x_2) < 0\)

  2. \(f_{11}(x_1, x_2) \, f_{22}(x_1, x_2) > f_{12}(x_1, x_2)^2\)

Fact

The function \(f\) is strictly convex on \(I\) if, for any \((x_1, x_2) \in I\)

  1. \(f_{11}(x_1, x_2) > 0\)

  2. \(f_{11}(x_1, x_2) \, f_{22}(x_1, x_2) > f_{12}(x_1, x_2)^2\)
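For instance, the earlier objective \(f(x_1, x_2) = x_1^2 + 4 x_2^2\) satisfies both conditions: \(f_{11} = 2 > 0\) and \(f_{11} f_{22} = 2 \times 8 = 16 > 0 = f_{12}^2\), so it is strictly convex on \(\mathbb{R}^2\)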

When is stationarity sufficient?

Fact

If \(f\) is differentiable and strictly concave on \(I\), then any stationary point of \(f\) is also the unique maximizer of \(f\) on \(I\)

Fact

If \(f\) is differentiable and strictly convex on \(I\), then any stationary point of \(f\) is also the unique minimizer of \(f\) on \(I\)

_images/concave_max.png

Fig. 53 Maximizer of a concave function#

_images/convex_min.png

Fig. 54 Minimizer of a convex function#

Example: unconstrained maximization of quadratic utility

\[ u(x_1, x_2) = - (x_1 - b_1)^2 - (x_2 - b_2)^2 \rightarrow \max_{x_1, x_2} \]

Intuitively the solution is \(x_1^*=b_1\) and \(x_2^*=b_2\)

Analysis above leads to the same conclusion

First let’s check first order conditions (F.O.C.)

\[ \frac{\partial}{\partial x_1} u(x_1, x_2) = -2 (x_1 - b_1) = 0 \quad \implies \quad x_1 = b_1 \]
\[ \frac{\partial}{\partial x_2} u(x_1, x_2) = -2 (x_2 - b_2) = 0 \quad \implies \quad x_2 = b_2 \]

How about (strict) concavity?

Sufficient condition is

  1. \(u_{11}(x_1, x_2) < 0\)

  2. \(u_{11}(x_1, x_2)u_{22}(x_1, x_2) > u_{12}(x_1, x_2)^2\)

We have

  • \(u_{11}(x_1, x_2) = -2\)

  • \(u_{11}(x_1, x_2)u_{22}(x_1, x_2) = 4 > 0 = u_{12}(x_1, x_2)^2\)
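Both conditions can be confirmed symbolically as well. A minimal sketch using sympy:

```python
import sympy as sp

x1, x2, b1, b2 = sp.symbols('x1 x2 b1 b2', real=True)
u = -(x1 - b1)**2 - (x2 - b2)**2

# first order conditions pin down the stationary point
print(sp.solve([sp.diff(u, x1), sp.diff(u, x2)], [x1, x2]))   # {x1: b1, x2: b2}

# second order (strict concavity) conditions
u11, u22, u12 = sp.diff(u, x1, 2), sp.diff(u, x2, 2), sp.diff(u, x1, x2)
print(u11, u11 * u22 - u12**2)   # -2 4
```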

Example: Profit maximization with two inputs

\[ \pi(k, \ell) = p k^{\alpha} \ell^{\beta} - w \ell - r k \rightarrow \max_{k, \ell} \]

where \(\alpha, \beta, p, w, r\) are all positive and \(\alpha + \beta < 1\)

Derivatives:

  • \(\pi_1(k, \ell) = p \alpha k^{\alpha-1} \ell^{\beta} - r\)

  • \(\pi_2(k, \ell) = p \beta k^{\alpha} \ell^{\beta-1} - w\)

  • \(\pi_{11}(k, \ell) = p \alpha(\alpha-1) k^{\alpha-2} \ell^{\beta}\)

  • \(\pi_{22}(k, \ell) = p \beta(\beta-1) k^{\alpha} \ell^{\beta-2}\)

  • \(\pi_{12}(k, \ell) = p \alpha \beta k^{\alpha-1} \ell^{\beta-1}\)

First order conditions: set

\[\begin{split} \pi_1(k, \ell) = 0 \\ \pi_2(k, \ell) = 0 \end{split}\]

and solve simultaneously for \(k, \ell\) to get

\[\begin{split} k^* = \left[ p (\alpha/r)^{1 - \beta} (\beta/w)^{\beta} \right]^{1 / (1 - \alpha - \beta)} \\ \ell^* = \left[ p (\beta/w)^{1 - \alpha} (\alpha/r)^{\alpha} \right]^{1 / (1 - \alpha - \beta)} \end{split}\]

Exercise: Verify

Now we check second order conditions, hoping for strict concavity

What we need: for any \(k, \ell > 0\)

  1. \(\pi_{11}(k, \ell) < 0\)

  2. \(\pi_{11}(k, \ell) \, \pi_{22}(k, \ell) > \pi_{12}(k, \ell)^2\)

Exercise: Show both inequalities satisfied when \(\alpha + \beta < 1\)
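As a numerical cross-check, here is a sketch comparing the closed form solution with a direct numerical maximization of \(\pi\) (via scipy, minimizing \(-\pi\)) at the parameter values used in the figures below:

```python
import numpy as np
from scipy.optimize import minimize

p, r, w, alpha, beta = 5, 2, 2, 0.4, 0.5

def neg_pi(z):
    k, l = z
    return -(p * k**alpha * l**beta - w * l - r * k)

# closed form from the first order conditions
power = 1 / (1 - alpha - beta)
k_star = (p * (alpha / r)**(1 - beta) * (beta / w)**beta)**power
l_star = (p * (beta / w)**(1 - alpha) * (alpha / r)**alpha)**power

# direct maximization (bounds keep the optimizer away from k, l <= 0)
res = minimize(neg_pi, x0=[1.0, 1.0], bounds=[(1e-6, None), (1e-6, None)])
print(np.round([k_star, l_star], 4))   # approx [3.0518, 3.8147]
print(np.round(res.x, 4))              # should agree
```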

_images/optprod.png

Fig. 55 Profit function when \(p=5\), \(r=w=2\), \(\alpha=0.4\), \(\beta=0.5\)#

_images/optprod_contour.png

Fig. 56 Optimal choice, \(p=5\), \(r=w=2\), \(\alpha=0.4\), \(\beta=0.5\)#