📖 Vector and matrix arithmetic#


Sources and reading guide
_images/shsc2016.png

[Sydsæter and Hammond, 2006]

Chapter 15 (pp. 549-590).

What is a matrix?#

A matrix is an array of numbers or variables organised into rows and columns; the numbers of rows and columns form its dimensions.

An \((n \times m)\) matrix has \(n\) rows and \(m\) columns. Note that, while it is possible that \(n=m\), it is also possible that \(n \neq m\). When \(n=m\), we say that the matrix is a square matrix.

An \((n \times m)\) matrix takes the following form:

\[\begin{split} A=\left(\begin{array}{ccccc} a_{11} & a_{12} & a_{13} & \cdots & a_{1 m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2 m} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3 m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & a_{n 3} & \cdots & a_{n m} \end{array}\right) \end{split}\]

Scalars and vectors#

A scalar is a real number \((a \in \mathbb{R})\).

A column vector is an \((n \times 1)\) matrix of the form

\[\begin{split} A=\left(\begin{array}{c} a_{11} \\ a_{21} \\ a_{31} \\ \vdots \\ a_{n 1} \end{array}\right) \end{split}\]

A row vector is a \((1 \times m)\) matrix of the form

\[ A=\left(\begin{array}{lllll} a_{11} & a_{12} & a_{13} & \cdots & a_{1 m} \end{array}\right) . \]
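As a quick sketch (not from the readings), scalars, column vectors and row vectors can be represented in Python with NumPy, where the array shapes mirror the \((n \times 1)\) and \((1 \times m)\) dimensions:

```python
import numpy as np

a = 2.0                                  # a scalar: a real number
col = np.array([[1.0], [2.0], [3.0]])    # a (3 x 1) column vector
row = np.array([[4.0, 5.0, 6.0]])        # a (1 x 3) row vector

print(col.shape)  # (3, 1)
print(row.shape)  # (1, 3)
```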

Economic examples of matrices and vectors

  • The \((L \times 1)\) price vector for an economy in which there are \(L\) commodities:

\[\begin{split} p=\left(\begin{array}{c} p_{1} \\ p_{2} \\ p_{3} \\ \vdots \\ p_{L} \end{array}\right) \end{split}\]
  • The \((L \times 1)\) Marshallian demand vector for a consumer whose preferences are defined over bundles of \(L\) commodities:

\[\begin{split} x(p, y)=\left(\begin{array}{c} x_{1}(p, y) \\ x_{2}(p, y) \\ x_{3}(p, y) \\ \vdots \\ x_{L}(p, y) \end{array}\right) \end{split}\]
  • The \((n \times k)\) design matrix (that is, the matrix of independent variables) in a multiple linear regression model of the form

\[ y_{i} = \beta_{1}+\sum_{j=2}^{k} \beta_{j} x_{i, j}+\epsilon_{i} = \sum_{j=1}^{k} \beta_{j} x_{i, j}+\epsilon_{i} \]

where each of the \(n\) sample points provides an observation on every observable variable.

  • The design matrix takes the form

\[\begin{split} X=\left(\begin{array}{ccccc} 1 & x_{1,2} & x_{1,3} & \cdots & x_{1,k} \\ 1 & x_{2,2} & x_{2,3} & \cdots & x_{2,k} \\ 1 & x_{3,2} & x_{3,3} & \cdots & x_{3,k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n, 2} & x_{n, 3} & \cdots & x_{n,k} \end{array}\right) \end{split}\]
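To make the structure concrete, here is a small illustrative sketch (the variable names and data are invented, not taken from the sources) of assembling a design matrix with an intercept column of ones in NumPy:

```python
import numpy as np

n, k = 5, 3                                # n sample points, k regressors (incl. intercept)
rng = np.random.default_rng(0)
regressors = rng.normal(size=(n, k - 1))   # the x_{i,2}, ..., x_{i,k} columns

# Prepend a column of ones corresponding to the intercept term beta_1.
X = np.column_stack([np.ones(n), regressors])
print(X.shape)  # (5, 3): an (n x k) design matrix
```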

An overview of matrix arithmetic#

  • Scalar multiplication of a matrix

  • Matrix addition

  • Matrix subtraction

  • Matrix multiplication: The inner, or dot, product

  • The transpose of a matrix and matrix symmetry

  • The additive inverse of a matrix and the null matrix

  • The multiplicative inverse of a matrix and the identity matrix

  • Idempotent matrices

  • Vector inequalities

Scalar multiplication#

Suppose that \(c \in \mathbb{R}\) and that \(A\) is an \((n \times m)\) matrix that takes the following form:

\[\begin{split} A=\left(\begin{array}{ccccc} a_{11} & a_{12} & a_{13} & \cdots & a_{1 m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2 m} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3 m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & a_{n 3} & \cdots & a_{n m} \end{array}\right) \end{split}\]

We will assume that \(a_{i j} \in \mathbb{R}\) for all

\[ (i, j) \in\{1,2, \cdots, n\} \times\{1,2, \cdots, m\} . \]

The scalar pre-product of this constant with this matrix is given by

\[\begin{split} \begin{aligned} c A & =c\left(\begin{array}{ccccc} a_{11} & a_{12} & a_{13} & \cdots & a_{1 m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2 m} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3 m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & a_{n 3} & \cdots & a_{n m} \end{array}\right) \\[20pt] & =\left(\begin{array}{ccccc} c a_{11} & c a_{12} & c a_{13} & \cdots & c a_{1 m} \\ c a_{21} & c a_{22} & c a_{23} & \cdots & c a_{2 m} \\ c a_{31} & c a_{32} & c a_{33} & \cdots & c a_{3 m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c a_{n 1} & c a_{n 2} & c a_{n 3} & \cdots & c a_{n m} \end{array}\right) \end{aligned} \end{split}\]

The scalar post-product of the matrix with this constant is given by

\[\begin{split} \begin{aligned} A c & =\left(\begin{array}{ccccc} a_{11} & a_{12} & a_{13} & \cdots & a_{1 m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2 m} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3 m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & a_{n 3} & \cdots & a_{n m} \end{array}\right) c \\[20pt] & =\left(\begin{array}{ccccc} c a_{11} & c a_{12} & c a_{13} & \cdots & c a_{1 m} \\ c a_{21} & c a_{22} & c a_{23} & \cdots & c a_{2 m} \\ c a_{31} & c a_{32} & c a_{33} & \cdots & c a_{3 m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c a_{n 1} & c a_{n 2} & c a_{n 3} & \cdots & c a_{n m} \end{array}\right) \end{aligned} \end{split}\]

Note that \(c A=A c\). As such, we can just talk about the scalar product of a constant with a matrix, without specifying the order in which the multiplication takes place.
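A quick numerical sanity check of \(cA = Ac\) in NumPy (a sketch; `*` applies the scalar elementwise):

```python
import numpy as np

c = 2.0
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])

print(np.array_equal(c * A, A * c))  # True: pre- and post-products agree
```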

The following examples come from [Asano, 2013] (pp. 222-224).

Example

\[\begin{split} 2 X=2\left(\begin{array}{ll} 1 & 1 \\ 1 & 2 \end{array}\right)=\left(\begin{array}{ll} 2(1) & 2(1) \\ 2(1) & 2(2) \end{array}\right)=\left(\begin{array}{ll} 2 & 2 \\ 2 & 4 \end{array}\right) \end{split}\]

Example

\[\begin{split} 3 Y=3\left(\begin{array}{ll} 2 & 1 \\ 2 & 4 \end{array}\right)=\left(\begin{array}{ll} 3(2) & 3(1) \\ 3(2) & 3(4) \end{array}\right)=\left(\begin{array}{cc} 6 & 3 \\ 6 & 12 \end{array}\right) \end{split}\]

Example

\[\begin{split} 2 Z=2\left(\begin{array}{cc} -1 & -2 \\ 3 & 1 \end{array}\right)=\left(\begin{array}{cc} 2(-1) & 2(-2) \\ 2(3) & 2(1) \end{array}\right)=\left(\begin{array}{cc} -2 & -4 \\ 6 & 2 \end{array}\right) \end{split}\]

The following examples come from [Sydsæter and Hammond, 2006] (pp. 555-556).

Example

\[\begin{split} \begin{gathered} 3 A=3\left(\begin{array}{ccc} 1 & 2 & 0 \\ 4 & -3 & 1 \end{array}\right)=\left(\begin{array}{ccc} 3(1) & 3(2) & 3(0) \\ 3(4) & 3(-3) & 3(1) \end{array}\right) \\ =\left(\begin{array}{ccc} 3 & 6 & 0 \\ 12 & -9 & 3 \end{array}\right) \end{gathered} \end{split}\]

Example

\[\begin{split} \begin{aligned} & \left(\frac{-1}{2}\right) B=\left(\frac{-1}{2}\right)\left(\begin{array}{lll} 0 & 1 & 2 \\ 1 & 0 & 2 \end{array}\right)=\left(\begin{array}{ccc} \left(\frac{-1}{2}\right)(0) & \left(\frac{-1}{2}\right)(1) & \left(\frac{-1}{2}\right)(2) \\ \left(\frac{-1}{2}\right)(1) & \left(\frac{-1}{2}\right)(0) & \left(\frac{-1}{2}\right)(2) \end{array}\right) \\ & =\left(\begin{array}{ccc} 0 & \frac{-1}{2} & -1 \\ \frac{-1}{2} & 0 & -1 \end{array}\right) \end{aligned} \end{split}\]

Matrix addition#

The sum of two matrices is only defined if the two matrices have exactly the same dimensions.

Suppose that \(A\) is an \((n \times m)\) matrix that takes the following form:

\[\begin{split} A=\left(\begin{array}{ccccc} a_{11} & a_{12} & a_{13} & \cdots & a_{1 m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2 m} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3 m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & a_{n 3} & \cdots & a_{n m} \end{array}\right) \end{split}\]

Suppose that \(B\) is an \((n \times m)\) matrix that takes the following form:

\[\begin{split} B=\left(\begin{array}{ccccc} b_{11} & b_{12} & b_{13} & \cdots & b_{1 m} \\ b_{21} & b_{22} & b_{23} & \cdots & b_{2 m} \\ b_{31} & b_{32} & b_{33} & \cdots & b_{3 m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_{n 1} & b_{n 2} & b_{n 3} & \cdots & b_{n m} \end{array}\right) \end{split}\]

The matrix sum \((A+B)\) is an \((n \times m)\) matrix that takes the following form:

\[\begin{split} \begin{aligned} A+B &=\left(\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 m} \\ a_{21} & a_{22} & \cdots & a_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & \cdots & a_{n m} \end{array}\right)+\left(\begin{array}{cccc} b_{11} & b_{12} & \cdots & b_{1 m} \\ b_{21} & b_{22} & \cdots & b_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n 1} & b_{n 2} & \cdots & b_{n m} \end{array}\right) \\[20pt] &=\left(\begin{array}{cccc} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1 m}+b_{1 m} \\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2 m}+b_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n 1}+b_{n 1} & a_{n 2}+b_{n 2} & \cdots & a_{n m}+b_{n m} \end{array}\right) \end{aligned} \end{split}\]

Note that \(A+B=B+A\). (Exercise: Convince yourself of the validity of this claim.)

Example

Suppose that \(A\) is an \((m \times n)\) matrix, \(B\) is an \((n \times m)\) matrix and \(C\) is an \((n \times p)\) matrix, where \(m \neq n, m \neq p\) and \(n \neq p\).

  • Neither the matrix sum \(A+B\) nor the matrix sum \(B+A\) are defined.

  • Neither the matrix sum \(A+C\) nor the matrix sum \(C+A\) are defined.

  • Neither the matrix sum \(B+C\) nor the matrix sum \(C+B\) are defined.

The following examples come from [Asano, 2013] (pp. 222-224).

Example

\[\begin{split} X+Y=\left(\begin{array}{ll} 1 & 1 \\ 1 & 2 \end{array}\right)+\left(\begin{array}{ll} 1 & 0 \\ 1 & 2 \end{array}\right)=\left(\begin{array}{ll} 1+1 & 1+0 \\ 1+1 & 2+2 \end{array}\right)=\left(\begin{array}{ll} 2 & 1 \\ 2 & 4 \end{array}\right) \end{split}\]

Example

\[\begin{split} \begin{aligned} & X+Z=\left(\begin{array}{ll} 1 & 1 \\ 1 & 2 \end{array}\right)+\left(\begin{array}{ll} -1 & 3 \\ -2 & 1 \end{array}\right)=\left(\begin{array}{ll} 1+(-1) & 1+3 \\ 1+(-2) & 2+1 \end{array}\right) =\left(\begin{array}{cc} 0 & 4 \\ -1 & 3 \end{array}\right) \end{aligned} \end{split}\]

The following example comes from [Sydsæter and Hammond, 2006] (pp. 555-556).

Example

\[\begin{split} \begin{gathered} M+N=\left(\begin{array}{ccc} 1 & 2 & 0 \\ 4 & -3 & -1 \end{array}\right)+\left(\begin{array}{lll} 0 & 1 & 2 \\ 1 & 0 & 2 \end{array}\right) =\left(\begin{array}{ccc} 1+0 & 2+1 & 0+2 \\ 4+1 & -3+0 & -1+2 \end{array}\right) =\left(\begin{array}{ccc} 1 & 3 & 2 \\ 5 & -3 & 1 \end{array}\right) \end{gathered} \end{split}\]

Economic example of matrix addition

  • Suppose that there are \(I\) consumers in an economy, each of whom has preferences defined over bundles of the same \(L\) commodities.

  • Suppose that consumer \(i\)’s Marshallian demand vector is given by the column vector

\[\begin{split} x^{i}\left(p, y^{i}\right)=\left(\begin{array}{c} x_{1}^{i}\left(p, y^{i}\right) \\ x_{2}^{i}\left(p, y^{i}\right) \\ x_{3}^{i}\left(p, y^{i}\right) \\ \vdots \\ x_{L}^{i}\left(p, y^{i}\right) \end{array}\right) \end{split}\]
  • The aggregate vector of Marshallian demands (or, if you prefer, the vector of market demands) for this economy is given by

\[\begin{split} \begin{aligned} x\left(p, y^{1}, y^{2}, \cdots, y^{I}\right) &=\sum_{i=1}^{I} x^{i}\left(p, y^{i}\right) \\[20pt] &=\sum_{i=1}^{I}\left(\begin{array}{c} x_{1}^{i}\left(p, y^{i}\right) \\ x_{2}^{i}\left(p, y^{i}\right) \\ x_{3}^{i}\left(p, y^{i}\right) \\ \vdots \\ x_{L}^{i}\left(p, y^{i}\right) \end{array}\right) \\[20pt] &= \left(\begin{array}{c} \sum_{i=1}^{I} x_{1}^{i}\left(p, y^{i}\right) \\ \sum_{i=1}^{I} x_{2}^{i}\left(p, y^{i}\right) \\ \sum_{i=1}^{I} x_{3}^{i}\left(p, y^{i}\right) \\ \vdots \\ \sum_{i=1}^{I} x_{L}^{i}\left(p, y^{i}\right) \end{array}\right) \end{aligned} \end{split}\]
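This aggregation can be sketched numerically by stacking the \(I\) individual demand vectors as columns and summing across consumers (the demand figures below are made up for illustration):

```python
import numpy as np

# Each column is one consumer's (L x 1) Marshallian demand vector (L = 3, I = 2).
demands = np.array([[1.0, 2.0],
                    [0.5, 1.5],
                    [3.0, 0.0]])

market_demand = demands.sum(axis=1)  # sum over consumers, commodity by commodity
print(market_demand)                 # [3. 2. 3.]
```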

Matrix subtraction#

Matrix subtraction involves a combination of (i) scalar multiplication of a matrix, and (ii) matrix addition.

As with matrix addition, the difference of two matrices is only defined if the two matrices have exactly the same dimensions.

Suppose that \(A\) and \(B\) are both \((n \times m)\) matrices. The difference between \(A\) and \(B\) is defined to be

\[ A-B=A+(-1) B \]

Since \(A\) and \(B\) are both \((n \times m)\) matrices, the matrix difference \((A-B)\) is an \((n \times m)\) matrix that takes the following form:

\[\begin{split} \begin{aligned} A-B &= A+(-1) B \\[20pt] &= \left(\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 m} \\ a_{21} & a_{22} & \cdots & a_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & \cdots & a_{n m} \end{array}\right)+(-1)\left(\begin{array}{cccc} b_{11} & b_{12} & \cdots & b_{1 m} \\ b_{21} & b_{22} & \cdots & b_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n 1} & b_{n 2} & \cdots & b_{n m} \end{array}\right) \\[20pt] &= \left(\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 m} \\ a_{21} & a_{22} & \cdots & a_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & \cdots & a_{n m} \end{array}\right)+\left(\begin{array}{cccc} -b_{11} & -b_{12} & \cdots & -b_{1 m} \\ -b_{21} & -b_{22} & \cdots & -b_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ -b_{n 1} & -b_{n 2} & \cdots & -b_{n m} \end{array}\right) \\[20pt] &= \left(\begin{array}{cccc} a_{11}-b_{11} & a_{12}-b_{12} & \cdots & a_{1 m}-b_{1 m} \\ a_{21}-b_{21} & a_{22}-b_{22} & \cdots & a_{2 m}-b_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n 1}-b_{n 1} & a_{n 2}-b_{n 2} & \cdots & a_{n m}-b_{n m} \end{array}\right) \end{aligned} \end{split}\]

In general, \(A-B \neq B-A\). Thus matrix subtraction does not share all of the properties of matrix addition.

Exercise: Under what circumstances will \(A-B=B-A\)?

Example

The following example comes from [Asano, 2013] (pp. 222-224).

\[\begin{split} \begin{aligned} 3Y-2Z &=\left(\begin{array}{cc} 6 & 3 \\ 6 & 12 \end{array}\right)-\left(\begin{array}{cc} -2 & -4 \\ 6 & 2 \end{array}\right) \\ & =\left(\begin{array}{cc} 6 & 3 \\ 6 & 12 \end{array}\right)+(-1)\left(\begin{array}{cc} -2 & -4 \\ 6 & 2 \end{array}\right) \\ & =\left(\begin{array}{cc} 6 & 3 \\ 6 & 12 \end{array}\right)+\left(\begin{array}{cc} (-1)(-2) & (-1)(-4) \\ (-1)(6) & (-1)(2) \end{array}\right) \\ & =\left(\begin{array}{cc} 6 & 3 \\ 6 & 12 \end{array}\right)+\left(\begin{array}{cc} 2 & 4 \\ -6 & -2 \end{array}\right) \\ & =\left(\begin{array}{cc} 6+2 & 3+4 \\ 6+(-6) & 12+(-2) \end{array}\right) \\ & =\left(\begin{array}{cc} 8 & 7 \\ 0 & 10 \end{array}\right) \end{aligned} \end{split}\]
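A sketch confirming in NumPy that \(A - B\) coincides with \(A + (-1)B\), using the two matrices from this example:

```python
import numpy as np

A = np.array([[6.0, 3.0], [6.0, 12.0]])
B = np.array([[-2.0, -4.0], [6.0, 2.0]])

print(np.array_equal(A - B, A + (-1) * B))  # True
print(A - B)                                # [[ 8.  7.] [ 0. 10.]]
```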

Matrix multiplication#

  • The standard matrix product is the dot, or inner, product of two matrices.

  • The dot product of two matrices is only defined for cases in which the number of columns of the first listed matrix is identical to the number of rows of the second listed matrix.

  • If the dot product is defined, the solution matrix will have the same number of rows as the first listed matrix and the same number of columns as the second listed matrix.

Suppose that \(X\) is an \((m \times n)\) matrix, \(Y\) is an \((n \times m)\) matrix and \(Z\) is an \((n \times p)\) matrix, where \(m \neq n, m \neq p\) and \(n \neq p\).

  • The matrix product \(X Y\) is defined and will be an \((m \times m)\) matrix.

  • The matrix product \(Y X\) is defined and will be an \((n \times n)\) matrix.

  • The matrix product \(X Z\) is defined and will be an \((m \times p)\) matrix.

  • The matrix products \(Z X, Y Z\) and \(Z Y\) are not defined.
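These dimension rules are easy to verify numerically; the sketch below uses NumPy's `@` operator for the matrix (dot) product:

```python
import numpy as np

m, n, p = 2, 3, 4
X = np.ones((m, n))
Y = np.ones((n, m))
Z = np.ones((n, p))

print((X @ Y).shape)  # (2, 2): an (m x m) matrix
print((Y @ X).shape)  # (3, 3): an (n x n) matrix
print((X @ Z).shape)  # (2, 4): an (m x p) matrix
# Z @ X, Y @ Z and Z @ Y all raise ValueError: the inner dimensions do not align.
```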

Suppose that \(A\) is an \((n \times m)\) matrix that takes the following form:

\[\begin{split} A=\left(\begin{array}{ccccc} a_{11} & a_{12} & a_{13} & \cdots & a_{1 m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2 m} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3 m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & a_{n 3} & \cdots & a_{n m} \end{array}\right) \end{split}\]

Suppose that \(B\) is an \((m \times p)\) matrix that takes the following form:

\[\begin{split} B=\left(\begin{array}{ccccc} b_{11} & b_{12} & b_{13} & \cdots & b_{1 p} \\ b_{21} & b_{22} & b_{23} & \cdots & b_{2 p} \\ b_{31} & b_{32} & b_{33} & \cdots & b_{3 p} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_{m 1} & b_{m 2} & b_{m 3} & \cdots & b_{m p} \end{array}\right) \end{split}\]

The matrix product \(A B\) is defined and will be an \((n \times p)\) matrix. The solution matrix is given by

\[\begin{split} \begin{aligned} AB &=\left(\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 m} \\ a_{21} & a_{22} & \cdots & a_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & \cdots & a_{n m} \end{array}\right) \cdot\left(\begin{array}{cccc} b_{11} & b_{12} & \cdots & b_{1 p} \\ b_{21} & b_{22} & \cdots & b_{2 p} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m 1} & b_{m 2} & \cdots & b_{m p} \end{array}\right) \\[20pt] &=\left(\begin{array}{cccc} \sum_{k=1}^{m} a_{1 k} b_{k 1} & \sum_{k=1}^{m} a_{1 k} b_{k 2} & \cdots & \sum_{k=1}^{m} a_{1 k} b_{k p} \\ \sum_{k=1}^{m} a_{2 k} b_{k 1} & \sum_{k=1}^{m} a_{2 k} b_{k 2} & \cdots & \sum_{k=1}^{m} a_{2 k} b_{k p} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{k=1}^{m} a_{n k} b_{k 1} & \sum_{k=1}^{m} a_{n k} b_{k 2} & \cdots & \sum_{k=1}^{m} a_{n k} b_{k p} \end{array}\right) \end{aligned} \end{split}\]

Note that, while it is possible for \(A B=B A\) in some cases, in general we will have \(A B \neq B A\). There are three reasons for this:

  • First, \(B A\) will not necessarily be defined even if \(A B\) is defined.

  • Second, even when both \(A B\) and \(B A\) are defined, they might have different dimensions.

  • Third, even when both \(A B\) and \(B A\) are defined and have the same dimensions, they might have one or more different entries.

These examples come from [Bradley, 2008] (pp. 490-492).

Example

\[\begin{split} \begin{aligned} AB & =\left(\begin{array}{cc} 1 & 2 \\ -2 & 4 \end{array}\right) \cdot\left(\begin{array}{lll} 0 & 2 & 2 \\ 1 & 0 & 5 \end{array}\right) \\ &=\left(\begin{array}{ccc} (1)(0)+(2)(1) & (1)(2)+(2)(0) & (1)(2)+(2)(5) \\ (-2)(0)+(4)(1) & (-2)(2)+(4)(0) & (-2)(2)+(4)(5) \end{array}\right) \\ &=\left(\begin{array}{ccc} 0+2 & 2+0 & 2+10 \\ 0+4 & -4+0 & -4+20 \end{array}\right) \\ &=\left(\begin{array}{ccc} 2 & 2 & 12 \\ 4 & -4 & 16 \end{array}\right) \end{aligned} \end{split}\]

Example

The matrix product \(B A\) is undefined because the number of columns in \(B\) (which is three) does not equal the number of rows in \(A\) (which is two).

Note that \(A B \neq B A\).

Example

\[\begin{split} \begin{aligned} A C &=\left(\begin{array}{cc} 1 & 2 \\ -2 & 4 \end{array}\right) \cdot\left(\begin{array}{cc} 3 & -2 \\ 5 & 0 \end{array}\right) \\ &=\left(\begin{array}{cc} (1)(3)+(2)(5) & (1)(-2)+(2)(0) \\ (-2)(3)+(4)(5) & (-2)(-2)+(4)(0) \end{array}\right) \\ &=\left(\begin{array}{cc} 3+10 & -2+0 \\ -6+20 & 4+0 \end{array}\right) \\ &=\left(\begin{array}{cc} 13 & -2 \\ 14 & 4 \end{array}\right) \end{aligned} \end{split}\]

Example

\[\begin{split} \begin{aligned} C A &=\left(\begin{array}{cc} 3 & -2 \\ 5 & 0 \end{array}\right) \cdot\left(\begin{array}{cc} 1 & 2 \\ -2 & 4 \end{array}\right) \\ &=\left(\begin{array}{cc} (3)(1)+(-2)(-2) & (3)(2)+(-2)(4) \\ (5)(1)+(0)(-2) & (5)(2)+(0)(4) \end{array}\right) \\ &=\left(\begin{array}{cc} 3+4 & 6+(-8) \\ 5+0 & 10+0 \end{array}\right) \\ &=\left(\begin{array}{cc} 7 & -2 \\ 5 & 10 \end{array}\right) \end{aligned} \end{split}\]

Note that \(A C \neq C A\).

Economic example of matrix multiplication

Suppose that a consumer whose preferences are defined over bundles of \(L\) commodities faces a price vector given by the row vector \(p=\left(p_{1}, p_{2}, \cdots, p_{L}\right)\) and chooses to purchase the quantities of each commodity that are given by the column vector

\[\begin{split} q=\left(\begin{array}{c} q_{1} \\ q_{2} \\ q_{3} \\ \vdots \\ q_{L} \end{array}\right) \end{split}\]

The consumer’s total expenditure will be equal to

\[\begin{split} \begin{gathered} p q=\left(p_{1}, p_{2}, \cdots, p_{L}\right)\left(\begin{array}{c} q_{1} \\ q_{2} \\ q_{3} \\ \vdots \\ q_{L} \end{array}\right) \\ =p_{1} q_{1}+p_{2} q_{2}+\cdots+p_{L} q_{L} \\ =\sum_{l=1}^{L} p_{l} q_{l} . \end{gathered} \end{split}\]
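A minimal sketch of the expenditure calculation as an inner product in NumPy (prices and quantities invented for illustration):

```python
import numpy as np

p = np.array([2.0, 1.5, 4.0])   # prices p_1, ..., p_L
q = np.array([3.0, 2.0, 1.0])   # quantities q_1, ..., q_L

expenditure = p @ q              # inner product: sum of p_l * q_l
print(expenditure)               # 13.0
```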

Matrix transposition#

Suppose that \(A\) is an \((n \times m)\) matrix. The transpose of the matrix \(A\), which is denoted by \(A^{T}\), is the \((m \times n)\) matrix that is formed by taking the rows of \(A\) and turning them into columns, without changing their order. In other words, the \(i\)th column of \(A^{T}\) is the \(i\)th row of \(A\). This also means that the \(j\)th row of \(A^{T}\) is the \(j\)th column of \(A\).

Suppose that \(A\) is the \((n \times m)\) matrix that takes the following form:

\[\begin{split} A=\left(\begin{array}{ccccc} a_{11} & a_{12} & a_{13} & \cdots & a_{1 m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2 m} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3 m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & a_{n 3} & \cdots & a_{n m} \end{array}\right) \end{split}\]

The transpose of the matrix \(A\) is the \((m \times n)\) matrix that takes the following form:

\[\begin{split} A^{T}=\left(\begin{array}{ccccc} a_{11} & a_{21} & a_{31} & \cdots & a_{n 1} \\ a_{12} & a_{22} & a_{32} & \cdots & a_{n 2} \\ a_{13} & a_{23} & a_{33} & \cdots & a_{n 3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{1 m} & a_{2 m} & a_{3 m} & \cdots & a_{n m} \end{array}\right) \end{split}\]

Example

These examples come from [Shannon, 1995] (p. 139, Example 8).

  • Example 1: If \( x=\left(\begin{array}{l} 1 \\ 3 \\ 5 \end{array}\right) \) then \(x^{T}=(1,3,5)\).


  • Example 2: If \( Y=\left(\begin{array}{ll} 2 & 3 \\ 5 & 9 \\ 7 & 6 \end{array}\right) \) then \( Y^{T}=\left(\begin{array}{lll} 2 & 5 & 7 \\ 3 & 9 & 6 \end{array}\right) \).


  • Example 3: If \( Z=\left(\begin{array}{ccc} 1 & 3 & 7 \\ 4 & 5 & 11 \\ 6 & 8 & 10 \end{array}\right) \) then \( Z^{T}=\left(\begin{array}{ccc} 1 & 4 & 6 \\ 3 & 5 & 8 \\ 7 & 11 & 10 \end{array}\right) \).
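In NumPy, transposition is available as the `.T` attribute; a sketch reproducing Example 2 above:

```python
import numpy as np

Y = np.array([[2, 3],
              [5, 9],
              [7, 6]])

print(Y.T)
# [[2 5 7]
#  [3 9 6]]
print(Y.shape, Y.T.shape)  # (3, 2) (2, 3)
```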

Symmetric matrices#

In general, \(A^{T} \neq A\). There are two reasons for this:

  • First, unless \(A\) is a square matrix (that is, unless it has the same number of rows and columns), the dimensions of the matrix \(A^{T}\) will be different to the dimensions of the matrix \(A\).

  • Second, even if \(A\) is a square matrix, in general the \(i\)th row of \(A\) will not be identical to the \(i\)th column of \(A\). As such, in general we will have \(A^{T} \neq A\) even for square matrices.

If it is the case that \(A^{T}=A\), then we say that \(A\) is a symmetric matrix.

Example

  • Example 1: \( A=\left(\begin{array}{ll} 0 & 0 \\ 0 & 0 \end{array}\right)=A^{T} \)

  • Example 2: \( B=\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)=B^{T} \)

  • Example 3: \( C=\left(\begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right)=C^{T} \)

  • Example 4: \( D=\left(\begin{array}{ll} 1 & 0 \\ 0 & 0 \end{array}\right)=D^{T} \)
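A small helper (my own sketch, not from the sources) that tests for symmetry by comparing a matrix with its transpose:

```python
import numpy as np

def is_symmetric(A: np.ndarray) -> bool:
    """A matrix is symmetric iff it is square and equals its transpose."""
    return A.shape[0] == A.shape[1] and np.array_equal(A, A.T)

C = np.array([[0, 1], [1, 0]])
print(is_symmetric(C))                           # True  (Example 3 above)
print(is_symmetric(np.array([[1, 2], [0, 1]])))  # False (not equal to its transpose)
```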

Null matrices#

A null matrix (or vector) is a matrix that consists solely of zeroes. For example, the \((2 \times 2)\) null matrix is

\[\begin{split} 0=\left(\begin{array}{ll} 0 & 0 \\ 0 & 0 \end{array}\right) \end{split}\]

The \((n \times m)\) null matrix is the ADDITIVE identity matrix for the space of all \((n \times m)\) matrices. This means that if \(A\) is an \((n \times m)\) matrix, then \(A+0=0+A=A\).

Additive inverses#

Suppose that \(A\) is an \((n \times m)\) matrix and 0 is the \((n \times m)\) null matrix. The \((n \times m)\) matrix \(B\) is the additive inverse of \(A\) if and only if \(A+B=B+A=0\). Suppose that

\[\begin{split} A=\left(\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 m} \\ a_{21} & a_{22} & \cdots & a_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & \cdots & a_{n m} \end{array}\right) \end{split}\]

The additive inverse of \(A\) is

\[\begin{split} B=-A=\left(\begin{array}{cccc} -a_{11} & -a_{12} & \cdots & -a_{1 m} \\ -a_{21} & -a_{22} & \cdots & -a_{2 m} \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n 1} & -a_{n 2} & \cdots & -a_{n m} \end{array}\right) \end{split}\]

Example

Suppose that \(A=\left(\begin{array}{ll} 1 & 2 \\ 4 & 3 \end{array}\right)\) and \(B=\left(\begin{array}{ll} -1 & -2 \\ -4 & -3 \end{array}\right)\). Note that

\[\begin{split} \begin{aligned} A+B &=\left(\begin{array}{ll} 1 & 2 \\ 4 & 3 \end{array}\right)+\left(\begin{array}{ll} -1 & -2 \\ -4 & -3 \end{array}\right) \\[20pt] &=\left(\begin{array}{ll} 1+(-1) & 2+(-2) \\ 4+(-4) & 3+(-3) \end{array}\right) \\[20pt] &=\left(\begin{array}{ll} 0 & 0 \\ 0 & 0 \end{array}\right) =0_{(2 \times 2)} \end{aligned} \end{split}\]

Note also that

\[\begin{split} \begin{aligned} B+A &=\left(\begin{array}{ll} -1 & -2 \\ -4 & -3 \end{array}\right)+\left(\begin{array}{ll} 1 & 2 \\ 4 & 3 \end{array}\right) \\[20pt] &=\left(\begin{array}{ll} -1+1 & -2+2 \\ -4+4 & -3+3 \end{array}\right)\\[20pt] &=\left(\begin{array}{ll} 0 & 0 \\ 0 & 0 \end{array}\right)=0_{(2 \times 2)} \end{aligned} \end{split}\]

Since \(A+B=B+A=0_{(2 \times 2)}\), we can conclude that \(B\) is the additive inverse for \(A\).

Identity matrices#

An identity matrix is a square matrix that has ones on the main (north-west to south-east) diagonal and zeros everywhere else. For example, the \((2 \times 2)\) identity matrix is

\[\begin{split} I=\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \end{split}\]

The \((n \times n)\) identity matrix is the MULTIPLICATIVE identity matrix for any relevant space of matrices:

  • If \(A\) is an \((n \times n)\) matrix, then \(A I=I A=A\).

  • If \(A\) is an \((m \times n)\) matrix, then \(A I=A\).

  • If \(A\) is an \((n \times m)\) matrix, then \(I A=A\).

Multiplicative inverses#

Only square matrices have any chance of having a multiplicative inverse. Some, but not all, square matrices will have a multiplicative inverse. Suppose that \(A\) is an \((n \times n)\) matrix and \(I\) is the \((n \times n)\) identity matrix.

The \((n \times n)\) matrix \(B\) is the multiplicative inverse (usually just referred to as the inverse) of \(A\) if and only if \(A B=B A=I\).

  • A square matrix that has an inverse is said to be non-singular.

  • A square matrix that does not have an inverse is said to be singular.

  • We will talk about methods for determining whether or not a matrix is non-singular later in this unit.

  • We will talk about methods for finding an inverse matrix, if it exists, later in this unit.

  • Useful fact: “The transpose of the inverse is equal to the inverse of the transpose”.

    • If \(A\) is a non-singular square matrix whose multiplicative inverse is \(A^{-1}\), then we have \(\left(A^{-1}\right)^{T}=\left(A^{T}\right)^{-1}\).

Example

This example comes from [Haeussler Jr and Paul, 1987] (p. 278, Example 1).

  • Note that

\[\begin{split} \begin{aligned} A B &=\left(\begin{array}{ll} 1 & 2 \\ 3 & 7 \end{array}\right) \cdot\left(\begin{array}{cc} 7 & -2 \\ -3 & 1 \end{array}\right) \\[10pt] &=\left(\begin{array}{cc} (1)(7)+(2)(-3) & (1)(-2)+(2)(1) \\ (3)(7)+(7)(-3) & (3)(-2)+(7)(1) \end{array}\right) \\[10pt] &=\left(\begin{array}{cc} 7+(-6) & -2+2 \\ 21+(-21) & -6+7 \end{array}\right) \\[10pt] &=\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \\[10pt] &= I \end{aligned} \end{split}\]

Note that

\[\begin{split} \begin{aligned} B A &=\left(\begin{array}{cc} 7 & -2 \\ -3 & 1 \end{array}\right) \cdot\left(\begin{array}{cc} 1 & 2 \\ 3 & 7 \end{array}\right) \\[10pt] &=\left(\begin{array}{cc} (7)(1)+(-2)(3) & (7)(2)+(-2)(7) \\ (-3)(1)+(1)(3) & (-3)(2)+(1)(7) \end{array}\right) \\[10pt] &=\left(\begin{array}{cc} 7+(-6) & 14+(-14) \\ -3+3 & -6+7 \end{array}\right) \\[10pt] &=\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) \\[10pt] &= I \end{aligned} \end{split}\]

Since \(A B=B A=I\), we can conclude that \(A^{-1}=B\).
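A sketch using `np.linalg.inv` to recover \(B=A^{-1}\) for this example, and to check the "transpose of the inverse equals inverse of the transpose" fact numerically:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 7.0]])
A_inv = np.linalg.inv(A)

print(np.allclose(A_inv, [[7.0, -2.0], [-3.0, 1.0]]))       # True: matches B above
print(np.allclose(A @ A_inv, np.eye(2)))                     # True: A B = I
print(np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T)))   # True: (A^-1)^T = (A^T)^-1
```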

Idempotent matrices#

  • A matrix \(A\) is said to be idempotent if and only if \(A A=A\).

  • Clearly a NECESSARY condition for a matrix \(A\) to be idempotent is that \(A\) be a square matrix. (Exercise: Explain why.)

  • However, this is NOT a SUFFICIENT condition for a matrix to be idempotent. In general, \(A A \neq A\), even for square matrices.

  • Two examples of idempotent matrices that you have already encountered are square null matrices and identity matrices.

  • We will shortly encounter two more examples. These are the Hat matrix \((P)\) and the residual-making matrix \((M=I-P)\) from statistics and econometrics.

Example

  • The \((2 \times 2)\) identity matrix:

\[\begin{split} \begin{aligned} I_{2} \cdot I_{2} &=\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \cdot\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \\ &=\left(\begin{array}{ll} (1)(1)+(0)(0) & (1)(0)+(0)(1) \\ (0)(1)+(1)(0) & (0)(0)+(1)(1) \end{array}\right) \\ &=\left(\begin{array}{ll} 1+0 & 0+0 \\ 0+0 & 0+1 \end{array}\right) \\ &=\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \\ &=I_{2} \end{aligned} \end{split}\]
  • The \((2 \times 2)\) null matrix:

\[\begin{split} \begin{aligned} 0_{(2 \times 2)} \cdot 0_{(2 \times 2)} &=\left(\begin{array}{ll} 0 & 0 \\ 0 & 0 \end{array}\right) \cdot\left(\begin{array}{ll} 0 & 0 \\ 0 & 0 \end{array}\right) \\ &=\left(\begin{array}{cc} (0)(0)+(0)(0) & (0)(0)+(0)(0) \\ (0)(0)+(0)(0) & (0)(0)+(0)(0) \end{array}\right) \\ &=\left(\begin{array}{ll} 0+0 & 0+0 \\ 0+0 & 0+0 \end{array}\right) \\ &=\left(\begin{array}{ll} 0 & 0 \\ 0 & 0 \end{array}\right) \\ &=0_{(2 \times 2)} \end{aligned} \end{split}\]

Econometric application: the classical linear regression model#

One of the simplest models that you will encounter in statistics and econometrics is the classical linear regression model (CLRM). This model takes the form

\[ Y=X \beta+\epsilon \]

where \(Y\) is an \((n \times 1)\) vector of \(n\) observations on a single dependent variable, \(X\) is an \((n \times k)\) matrix of \(n\) observations on \(k\) independent variables, \(\beta\) is a \((k \times 1)\) vector of unknown parameters and \(\epsilon\) is an \((n \times 1)\) vector of random disturbances.

In the CLRM, the joint distribution of the random disturbances, conditional on \(X\), is given by

\[ \epsilon \mid X \sim N\left(0, \sigma^{2} I\right) \]

where \(0\) is an \((n \times 1)\) null vector, \(I\) is the \((n \times n)\) identity matrix and \(\sigma^{2}\) is an unknown parameter.

Matrices associated with the CLRM#

  • The ordinary least squares estimator (and, in the case of the CLRM, maximum likelihood estimator) of the parameter vector \(\beta\) in the CLRM is given by

\[ b=\left(X^{T} X\right)^{-1} X^{T} Y \]
  • The hat matrix for the CLRM is given by

\[ P=X\left(X^{T} X\right)^{-1} X^{T} \]
  • The residual-making matrix for the CLRM is given by

\[\begin{split} \begin{aligned} M & =I-P \\ & =I-X\left(X^{T} X\right)^{-1} X^{T} \end{aligned} \end{split}\]

The hat matrix is symmetric#

\[\begin{split} \begin{aligned} P^{T} & =\left(X\left(X^{T} X\right)^{-1} X^{T}\right)^{T} \\ & =\left(X^{T}\right)^{T}\left(\left(X^{T} X\right)^{-1}\right)^{T}(X)^{T} \\ & =X\left(\left(X^{T} X\right)^{T}\right)^{-1} X^{T} \\ & =X\left((X)^{T}\left(X^{T}\right)^{T}\right)^{-1} X^{T} \\ & =X\left(X^{T} X\right)^{-1} X^{T} \\ & =P \end{aligned} \end{split}\]

The hat matrix is idempotent#

\[\begin{split} \begin{aligned} P P & =\left(X\left(X^{T} X\right)^{-1} X^{T}\right)\left(X\left(X^{T} X\right)^{-1} X^{T}\right) \\ & =X\left(X^{T} X\right)^{-1} X^{T} X\left(X^{T} X\right)^{-1} X^{T} \\ & =X\left(X^{T} X\right)^{-1} I X^{T} \\ & =X\left(X^{T} X\right)^{-1} X^{T} \\ & =P \end{aligned} \end{split}\]

The residual-making matrix is symmetric#

\[\begin{split} \begin{aligned} M^{T} & =(I-P)^{T} \\ & =I^{T}-P^{T} \\ & =I-P \\ & =M \end{aligned} \end{split}\]

The residual-making matrix is idempotent#

\[\begin{split} \begin{aligned} M M & =(I-P)(I-P) \\ & =I I-I P-P I+P P \\ & =I-P-P+P \\ & =I-P \\ & =M \end{aligned} \end{split}\]
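All four of these properties can be spot-checked numerically. Below is a sketch with a randomly generated design matrix (the data are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

P = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
M = np.eye(n) - P                      # residual-making matrix

print(np.allclose(P, P.T))    # True: P is symmetric
print(np.allclose(P @ P, P))  # True: P is idempotent
print(np.allclose(M, M.T))    # True: M is symmetric
print(np.allclose(M @ M, M))  # True: M is idempotent
```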

Determinant of a matrix#

The determinant is a fundamental characteristic of a matrix \(A\), and of the linear operator given by \(A\).

Note

Determinants are only defined for square matrices, so in this section we only consider square matrices.

  • the definition is recursive: the \(2 \times 2\) case is defined directly, and determinants of larger matrices are built from determinants of smaller submatrices

Definition

For a square \(2 \times 2\) matrix the determinant is given by

\[\begin{split} \det \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) = ad - bc \end{split}\]

Notation for the determinant is either \(\det(A)\) or sometimes \(|A|\)

Example

\[\begin{split} \det \left( \begin{array}{cc} 2 & 0 \\ 7 & -1 \\ \end{array} \right) = (2 \times (-1)) - (7 \times 0) = -2 \end{split}\]

We build the definition of the determinant of larger matrices from the \(2 \times 2\) case. Think of the next definitions as an 'induction step'.

Definition

Consider an \(n \times n\) matrix \(A\). Denote by \(A_{ij}\) the \((n-1) \times (n-1)\) submatrix of \(A\) obtained by deleting the \(i\)-th row and \(j\)-th column of \(A\). Then

  • the \((i,j)\)-th minor of \(A\) denoted \(M_{ij}\) is

\[ M_{ij} = \det(A_{ij}) \]
  • the \((i,j)\)-th cofactor of \(A\), denoted \(C_{ij}\), is

\[ C_{ij} = (-1)^{i+j} M_{ij} = (-1)^{i+j} \det(A_{ij}) \]
  • cofactors are signed minors

  • signs alternate in checkerboard pattern

  • for even \(i+j\) minors and cofactors are equal

Definition

The determinant of an \(n \times n\) matrix \(A\) with elements \(\{a_{ij}\}\) is given by

\[ \det(A) = \sum_{i=1}^n a_{ij} C_{ij} = \sum_{j=1}^n a_{ij} C_{ij} \]

for any fixed choice of the column \(j\) (first sum) or the row \(i\) (second sum).

  • given that the cofactors are lower dimension determinants, we can use the same formula to compute determinants of matrices of all sizes
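Because the cofactors are lower-dimensional determinants, the definition translates directly into a recursive function. Here is a sketch expanding along the first row; it is fine for small matrices but runs in factorial time, so it is not how determinants are computed in practice:

```python
import numpy as np

def det(A: np.ndarray) -> float:
    """Determinant via cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    if n == 2:
        return A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # the submatrix A_{1j}
        total += (-1) ** j * A[0, j] * det(minor)              # (-1)^(1+j) in 1-based indexing
    return total

A = np.array([[2.0, 0.0], [7.0, -1.0]])
print(det(A))  # -2.0, matching the example above
```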

Example

Expanding along the first column:

\[\begin{split} \left| \begin{array}{ccc} a_{11},& a_{12},& a_{13} \\ a_{21},& a_{22},& a_{23} \\ a_{31},& a_{32},& a_{33} \end{array} \right| = a_{11} \left| \begin{array}{cc} a_{22},& a_{23} \\ a_{32},& a_{33} \end{array} \right| + (-1)^3 a_{21} \left| \begin{array}{cc} a_{12},& a_{13} \\ a_{32},& a_{33} \end{array} \right| + a_{31} \left| \begin{array}{cc} a_{12},& a_{13} \\ a_{22},& a_{23} \end{array} \right| = \\ a_{11} (a_{22}a_{33} - a_{23}a_{32}) - a_{21} (a_{12}a_{33}-a_{13}a_{32}) + a_{31} (a_{12}a_{23}-a_{13}a_{22}) = \\ a_{11}a_{22}a_{33} + a_{21}a_{13}a_{32} + a_{31}a_{12}a_{23} - a_{11}a_{23}a_{32} - a_{21}a_{12}a_{33} - a_{31}a_{13}a_{22} \end{split}\]

Expanding along the first row instead yields the same value:

\[\begin{split} \left| \begin{array}{ccc} a_{11},& a_{12},& a_{13} \\ a_{21},& a_{22},& a_{23} \\ a_{31},& a_{32},& a_{33} \end{array} \right| = a_{11} \left| \begin{array}{cc} a_{22},& a_{23} \\ a_{32},& a_{33} \end{array} \right| + (-1)^3 a_{12} \left| \begin{array}{cc} a_{21},& a_{23} \\ a_{31},& a_{33} \end{array} \right| + a_{13} \left| \begin{array}{cc} a_{21},& a_{22} \\ a_{31},& a_{32} \end{array} \right| = \\ a_{11} (a_{22}a_{33} - a_{23}a_{32}) - a_{12} (a_{21}a_{33}-a_{23}a_{31}) + a_{13} (a_{21}a_{32}-a_{22}a_{31}) = \\ a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} - a_{13}a_{22}a_{31} \end{split}\]

Fact

The determinant of a \(3 \times 3\) matrix can be computed by the triangles rule:

\[\begin{split} \mathrm{det} \left( \begin{array}{ccc} a_{11},& a_{12},& a_{13} \\ a_{21},& a_{22},& a_{23} \\ a_{31},& a_{32},& a_{33} \end{array} \right) = \begin{array}{l} + a_{11}a_{22}a_{33} \\ + a_{12}a_{23}a_{31} \\ + a_{13}a_{21}a_{32} \\ - a_{13}a_{22}a_{31} \\ - a_{12}a_{21}a_{33} \\ - a_{11}a_{23}a_{32} \end{array} \end{split}\]
_images/det33.png

Examples for quick computation

\[\begin{split} \mathrm{det} \left( \begin{array}{cc} 5,& 1 \\ 0,& 1 \end{array} \right) \quad \quad \mathrm{det} \left( \begin{array}{cc} 2,& 1 \\ 1,& 2 \end{array} \right) \end{split}\]
\[\begin{split} \mathrm{det} \left( \begin{array}{ccc} 1,& 5,& 8 \\ 0,& 2,& 1 \\ 0,& -1,& 2 \end{array} \right) \quad \quad \mathrm{det} \left( \begin{array}{ccc} 1,& 0,& 3 \\ 1,& 1,& 0 \\ 0,& 0,& 8 \end{array} \right) \end{split}\]
\[\begin{split} \mathrm{det} \left( \begin{array}{cccc} 1,& 5,& 8,& 17 \\ 0,& -2,& 13,& 0 \\ 0,& 0,& 1,& 2 \\ 0,& 0,& 0,& 2 \end{array} \right) \quad \quad \mathrm{det} \left( \begin{array}{cccc} 2,& 1,& 0,& 0 \\ 1,& 2,& 0,& 0 \\ 0,& 0,& 2,& 0 \\ 0,& 0,& 0,& 2 \end{array} \right) \end{split}\]
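These can be spot-checked with `np.linalg.det` (a sketch; the triangular and block structure also makes the answers easy to verify by hand):

```python
import numpy as np

print(np.linalg.det(np.array([[5, 1],
                              [0, 1]])))       # ~5:  triangular, product of the diagonal
print(np.linalg.det(np.array([[2, 1],
                              [1, 2]])))       # ~3:  2*2 - 1*1
print(np.linalg.det(np.array([[1, 5, 8],
                              [0, 2, 1],
                              [0, -1, 2]])))   # ~5:  expand along the first column
print(np.linalg.det(np.array([[1, 5, 8, 17],
                              [0, -2, 13, 0],
                              [0, 0, 1, 2],
                              [0, 0, 0, 2]]))) # ~-4: triangular again
```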

Properties of determinants#

Important facts concerning determinants

Fact

If \(I\) is the \(N \times N\) identity, \(A\) and \(B\) are \(N \times N\) matrices and \(\alpha \in \mathbb{R}\), then

  1. \(\det(I) = 1\)

  2. \(\det(A) = \det(A^T)\)

  3. \(\det(AB) = \det(A) \det(B)\)

  4. \(\det(\alpha A) = \alpha^N \det(A)\)

  5. \(\det(A^{-1}) = (\det(A))^{-1}\)

Example

Compute the determinants of the following \((n \times n)\) matrices:

\[\begin{split} \det \left( \begin{array}{cccc} K,& 0,& \dots& 0 \\ 0,& K,& \dots& 0 \\ \vdots& \vdots& \ddots& \vdots \\ 0,& 0,& \dots& K \end{array} \right) = K^n \det(I) = K^n \end{split}\]
\[\begin{split} \det \left( \begin{array}{ccccc} 1,& 1,& 1,& \dots& 1 \\ 1,& 2,& 2,& \dots& 2 \\ 1,& 2,& 3,& \dots& 3 \\ \vdots& \vdots& \vdots& \ddots& \vdots \\ 1,& 2,& 3,& \dots& n \end{array} \right) = \det \left[ \left( \begin{array}{cccc} 1,& 0,& \dots& 0 \\ 1,& 1,& \dots& 0 \\ \vdots& \vdots& \ddots& \vdots \\ 1,& 1,& \dots& 1 \end{array} \right) \left( \begin{array}{cccc} 1,& 1,& \dots& 1 \\ 0,& 1,& \dots& 1 \\ \vdots& \vdots& \ddots& \vdots \\ 0,& 0,& \dots& 1 \end{array} \right) \right] = 1 \end{split}\]
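The five listed properties are easy to spot-check numerically with random matrices (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
A = rng.normal(size=(N, N))
B = rng.normal(size=(N, N))
alpha = 2.5

print(np.isclose(np.linalg.det(np.eye(N)), 1.0))                              # property 1
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))                       # property 2
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # property 3
print(np.isclose(np.linalg.det(alpha * A), alpha**N * np.linalg.det(A)))      # property 4
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))      # property 5
```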

Fact

If some row or column of \(A\) is added to another one after being multiplied by a scalar \(\alpha \ne 0\), then the determinant of the resulting matrix is the same as the determinant of \(A\).

In other words, the determinant is invariant under elementary row or column operations of type 3 (see next lecture).

  • we will talk about this next week

Where determinants are used#

  • Fundamental properties of the linear operators given by the corresponding matrix

  • Inversion of matrices

  • Solving systems of linear equations (Cramer’s rule)

  • Finding eigenvalues and eigenvectors

  • Determining positive definiteness of matrices

  • etc, etc.

Notes from the lecture#

Handwritten notes from the lecture

_images/15.png _images/25.png
Further reading and self-learning