
The Laplace expansion, minors, cofactors and adjoints

by Marco Taboga, PhD

The Laplace expansion is a formula that allows us to express the determinant of a matrix as a linear combination of determinants of smaller matrices, called minors.

The Laplace expansion also allows us to write the inverse of a matrix in terms of its signed minors, called cofactors. The latter are usually collected in a matrix called the adjoint matrix.


Minors

Let us start by defining minors.

Definition Let A be a $K\times K$ matrix (with $K\geq 2$). Denote by $A_{ij}$ the entry of A at the intersection of the i-th row and the $j$-th column. The minor of $A_{ij}$ is the determinant of the sub-matrix obtained from A by deleting its i-th row and its $j$-th column.

We now illustrate the definition with an example.

Example Define the $3\times 3$ matrix [eq1]. Take the entry $A_{11}=4$. The sub-matrix obtained by deleting the first row and the first column is [eq2]. Thus, the minor of $A_{11}$ is [eq3]. The minor of $A_{23}$ is [eq4].
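The computation of a minor can be sketched in a few lines of Python. The $3\times 3$ matrix below is hypothetical (the article's example matrix is not reproduced in this text); it is merely chosen so that $A_{11}=4$, as in the example.

```python
def minor_3x3(A, i, j):
    # Delete row i and column j (0-based indices), leaving a 2x2
    # sub-matrix, then return its determinant a*d - b*c.
    sub = [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]
    (a, b), (c, d) = sub
    return a * d - b * c

# Hypothetical matrix with A_{11} = 4 (1-based in the text, 0-based in the code)
A = [[4, 1, 2],
     [3, 0, 5],
     [1, 2, 1]]

print(minor_3x3(A, 0, 0))  # minor of A_{11}: det([[0, 5], [2, 1]]) = -10
print(minor_3x3(A, 1, 2))  # minor of A_{23}: det([[4, 1], [1, 2]]) = 7
```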

Cofactors

A cofactor is a minor whose sign may have been flipped, depending on the position of the corresponding entry in the matrix.

Definition Let A be a $K\times K$ matrix (with $K\geq 2$). Denote by $M_{ij}$ the minor of an entry $A_{ij}$. The cofactor of $A_{ij}$ is [eq5].

As an example, the pattern of sign changes [eq6] of a $4\times 4$ matrix is [eq7].

Example Consider the $3\times 3$ matrix [eq8]. Take the entry $A_{23}=0$. The minor of $A_{23}$ is [eq9] and its cofactor is [eq10].
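In code, the only change relative to a minor is the sign factor $\left( -1\right)^{i+j}$. A minimal sketch, again on a hypothetical matrix with $A_{23}=0$:

```python
def cofactor_3x3(A, i, j):
    # Cofactor = signed minor. With 0-based indices i, j the factor
    # (-1)**(i + j) equals the 1-based (-1)**((i+1) + (j+1)).
    sub = [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]
    (a, b), (c, d) = sub
    return (-1) ** (i + j) * (a * d - b * c)

# Hypothetical matrix with A_{23} = 0
A = [[1, 2, 3],
     [4, 5, 0],
     [6, 7, 8]]

# Minor of A_{23}: det([[1, 2], [6, 7]]) = 7 - 12 = -5
# Cofactor: (-1)**(2 + 3) * (-5) = 5
print(cofactor_3x3(A, 1, 2))  # 5
```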

The expansion

We are now ready to present the Laplace expansion.

Proposition Let A be a $K\times K$ matrix (with $K\geq 2$). Denote by $C_{ij}$ the cofactor of an entry $A_{ij}$. Then, for any row i, the following row expansion holds: [eq11]. Similarly, for any column $j$, the following column expansion holds: [eq12].

Proof

Let us start by proving the row expansion [eq13]. Denote by $A_{i\bullet}$ the i-th row of A. We can write [eq14] where $e_{j}$ is the $j$-th vector of the standard basis of $\mathbb{R}^{K}$, that is, the vector whose $j$-th entry is equal to 1 and whose other entries are all equal to 0. Now, denote by $A^{ij}$ the matrix obtained from A by substituting its i-th row with $e_{j}$: [eq15].

We can write the i-th row of A as a linear combination as follows: [eq16]. Since the determinant is linear in each row, we have [eq17]. Now, the matrix $A^{ij}$ can be transformed into the matrix [eq18] by performing i row interchanges and $j$ column interchanges. As a consequence, by the properties of the determinants of elementary matrices, we have [eq19].

By the definition of determinant, we have [eq20] where: in step A we have used the fact that transposition does not change the determinant; in step B we have used the fact that the only non-zero entry of the first column of $B^{ij}$ is the first one, so that [eq21] for all [eq22] and [eq23] for [eq24]; in step C, $M_{ij}$ is the minor of $A_{ij}$: by looking at the structure of $B^{ij}$ above, it is clear that, after excluding the first row and the first column of $B^{ij}$ from the computation of its determinant, we are computing the determinant of the matrix obtained from A by deleting its i-th row and its $j$-th column. Thus, [eq25], where $C_{ij}$ is the cofactor of $A_{ij}$. The proof for column expansions is analogous.

In other words, the determinant can be computed by summing all the entries of an arbitrarily chosen row (column) multiplied by their respective cofactors.
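This recipe translates directly into a recursive algorithm. The sketch below expands along the first row at every level of the recursion; the test matrices are hypothetical.

```python
def det(A):
    # Determinant via Laplace expansion along the first row:
    # sum over j of (-1)**j * A[0][j] * (minor of A[0][j]).
    if len(A) == 1:
        return A[0][0]
    return sum(
        (-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
        for j in range(len(A))
    )

# Hypothetical examples
print(det([[1, 2], [3, 4]]))   # 1*4 - 2*3 = -2
print(det([[2, 0, 1],
           [1, 3, 0],
           [0, 1, 4]]))        # 2*(12 - 0) - 0 + 1*(1 - 0) = 25
```

Note that this recursion takes on the order of $K!$ operations, so in practice determinants of large matrices are computed by other means (e.g. row reduction); the expansion is mainly a theoretical and small-matrix tool.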

Example Define the matrix[eq26]We can use the Laplace expansion along the first column to compute its determinant:[eq27]

Example Define the matrix[eq28]We can use the Laplace expansion along the third row to compute its determinant:[eq29]

Expansions along the wrong row or column

An interesting and useful fact is that, while the Laplace expansion gives [eq30], we have [eq31] when $k\neq i$. In other words, if we multiply the elements of row i by the cofactors of a different row k and add them up, we get zero.

Proof

Define a matrix $B$ whose rows are all equal to the corresponding rows of A, except for the k-th, which is equal to the i-th row of A. Thus, $B$ has two identical rows; as a consequence, it is singular and its determinant is zero. Denote by $K_{ij}$ the cofactor of $B_{ij}$. Then, [eq32] where: in step A we have used the fact that the i-th row of A is equal to the i-th row of $B$; in step B we have used the fact that, although the k-th row of A is different from the k-th row of $B$, we have $C_{kj}=K_{kj}$ because row k is deleted when forming the sub-matrices used to compute these cofactors.

The same result holds for columns: [eq33] when $k\neq i$. The proof is analogous to the previous one.
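Both facts are easy to check numerically. A minimal Python sketch on a hypothetical $3\times 3$ matrix, comparing the expansion along the correct row with an expansion along the wrong row:

```python
def det(A):
    # Determinant via Laplace expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cofactor(A, i, j):
    # Signed minor (0-based indices)
    return (-1) ** (i + j) * det([row[:j] + row[j + 1:]
                                  for r, row in enumerate(A) if r != i])

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]  # hypothetical

# Entries of row 0 times the cofactors of row 0: the Laplace expansion, det(A)
right = sum(A[0][j] * cofactor(A, 0, j) for j in range(3))
# Entries of row 0 times the cofactors of row 1: the "wrong" row, always 0
wrong = sum(A[0][j] * cofactor(A, 1, j) for j in range(3))
print(right, wrong)  # 25 0
```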

Cofactor matrix

We now define the cofactor matrix (or matrix of cofactors).

Definition Let A be a $K\times K$ matrix. Denote by $C_{ij}$ the cofactor of $A_{ij}$ (defined above). Then, the $K\times K$ matrix $C$ whose $\left( i,j\right)$-th entry is equal to $C_{ij}$ for every i and $j$ is called the cofactor matrix of A.

Adjoint matrix

The adjoint matrix (or adjugate matrix) is the transpose of the matrix of cofactors.

Definition Let A be a $K\times K$ matrix and $C$ its cofactor matrix. The adjoint matrix of A, denoted by [eq34], is [eq35].
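Building the adjoint is mechanical: compute every cofactor, then transpose. A minimal sketch (the matrix in the example is hypothetical):

```python
def det(A):
    # Determinant via Laplace expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cofactor(A, i, j):
    # Signed minor (0-based indices)
    return (-1) ** (i + j) * det([row[:j] + row[j + 1:]
                                  for r, row in enumerate(A) if r != i])

def adjugate(A):
    # Transpose of the cofactor matrix: entry (i, j) is the cofactor C_{ji}
    K = len(A)
    return [[cofactor(A, j, i) for j in range(K)] for i in range(K)]

print(adjugate([[1, 2], [3, 4]]))  # [[4, -2], [-3, 1]]
```

For a $2\times 2$ matrix this reproduces the familiar pattern: swap the diagonal entries and negate the off-diagonal ones.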

Adjoint, determinant and inverse

The following proposition is a direct consequence of the Laplace expansion.

Proposition Let A be a $K\times K$ matrix and [eq36] its adjoint. Then, [eq37] where I is the $K\times K$ identity matrix.

Proof

Define [eq38]. By the definition of matrix multiplication, the $\left( i,j\right)$-th entry of $R$ is [eq39] where in step A we have used the fact that the adjoint is the transpose of the cofactor matrix. When $i=j$, the expression in step A is the Laplace expansion of A and is therefore equal to [eq40]. When $i\neq j$, it is an expansion along the wrong row and is therefore equal to 0. Thus, [eq41]. When [eq42], we have [eq43], which is a column expansion. Thus, by the same arguments used previously, we have [eq44].

A consequence of the previous proposition is the following.

Proposition Let A be a $K\times K$ invertible matrix and [eq45] its adjoint. Then [eq46].

Proof

Since A is invertible, [eq47]. Then, we can rewrite the result [eq48] as [eq49]. Thus, by the definition of inverse matrix, the matrix [eq50] is the inverse of A.
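Putting the pieces together gives a complete, if inefficient, matrix inversion routine. The sketch below uses exact rational arithmetic via Python's `fractions` module; the example matrix is hypothetical.

```python
from fractions import Fraction

def det(A):
    # Determinant via Laplace expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cofactor(A, i, j):
    # Signed minor (0-based indices)
    return (-1) ** (i + j) * det([row[:j] + row[j + 1:]
                                  for r, row in enumerate(A) if r != i])

def adjugate(A):
    # Transpose of the cofactor matrix
    K = len(A)
    return [[cofactor(A, j, i) for j in range(K)] for i in range(K)]

def inverse(A):
    # A^{-1} = (1 / det A) * adj A, computed exactly with Fractions
    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular")
    return [[Fraction(entry, d) for entry in row] for row in adjugate(A)]

A = [[1, 2], [3, 4]]  # hypothetical invertible matrix, det = -2
Ainv = inverse(A)     # [[-2, 1], [3/2, -1/2]]
```

Like the expansion itself, this is a theoretical formula rather than a practical algorithm: for large matrices, Gaussian elimination is far cheaper.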

Solved exercises

Below you can find some exercises with explained solutions.

Exercise 1

Define the matrix

[eq51]Compute the determinant of A by using Laplace expansion along its third column.

Solution

The expansion is[eq52]

Exercise 2

Define[eq53]Compute the adjoint of A, use it to derive the inverse of A, and verify that the matrix thus obtained is indeed the inverse of A.

Solution

The determinant of A is [eq54]. Thus, A is invertible. Note that the sub-matrices obtained by deleting one row and one column of A are $1\times 1$. Therefore, the matrix of minors of A is [eq55] and the matrix of cofactors is [eq56]. The adjoint is obtained by transposing the matrix of cofactors: [eq57]. The inverse can be computed as [eq58]. Let us multiply it by A in order to check that it is indeed its inverse: [eq59]

How to cite

Please cite as:

Taboga, Marco (2021). "The Laplace expansion, minors, cofactors and adjoints", Lectures on matrix algebra. https://www.statlect.com/matrix-algebra/Laplace-expansion-minors-cofactors-adjoints.
