Definition
Given a square matrix $A$ of order $n$, the inverse of $A$, denoted $A^{-1}$, is the matrix such that:
\[A \cdot A^{-1} = A^{-1} \cdot A = I\]where $I$ is the identity matrix of order $n$. When such a matrix exists, it is unique. A square matrix that admits an inverse is called invertible or nonsingular; a matrix that does not admit an inverse is called singular.
The inverse matrix represents the linear transformation that reverses the effect of the original transformation. If $A$ maps a vector $\mathbf{x}$ to $\mathbf{b}$, that is $A \mathbf{x} = \mathbf{b}$, then $A^{- 1}$ maps $\mathbf{b}$ back to $\mathbf{x}$:
\[A \mathbf{x} = \mathbf{b} \Longrightarrow \mathbf{x} = A^{- 1} \mathbf{b}\]This is precisely the principle underlying the solution of systems of linear equations via the inverse matrix.
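This round trip $\mathbf{x} \mapsto A\mathbf{x} \mapsto A^{-1}(A\mathbf{x})$ can be checked numerically; the following sketch uses NumPy, with an arbitrary invertible matrix chosen for illustration:

```python
import numpy as np

# An arbitrary invertible 2x2 matrix (det = 5) and a vector x.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

b = A @ x                    # A maps x to b
A_inv = np.linalg.inv(A)
x_back = A_inv @ b           # A^{-1} maps b back to x

print(np.allclose(x_back, x))             # True
print(np.allclose(A @ A_inv, np.eye(2)))  # A A^{-1} = I: True
```

In numerical work, `np.linalg.solve(A, b)` is preferred over forming the inverse explicitly, but the explicit inverse makes the reversal of the transformation visible.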
A square matrix $A$ is invertible if and only if its determinant is nonzero:
\[A \text{ is invertible} \Longleftrightarrow \det(A) \neq 0\]The condition $\det(A) \neq 0$ is both necessary and sufficient for invertibility. It is equivalent to requiring that the rows (or columns) of $A$ are linearly independent, and that the rank of $A$ equals $n$. The set of all invertible matrices of order $n$ forms a group under matrix multiplication, known as the general linear group $GL(n, \mathbb{R})$, discussed in the entry on groups.
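The determinant test translates directly into code; a minimal sketch (the helper name and tolerance are illustrative, since floating-point determinants are rarely exactly zero):

```python
import numpy as np

def is_invertible(M, tol=1e-12):
    # det(M) != 0 <=> M is invertible; tol guards against rounding error
    return abs(np.linalg.det(M)) > tol

A = np.array([[3.0, 0.0],
              [2.0, 1.0]])    # det = 3: invertible
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # rows linearly dependent, det = 0: singular

print(is_invertible(A))  # True
print(is_invertible(S))  # False
```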
Properties of the inverse
The inverse matrix satisfies the following properties, for square matrices $A$ and $B$ of order $n$:
- $( A^{- 1} )^{- 1} = A$. The inverse of the inverse is the original matrix.
- $( A B )^{- 1} = B^{- 1} A^{- 1}$. The inverse of a product reverses the order of the factors.
- $( A^{T} )^{- 1} = ( A^{- 1} )^{T}$. The inverse of the transpose equals the transpose of the inverse.
- $\det(A^{-1}) = \frac{1}{\det(A)}$. The determinant of the inverse is the reciprocal of the determinant of $A$.
The reversal of order in $( A B )^{- 1} = B^{- 1} A^{- 1}$ is necessary for the same reason as in the transpose: matrix multiplication is not commutative, so inverting a product requires inverting each factor and reversing their order.
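The four properties can be verified numerically; a sketch with two arbitrary invertible matrices chosen for illustration:

```python
import numpy as np

# Two invertible 3x3 matrices (det(A) = 13, det(B) = 5).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [2.0, 0.0, 1.0]])
Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)

print(np.allclose(np.linalg.inv(Ai), A))           # (A^-1)^-1 = A
print(np.allclose(np.linalg.inv(A @ B), Bi @ Ai))  # (AB)^-1 = B^-1 A^-1
print(np.allclose(np.linalg.inv(A.T), Ai.T))       # (A^T)^-1 = (A^-1)^T
print(np.isclose(np.linalg.det(Ai), 1 / np.linalg.det(A)))
```

Swapping the factors in the second check, i.e. comparing against `Ai @ Bi`, fails in general, which is exactly the non-commutativity point made above.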
Computing the inverse: the cofactor method
The inverse of a square matrix $A$ of order $n$, when it exists, can be computed using the cofactor method. Given a square matrix $A = ( a_{i j} )$, the minor $M_{i j}$ is the determinant of the $( n - 1 ) \times ( n - 1 )$ submatrix obtained by deleting the $i$-th row and $j$-th column of $A$. The cofactor $C_{i j}$ is defined as:
\[C_{ij} = (-1)^{i+j} \cdot M_{ij}\]The cofactor matrix $C$ is the matrix whose entry in position $(i, j)$ is the cofactor $C_{ij}$. The transpose of the cofactor matrix, denoted $C^{T}$, is called the adjugate of $A$ and written $\operatorname{adj}(A)$. The inverse is then given by:
\[A^{-1} = \frac{1}{\det(A)} C^{T} = \frac{1}{\det(A)} \operatorname{adj}(A)\]The computation proceeds as follows: calculate the cofactor $C_{ij}$ for every entry of $A$, assemble the cofactor matrix $C$, take its transpose to obtain $\operatorname{adj}(A)$, and divide every entry by $\det(A)$.
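The procedure translates almost line by line into code; a didactic sketch in NumPy (the function name is illustrative, and `np.linalg.det` is used to evaluate the minors):

```python
import numpy as np

def cofactor_inverse(A):
    """Inverse via the cofactor method: A^{-1} = adj(A) / det(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("matrix is singular: no inverse exists")
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor M_ij: determinant after deleting row i and column j.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / det_A  # the transpose of C is adj(A)

A = np.array([[ 3.0, 0.0, 0.0],
              [ 2.0, 1.0, 0.0],
              [-1.0, 4.0, 2.0]])
print(np.allclose(cofactor_inverse(A) @ A, np.eye(3)))  # True
```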
Example
Consider the following matrix:
\[A = \begin{pmatrix} 3 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 4 & 2 \end{pmatrix}\]This is a lower triangular matrix. Its determinant is the product of the diagonal entries:
\[\det(A) = 3 \cdot 1 \cdot 2 = 6\]Since $\det(A) = 6 \neq 0$, the matrix is invertible. The nine cofactors are:
\[\begin{aligned}
C_{11} &= (+1) \det\begin{pmatrix}1 & 0 \\ 4 & 2\end{pmatrix} = 2, &
C_{12} &= (-1) \det\begin{pmatrix}2 & 0 \\ -1 & 2\end{pmatrix} = -4, &
C_{13} &= (+1) \det\begin{pmatrix}2 & 1 \\ -1 & 4\end{pmatrix} = 9, \\
C_{21} &= (-1) \det\begin{pmatrix}0 & 0 \\ 4 & 2\end{pmatrix} = 0, &
C_{22} &= (+1) \det\begin{pmatrix}3 & 0 \\ -1 & 2\end{pmatrix} = 6, &
C_{23} &= (-1) \det\begin{pmatrix}3 & 0 \\ -1 & 4\end{pmatrix} = -12, \\
C_{31} &= (+1) \det\begin{pmatrix}0 & 0 \\ 1 & 0\end{pmatrix} = 0, &
C_{32} &= (-1) \det\begin{pmatrix}3 & 0 \\ 2 & 0\end{pmatrix} = 0, &
C_{33} &= (+1) \det\begin{pmatrix}3 & 0 \\ 2 & 1\end{pmatrix} = 3.
\end{aligned}\]The cofactor matrix $C$ is assembled from the nine cofactors computed above:
\[C = \begin{pmatrix} 2 & -4 & 9 \\ 0 & 6 & -12 \\ 0 & 0 & 3 \end{pmatrix}\]Taking the transpose of $C$ gives the adjugate of $A$:
\[\operatorname{adj}(A) = C^{T} = \begin{pmatrix} 2 & 0 & 0 \\ -4 & 6 & 0 \\ 9 & -12 & 3 \end{pmatrix}\]Dividing by $\det(A) = 6$:
\[A^{-1} = \frac{1}{6} \begin{pmatrix} 2 & 0 & 0 \\ -4 & 6 & 0 \\ 9 & -12 & 3 \end{pmatrix} = \begin{pmatrix} \frac{1}{3} & 0 & 0 \\ -\frac{2}{3} & 1 & 0 \\ \frac{3}{2} & -2 & \frac{1}{2} \end{pmatrix}\]The cofactor method is exact but computationally expensive for large matrices, with complexity $O(n!)$ due to the determinant evaluations involved. In numerical practice, the inverse is typically computed via Gaussian elimination or LU decomposition, which achieve $O(n^{3})$ complexity.
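The worked example can be cross-checked against NumPy, whose `np.linalg.inv` relies on LAPACK's LU-based routines rather than cofactor expansion:

```python
import numpy as np

A = np.array([[ 3.0, 0.0, 0.0],
              [ 2.0, 1.0, 0.0],
              [-1.0, 4.0, 2.0]])

A_inv = np.linalg.inv(A)  # LU-based, O(n^3)

# The inverse computed by hand above.
expected = np.array([[ 1/3,  0.0, 0.0],
                     [-2/3,  1.0, 0.0],
                     [ 3/2, -2.0, 0.5]])
print(np.allclose(A_inv, expected))  # True
```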