
FP3 Chapter 6: Further Matrix Algebra

FP3 Lecture Notes: Further Topics in Matrix Algebra


In previous courses, we have studied 2×2 matrices and their properties. In this lecture, we extend our understanding to 3×3 matrices, which are essential for solving systems of linear equations in three variables, for 3D transformations, and for various applications in physics and engineering.

6.2.1 Systems of Linear Equations and Matrices


Definition. The determinant of a 2×2 matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is defined as:

$$\det(A) = |A| = ad - bc$$
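As a quick sketch, the 2×2 determinant is a one-line computation (the function name `det2` is my own, not part of any standard library):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix ((a, b), (c, d))."""
    return a * d - b * c

# For A = ((3, 1), (4, 2)): det(A) = 3*2 - 1*4 = 2
print(det2(3, 1, 4, 2))  # 2
```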

Theorem. For a system of linear equations represented by the matrix equation $A\mathbf{x} = \mathbf{b}$:

  • If $\det(A) \neq 0$, the system has exactly one solution.
  • If $\det(A) = 0$ and the system is consistent, it has infinitely many solutions.
  • If $\det(A) = 0$ and the system is inconsistent, it has no solution.
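The three cases above can be sketched as a classifier for the system $ax + by = e$, $cx + dy = f$. This is an illustrative sketch, not a general solver; when $\det(A) = 0$, the system is consistent exactly when the two equations are proportional, which the cross-multiplication test below checks:

```python
def classify_2x2(a, b, c, d, e, f):
    """Classify the system ax + by = e, cx + dy = f by its determinant."""
    det = a * d - b * c
    if det != 0:
        return "unique solution"
    # det == 0: the left-hand sides are proportional, so the system is
    # consistent iff the right-hand sides are in the same proportion
    if a * f == c * e and b * f == d * e:
        return "infinitely many solutions"
    return "no solution"

print(classify_2x2(1, 2, 3, 4, 5, 6))  # unique solution (det = -2)
print(classify_2x2(1, 1, 2, 2, 3, 6))  # infinitely many solutions
print(classify_2x2(1, 1, 2, 2, 3, 7))  # no solution (parallel lines)
```

Geometrically, the three outcomes correspond to two lines that cross, coincide, or are parallel and distinct.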

[Figure: unique solution vs parallel lines]

6.3 3×3 Matrices and Systems of Linear Equations


6.3.1 Representation of Systems with Three Variables


[Figure: three planes intersecting]

Definition. For a 3×3 matrix $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$, the determinant can be calculated by expanding along the first row:

$$\det(A) = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$
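The expansion above translates directly into code. A minimal sketch, with matrices as lists of rows and helper names of my own choosing:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(m, i, j):
    """Submatrix of m with row i and column j deleted."""
    return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det3(m):
    """3x3 determinant by cofactor expansion along the first row.
    The (-1)**j factor gives the alternating + - + signs."""
    return sum((-1) ** j * m[0][j] * det2(minor(m, 0, j)) for j in range(3))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
# 1*(5*10 - 6*8) - 2*(4*10 - 6*7) + 3*(4*8 - 5*7) = 2 + 4 - 9 = -3
print(det3(A))  # -3
```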

Theorem. As with 2×2 matrices, the determinant of a 3×3 matrix determines the nature of solutions:

  • If $\det(A) \neq 0$, the system has exactly one solution.
  • If $\det(A) = 0$ and the system is consistent, it has infinitely many solutions.
  • If $\det(A) = 0$ and the system is inconsistent, it has no solution.

6.3.3 Geometric Interpretation of the Determinant


[Figure: parallelepiped]

6.4.1 Cofactor Method for Finding the Inverse


Definition. The cofactor $C_{ij}$ of an element $a_{ij}$ in a matrix is $(-1)^{i+j}$ times the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column.
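A sketch of the cofactor method for a 3×3 inverse, using $A^{-1} = \frac{1}{\det A}\,\text{adj}(A)$, where the adjugate is the transposed matrix of cofactors. Helper names are my own; `Fraction` keeps the entries exact:

```python
from fractions import Fraction

def minor_det(m, i, j):
    """Determinant of the 2x2 submatrix with row i and column j deleted."""
    rows = [r for r in range(3) if r != i]
    cols = [c for c in range(3) if c != j]
    (a, b), (p, q) = rows, cols
    return m[a][p] * m[b][q] - m[a][q] * m[b][p]

def cofactor(m, i, j):
    """C_ij = (-1)^(i+j) times the minor determinant."""
    return (-1) ** (i + j) * minor_det(m, i, j)

def inverse3(m):
    """Inverse via the adjugate: entry (i, j) of A^{-1} is C_ji / det(A)."""
    det = sum(m[0][j] * cofactor(m, 0, j) for j in range(3))
    return [[Fraction(cofactor(m, j, i), det) for j in range(3)] for i in range(3)]

A = [[1, 2, 3],
     [0, 1, 4],
     [5, 6, 0]]
print(inverse3(A))  # entries -24, 18, 5 / 20, -15, -4 / -5, 4, 1
```

Note the transpose step hidden in `cofactor(m, j, i)`: forgetting to transpose the cofactor matrix is the most common error with this method.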

[Figure: cofactor signs]

6.4.2 Application to Solving Linear Equations


Definition. The transpose of a matrix $A$, denoted by $A^T$, is obtained by reflecting the elements of $A$ across its main diagonal.
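In code, transposing swaps row and column indices; a one-line sketch:

```python
def transpose(m):
    """Transpose: entry (i, j) of A^T is entry (j, i) of A."""
    return [list(row) for row in zip(*m)]

print(transpose([[1, 2], [3, 4]]))  # [[1, 3], [2, 4]]
```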

Definition. A square matrix $Q$ is orthogonal if its transpose equals its inverse:

$$Q^T = Q^{-1} \quad \text{or equivalently} \quad Q^T Q = Q Q^T = I$$
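To illustrate, a 2D rotation matrix satisfies $Q^T Q = I$; a quick numerical check (the `matmul` helper is my own, and the comparison uses a small tolerance because of floating-point rounding):

```python
import math

def matmul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = math.pi / 3  # rotation by 60 degrees
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
QT = [list(row) for row in zip(*Q)]

P = matmul(QT, Q)  # should be (numerically) the identity matrix
is_identity = all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
                  for i in range(2) for j in range(2))
print(is_identity)  # True
```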

[Figure: orthogonal vectors]

Theorem. For any orthogonal matrix $Q$:

  • $\det(Q) = \pm 1$
  • If $\det(Q) = 1$, $Q$ represents a rotation
  • If $\det(Q) = -1$, $Q$ represents a reflection, possibly combined with a rotation

6.6 Matrix Transformations of Lines and Planes


6.7.1 Introduction: Why Eigenvalues Matter


Definition. For a square matrix $A$, a non-zero vector $\mathbf{v}$ is an eigenvector of $A$ if there exists a scalar $\lambda$ (the eigenvalue) such that:

$$A\mathbf{v} = \lambda\mathbf{v}$$
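For example, $\mathbf{v} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ is an eigenvector of $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ with eigenvalue $\lambda = 3$, since $A\mathbf{v} = \begin{pmatrix} 3 \\ 3 \end{pmatrix} = 3\mathbf{v}$. A quick check (the `matvec` helper is illustrative):

```python
def matvec(m, v):
    """Matrix-vector product."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

A = [[2, 1], [1, 2]]
v = [1, 1]
print(matvec(A, v))  # [3, 3], i.e. 3 * v, so lambda = 3
```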

6.7.3 Finding Eigenvalues and Eigenvectors


Eigenvalues and Eigenvectors of 2×2 Matrices

[Figure: eigenvectors visualization]
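For a 2×2 matrix, the characteristic equation $\det(A - \lambda I) = \lambda^2 - (\operatorname{tr} A)\lambda + \det A = 0$ is a quadratic in $\lambda$ and can be solved directly. A sketch under that formula (the function name is my own; complex eigenvalues are omitted for brevity):

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Real eigenvalues of ((a, b), (c, d)) via the characteristic quadratic
    lambda^2 - (trace)*lambda + det = 0."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        return []  # complex eigenvalues; not handled in this sketch
    r = math.sqrt(disc)
    return [(tr + r) / 2, (tr - r) / 2]

# For A = ((2, 1), (1, 2)): trace 4, det 3, so lambda = 3 and 1
print(eigenvalues_2x2(2, 1, 1, 2))  # [3.0, 1.0]
```

Each eigenvalue is then substituted back into $(A - \lambda I)\mathbf{v} = \mathbf{0}$ to find the corresponding eigenvectors.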

Eigenvalues and Eigenvectors of 3×3 Matrices