In previous courses, we have studied 2×2 matrices and their properties. In this lecture, we’ll extend our understanding to 3×3 matrices, which are crucial in solving systems of linear equations with three variables, 3D transformations, and various applications in physics and engineering.
Applications of 3×3 Matrices
3×3 matrices have numerous applications across different fields:
Solving systems of three linear equations with three variables
Representing 3D transformations in computer graphics and animation
Analyzing structural problems in engineering
Quantum mechanics calculations in physics
Economic models with three interrelated factors
Network analysis in computer science
Matrix Representation of Linear Systems
A system of two linear equations with two variables can be represented as:
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 &= b_1 \\
a_{21}x_1 + a_{22}x_2 &= b_2
\end{aligned}$$

This can be written in matrix form as $A\mathbf{x} = \mathbf{b}$, where $A$ is the coefficient matrix, $\mathbf{x}$ is the vector of variables, and $\mathbf{b}$ is the constant vector.
Definition. The determinant of a 2×2 matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is defined as:

$$\det(A) = |A| = ad - bc$$
Theorem. For a system of linear equations represented by the matrix equation $A\mathbf{x} = \mathbf{b}$:

If $\det(A) \neq 0$, the system has exactly one solution.
If $\det(A) = 0$ and the system is consistent, it has infinitely many solutions.
If $\det(A) = 0$ and the system is inconsistent, it has no solution.
Computing and Using the Inverse
For a 2×2 matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the inverse (when $\det(A) \neq 0$) is given by:

$$A^{-1} = \frac{1}{ad-bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$

If $\det(A) \neq 0$, then the solution to the system $A\mathbf{x} = \mathbf{b}$ is $\mathbf{x} = A^{-1}\mathbf{b}$.
Example
Solve the system:
$$\begin{aligned}
3x + 2y &= 7 \\
x - y &= 2
\end{aligned}$$

Solution: $A = \begin{pmatrix} 3 & 2 \\ 1 & -1 \end{pmatrix}$, $\mathbf{b} = \begin{pmatrix} 7 \\ 2 \end{pmatrix}$.

$\det(A) = -3 - 2 = -5 \neq 0$, so there is a unique solution.

$$A^{-1} = \frac{1}{-5} \begin{pmatrix} -1 & -2 \\ -1 & 3 \end{pmatrix} = \begin{pmatrix} \frac{1}{5} & \frac{2}{5} \\ \frac{1}{5} & -\frac{3}{5} \end{pmatrix}, \qquad \mathbf{x} = A^{-1}\mathbf{b} = \begin{pmatrix} \frac{11}{5} \\ \frac{1}{5} \end{pmatrix}$$

So $x = \frac{11}{5}$ and $y = \frac{1}{5}$.
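The 2×2 inverse formula and this solve are easy to check numerically. A minimal NumPy sketch, using the values from the example above (the variable names are ours):

```python
import numpy as np

# Coefficient matrix and constant vector from the example
A = np.array([[3.0, 2.0],
              [1.0, -1.0]])
b = np.array([7.0, 2.0])

# det(A) = ad - bc
det_A = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
assert det_A != 0  # unique solution exists

# 2x2 inverse formula: (1/det) * [[d, -b], [-c, a]]
A_inv = (1.0 / det_A) * np.array([[A[1, 1], -A[0, 1]],
                                  [-A[1, 0], A[0, 0]]])

x = A_inv @ b  # x = A^{-1} b, should equal (11/5, 1/5)
```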
Key Properties of Inverses
For any invertible matrix $A$:

$AA^{-1} = A^{-1}A = I$
$(A^{-1})^{-1} = A$
$(AB)^{-1} = B^{-1}A^{-1}$ (note the reversed order)
$(A^T)^{-1} = (A^{-1})^T$
Matrix Representation of 3D Linear Systems
A system of three linear equations with three variables can be written as $A\mathbf{x} = \mathbf{b}$, where $A$ is a 3×3 matrix.
Geometrically, each equation represents a plane in 3D space, and the solution(s) to the system represent the point(s) where all three planes intersect.
Definition. For a 3×3 matrix $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$, the determinant can be calculated by expanding along the first row:

$$\det(A) = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$
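The first-row expansion above translates directly into code. A short NumPy sketch (the helper name and test matrix are ours), compared against the built-in determinant:

```python
import numpy as np

def det3_first_row(A):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    def minor(i, j):
        # 2x2 determinant of the submatrix with row i and column j deleted
        rows = [r for r in range(3) if r != i]
        cols = [c for c in range(3) if c != j]
        sub = A[np.ix_(rows, cols)]
        return sub[0, 0] * sub[1, 1] - sub[0, 1] * sub[1, 0]
    return A[0, 0] * minor(0, 0) - A[0, 1] * minor(0, 1) + A[0, 2] * minor(0, 2)

A = np.array([[2.0, 1.0, 3.0],
              [0.0, -1.0, 2.0],
              [1.0, 0.0, 4.0]])
d = det3_first_row(A)
```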
Optional: Further Methods for Computing 3×3 Determinants
There are several methods to compute the determinant of a 3×3 matrix:
Expansion by minors along any row or column:
$$\det(A) = (-1)^{i+1}a_{i1}M_{i1} + (-1)^{i+2}a_{i2}M_{i2} + (-1)^{i+3}a_{i3}M_{i3}$$

for any fixed row $i$, where $M_{ij}$ is the minor of element $a_{ij}$ (the determinant of the 2×2 matrix obtained by deleting row $i$ and column $j$), and the sign factor $(-1)^{i+j}$ follows a checkerboard pattern. The analogous formula holds for expansion down any column.
Theorem. As with 2×2 matrices, the determinant of a 3×3 matrix determines the nature of solutions:
If $\det(A) \neq 0$, the system has exactly one solution.
If $\det(A) = 0$ and the system is consistent, it has infinitely many solutions.
If $\det(A) = 0$ and the system is inconsistent, it has no solution.
Connecting Algebra and Geometry
When we view the columns of the matrix $A$ as vectors in 3D space, these vectors define a parallelepiped. If $\det(A) \neq 0$, these vectors are linearly independent and span the entire 3D space, meaning there is exactly one way to express $\mathbf{b}$ as a linear combination of the column vectors.

When $\det(A) = 0$, the parallelepiped has zero volume, meaning the column vectors are linearly dependent and lie in a common plane or line.
Volume Interpretation of Determinant
For a 3×3 matrix, the absolute value of the determinant gives the volume of the parallelepiped formed by its column vectors.
When $\det(A) = 0$, the parallelepiped has zero volume, meaning the three vectors are linearly dependent.
Exercise: Determinants and Systems
(a) Calculate the determinant of $\begin{pmatrix} 2 & 1 & 3 \\ 0 & -1 & 2 \\ 1 & 0 & 4 \end{pmatrix}$.

(b) Using the determinant, determine whether the system below has a unique solution, infinitely many solutions, or no solution:

$$\begin{aligned}
2x + y + 3z &= 4 \\
-x + 2y - z &= 3 \\
x + 3y + 2z &= 1
\end{aligned}$$

(c) Interpret geometrically what it means if the determinant of a 3×3 matrix is zero.
Definition. The cofactor $C_{ij}$ of an element $a_{ij}$ in a matrix is $(-1)^{i+j}$ times the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column.
Adjugate and Inverse Matrix
The adjugate (or classical adjoint) of a matrix $A$ is the transpose of the cofactor matrix:

$$\text{adj}(A) = \begin{pmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{pmatrix}$$

For a 3×3 matrix $A$ with $\det(A) \neq 0$:

$$A^{-1} = \frac{1}{\det(A)} \cdot \text{adj}(A)$$
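The adjugate formula can be sketched in NumPy as follows (the function name and test matrix are ours; `np.linalg.det` is used for the 2×2 cofactor determinants):

```python
import numpy as np

def inverse_via_adjugate(A):
    """A^{-1} = adj(A)/det(A), where adj(A) is the transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # Cofactor C_ij = (-1)^(i+j) times the minor M_ij
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    det_A = np.linalg.det(A)
    assert abs(det_A) > 1e-12, "matrix is singular"
    return C.T / det_A  # adj(A) = C^T

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 2.0, 1.0],
              [2.0, 1.0, 2.0]])
A_inv = inverse_via_adjugate(A)
```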
When is a Matrix Not Invertible?
A matrix is not invertible (singular) if and only if its determinant is zero. This happens when:
One row or column is a multiple of another
One row or column contains only zeros
One row or column is a linear combination of others
Geometrically, a singular 3×3 matrix represents a transformation that collapses 3D space onto a plane or a line.
Solving Systems Using Matrix Inverse
Once we have the inverse, we can solve the system using $\mathbf{x} = A^{-1}\mathbf{b}$.
Example
Solve the system:
$$\begin{aligned}
x + 2y + z &= 5 \\
3x + 2y + z &= 7 \\
2x + y + 2z &= 8
\end{aligned}$$

Here $A = \begin{pmatrix} 1 & 2 & 1 \\ 3 & 2 & 1 \\ 2 & 1 & 2 \end{pmatrix}$ and $\mathbf{b} = \begin{pmatrix} 5 \\ 7 \\ 8 \end{pmatrix}$. Its inverse is

$$A^{-1} = \begin{pmatrix} -\frac{1}{2} & \frac{1}{2} & 0 \\ \frac{2}{3} & 0 & -\frac{1}{3} \\ \frac{1}{6} & -\frac{1}{2} & \frac{2}{3} \end{pmatrix}$$

(computed by Gaussian elimination in the next example), so

$$\mathbf{x} = A^{-1}\mathbf{b} = \begin{pmatrix} 1 \\ \frac{2}{3} \\ \frac{8}{3} \end{pmatrix}$$

Therefore $x = 1$, $y = \frac{2}{3}$, and $z = \frac{8}{3}$.
Beyond the Cofactor Method
While the cofactor method provides a clear formula, it is computationally intensive for larger matrices. Gaussian elimination (row reduction) is more efficient for practical applications.
To find the inverse of a matrix $A$ using Gaussian elimination:

Create an augmented matrix $[A \mid I]$, where $I$ is the identity matrix
Apply elementary row operations to transform the left side into the identity matrix
The resulting right side will be $A^{-1}$
Example: Gaussian Elimination
Find the inverse of $A = \begin{pmatrix} 1 & 2 & 1 \\ 3 & 2 & 1 \\ 2 & 1 & 2 \end{pmatrix}$ using Gaussian elimination.

Starting with the augmented matrix $[A \mid I]$:

$$\left[\begin{array}{ccc|ccc}
1 & 2 & 1 & 1 & 0 & 0 \\
3 & 2 & 1 & 0 & 1 & 0 \\
2 & 1 & 2 & 0 & 0 & 1
\end{array}\right]$$

After applying row operations:

$$\left[\begin{array}{ccc|ccc}
1 & 0 & 0 & -\frac{1}{2} & \frac{1}{2} & 0 \\
0 & 1 & 0 & \frac{2}{3} & 0 & -\frac{1}{3} \\
0 & 0 & 1 & \frac{1}{6} & -\frac{1}{2} & \frac{2}{3}
\end{array}\right]$$

The right block is $A^{-1}$, in agreement with the inverse used in the example above.
Exercise: Finding Inverses
(a) Use the cofactor method to find the inverse of $\begin{pmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 2 & 2 \end{pmatrix}$.

(b) Write the system as a matrix equation and solve using the inverse:

$$\begin{aligned}
2x + y - z &= 5 \\
x + 2y + z &= 8 \\
x + y + 3z &= 10
\end{aligned}$$

(c) Explain why the matrix $\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}$ does not have an inverse.
Definition. The transpose of a matrix $A$, denoted $A^T$, is obtained by reflecting the elements of $A$ across its main diagonal.
Properties of Transpose
$(A^T)^T = A$
$(A + B)^T = A^T + B^T$
$(AB)^T = B^T A^T$ (note the reversed order)
$\det(A^T) = \det(A)$
If $A$ is invertible, then $(A^T)^{-1} = (A^{-1})^T$
Definition. A square matrix $Q$ is orthogonal if its transpose equals its inverse:

$$Q^T = Q^{-1} \quad \text{or equivalently} \quad Q^T Q = Q Q^T = I$$
Criteria for Orthogonal Matrices
A matrix $Q$ is orthogonal if and only if its row vectors (or column vectors) form an orthonormal set:

Each row vector has norm (length) equal to 1
Any two different row vectors are orthogonal to each other

For a 3×3 matrix $Q$ with rows $\mathbf{r}_1$, $\mathbf{r}_2$, and $\mathbf{r}_3$:

$$\begin{aligned}
\mathbf{r}_1 \cdot \mathbf{r}_1 = \mathbf{r}_2 \cdot \mathbf{r}_2 = \mathbf{r}_3 \cdot \mathbf{r}_3 &= 1 \\
\mathbf{r}_1 \cdot \mathbf{r}_2 = \mathbf{r}_1 \cdot \mathbf{r}_3 = \mathbf{r}_2 \cdot \mathbf{r}_3 &= 0
\end{aligned}$$
Theorem. For any orthogonal matrix $Q$:

$\det(Q) = \pm 1$
If $\det(Q) = 1$, $Q$ represents a rotation
If $\det(Q) = -1$, $Q$ represents a reflection followed by a rotation
Finding the Inverse of an Orthogonal Matrix
If $Q$ is orthogonal, then $Q^{-1} = Q^T$.
This means to find the inverse of an orthogonal matrix, you simply take its transpose! This is a significant computational advantage.
Example
Determine if the following matrix is orthogonal, and if so, find its inverse:
$$Q = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Solution: Checking that all rows are unit vectors and mutually orthogonal confirms that $Q$ is orthogonal, so

$$Q^{-1} = Q^T = \begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
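The orthogonality check $Q^TQ = I$ is a one-liner numerically. A NumPy sketch using the matrix from this example:

```python
import numpy as np

s = 1 / np.sqrt(2)
Q = np.array([[ s,   s,   0.0],
              [-s,   s,   0.0],
              [0.0, 0.0, 1.0]])

# Orthogonal  <=>  Q^T Q = I; the inverse is then just the transpose
is_orthogonal = np.allclose(Q.T @ Q, np.eye(3))
Q_inv = Q.T
```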
Important Examples of Orthogonal Matrices
1. Rotation matrices in 3D space are orthogonal matrices with determinant 1.

Rotation around the x-axis by angle $\theta$:

$$R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}$$

Rotation around the y-axis by angle $\theta$:

$$R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}$$

Rotation around the z-axis by angle $\theta$:

$$R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

2. Reflection matrices are orthogonal matrices with determinant −1. For example, reflection across the xy-plane:

$$R_{xy} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}$$
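Both claims (rotations have determinant $+1$, reflections $-1$, and both are orthogonal) can be verified numerically. A NumPy sketch for $R_z$ and the xy-plane reflection (the function name and the sample angle are ours):

```python
import numpy as np

def Rz(theta):
    """Rotation about the z-axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = Rz(np.pi / 3)                            # sample angle, 60 degrees
det_R = np.linalg.det(R)                     # rotations: determinant +1
R_orthogonal = np.allclose(R.T @ R, np.eye(3))

R_xy = np.diag([1.0, 1.0, -1.0])             # reflection across the xy-plane
det_Rxy = np.linalg.det(R_xy)                # reflections: determinant -1
```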
Exercise: Matrix Transpose and Orthogonal Matrices
(a) Find the transpose of $\begin{pmatrix} 3 & -1 & 4 \\ 2 & 0 & 5 \\ 1 & 7 & -2 \end{pmatrix}$.

(b) Verify whether $\begin{pmatrix} \frac{3}{5} & \frac{4}{5} & 0 \\ -\frac{4}{5} & \frac{3}{5} & 0 \\ 0 & 0 & 1 \end{pmatrix}$ is orthogonal.

(c) If a 3×3 matrix $Q$ is orthogonal with $\det(Q) = -1$, what type of transformation does it represent?
Transforming a Line
When a line $\mathbf{r}(t) = \mathbf{r}_0 + t\mathbf{v}$ is transformed by a matrix $A$, the resulting line has the parametric form:

$$T(\mathbf{r}(t)) = A\mathbf{r}_0 + tA\mathbf{v}$$

This means:

The point $\mathbf{r}_0$ transforms to $A\mathbf{r}_0$
The direction vector $\mathbf{v}$ transforms to $A\mathbf{v}$

Note: If $A$ is singular and $A\mathbf{v} = \mathbf{0}$, the line collapses to a single point $A\mathbf{r}_0$.
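This transformation rule is straightforward to check numerically: apply $A$ to the base point and to the direction vector, then confirm that every point of the original line lands on the new line. A NumPy sketch with a hypothetical line and matrix of our own choosing:

```python
import numpy as np

# A hypothetical line r(t) = r0 + t*v and transformation A (values are ours)
r0 = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, -1.0])
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

r0_new = A @ r0   # image of the base point
v_new = A @ v     # image of the direction vector

# Any point on the original line maps onto the transformed line: check at t = 5
t = 5.0
on_new_line = np.allclose(A @ (r0 + t * v), r0_new + t * v_new)
```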
Example
The line $L$ has parametric equations $x = 1 + 2t$, $y = 3 - t$, $z = 2t$. Find the equation of the transformed line after applying the matrix $A = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix}$.

Solution: Here $\mathbf{r}_0 = \begin{pmatrix} 1 \\ 3 \\ 0 \end{pmatrix}$ and $\mathbf{v} = \begin{pmatrix} 2 \\ -1 \\ 2 \end{pmatrix}$, so

$$\mathbf{r}_0' = A\mathbf{r}_0 = \begin{pmatrix} 2 \\ 9 \\ 0 \end{pmatrix}, \quad \mathbf{v}' = A\mathbf{v} = \begin{pmatrix} 4 \\ -3 \\ 2 \end{pmatrix}$$

The transformed line is:

$$\mathbf{r}'(t) = \begin{pmatrix} 2 \\ 9 \\ 0 \end{pmatrix} + t\begin{pmatrix} 4 \\ -3 \\ 2 \end{pmatrix}$$

In symmetric form: $\frac{x-2}{4} = \frac{y-9}{-3} = \frac{z}{2}$
Transforming a Plane
When a plane $\Pi_1$ is transformed by a non-singular matrix $\mathbf{M}$ to a plane $\Pi_2$, there are two approaches:

1. Using the parametric form:

$$T(\mathbf{r}(s,t)) = \mathbf{M}\mathbf{r}_0 + s\mathbf{M}\mathbf{v}_1 + t\mathbf{M}\mathbf{v}_2$$

2. Using the inverse transformation method: if $(x,y,z)$ on $\Pi_1$ maps to $(u,v,w)$ on $\Pi_2$ under $\mathbf{M}$, find $\mathbf{M}^{-1}$, express $(x,y,z)$ in terms of $(u,v,w)$, and substitute into the equation of $\Pi_1$.
Example
Consider the plane $3x - 7y + 2z = -3$ and the transformation $\mathbf{M} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{pmatrix}$.

$$\mathbf{M}^{-1} = \begin{pmatrix} \frac{1}{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \frac{1}{3} \end{pmatrix}$$

So $x = \frac{1}{2}u$, $y = v$, $z = \frac{1}{3}w$. Substituting:

$$\frac{3}{2}u - 7v + \frac{2}{3}w = -3 \implies 9u - 42v + 4w = -18$$
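The derived equation for the image plane can be checked numerically: take points on $\Pi_1$, map them by $\mathbf{M}$, and confirm the images satisfy $9u - 42v + 4w = -18$. A NumPy sketch (the sample points are ours):

```python
import numpy as np

M = np.diag([2.0, 1.0, 3.0])

# Sample points (x, y, z) on the plane 3x - 7y + 2z = -3
points = [np.array([-1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 2.0]),
          np.array([1.0, 0.0, -3.0])]

images_satisfy = []
for p in points:
    assert np.isclose(3 * p[0] - 7 * p[1] + 2 * p[2], -3)  # p lies on Pi_1
    u, v, w = M @ p                                        # image on Pi_2
    images_satisfy.append(np.isclose(9 * u - 42 * v + 4 * w, -18))
```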
Exercise: Matrix Transformation of Planes
Let $\mathbf{M} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 4 \\ 3 & -2 & -3 \end{pmatrix}$.

(a) Determine $\mathbf{M}^{-1}$.

The transformation represented by $\mathbf{M}$ maps the plane $\Pi_1$ to the plane $\Pi_2$. The point $(x, y, z)$ on $\Pi_1$ maps to the point $(u, v, w)$ on $\Pi_2$.

(b) Determine $x$, $y$ and $z$ in terms of $u$, $v$ and $w$.

The plane $\Pi_1$ has equation $3x - 7y + 2z = -3$.

(c) Find a Cartesian equation for $\Pi_2$ in the form $au + bv + cw = d$, where $a$, $b$, $c$ and $d$ are integers.
A Motivating Question: Computing Powers of Matrices
How would you compute $A^{100}$ for a matrix $A$?

Diagonalization provides the answer. If $A = PDP^{-1}$ where $D$ is a diagonal matrix, then:

$$A^n = PD^nP^{-1}$$

And since $D$ is diagonal with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ on its diagonal:

$$D^n = \begin{pmatrix} \lambda_1^n & 0 & \cdots & 0 \\ 0 & \lambda_2^n & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^n \end{pmatrix}$$

This means we can compute $A^{100}$ or $A^{1000}$ with just a few matrix multiplications!
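The identity $A^n = PD^nP^{-1}$ can be sketched numerically. This NumPy example (the matrix and the exponent are our choices) diagonalizes a small symmetric matrix with `np.linalg.eig` and compares the result against repeated multiplication:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# eig returns the eigenvalues and a matrix P whose columns are eigenvectors
eigvals, P = np.linalg.eig(A)

n = 10
# A^n = P D^n P^{-1}, where D^n just raises each eigenvalue to the n-th power
A_n = P @ np.diag(eigvals ** n) @ np.linalg.inv(P)
```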
Definition. For a square matrix $A$, a non-zero vector $\mathbf{v}$ is an eigenvector of $A$ if there exists a scalar $\lambda$ (the eigenvalue) such that:

$$A\mathbf{v} = \lambda\mathbf{v}$$
Step-by-Step Procedure
Start with the equation $A\mathbf{v} = \lambda\mathbf{v}$
Rearrange to $(A - \lambda I)\mathbf{v} = \mathbf{0}$
For non-zero solutions, we need $\det(A - \lambda I) = 0$
Solve this characteristic equation to find the eigenvalues $\lambda$
For each eigenvalue, find the corresponding eigenvectors by solving $(A - \lambda I)\mathbf{v} = \mathbf{0}$
Eigenvalues and Eigenvectors of 2×2 Matrices
Example
Find the eigenvalues and eigenvectors of $A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}$.

Step 1: Characteristic equation:

$$\det(A - \lambda I) = (3-\lambda)^2 - 1 = \lambda^2 - 6\lambda + 8 = (\lambda-4)(\lambda-2) = 0$$

Eigenvalues: $\lambda_1 = 4$ and $\lambda_2 = 2$.

Step 2: For $\lambda_1 = 4$: $(A - 4I)\mathbf{v} = \mathbf{0}$ gives $v_1 = v_2$, so $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

Step 3: For $\lambda_2 = 2$: $(A - 2I)\mathbf{v} = \mathbf{0}$ gives $v_1 = -v_2$, so $\mathbf{v}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$.
Eigenvalues and Eigenvectors of 3×3 Matrices
Example
Find the eigenvalues and one eigenvector for each eigenvalue of $A = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{pmatrix}$.

Step 1: Characteristic polynomial:

$$\det(A - \lambda I) = (2-\lambda)[(3-\lambda)(9-\lambda) - 16] = (2-\lambda)(\lambda^2 - 12\lambda + 11)$$

Step 2: Eigenvalues: $\lambda_1 = 2$, $\lambda_2 = 11$, $\lambda_3 = 1$.

Step 3: Eigenvectors:

$$\begin{aligned}
\lambda_1 = 2 &: \mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \\
\lambda_2 = 11 &: \mathbf{v}_2 = \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix} \\
\lambda_3 = 1 &: \mathbf{v}_3 = \begin{pmatrix} 0 \\ 2 \\ -1 \end{pmatrix}
\end{aligned}$$
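Each eigenpair can be verified against the defining equation $A\mathbf{v} = \lambda\mathbf{v}$. A NumPy check using the values from this example:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

# (eigenvalue, eigenvector) pairs from the worked example
pairs = [(2.0, np.array([1.0, 0.0, 0.0])),
         (11.0, np.array([0.0, 1.0, 2.0])),
         (1.0, np.array([0.0, 2.0, -1.0]))]

# Each pair must satisfy A v = lambda v
checks = [np.allclose(A @ v, lam * v) for lam, v in pairs]
```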
Key Properties
The determinant of a matrix equals the product of its eigenvalues: $\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$.
The trace of a matrix equals the sum of its eigenvalues: $\text{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$.
The trace is the sum of the elements on the main diagonal: $\text{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn}$.
Diagonalization Procedure
A matrix $A$ is diagonalizable if it can be written as $A = PDP^{-1}$, where:

$D$ is a diagonal matrix containing the eigenvalues of $A$
$P$ is a matrix whose columns are the corresponding eigenvectors
To construct the diagonalization:
Find all eigenvalues of $A$
For each eigenvalue, find a corresponding eigenvector
Form matrix $P$ with eigenvectors as columns
Form diagonal matrix $D$ with eigenvalues on the diagonal
Verify that $P^{-1}AP = D$
Normalization of Eigenvectors
To normalize an eigenvector $\mathbf{v}$:

$$\mathbf{v}_{\text{normalized}} = \frac{\mathbf{v}}{\|\mathbf{v}\|}$$

For symmetric matrices, if the eigenvectors are normalized (and chosen mutually orthogonal, which is automatic when the eigenvalues are distinct), then $P^{-1} = P^T$, giving $P^TAP = D$.
Example: Diagonalization of a 2×2 Matrix
Diagonalize $A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}$.

From the previous example, the eigenvalues are $\lambda_1 = 4$ and $\lambda_2 = 2$, with eigenvectors $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $\mathbf{v}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$.

Normalizing:

$$P = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{pmatrix}, \quad D = \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}$$

Verification: $P^TAP = D$. ✓
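The verification step $P^TAP = D$ can be carried out numerically with the matrices from this example:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
s = 1 / np.sqrt(2)
P = np.array([[s,  s],
              [s, -s]])       # columns: normalized eigenvectors
D = np.diag([4.0, 2.0])       # matching eigenvalue order

# For an orthonormal eigenvector matrix, P^{-1} = P^T, so P^T A P = D
diagonalizes = np.allclose(P.T @ A @ P, D)
```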
Example: Diagonalization of a 3×3 Matrix
Given that $A = \begin{pmatrix} 5 & 3 & 3 \\ 3 & 1 & 1 \\ 3 & 1 & 1 \end{pmatrix}$ has eigenvalues $0$, $-1$, and $8$, with eigenvectors $\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}$ (for $\lambda = -1$) and $\begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix}$ (for $\lambda = 8$).

Task 1: Find a normalized eigenvector for $\lambda = 0$.

Solving $A\mathbf{v} = \mathbf{0}$: $\mathbf{v}_3 = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$, normalized: $\frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$.

Task 2: Form $P$ and $D$:

$$P = \begin{pmatrix} 0 & -1 & 2 \\ 1 & 1 & 1 \\ -1 & 1 & 1 \end{pmatrix}, \quad D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 8 \end{pmatrix}$$
Special Case: Symmetric Matrices
When $A$ is a symmetric matrix ($A = A^T$):

All eigenvalues are real numbers
Eigenvectors corresponding to distinct eigenvalues are orthogonal
We can always find an orthonormal basis of eigenvectors
The matrix $P$ can be chosen so that $P^T = P^{-1}$, giving $P^TAP = D$
Example: Symmetric Matrix Diagonalization
Given that $9$ is an eigenvalue of $A = \begin{pmatrix} 7 & 0 & -2 \\ 0 & 5 & -2 \\ -2 & -2 & 6 \end{pmatrix}$.

(a) Find the other two eigenvalues:

$$\det(A - \lambda I) = -(\lambda - 9)(\lambda - 6)(\lambda - 3)$$

Eigenvalues: $9$, $6$, and $3$.

(b) Find eigenvectors:

$$\begin{aligned}
\lambda_1 = 9 &: \mathbf{v}_1 = \begin{pmatrix} -2 \\ -1 \\ 2 \end{pmatrix}, \quad \mathbf{u}_1 = \frac{1}{3}\begin{pmatrix} -2 \\ -1 \\ 2 \end{pmatrix} \\[0.3cm]
\lambda_2 = 6 &: \mathbf{v}_2 = \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix}, \quad \mathbf{u}_2 = \frac{1}{3}\begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} \\[0.3cm]
\lambda_3 = 3 &: \mathbf{v}_3 = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}, \quad \mathbf{u}_3 = \frac{1}{3}\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}
\end{aligned}$$

(c) Form $P$ and $D$:

$$P = \begin{pmatrix} -\frac{2}{3} & \frac{2}{3} & \frac{1}{3} \\ -\frac{1}{3} & -\frac{2}{3} & \frac{2}{3} \\ \frac{2}{3} & \frac{1}{3} & \frac{2}{3} \end{pmatrix}, \quad D = \begin{pmatrix} 9 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 3 \end{pmatrix}$$

Verification: $P^TAP = D$. ✓
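Both claims in the verification ($P$ is orthogonal and $P^TAP = D$) can be checked numerically with the matrices from this example:

```python
import numpy as np

A = np.array([[7.0, 0.0, -2.0],
              [0.0, 5.0, -2.0],
              [-2.0, -2.0, 6.0]])
# Columns of P are the normalized eigenvectors u1, u2, u3
P = np.array([[-2.0,  2.0, 1.0],
              [-1.0, -2.0, 2.0],
              [ 2.0,  1.0, 2.0]]) / 3.0
D = np.diag([9.0, 6.0, 3.0])

orthogonal = np.allclose(P.T @ P, np.eye(3))   # P^T = P^{-1}
diagonalizes = np.allclose(P.T @ A @ P, D)     # P^T A P = D
```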
Exercise: Eigenvalues and Eigenvectors
(a) The matrix $A = \begin{pmatrix} 2 & -1 & 3 \\ 0 & 2 & 4 \\ 0 & 2 & 0 \end{pmatrix}$.

(i) Show that 4 is an eigenvalue of $A$ and find the other two eigenvalues.

(ii) Find an eigenvector corresponding to the eigenvalue 4.

(b) Consider $\mathbf{A} = \begin{pmatrix} 3 & 4 & -4 \\ 4 & 5 & 0 \\ -4 & 0 & 1 \end{pmatrix}$.

(i) Show that 3 is an eigenvalue of $\mathbf{A}$ and find the other two eigenvalues.

(ii) Find an eigenvector corresponding to the eigenvalue 3.

(iii) Given that $\begin{pmatrix} 2 \\ 2 \\ -1 \end{pmatrix}$ and $\begin{pmatrix} 2 \\ -1 \\ 2 \end{pmatrix}$ are eigenvectors corresponding to the other two eigenvalues, find a matrix $\mathbf{P}$ such that $\mathbf{P}^T\mathbf{A}\mathbf{P}$ is a diagonal matrix.
Exercise 1
Find the determinant of the following matrices:
$$A = \begin{pmatrix} 3 & 1 & 2 \\ 0 & -1 & 4 \\ 2 & 2 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 2 & 0 & 1 \\ 4 & 3 & 2 \\ 2 & 1 & 1 \end{pmatrix}, \quad C = \begin{pmatrix} 3 & 6 & -3 \\ 1 & 2 & -1 \\ 2 & 4 & -2 \end{pmatrix}$$

For each matrix, explain what the determinant tells you about the corresponding systems of linear equations.
Exercise 2
The matrix $P = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 1 & -1 \\ 3 & 1 & 0 \end{pmatrix}$.

(a) Find the determinant of $P$.

(b) Find the inverse of $P$ using the cofactor method.

(c) Hence solve the system:

$$\begin{aligned}
x + 2y + z &= 6 \\
2x + y - z &= 1 \\
3x + y &= 4
\end{aligned}$$
Exercise 3
For the matrix $A = \begin{pmatrix} 3 & 1 & 1 \\ 2 & k & 0 \\ 4 & -1 & 3 \end{pmatrix}$, find:

(a) The value of $k$ for which $\det(A) = 0$.

(b) For this value of $k$, determine whether $A\mathbf{x} = \begin{pmatrix} 2 \\ 1 \\ 3 \end{pmatrix}$ has no solution, a unique solution, or infinitely many solutions.

(c) For $k = 2$, find $\det(A)$ and $A^{-1}$.
Exercise 4
The 3D transformation matrix $T = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$ represents a rotation around the $z$-axis.

(a) Show that $\det(T) = 1$ for any value of $\theta$.

(b) Find the inverse of $T$ and verify that $T^{-1}$ represents a rotation by $-\theta$.

(c) If the point $P(2,1,3)$ is rotated by $\frac{\pi}{4}$ (45°) around the $z$-axis, find the coordinates of the resulting point $P'$.
Exercise 5
Let $\mathbf{T} = \begin{pmatrix} 2 & 3 & 7 \\ 3 & 2 & 6 \\ a & 4 & b \end{pmatrix}$ and $\mathbf{U} = \begin{pmatrix} 6 & -1 & -4 \\ 15 & c & -9 \\ -8 & a & 5 \end{pmatrix}$, where $a$, $b$ and $c$ are constants.

Given that $\mathbf{T}\mathbf{U} = \mathbf{I}$:

(a) Determine the value of $a$, the value of $b$ and the value of $c$.

The transformation represented by the matrix $\mathbf{T}$ transforms the line $l_1$ to the line $l_2$.

Given that $l_2$ has equation $\frac{x-1}{3} = \frac{y}{-4} = z+2$:

(b) Determine a Cartesian equation for $l_1$.
Exercise 6
Let $\mathbf{M} = \begin{pmatrix} 0 & -1 & 3 \\ -1 & 4 & -1 \\ 3 & -1 & 0 \end{pmatrix}$.

Given that $\begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}$ is an eigenvector of $\mathbf{M}$:

(a) Determine its corresponding eigenvalue.

Given that $-3$ is an eigenvalue of $\mathbf{M}$:

(b) Determine a corresponding eigenvector.

Hence, given that $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ is also an eigenvector of $\mathbf{M}$:

(c) Determine a diagonal matrix $\mathbf{D}$ and an orthogonal matrix $\mathbf{P}$ such that $\mathbf{D} = \mathbf{P}^T\mathbf{M}\mathbf{P}$.