Linear Algebra Matrix Properties and Decompositions (Updated: Mar 2026)

Specialized Matrices and Properties

Comprehensive study notes on Specialized Matrices and Properties for GATE DA preparation. This chapter covers key concepts, formulas, and examples needed for your exam.


Overview

Having established the foundational principles of matrix operations and vector spaces, we now advance our study to a class of matrices possessing distinct and powerful structural properties. While general matrices provide a broad framework for linear transformations, certain types of matrices exhibit predictable behaviors that greatly simplify analysis and computation. This chapter is dedicated to the systematic exploration of these specialized matrices, whose unique characteristics are not merely theoretical curiosities but are fundamental to solving a wide range of problems in engineering and data analysis. A firm command of these concepts is indispensable, as they frequently appear in the GATE examination, often in questions that test for a deeper, more intuitive understanding of linear algebra beyond rote computation.

In our investigation, we shall begin with the determinant, a fundamental scalar value that encapsulates critical information about a square matrix, including its invertibility and the volume scaling of a linear transformation. We will then proceed to examine orthogonal matrices, which are central to the study of rotations and transformations that preserve geometric properties such as length and angle. Subsequently, we will explore idempotent and projection matrices, which are instrumental in statistical analysis and machine learning, particularly in the context of regression and dimensionality reduction. Finally, we introduce the concept of partitioned matrices, a practical technique for simplifying the manipulation of large and complex matrices. Understanding the interplay between these matrix types and their properties is crucial for developing the analytical facility required to excel in the GATE examination.

---

Chapter Contents

| # | Topic | What You'll Learn |
|---|-------|-------------------|
| 1 | Determinant | A scalar value revealing matrix properties. |
| 2 | Orthogonal Matrix | Matrices preserving length and angle transformations. |
| 3 | Idempotent Matrix | Matrices where repeated application yields itself. |
| 4 | Projection Matrix | Matrices that project vectors onto a subspace. |
| 5 | Partitioned Matrices | Techniques for manipulating matrices as blocks. |

---

Learning Objectives

By the End of This Chapter

After completing this chapter, you will be able to:

  • Calculate the determinant of a matrix and relate its value to properties such as invertibility and rank.

  • Identify orthogonal, idempotent, and projection matrices and state their defining properties, including their eigenvalues and determinants.

  • Analyze the geometric interpretation of transformations corresponding to orthogonal and projection matrices.

  • Apply the techniques of matrix partitioning to simplify matrix multiplication and inversion for block matrices.

---

We now turn our attention to the determinant.

## Part 1: Determinant

Introduction

The determinant is a fundamental scalar value that can be computed from the elements of a square matrix. This single number encapsulates a wealth of information about the matrix and the linear transformation it represents. From a geometric perspective, the absolute value of the determinant gives the scaling factor by which areas or volumes are multiplied under the associated linear transformation. Algebraically, the determinant provides a criterion for invertibility; a non-zero determinant signifies that the matrix is invertible and that the corresponding system of linear equations has a unique solution.

For the GATE examination, a firm grasp of the determinant, its properties, and its calculation is indispensable. We will explore the methods for computing determinants and, more critically, the algebraic properties that facilitate the solution of complex matrix problems without resorting to lengthy computations. Our focus will be on the application of these properties to solve problems involving matrix expressions and to deduce characteristics of special matrix forms.

📖 Determinant

For a square matrix $A \in \mathbb{R}^{n \times n}$, the determinant, denoted $\det(A)$ or $|A|$, is a scalar value. For an $n \times n$ matrix $A = [a_{ij}]$, the determinant can be defined recursively using cofactor expansion along any row $i$ as:

$$\det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij}$$

where $M_{ij}$ is the determinant of the $(n-1) \times (n-1)$ submatrix formed by removing the $i$-th row and $j$-th column of $A$. The term $C_{ij} = (-1)^{i+j} M_{ij}$ is known as the cofactor of the element $a_{ij}$.

---

Key Concepts

## 1. Calculation of Determinants

While the cofactor expansion is the formal definition, for smaller matrices, more direct methods are employed.

Determinant of a $2 \times 2$ Matrix

For a matrix $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the determinant is calculated as:

$$\det(A) = ad - bc$$

Determinant of a $3 \times 3$ Matrix

For a matrix $A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}$, we can use Sarrus' rule or cofactor expansion. Using cofactor expansion along the first row:

$$\det(A) = a \begin{vmatrix} e & f \\ h & i \end{vmatrix} - b \begin{vmatrix} d & f \\ g & i \end{vmatrix} + c \begin{vmatrix} d & e \\ g & h \end{vmatrix} = a(ei - fh) - b(di - fg) + c(dh - eg)$$

Determinant of a Triangular Matrix

A significant simplification arises for triangular (either upper or lower) matrices.

📐 Determinant of a Triangular Matrix

$$\det(A) = \prod_{i=1}^{n} a_{ii}$$

Variables:

    • $A$ is an $n \times n$ upper or lower triangular matrix.

    • $a_{ii}$ are the diagonal elements of $A$.


When to use: This is a crucial shortcut. If a matrix is triangular or can be reduced to a triangular form using row operations, its determinant is simply the product of its diagonal entries.

Worked Example:

Problem: Calculate the determinant of the matrix $M = \begin{bmatrix} 2 & 1 & 5 \\ 0 & -3 & 4 \\ 0 & 0 & 4 \end{bmatrix}$.

Solution:

Step 1: Identify the type of matrix.
The matrix $M$ is an upper triangular matrix, as all entries below the main diagonal are zero.

Step 2: Apply the formula for the determinant of a triangular matrix.
The determinant is the product of the diagonal elements.

$$\det(M) = (2) \times (-3) \times (4)$$

Step 3: Compute the final value.

$$\det(M) = -24$$

Answer: The determinant of the matrix $M$ is $-24$.
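The triangular-matrix shortcut can be verified numerically. A small NumPy sketch using the matrix from the worked example:

```python
import numpy as np

# Upper triangular matrix from the worked example.
M = np.array([[2.0,  1.0, 5.0],
              [0.0, -3.0, 4.0],
              [0.0,  0.0, 4.0]])

# For a triangular matrix, det is the product of the diagonal entries.
diag_product = np.prod(np.diag(M))  # 2 * (-3) * 4 = -24

assert np.isclose(diag_product, np.linalg.det(M))
```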

---

## 2. Properties of Determinants

The properties of determinants are far more frequently tested in GATE than direct computation. Mastering these is essential for efficient problem-solving. Let $A$ and $B$ be $n \times n$ matrices and $k$ be a scalar.

  • Multiplicative Property: The determinant of a product of matrices is the product of their determinants: $\det(AB) = \det(A)\det(B)$.

  • Transpose Property: The determinant of a matrix is equal to the determinant of its transpose: $\det(A^T) = \det(A)$.

  • Inverse Property: The determinant of the inverse of a matrix is the reciprocal of its determinant, valid only if $\det(A) \neq 0$: $\det(A^{-1}) = \frac{1}{\det(A)}$.

  • Scalar Multiplication Property: If a matrix is multiplied by a scalar $k$, the determinant is scaled by $k^n$: $\det(kA) = k^n \det(A)$.

  • Effect of Row/Column Operations:
    * Swapping two rows/columns multiplies the determinant by $-1$.
    * Multiplying a single row/column by a scalar $k$ multiplies the determinant by $k$.
    * Adding a multiple of one row/column to another does not change the determinant.

  • Singular Matrix Property: A square matrix $A$ is singular (non-invertible) if and only if its determinant is zero: $\det(A) = 0 \iff A \text{ is singular}$.

  • Determinant and Eigenvalues: The determinant of a matrix is the product of its eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$: $\det(A) = \prod_{i=1}^{n} \lambda_i$.

Worked Example (based on GATE PYQ concepts):

Problem: Consider the matrix $A = \begin{bmatrix} 1 & 0 & 2 \\ 2 & -1 & 3 \\ 4 & 1 & 8 \end{bmatrix}$. Calculate the determinant of $(A^2 - A)$.

Solution:

Step 1: Factor the matrix expression inside the determinant.
We are asked to find $\det(A^2 - A)$. We can factor out the matrix $A$:

$$\det(A^2 - A) = \det(A(A - I))$$

where $I$ is the $3 \times 3$ identity matrix.

Step 2: Apply the multiplicative property of determinants.

$$\det(A(A - I)) = \det(A) \times \det(A - I)$$

Step 3: Calculate $\det(A)$.
Using cofactor expansion along the first row:

$$\det(A) = 1 \begin{vmatrix} -1 & 3 \\ 1 & 8 \end{vmatrix} - 0 \begin{vmatrix} 2 & 3 \\ 4 & 8 \end{vmatrix} + 2 \begin{vmatrix} 2 & -1 \\ 4 & 1 \end{vmatrix} = 1(-8 - 3) - 0 + 2(2 + 4) = -11 + 12 = 1$$

Step 4: Calculate the matrix $(A - I)$.

$$A - I = \begin{bmatrix} 1 & 0 & 2 \\ 2 & -1 & 3 \\ 4 & 1 & 8 \end{bmatrix} - \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 2 \\ 2 & -2 & 3 \\ 4 & 1 & 7 \end{bmatrix}$$

Step 5: Calculate $\det(A - I)$.
Using cofactor expansion along the first row:

$$\det(A - I) = 0 - 0 + 2 \begin{vmatrix} 2 & -2 \\ 4 & 1 \end{vmatrix} = 2(2 + 8) = 20$$

Step 6: Compute the final result.

$$\det(A^2 - A) = \det(A) \times \det(A - I) = 1 \times 20 = 20$$

Answer: The determinant of $(A^2 - A)$ is $20$.
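The factoring strategy in this example can be confirmed numerically. A short NumPy sketch (NumPy is our choice of tool here, not part of the notes):

```python
import numpy as np

A = np.array([[1.0,  0.0, 2.0],
              [2.0, -1.0, 3.0],
              [4.0,  1.0, 8.0]])
I = np.eye(3)

# Multiplicative property: det(A^2 - A) = det(A) * det(A - I).
lhs = np.linalg.det(A @ A - A)
rhs = np.linalg.det(A) * np.linalg.det(A - I)

assert np.isclose(lhs, rhs)
```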

    ---

## 3. The Gram Matrix

A special type of matrix, the Gram matrix, has a determinant whose properties are directly linked to the linear independence of a set of vectors.

📖 Gram Matrix

Given a set of vectors $\{v_1, v_2, \ldots, v_k\}$ in $\mathbb{R}^n$, the Gram matrix $G$ is a $k \times k$ matrix whose entries are given by the inner products of these vectors:

$$G_{ij} = v_i^T v_j$$

The determinant of the Gram matrix, known as the Gram determinant, has a profound connection to linear independence.

Gram Determinant and Linear Independence

Let $G$ be the Gram matrix of a set of vectors $\{v_1, v_2, \ldots, v_k\}$.

• The vectors are linearly independent if and only if $\det(G) \neq 0$. In fact, for real vectors, $\det(G) > 0$ in this case.

• The vectors are linearly dependent if and only if $\det(G) = 0$.

This property is extremely powerful. If a problem states that a set of vectors is linearly independent, we can immediately conclude that their Gram matrix is invertible and has a non-zero determinant. Conversely, if the vectors are linearly dependent, the Gram matrix is singular.

The geometric interpretation is that the square root of the Gram determinant, $\sqrt{\det(G)}$, gives the volume of the parallelepiped spanned by the vectors. If the vectors are linearly dependent, they lie in a lower-dimensional subspace, and this volume is zero.
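The dependence test translates directly into code. A small NumPy sketch, with example vectors of our own choosing:

```python
import numpy as np

# Three linearly dependent vectors in R^4 (v3 = v1 + v2).
v1 = np.array([1.0, 0.0, 2.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0, 3.0])
v3 = v1 + v2

V = np.column_stack([v1, v2, v3])
G = V.T @ V  # Gram matrix: G[i, j] = v_i . v_j

# Dependence forces a zero Gram determinant.
assert np.isclose(np.linalg.det(G), 0.0)

# Replacing v3 with an independent vector makes det(G) > 0.
V_ind = np.column_stack([v1, v2, np.array([0.0, 0.0, 1.0, 0.0])])
G_ind = V_ind.T @ V_ind
assert np.linalg.det(G_ind) > 0
```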








[Figure: two parallelograms spanned by v₁ and v₂ — linearly independent vectors enclose positive area (det(G) > 0); linearly dependent vectors enclose zero area (det(G) = 0).]









    ---

    Problem-Solving Strategies

    💡 GATE Strategy: Factor Before Computing

For questions involving determinants of matrix polynomials like $\det(A^3 + c_1 A^2 + c_2 A)$, always attempt to factor the matrix expression first.

For example, to compute $\det(A^2 + kA)$:

• Factor the expression: $A^2 + kA = A(A + kI)$. Remember to include the identity matrix $I$.

• Use the multiplicative property: $\det(A(A + kI)) = \det(A)\det(A + kI)$.

• This breaks the problem into two simpler determinant calculations, which is significantly faster and less error-prone than computing $A^2$, performing the matrix addition, and then finding the determinant of the resulting matrix.

    ---

    Common Mistakes

⚠️ Avoid These Errors
  • Incorrect Additivity: Assuming $\det(A + B) = \det(A) + \det(B)$. This is almost never true.
Correct Approach: There is no general simplification for $\det(A+B)$. The matrices must be added first, and then the determinant of the resulting matrix must be computed.
  • Incorrect Scalar Factoring: Writing $\det(kA) = k \det(A)$. This is a very common and critical error.
Correct Approach: Remember the exponent $n$, where $n$ is the dimension of the matrix. The correct property is $\det(kA) = k^n \det(A)$. For a $3 \times 3$ matrix, $\det(2A) = 2^3 \det(A) = 8 \det(A)$.
  • Zero Determinant Misconception: Believing that if $\det(A) = 0$, the matrix $A$ must be the zero matrix.
Correct Approach: $\det(A) = 0$ only implies that the matrix is singular (non-invertible); its rows/columns are linearly dependent. Many non-zero matrices have a determinant of zero. For example,

$$\det \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} = 4 - 4 = 0$$
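The first two pitfalls are easy to demonstrate numerically. A minimal NumPy sketch with example matrices of our own:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # det(A) = -2
B = np.array([[2.0, 0.0], [1.0, 1.0]])   # det(B) =  2
k = 2.0

# Correct scalar rule: det(kA) = k^n det(A) with n = 2,
# so det(2A) = 4 * (-2) = -8, NOT k * det(A) = -4.
assert np.isclose(np.linalg.det(k * A), k**2 * np.linalg.det(A))

# Additivity fails: det(A + B) = 7, but det(A) + det(B) = 0.
assert np.isclose(np.linalg.det(A + B), 7.0)
assert not np.isclose(np.linalg.det(A + B),
                      np.linalg.det(A) + np.linalg.det(B))
```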

    ---

    Practice Questions

:::question type="NAT" question="Consider the matrix $A$ given by:

$$A = \begin{bmatrix} 2 & 0 & 0 \\ 1 & 3 & 0 \\ 2 & 1 & 1 \end{bmatrix}$$

The determinant of the matrix $(A^2 - 4A)$ is _______." answer="-36" hint="Factor the matrix expression inside the determinant first. Use the property $\det(XY) = \det(X)\det(Y)$. Note that the matrices are triangular." solution="
Step 1: Factor the matrix expression.
We need to compute $\det(A^2 - 4A)$. We can factor this expression as:

$$A^2 - 4A = A(A - 4I)$$

where $I$ is the $3 \times 3$ identity matrix.

Step 2: Apply the multiplicative property of determinants.

$$\det(A(A - 4I)) = \det(A) \times \det(A - 4I)$$

Step 3: Calculate $\det(A)$.
The matrix $A$ is a lower triangular matrix. Its determinant is the product of its diagonal elements.

$$\det(A) = (2) \times (3) \times (1) = 6$$

Step 4: Calculate the matrix $(A - 4I)$.

$$A - 4I = \begin{bmatrix} 2 & 0 & 0 \\ 1 & 3 & 0 \\ 2 & 1 & 1 \end{bmatrix} - 4 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 0 & 0 \\ 1 & -1 & 0 \\ 2 & 1 & -3 \end{bmatrix}$$

Step 5: Calculate $\det(A - 4I)$.
The resulting matrix $(A - 4I)$ is also lower triangular. Its determinant is the product of its diagonal elements.

$$\det(A - 4I) = (-2) \times (-1) \times (-3) = -6$$

Step 6: Compute the final result.

$$\det(A^2 - 4A) = \det(A) \times \det(A - 4I) = 6 \times (-6) = -36$$

Result:
Answer: \boxed{-36}
"
:::

:::question type="MCQ" question="Let $\{v_1, v_2, v_3\}$ be a set of linearly dependent vectors in $\mathbb{R}^4$. Let a $3 \times 3$ matrix $G$ be defined by its elements $G_{ij} = v_i^T v_j$. Which of the following statements about $G$ is always true?" options=["$G$ is positive definite","$G$ is invertible","All eigenvalues of $G$ are positive","$\det(G) = 0$"] answer="$\det(G) = 0$" hint="This question concerns the properties of a Gram matrix. Recall the relationship between the linear dependence of vectors and the determinant of their Gram matrix." solution="
The matrix $G$ with elements $G_{ij} = v_i^T v_j$ is the Gram matrix of the vectors $\{v_1, v_2, v_3\}$.

A fundamental property of the Gram matrix is that its determinant is zero if and only if the set of vectors is linearly dependent. Since the problem states that the vectors are linearly dependent, we can directly conclude that $\det(G) = 0$.

Let us analyze the other options:

• $G$ is invertible: A matrix is invertible if and only if its determinant is non-zero. Since $\det(G) = 0$, $G$ is not invertible.

• $G$ is positive definite: A matrix is positive definite if all its eigenvalues are strictly positive, which would imply invertibility; we have shown $G$ is not invertible. A Gram matrix is positive definite if and only if the vectors are linearly independent. For dependent vectors it is only positive semi-definite, with at least one zero eigenvalue.

• All eigenvalues of $G$ are positive: Since $\det(G)$ is the product of the eigenvalues and $\det(G) = 0$, at least one eigenvalue must be zero. Therefore, not all eigenvalues can be positive.

The only statement that is always true is that the determinant of $G$ is 0.
Answer: \boxed{\det(G) = 0}
"
:::

:::question type="MSQ" question="Let $P$ and $Q$ be two invertible $4 \times 4$ matrices such that $\det(P) = 3$ and $\det(Q) = -2$. Which of the following statements is/are correct?" options=["$\det(P^T Q^{-1}) = -1.5$","$\det(2P) = 24$","$\det(P+Q) = 1$","$\det(Q^3) = -8$"] answer="$\det(P^T Q^{-1}) = -1.5$,$\det(Q^3) = -8$" hint="Apply the fundamental properties of determinants: $\det(AB) = \det(A)\det(B)$, $\det(A^T) = \det(A)$, $\det(A^{-1}) = 1/\det(A)$, and $\det(kA) = k^n \det(A)$." solution="
Let's evaluate each option based on the properties of determinants. The matrices are of size $n = 4$.

Option A: $\det(P^T Q^{-1}) = -1.5$
Using the multiplicative, transpose, and inverse properties:

$$\det(P^T Q^{-1}) = \det(P^T) \times \det(Q^{-1}) = \det(P) \times \frac{1}{\det(Q)} = 3 \times \frac{1}{-2} = -1.5$$

This statement is correct.

Option B: $\det(2P) = 24$
Using the scalar multiplication property $\det(kA) = k^n \det(A)$:

$$\det(2P) = 2^4 \det(P) = 16 \times 3 = 48$$

The statement claims the value is 24, which is incorrect.

Option C: $\det(P+Q) = 1$
There is no general formula for $\det(P+Q)$; in particular, we cannot assume $\det(P+Q) = \det(P) + \det(Q)$. Without knowing the matrices $P$ and $Q$, we cannot determine $\det(P+Q)$. This statement is not necessarily correct.

Option D: $\det(Q^3) = -8$
Using the property $\det(A^k) = (\det(A))^k$:

$$\det(Q^3) = (\det(Q))^3 = (-2)^3 = -8$$

This statement is correct.

Therefore, the correct options are A and D.
Answer: \boxed{\det(P^T Q^{-1}) = -1.5,\ \det(Q^3) = -8}
"
:::

:::question type="MCQ" question="An $n \times n$ matrix $A$ is skew-symmetric. Which of the following is necessarily true?" options=["$\det(A) = 0$ for any $n$","$\det(A) = 0$ only if $n$ is even","$\det(A) = 0$ if $n$ is odd","$\det(A) = 1$"] answer="$\det(A) = 0$ if $n$ is odd" hint="A matrix is skew-symmetric if $A^T = -A$. Use the properties of transpose and scalar multiplication on the determinant." solution="
Step 1: Use the definition of a skew-symmetric matrix.
By definition, $A^T = -A$.

Step 2: Take the determinant of both sides.

$$\det(A^T) = \det(-A)$$

Step 3: Apply the determinant properties $\det(A^T) = \det(A)$ and $\det(kA) = k^n \det(A)$:

$$\det(A) = (-1)^n \det(A)$$

Step 4: Analyze the equation for different cases of $n$.

$$\det(A)\left(1 - (-1)^n\right) = 0$$

Case 1: $n$ is even.
Then $(-1)^n = 1$, and the equation becomes $\det(A) \times 0 = 0$, which is true for any value of $\det(A)$. Thus, for an even-dimensional skew-symmetric matrix, the determinant is not necessarily zero. For example,

$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$

has determinant 1.

Case 2: $n$ is odd.
Then $(-1)^n = -1$, and the equation becomes $2\det(A) = 0$, which implies $\det(A) = 0$.

Conclusion:
The determinant of a skew-symmetric matrix must be zero if its dimension $n$ is odd.
Answer: \boxed{\det(A) = 0 \text{ if } n \text{ is odd}}
"
:::

    ---

    Summary

    Key Takeaways for GATE

• Master the Properties: The most critical properties for GATE are the multiplicative property $\det(AB) = \det(A)\det(B)$ and the scalar multiplication property $\det(kA) = k^n \det(A)$. These are frequently used to solve problems without full matrix computation.

• Factor Matrix Expressions: For problems like $\det(A^2 + cA)$, always factor the expression to $A(A + cI)$ before applying determinant properties. This is a common pattern.

• Link to Linear Independence: Understand the Gram matrix property: a set of vectors is linearly independent if and only if the determinant of their Gram matrix is non-zero. A zero determinant implies linear dependence.

• Special Matrices Shortcuts: Remember that the determinant of a triangular matrix is the product of its diagonal elements, and the determinant of an odd-dimensional skew-symmetric matrix is always zero.

    ---

    What's Next?

    💡 Continue Learning

    The concept of the determinant is deeply interconnected with other core topics in Linear Algebra.

  • Eigenvalues and Eigenvectors: The determinant is the product of all eigenvalues of a matrix. The characteristic polynomial, used to find eigenvalues, is defined by $\det(A - \lambda I) = 0$.
  • Matrix Invertibility: A matrix $A$ is invertible if and only if $\det(A) \neq 0$. This is the primary test for invertibility and is fundamental to understanding matrix inverses and solving linear systems.
  • Systems of Linear Equations: For a system $Ax = b$ with a square matrix $A$, a non-zero determinant guarantees a unique solution. This connects to concepts like rank and consistency of systems.

Mastering these connections will provide a more holistic understanding and enhance your ability to solve complex, multi-concept problems in the GATE exam.

    ---

    💡 Moving Forward

Now that you understand the determinant, let's explore the orthogonal matrix, which builds on these concepts.

    ---

    Part 2: Orthogonal Matrix

    Introduction

    In the study of linear transformations, a particularly important class of matrices is that which preserves the geometric structure of Euclidean space. Specifically, we are often concerned with transformations that do not alter the lengths of vectors or the angles between them. Such transformations, which include rotations and reflections, are fundamental to fields ranging from computer graphics to quantum mechanics. These operations are represented by a special category of matrices known as orthogonal matrices.

An orthogonal matrix is a square matrix whose columns and rows are orthonormal vectors. This seemingly simple algebraic definition has profound geometric consequences. As we shall see, the defining property, $A^T A = I$, is precisely the condition required for a matrix transformation to be an isometry, that is, a transformation that preserves distances. A thorough understanding of orthogonal matrices is indispensable for mastering more advanced topics in linear algebra, such as QR decomposition and Singular Value Decomposition (SVD), which are frequently encountered in data analysis and machine learning algorithms.

📖 Orthogonal Matrix

A real square matrix $A$ of size $n \times n$ is said to be an orthogonal matrix if its transpose is equal to its inverse. That is,

$$A^T = A^{-1}$$

Multiplying by $A$ on the right, we arrive at the most common and practical definition:

$$A^T A = I$$

where $I$ is the $n \times n$ identity matrix. It follows that $A A^T = I$ as well.

    ---

    Key Concepts

    1. The Orthonormal Property of Columns and Rows

The definition $A^T A = I$ provides deep insight into the structure of an orthogonal matrix. Let us consider an $n \times n$ matrix $A$ with columns $c_1, c_2, \dots, c_n$ and rows $r_1, r_2, \dots, r_n$.

$$A = \begin{bmatrix} | & | & & | \\ c_1 & c_2 & \dots & c_n \\ | & | & & | \end{bmatrix}$$

The transpose of $A$, denoted $A^T$, has the columns of $A$ as its rows:

$$A^T = \begin{bmatrix} \text{---} & c_1^T & \text{---} \\ \text{---} & c_2^T & \text{---} \\ & \vdots & \\ \text{---} & c_n^T & \text{---} \end{bmatrix}$$

Now, let us examine the product $A^T A$:

$$A^T A = \begin{bmatrix} c_1^T \\ c_2^T \\ \vdots \\ c_n^T \end{bmatrix} \begin{bmatrix} c_1 & c_2 & \dots & c_n \end{bmatrix} = \begin{bmatrix} c_1^T c_1 & c_1^T c_2 & \dots & c_1^T c_n \\ c_2^T c_1 & c_2^T c_2 & \dots & c_2^T c_n \\ \vdots & \vdots & \ddots & \vdots \\ c_n^T c_1 & c_n^T c_2 & \dots & c_n^T c_n \end{bmatrix}$$

The entry in the $i$-th row and $j$-th column of this product is the dot product $c_i^T c_j$. For $A^T A$ to be the identity matrix $I$, this product matrix must have 1s on the diagonal and 0s elsewhere. This leads to the following condition:

$$c_i^T c_j = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$

This is the definition of an orthonormal set of vectors: the vectors are mutually orthogonal ($c_i^T c_j = 0$ for $i \neq j$) and each has a norm (length) of 1 ($c_i^T c_i = \|c_i\|^2 = 1$). A similar analysis of $A A^T = I$ shows that the rows of $A$ also form an orthonormal set.

    Must Remember

    A square matrix is orthogonal if and only if its columns form an orthonormal basis. Equivalently, a square matrix is orthogonal if and only if its rows form an orthonormal basis. This is often the most direct way to verify orthogonality in an exam.

    Worked Example:

Problem: Verify that the following matrix $R$ is an orthogonal matrix.

$$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

Solution:

Let the columns be $c_1 = \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}$ and $c_2 = \begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix}$.

Step 1: Check if the columns are orthogonal by computing their dot product.

$$c_1^T c_2 = (\cos\theta)(-\sin\theta) + (\sin\theta)(\cos\theta) = 0$$

Since the dot product is zero, the columns are orthogonal.

Step 2: Check if each column has a norm of 1.

$$\|c_1\|^2 = \cos^2\theta + \sin^2\theta = 1, \qquad \|c_2\|^2 = (-\sin\theta)^2 + \cos^2\theta = 1$$

Both columns have a norm of 1.

Answer: Since the columns of $R$ form an orthonormal set, the matrix $R$ is an orthogonal matrix. This particular matrix represents a counter-clockwise rotation by an angle $\theta$ in the 2D plane.
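The verification above can be replayed numerically for a concrete angle. A minimal NumPy sketch (the specific angle is an arbitrary choice of ours):

```python
import numpy as np

theta = 0.7  # an arbitrary angle in radians
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonality: R^T R = I.
assert np.allclose(R.T @ R, np.eye(2))

# A proper rotation has determinant +1.
assert np.isclose(np.linalg.det(R), 1.0)
```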

    ---

    2. Geometric Interpretation: Preservation of Norm and Angle

The defining characteristic of an orthogonal transformation is that it preserves the geometry of space. This is formalized by stating that it preserves the Euclidean norm (length) of any vector and the dot product (and thus the angle) between any two vectors.

Let us prove that the algebraic definition ($A^T A = I$) implies the geometric property ($\|Ax\| = \|x\|$).

Proof: $A^T A = I \implies \|Ax\| = \|x\|$ for all $x \in \mathbb{R}^n$

Step 1: Start with the squared norm of the transformed vector, $\|Ax\|^2$.

$$\|Ax\|^2 = (Ax)^T (Ax)$$

Step 2: Apply the transpose property $(AB)^T = B^T A^T$.

$$\|Ax\|^2 = (x^T A^T)(Ax)$$

Step 3: Rearrange the terms using associativity of matrix multiplication.

$$\|Ax\|^2 = x^T (A^T A) x$$

Step 4: Substitute the condition for an orthogonal matrix, $A^T A = I$.

$$\|Ax\|^2 = x^T I x = x^T x$$

Step 5: Recognize that $x^T x$ is the definition of the squared norm of $x$.

$$\|Ax\|^2 = \|x\|^2$$

Since norms are non-negative, taking the square root gives $\|Ax\| = \|x\|$. Thus, an orthogonal transformation preserves the length of vectors.

This property is fundamental and has been the central concept tested in past GATE questions. We can extend this to show that the dot product is also preserved. For any two vectors $x, y \in \mathbb{R}^n$:

$$(Ax)^T (Ay) = (x^T A^T)(Ay) = x^T (A^T A) y = x^T I y = x^T y$$

Since $(Ax) \cdot (Ay) = x \cdot y$, the angle between the transformed vectors remains the same as the angle between the original vectors.
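Both preservation properties can be spot-checked numerically. A short NumPy sketch with an arbitrary angle and vectors of our own choosing:

```python
import numpy as np

theta = 1.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([3.0, -4.0])
y = np.array([1.0,  2.0])

# Lengths are preserved: ||Rx|| = ||x||.
assert np.isclose(np.linalg.norm(R @ x), np.linalg.norm(x))

# Dot products (and hence angles) are preserved: (Rx).(Ry) = x.y.
assert np.isclose((R @ x) @ (R @ y), x @ y)
```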






[Figure: an orthogonal transformation (rotation) maps x to Ax; vector length is preserved, ||Ax|| = ||x||.]

    ---

    3. Determinant and Eigenvalues

    The properties of the determinant and eigenvalues of an orthogonal matrix are direct consequences of its defining relation.

    Determinant

Let $A$ be an orthogonal matrix. We know that $A^T A = I$.

Step 1: Take the determinant of both sides of the equation.

$$\det(A^T A) = \det(I)$$

Step 2: Use the properties $\det(AB) = \det(A)\det(B)$ and $\det(A^T) = \det(A)$.

$$\det(A^T)\det(A) = 1 \implies (\det(A))^2 = 1$$

Step 3: Solve for $\det(A)$.

$$\det(A) = \pm 1$$

This is a powerful result. It tells us that orthogonal transformations preserve volume (since $|\det(A)| = 1$) and are always invertible.

• If $\det(A) = +1$, the transformation is called a proper rotation. It preserves the orientation of space.

• If $\det(A) = -1$, the transformation is an improper rotation, which involves a reflection and possibly a rotation. It reverses the orientation of space.


    Eigenvalues


    Let λ\lambda be an eigenvalue of a real orthogonal matrix AA with a corresponding eigenvector vv (which may be in Cn\mathbb{C}^n).

    Av=λvAv = \lambda v

    Step 1: Take the norm of both sides.

    Av=λv||Av|| = ||\lambda v||

    Step 2: Use the property of norms cv=cv||cv|| = |c| ||v|| for a scalar cc.

    Av=λv||Av|| = |\lambda| ||v||

    Step 3: Since AA is orthogonal, we know it preserves the norm, so Av=v||Av|| = ||v||.

    v=λv||v|| = |\lambda| ||v||

    Step 4: Since vv is an eigenvector, it is non-zero, so v0||v|| \neq 0. We can divide by v||v||.

    λ=1|\lambda| = 1

    This means all eigenvalues of an orthogonal matrix must have a modulus (or absolute value) of 1. They lie on the unit circle in the complex plane.

    If an eigenvalue of a real orthogonal matrix is a real number, then it must be either +1+1 or 1-1. However, it is crucial to remember that eigenvalues can be complex. For instance, the 2D rotation matrix for θ=90\theta=90^\circ has eigenvalues ii and i-i, both of which have a modulus of 1.
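
    The 90° rotation mentioned above makes a quick numerical check; this sketch computes its eigenvalues and confirms each has modulus 1:

    ```python
    import numpy as np

    # 2D rotation by 90 degrees
    Q = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    eigenvalues = np.linalg.eigvals(Q)  # the complex pair +i, -i

    # Every eigenvalue of an orthogonal matrix lies on the unit circle
    assert np.allclose(np.abs(eigenvalues), 1.0)
    print(eigenvalues)
    ```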

    ---

    4. Rank and Invertibility

    From the determinant property, we know that det(A)0\det(A) \neq 0 for any orthogonal matrix AA. A square matrix is invertible if and only if its determinant is non-zero. Therefore, every orthogonal matrix is invertible.

    Furthermore, for an n×nn \times n matrix, being invertible is equivalent to having full rank.

    Rank of an Orthogonal Matrix: For any n×nn \times n orthogonal matrix AA,

    rank(A)=n\operatorname{rank}(A) = n

    This means the transformation maps Rn\mathbb{R}^n onto itself, and the null space of AA contains only the zero vector. The columns (and rows) are linearly independent; indeed, orthonormality is a stronger condition than mere linear independence.

    ---

    Problem-Solving Strategies

    💡 GATE Strategy: Check for Orthogonality

    To determine if a given n×nn \times n matrix AA is orthogonal, you have two primary methods:

    • Compute ATAA^T A: Calculate the product and see if it equals the identity matrix II. This can be computationally intensive for n>2n > 2.

    • Check Column (or Row) Orthonormality: This is often much faster.

    - Pick two distinct columns, cic_i and cjc_j, and compute their dot product ciTcjc_i^T c_j. If it is not zero, the matrix is not orthogonal.
    - If all pairs are orthogonal, compute the squared norm ci2||c_i||^2 for each column. If any squared norm is not equal to 1, the matrix is not orthogonal.
    - If all columns are mutually orthogonal and each has norm 1, the matrix is orthogonal.

    For GATE problems, the second method is typically more efficient.
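
    The column-wise check translates directly into code. Below is a minimal sketch (the helper name `is_orthogonal` is our own) that tests pairwise dot products and column norms:

    ```python
    import numpy as np

    def is_orthogonal(A, tol=1e-10):
        """Check column orthonormality: pairwise dot products 0, squared norms 1."""
        A = np.asarray(A, dtype=float)
        n = A.shape[1]
        for i in range(n):
            if abs(A[:, i] @ A[:, i] - 1.0) > tol:   # unit length
                return False
            for j in range(i + 1, n):
                if abs(A[:, i] @ A[:, j]) > tol:      # mutually orthogonal
                    return False
        return True

    # Rotation matrix passes; orthogonal-but-not-unit columns fail
    C = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2)
    B = np.array([[1.0, 0.0], [0.0, -2.0]])
    print(is_orthogonal(C), is_orthogonal(B))
    ```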

    ---


    Common Mistakes

    ⚠️ Avoid These Errors
      • Confusing Orthogonal with Orthonormal: Students often check that the columns are orthogonal (ciTcj=0c_i^T c_j = 0) but forget to check that they are also of unit length (ci=1||c_i|| = 1). An orthogonal matrix requires an orthonormal set of columns/rows.
    Correct Approach: Always check both conditions: orthogonality (dot product is zero) and normalization (norm is one).
      • Assuming Eigenvalues are Always Real: A common mistake is to assume that because the matrix has real entries, its eigenvalues must be ±1\pm 1. This is false. A real matrix can have complex eigenvalues.
    Correct Approach: Remember that eigenvalues of an orthogonal matrix must have a modulus of 1 (λ=1|\lambda|=1). This includes real values ±1\pm 1 as well as complex conjugate pairs on the unit circle (e.g., eiθe^{i\theta}, eiθe^{-i\theta}).
      • Assuming ATA=IA^T A = I implies AA is symmetric: The definition AT=A1A^T = A^{-1} is for orthogonal matrices. The definition for symmetric matrices is AT=AA^T = A. The only matrices that are both orthogonal and symmetric are those where A2=IA^2 = I, such as the identity matrix or reflection matrices.
    Correct Approach: Keep the definitions separate. Orthogonality relates the transpose to the inverse, while symmetry relates the transpose to the matrix itself.

    ---

    Practice Questions

    :::question type="MCQ" question="Which of the following matrices is an orthogonal matrix?" options=["A=[1111]A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}","B=[1002]B = \begin{bmatrix} 1 & 0 \\ 0 & -2 \end{bmatrix}","C=12[1111]C = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}","D=[1101]D = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}"] answer="C=12[1111]C = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}" hint="Check if the columns of each matrix form an orthonormal set. Remember to account for any scalar multiple outside the matrix." solution="
    Let's analyze each option by checking for column orthonormality.

    Option A: The columns are c1=[1,1]Tc_1 = [1, 1]^T and c2=[1,1]Tc_2 = [1, 1]^T. The norm is c1=12+12=21||c_1|| = \sqrt{1^2+1^2} = \sqrt{2} \neq 1. So, A is not orthogonal.

    Option B: The columns are c1=[1,0]Tc_1 = [1, 0]^T and c2=[0,2]Tc_2 = [0, -2]^T. The norm is c2=02+(2)2=21||c_2|| = \sqrt{0^2+(-2)^2} = 2 \neq 1. So, B is not orthogonal.

    Option C: The matrix is

    C=[1/21/21/21/2]C = \begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix}

    The columns are c1=[1/2,1/2]Tc_1 = [1/\sqrt{2}, 1/\sqrt{2}]^T and c2=[1/2,1/2]Tc_2 = [-1/\sqrt{2}, 1/\sqrt{2}]^T.
    • Dot product:

    c1Tc2=(1/2)(1/2)+(1/2)(1/2)=1/2+1/2=0c_1^T c_2 = (1/\sqrt{2})(-1/\sqrt{2}) + (1/\sqrt{2})(1/\sqrt{2}) = -1/2 + 1/2 = 0

    They are orthogonal.
    • Norms:

    c12=(1/2)2+(1/2)2=1/2+1/2=1||c_1||^2 = (1/\sqrt{2})^2 + (1/\sqrt{2})^2 = 1/2 + 1/2 = 1

    c22=(1/2)2+(1/2)2=1/2+1/2=1||c_2||^2 = (-1/\sqrt{2})^2 + (1/\sqrt{2})^2 = 1/2 + 1/2 = 1

    Since the columns are orthonormal, C is an orthogonal matrix.

    Option D: The columns are c1=[1,0]Tc_1 = [1, 0]^T and c2=[1,1]Tc_2 = [1, 1]^T. The dot product is

    c1Tc2=11+01=10c_1^T c_2 = 1 \cdot 1 + 0 \cdot 1 = 1 \neq 0

    They are not orthogonal. So, D is not orthogonal.

    Answer: \boxed{C = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}}
    "
    :::

    :::question type="NAT" question="Consider the matrix A=[1/2a3/2b]A = \begin{bmatrix} 1/2 & a \\ \sqrt{3}/2 & b \end{bmatrix}. If AA is an orthogonal matrix with a determinant of +1, what is the value of bb?" answer="0.5" hint="For an orthogonal matrix, the columns must be orthonormal. Use this to find the possible values for the second column. Then use the determinant condition to select the correct solution." solution="
    Let the columns be c1=[1/2,3/2]Tc_1 = [1/2, \sqrt{3}/2]^T and c2=[a,b]Tc_2 = [a, b]^T.

    Step 1: Use the orthonormality conditions.
    The norm of the first column is

    c12=(1/2)2+(3/2)2=1/4+3/4=1||c_1||^2 = (1/2)^2 + (\sqrt{3}/2)^2 = 1/4 + 3/4 = 1

    It is a unit vector.

    The second column must also be a unit vector:

    a2+b2=1a^2 + b^2 = 1

    The columns must be orthogonal:

    c1Tc2=12a+32b=0    a=3bc_1^T c_2 = \frac{1}{2}a + \frac{\sqrt{3}}{2}b = 0 \implies a = -\sqrt{3}b

    Step 2: Substitute aa into the norm equation.

    (3b)2+b2=1(-\sqrt{3}b)^2 + b^2 = 1

    3b2+b2=13b^2 + b^2 = 1

    4b2=1    b2=1/4    b=±1/24b^2 = 1 \implies b^2 = 1/4 \implies b = \pm 1/2

    Step 3: Determine the corresponding values of aa.
    If b=1/2b = 1/2, then a=3/2a = -\sqrt{3}/2.
    If b=1/2b = -1/2, then a=3/2a = \sqrt{3}/2.

    Step 4: Use the determinant condition det(A)=1\det(A) = 1.

    det(A)=(1/2)(b)(a)(3/2)\det(A) = (1/2)(b) - (a)(\sqrt{3}/2)

    Case 1: b=1/2,a=3/2b = 1/2, a = -\sqrt{3}/2

    det(A)=(1/2)(1/2)(3/2)(3/2)=1/4+3/4=1\det(A) = (1/2)(1/2) - (-\sqrt{3}/2)(\sqrt{3}/2) = 1/4 + 3/4 = 1

    This is a valid solution.

    Case 2: b=1/2,a=3/2b = -1/2, a = \sqrt{3}/2

    det(A)=(1/2)(1/2)(3/2)(3/2)=1/43/4=1\det(A) = (1/2)(-1/2) - (\sqrt{3}/2)(\sqrt{3}/2) = -1/4 - 3/4 = -1

    This is not the required solution.

    Step 5: The only solution that satisfies all conditions is b=1/2b=1/2.

    Answer: \boxed{0.5}
    "
    :::

    :::question type="MSQ" question="Let QQ be an n×nn \times n real orthogonal matrix where n>1n > 1. Which of the following statements is/are ALWAYS true?" options=["All eigenvalues of QQ are real.","The matrix Q2Q^2 is also an orthogonal matrix.","The trace of QQ, Tr(Q)(Q), must be an integer.","The columns of QQ are linearly independent."] answer="The matrix Q2Q^2 is also an orthogonal matrix.,The columns of QQ are linearly independent." hint="Consider the properties of orthogonal matrices. For statements that might be false, try to construct a 2x2 counterexample, like a rotation matrix." solution="
    Let's analyze each statement.

    Statement A: All eigenvalues of QQ are real.
    This is false. Consider the 2D rotation matrix for θ=90\theta = 90^\circ:

    Q=[0110]Q = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}

    This is an orthogonal matrix. Its characteristic equation is det(QλI)=0\det(Q - \lambda I) = 0, which is
    (λ)(λ)(1)(1)=λ2+1=0(-\lambda)(-\lambda) - (-1)(1) = \lambda^2 + 1 = 0

    The eigenvalues are λ=±i\lambda = \pm i, which are not real.

    Statement B: The matrix Q2Q^2 is also an orthogonal matrix.
    Let's check the defining property for Q2Q^2. We need to show that (Q2)T(Q2)=I(Q^2)^T (Q^2) = I.
    We know that for an orthogonal matrix QQ, QT=Q1Q^T = Q^{-1}.
    The transpose of Q2Q^2 is (Q2)T=(QQ)T=QTQT(Q^2)^T = (Q Q)^T = Q^T Q^T.
    The inverse of Q2Q^2 is (Q2)1=(QQ)1=Q1Q1(Q^2)^{-1} = (Q Q)^{-1} = Q^{-1} Q^{-1}.
    Since Q1=QTQ^{-1} = Q^T, we have:

    (Q2)1=Q1Q1=QTQT=(Q2)T\begin{aligned} (Q^2)^{-1} & = Q^{-1} Q^{-1} \\ & = Q^T Q^T \\ & = (Q^2)^T \end{aligned}

    Since the inverse of Q2Q^2 is equal to its transpose, Q2Q^2 is an orthogonal matrix. This statement is true.

    Statement C: The trace of QQ, Tr(Q)\operatorname{Tr}(Q), must be an integer.
    The trace is the sum of the eigenvalues. As shown in statement A, the eigenvalues can be complex numbers like ii and i-i. The trace of that matrix is 0+0=00+0=0, which is an integer. However, consider a rotation by 4545^\circ:

    Q=[1/21/21/21/2]Q = \begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix}

    The trace is 1/2+1/2=2/2=21/\sqrt{2} + 1/\sqrt{2} = 2/\sqrt{2} = \sqrt{2}, which is not an integer. So, this statement is false.

    Statement D: The columns of QQ are linearly independent.
    Since QQ is an orthogonal matrix, its determinant is ±1\pm 1, which is non-zero. A square matrix has linearly independent columns if and only if its determinant is non-zero. Therefore, the columns of QQ must be linearly independent. This statement is true.

    Answer: \boxed{\text{The matrix } Q^2 \text{ is also an orthogonal matrix.,The columns of } Q \text{ are linearly independent.}}
    "
    :::

    ---

    Summary

    Key Takeaways for GATE

    • Algebraic Definition: An n×nn \times n matrix AA is orthogonal if ATA=IA^T A = I, which implies A1=ATA^{-1} = A^T.

    • Structural Property: The columns (and rows) of an orthogonal matrix form an orthonormal set. They are mutually orthogonal and each has a length of 1.

    • Geometric Property: Orthogonal transformations are isometries. They preserve the Euclidean norm of vectors (Ax=x||Ax||=||x||) and the dot product between vectors ((Ax)(Ay)=xy(Ax)\cdot(Ay) = x \cdot y), thus preserving lengths and angles.

    • Determinant and Eigenvalues: The determinant of an orthogonal matrix is always ±1\pm 1. All its eigenvalues lie on the unit circle in the complex plane, i.e., λ=1|\lambda| = 1.

    • Invertibility: Every orthogonal matrix is invertible and has full rank (nn).

    ---

    What's Next?

    💡 Continue Learning

    A solid understanding of orthogonal matrices is a gateway to several advanced and critical topics in linear algebra for data analysis.

      • QR Decomposition: This is a method to decompose any matrix AA with linearly independent columns into a product A=QRA = QR, where QQ is an orthogonal matrix and RR is an upper triangular matrix. This is fundamental for solving linear systems and in eigenvalue algorithms.
      • Singular Value Decomposition (SVD): The SVD of a matrix AA is given by A=UΣVTA = U \Sigma V^T, where UU and VV are orthogonal matrices. SVD is one of the most important matrix factorizations, used extensively in dimensionality reduction (like PCA), recommender systems, and data compression.
    Mastering the properties of orthogonal matrices will significantly clarify the mechanics and geometric meaning of these powerful decomposition techniques.

    ---

    💡 Moving Forward

    Now that you understand Orthogonal Matrix, let's explore Idempotent Matrix which builds on these concepts.

    ---

    Part 3: Idempotent Matrix

    Introduction

    In our study of linear algebra, we encounter various classes of matrices, each distinguished by specific algebraic properties. While general matrices form the bedrock of the subject, certain specialized matrices, such as symmetric, orthogonal, or nilpotent matrices, provide deeper structural insights and are instrumental in a wide range of applications, from geometry to data analysis. Among these is the class of idempotent matrices.

    An idempotent matrix is defined by a simple yet powerful property related to its self-multiplication. This property, A2=AA^2 = A, might seem abstract at first, but it is the algebraic manifestation of a geometric projection. Understanding idempotent matrices is crucial as they form the foundation for projection operators, which are fundamental in statistics, machine learning (particularly in linear regression models), and signal processing. For the GATE examination, a firm grasp of their definition, key properties related to eigenvalues, trace, and rank is essential for solving targeted problems efficiently.

    📖 Idempotent Matrix

    A square matrix AA of order nn is said to be an idempotent matrix if it satisfies the condition:

    A2=AA^2 = A

    This implies that applying the linear transformation represented by AA twice is equivalent to applying it just once.

    ---

    Key Concepts

    The defining property of an idempotent matrix gives rise to several other important and testable characteristics. Let us explore the most significant of these.

    1. Eigenvalues of an Idempotent Matrix

    One of the most critical properties of an idempotent matrix pertains to its eigenvalues. This property is frequently leveraged in competitive examinations to determine characteristics of a matrix without extensive computation.

    We can prove that the eigenvalues of any idempotent matrix are restricted to only two possible values: 0 or 1.

    Proof:

    Let AA be an idempotent matrix. Let λ\lambda be an eigenvalue of AA and vv be the corresponding non-zero eigenvector.

    By the definition of an eigenvalue and eigenvector, we have:

    Av=λvAv = \lambda v

    Multiplying both sides by AA from the left, we get:

    A(Av)=A(λv)A(Av) = A(\lambda v)
    A2v=λ(Av)A^2 v = \lambda (Av)

    Since AA is idempotent, we know that A2=AA^2 = A. Substituting this into the equation:

    Av=λ(Av)A v = \lambda (Av)

    We also know that Av=λvAv = \lambda v. Substituting this into the right-hand side yields:

    λv=λ(λv)\lambda v = \lambda (\lambda v)
    λv=λ2v\lambda v = \lambda^2 v

    Rearranging the terms, we obtain:

    (λ2λ)v=0(\lambda^2 - \lambda) v = 0

    Since vv is an eigenvector, it is a non-zero vector (v0v \neq 0). Therefore, the scalar multiple must be zero:

    λ2λ=0\lambda^2 - \lambda = 0
    λ(λ1)=0\lambda(\lambda - 1) = 0

    This gives us two possible solutions for λ\lambda:

    λ=0orλ=1\lambda = 0 \quad \text{or} \quad \lambda = 1

    Thus, the only possible eigenvalues for an idempotent matrix are 0 and 1.
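
    As a numerical illustration (using an assumed example: the projection matrix onto the line y = x), the eigenvalues come out as exactly 0 and 1:

    ```python
    import numpy as np

    # Projection onto the line spanned by a = [1, 1]: A = a a^T / (a^T a)
    A = np.array([[0.5, 0.5],
                  [0.5, 0.5]])

    assert np.allclose(A @ A, A)  # A is idempotent

    eigenvalues = np.sort(np.linalg.eigvals(A).real)
    print(eigenvalues)  # eigenvalues of an idempotent matrix are 0 and 1
    ```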

    2. Properties of (IA)(I - A)

    If a matrix AA is idempotent, the related matrix (IA)(I-A), where II is the identity matrix of the same order, also exhibits interesting properties.

    Property: If AA is idempotent, then (IA)(I-A) is also idempotent.

    Proof:

    To prove that (IA)(I-A) is idempotent, we must show that (IA)2=(IA)(I-A)^2 = (I-A).

    Step 1: Expand the square term.

    (IA)2=(IA)(IA)(I-A)^2 = (I-A)(I-A)

    Step 2: Apply the distributive property of matrix multiplication.

    (IA)2=I(IA)A(IA)(I-A)^2 = I(I-A) - A(I-A)
    (IA)2=I2IAAI+A2(I-A)^2 = I^2 - IA - AI + A^2

    Step 3: Use the properties I2=II^2=I, IA=AIA=A, AI=AAI=A, and the given condition A2=AA^2=A.

    (IA)2=IAA+A(I-A)^2 = I - A - A + A

    Step 4: Simplify the expression.

    (IA)2=IA(I-A)^2 = I - A

    Since (IA)2=(IA)(I-A)^2 = (I-A), the matrix (IA)(I-A) is idempotent. Furthermore, we observe that A(IA)=AA2=AA=OA(I-A) = A - A^2 = A - A = O, where OO is the null matrix. This means the column space of (IA)(I-A) lies within the null space of AA; the two column spaces are complementary, intersecting only in the zero vector.
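
    Both facts can be checked in a few lines; here is a sketch with an assumed example idempotent matrix:

    ```python
    import numpy as np

    # An idempotent matrix (not symmetric): A @ A == A
    A = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
    I = np.eye(2)

    assert np.allclose(A @ A, A)                        # A is idempotent
    assert np.allclose((I - A) @ (I - A), I - A)        # so is I - A
    assert np.allclose(A @ (I - A), np.zeros((2, 2)))   # A(I - A) = O
    print("verified")
    ```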

    3. Trace and Rank

    A particularly useful property for numerical answer type (NAT) questions connects the trace of an idempotent matrix to its rank.

    📐 Trace-Rank Property of Idempotent Matrices

    For any idempotent matrix AA, its trace is equal to its rank.

    tr(A)=rank(A)\operatorname{tr}(A) = \operatorname{rank}(A)

    Variables:

      • tr(A)\operatorname{tr}(A) = The trace of matrix AA (sum of its diagonal elements).

      • rank(A)\operatorname{rank}(A) = The rank of matrix AA (dimension of its column space).


    When to use: This formula is extremely useful in GATE questions where the trace is given or easily calculable, and the rank is asked, or vice-versa. It provides a direct link between an algebraic property (trace) and a geometric one (rank).

    Justification: The trace of any square matrix is the sum of its eigenvalues. Since the eigenvalues of an idempotent matrix can only be 0 or 1, the trace is simply the count of eigenvalues that are equal to 1. An idempotent matrix is always diagonalizable, and its rank is the number of non-zero eigenvalues. Consequently, the rank is also the count of eigenvalues equal to 1. It follows that the trace and rank must be equal.

    Worked Example:

    Problem: Verify that the matrix A=(2121)A = \begin{pmatrix} 2 & -1 \\ 2 & -1 \end{pmatrix} is idempotent and confirm that its trace equals its rank.

    Solution:

    Step 1: Check for idempotency by computing A2A^2.

    A2=(2121)(2121)A^2 = \begin{pmatrix} 2 & -1 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} 2 & -1 \\ 2 & -1 \end{pmatrix}
    A2=((2)(2)+(1)(2)(2)(1)+(1)(1)(2)(2)+(1)(2)(2)(1)+(1)(1))A^2 = \begin{pmatrix} (2)(2) + (-1)(2) & (2)(-1) + (-1)(-1) \\ (2)(2) + (-1)(2) & (2)(-1) + (-1)(-1) \end{pmatrix}
    A2=(422+1422+1)A^2 = \begin{pmatrix} 4 - 2 & -2 + 1 \\ 4 - 2 & -2 + 1 \end{pmatrix}
    A2=(2121)A^2 = \begin{pmatrix} 2 & -1 \\ 2 & -1 \end{pmatrix}

    Since A2=AA^2=A, the matrix is idempotent.

    Step 2: Calculate the trace of AA.

    tr(A)=2+(1)=1\operatorname{tr}(A) = 2 + (-1) = 1

    Step 3: Calculate the rank of AA.
    The second row is a multiple of the first row (R2=1R1R_2 = 1 \cdot R_1). Therefore, the rows are linearly dependent. The number of linearly independent rows (or columns) is 1.

    rank(A)=1\operatorname{rank}(A) = 1

    Conclusion: We observe that tr(A)=rank(A)=1\operatorname{tr}(A) = \operatorname{rank}(A) = 1, which confirms the property.
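
    The worked example above, together with the high-power shortcut A^k = A for k ≥ 1, can be verified numerically:

    ```python
    import numpy as np

    A = np.array([[2.0, -1.0],
                  [2.0, -1.0]])

    assert np.allclose(A @ A, A)        # idempotent
    print(np.trace(A))                  # trace = 1.0
    print(np.linalg.matrix_rank(A))     # rank  = 1

    # High powers collapse: A^100 = A
    assert np.allclose(np.linalg.matrix_power(A, 100), A)
    ```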

    ---

    Problem-Solving Strategies

    💡 GATE Strategy: Quick Verification

    When a problem involves an unknown matrix AA stated to be idempotent, immediately recall its core properties:

    • Eigenvalues: Any question about eigenvalues can be narrowed down to 0 and 1.

    • Trace and Rank: If you are given the trace, you instantly know the rank, and vice versa. This is a powerful shortcut for NAT questions.

    • Powers: Remember that Ak=AA^k = A for any integer k1k \geq 1. If a question involves high powers like A100A^{100}, the answer is simply AA.

    ---

    Common Mistakes

    ⚠️ Avoid These Errors
      • Assuming Invertibility: Students often assume an idempotent matrix might be invertible. In fact, a non-identity idempotent matrix is always singular (non-invertible): if AA is idempotent and AIA \neq I, it must have at least one eigenvalue equal to 0, which implies its determinant is 0.
    Correct Approach: Remember that only the identity matrix II is both idempotent and invertible. For any other idempotent matrix AA, det(A)=0\det(A) = 0.
      • Confusing with Other Matrix Types: The property A2=AA^2=A can be easily confused with others.
    - Involutory: A2=IA^2=I (e.g., reflection matrices)
    - Nilpotent: Ak=OA^k=O for some integer kk (e.g., strictly upper/lower triangular matrices)
    Correct Approach: Associate "idempotent" with "projection." Applying a projection twice is the same as applying it once. This mental model helps secure the definition A2=AA^2=A.

    ---

    Practice Questions

    :::question type="MCQ" question="Let AA be a non-zero idempotent matrix of order nn. Which of the following statements is always true?" options=["AA is invertible", "IAI-A is the zero matrix", "AA must have an eigenvalue of 0", "The rank of AA is equal to its trace"] answer="The rank of AA is equal to its trace" hint="Recall the fundamental properties of eigenvalues, trace, and rank for idempotent matrices. Consider the case where AA is the identity matrix to eliminate some options." solution="Step 1: Analyze the options.

    • Option A: If AA is idempotent and AIA \neq I, it must have an eigenvalue of 0, so det(A)=0\det(A)=0. Thus, AA is not invertible. The identity matrix II is idempotent and invertible, but the statement must be always true. So this option is false.

    • Option B: If IA=OI-A = O, then A=IA=I. But AA can be any non-zero idempotent matrix, not necessarily the identity matrix. So this is false.

    • Option C: The identity matrix II is idempotent, but its only eigenvalue is 1. So this is not always true.

    • Option D: A fundamental property of idempotent matrices is that their trace (sum of eigenvalues) equals their rank (number of non-zero eigenvalues). Since eigenvalues can only be 0 or 1, the trace counts the number of '1' eigenvalues, which is precisely the rank. This statement is always true.


    Result: The correct option is "The rank of AA is equal to its trace".
    "
    :::

    :::question type="NAT" question="An idempotent matrix AA of order 4×44 \times 4 has a trace of 3. What is the rank of the matrix (IA)(I-A)?" answer="1" hint="Use the trace-rank property for both AA and (IA)(I-A). Remember how the trace of (IA)(I-A) relates to the trace of AA." solution="Step 1: Use the trace-rank property for matrix AA.
    For an idempotent matrix, rank(A)=tr(A)\operatorname{rank}(A) = \operatorname{tr}(A).
    Given tr(A)=3\operatorname{tr}(A) = 3.
    Therefore, rank(A)=3\operatorname{rank}(A) = 3.

    Step 2: Determine the trace of (IA)(I-A).
    The matrix AA is of order 4×44 \times 4, so II is the 4×44 \times 4 identity matrix.
    We know that tr(IA)=tr(I)tr(A)\operatorname{tr}(I-A) = \operatorname{tr}(I) - \operatorname{tr}(A).
    The trace of a 4×44 \times 4 identity matrix is 1+1+1+1=41+1+1+1=4.

    tr(IA)=43=1\operatorname{tr}(I-A) = 4 - 3 = 1

    Step 3: Use the trace-rank property for matrix (IA)(I-A).
    If AA is idempotent, then (IA)(I-A) is also idempotent.
    Therefore, rank(IA)=tr(IA)\operatorname{rank}(I-A) = \operatorname{tr}(I-A).

    rank(IA)=1\operatorname{rank}(I-A) = 1

    Result: The rank of the matrix (IA)(I-A) is 1.
    "
    :::

    :::question type="MSQ" question="Let AA be a 3×33 \times 3 idempotent matrix such that AIA \neq I and AOA \neq O. Which of the following statements must be true?" options=["det(A)=0\det(A) = 0", "rank(A)=1\operatorname{rank}(A) = 1 or rank(A)=2\operatorname{rank}(A)=2", "AA is diagonalizable", "A+IA+I is idempotent"] answer="det(A)=0\det(A) = 0,rank(A)=1\operatorname{rank}(A) = 1 or rank(A)=2\operatorname{rank}(A)=2,AA is diagonalizable" hint="Analyze the possible eigenvalues and their implications for the determinant, rank, and diagonalizability." solution="Step 1: Analyze the conditions. AA is a 3×33 \times 3 idempotent matrix. Its eigenvalues (λ1,λ2,λ3)(\lambda_1, \lambda_2, \lambda_3) can only be 0 or 1.

    • AIA \neq I implies not all eigenvalues are 1.

    • AOA \neq O implies not all eigenvalues are 0.

    Therefore, the set of eigenvalues must be a mix of 0s and 1s.

    Step 2: Evaluate each option.

    • Option A: det(A)=0\det(A) = 0. The determinant is the product of eigenvalues. Since at least one eigenvalue must be 0 (because AIA \neq I), the product of eigenvalues will be 0. So, det(A)=0\det(A)=0. This statement is true.


    • Option B: rank(A)=1\operatorname{rank}(A) = 1 or rank(A)=2\operatorname{rank}(A)=2. The rank equals the number of non-zero eigenvalues. The possible sets of eigenvalues are {1,0,0}\{1, 0, 0\} or {1,1,0}\{1, 1, 0\}. In the first case, the rank is 1. In the second case, the rank is 2. The rank cannot be 3 (as AIA \neq I) or 0 (as AOA \neq O). This statement is true.


    • Option C: AA is diagonalizable. A matrix is diagonalizable if its minimal polynomial has distinct linear factors. The eigenvalues of AA are roots of λ2λ=0\lambda^2 - \lambda = 0. The minimal polynomial of an idempotent matrix must divide x2x=x(x1)x^2-x = x(x-1). Since the factors are distinct and linear, any idempotent matrix is diagonalizable. This statement is true.


    • Option D: A+IA+I is idempotent. Let's check: (A+I)2=A2+AI+IA+I2=A+A+A+I=3A+I(A+I)^2 = A^2 + AI + IA + I^2 = A + A + A + I = 3A+I. For A+IA+I to be idempotent, we need 3A+I=A+I3A+I = A+I, which implies 2A=O2A=O, or A=OA=O. But we are given AOA \neq O. So this statement is false.


    Result: The correct options are det(A)=0\det(A) = 0, rank(A)=1\operatorname{rank}(A) = 1 or rank(A)=2\operatorname{rank}(A)=2, and AA is diagonalizable.
    "
    :::

    ---

    Summary

    Key Takeaways for GATE

    • Definition is Key: An idempotent matrix AA satisfies A2=AA^2 = A. This is the starting point for all derivations.

    • Eigenvalues are 0 or 1: This is the most powerful property. It directly impacts the determinant, trace, and rank.

    • Trace Equals Rank: For any idempotent matrix, tr(A)=rank(A)\operatorname{tr}(A) = \operatorname{rank}(A). This is a crucial shortcut for numerical problems.

    • Complementary Idempotent: If AA is idempotent, so is (IA)(I-A). This property often appears in questions involving transformations.

    ---

    What's Next?

    💡 Continue Learning

    This topic connects to:

      • Projection Matrices: An idempotent matrix represents a projection. If it is also symmetric (AT=AA^T=A), it represents an orthogonal projection, a concept central to linear regression and the method of least squares. The "hat matrix" in regression is a prime example.

      • Other Special Matrices: Compare the properties of idempotent matrices (A2=AA^2=A) with involutory matrices (A2=IA^2=I) and nilpotent matrices (Ak=OA^k=O). Understanding their distinct eigenvalue structures is key to solving matrix identification problems.


    Master these connections to build a more holistic understanding of matrix properties for the GATE DA examination.

    ---


    💡 Moving Forward

    Now that you understand Idempotent Matrix, let's explore Projection Matrix which builds on these concepts.

    ---

    Part 4: Projection Matrix

    Introduction

    In our study of linear algebra, we frequently encounter the problem of finding the "best approximation" of a vector within a given subspace. The concept of projection provides a rigorous framework for this endeavor. A projection matrix is a linear transformation that maps a vector from a vector space onto a specified subspace. Geometrically, this operation finds the vector in the subspace that is closest to the original vector, effectively casting a "shadow" of the vector onto the subspace.

    The study of projection matrices is of paramount importance in data analysis and machine learning. They form the theoretical bedrock of fundamental algorithms such as Ordinary Least Squares (OLS) regression, where we project a vector of observations onto the column space of a design matrix to find the best-fit coefficients. Understanding their properties—idempotence, symmetry, and their characteristic eigenvalues and eigenspaces—is not merely an academic exercise; it provides the necessary tools to analyze and solve complex problems that appear in the GATE examination. We shall explore these properties in detail, connecting the geometric intuition with the algebraic formulation.

    📖 Orthogonal Projection

    An orthogonal projection is a linear transformation P:VVP: V \to V from a vector space VV to itself such that for any vector vV\mathbf{v} \in V, its image PvP\mathbf{v} lies in a specified subspace UVU \subseteq V, and the error vector vPv\mathbf{v} - P\mathbf{v} is orthogonal to every vector in UU. The matrix representation of this transformation is called the projection matrix.

    ---

    Key Concepts

    1. Projection onto a Line

    Let us begin with the simplest case: projecting a vector onto a line. Consider a line in Rn\mathbb{R}^n that passes through the origin and is defined by the direction of a non-zero vector a\mathbf{a}. We wish to find the projection of another vector, b\mathbf{b}, onto this line.

    The projection of b\mathbf{b} onto a\mathbf{a}, which we denote as p\mathbf{p}, will be some scalar multiple of a\mathbf{a}. Let us write this as p=x^a\mathbf{p} = \hat{x}\mathbf{a} for some scalar x^\hat{x}. The defining property of an orthogonal projection is that the error vector, e=bp=bx^a\mathbf{e} = \mathbf{b} - \mathbf{p} = \mathbf{b} - \hat{x}\mathbf{a}, must be orthogonal to the direction vector a\mathbf{a}. This orthogonality condition is expressed as:

    aT(bx^a)=0\mathbf{a}^T (\mathbf{b} - \hat{x}\mathbf{a}) = 0

    Expanding this expression, we have:

    aTbx^aTa=0\mathbf{a}^T\mathbf{b} - \hat{x}\mathbf{a}^T\mathbf{a} = 0

    Solving for the scalar x^\hat{x}, we find:

    x^=aTbaTa\hat{x} = \frac{\mathbf{a}^T\mathbf{b}}{\mathbf{a}^T\mathbf{a}}

    Now, substituting this back into our expression for the projection p\mathbf{p}:

    p=x^a=(aTbaTa)a\mathbf{p} = \hat{x}\mathbf{a} = \left( \frac{\mathbf{a}^T\mathbf{b}}{\mathbf{a}^T\mathbf{a}} \right) \mathbf{a}

    To find the projection matrix PP, we rearrange this expression to be of the form p=Pb\mathbf{p} = P\mathbf{b}. Using the associativity of matrix multiplication, we can write:

    p=(aaTaTa)b\mathbf{p} = \left( \frac{\mathbf{a}\mathbf{a}^T}{\mathbf{a}^T\mathbf{a}} \right) \mathbf{b}

    From this, we identify the projection matrix PP.








    [Figure: Projection onto a line (subspace U) — b is projected to p = Pb on the line spanned by a; the error e = b - p is orthogonal to the line.]

    📐 Projection Matrix onto a Line
    P=aaTaTaP = \frac{\mathbf{a}\mathbf{a}^T}{\mathbf{a}^T\mathbf{a}}

    Variables:

      • aRn\mathbf{a} \in \mathbb{R}^n is a non-zero column vector defining the line (subspace).

      • aT\mathbf{a}^T is the transpose of a\mathbf{a}.

      • aaT\mathbf{a}\mathbf{a}^T is an n×nn \times n matrix (outer product).

      • aTa\mathbf{a}^T\mathbf{a} is a scalar (inner product, or squared norm a2||\mathbf{a}||^2).


    Application: To find the matrix that projects any vector onto the line passing through the origin in the direction of a\mathbf{a}.
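
    The formula can be sketched numerically; here is a minimal example (with an assumed direction vector a = [3, 4]) that builds P and projects a vector onto the line:

    ```python
    import numpy as np

    a = np.array([[3.0], [4.0]])          # column vector defining the line

    # P = a a^T / (a^T a): outer product divided by the squared norm
    P = (a @ a.T) / (a.T @ a).item()

    b = np.array([[1.0], [0.0]])
    p = P @ b                              # projection of b onto the line
    e = b - p                              # error vector

    assert np.allclose(P @ P, P)           # projection matrices are idempotent
    assert abs((a.T @ e).item()) < 1e-12   # error is orthogonal to the line
    print(p.ravel())
    ```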

    ---

    2. Projection onto a Subspace

    We now generalize this concept to projection onto a k-dimensional subspace U of \mathbb{R}^n. Let the subspace U be spanned by a set of linearly independent vectors \{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_k\}. We can form a matrix A whose columns are these basis vectors: A = [\mathbf{a}_1 | \mathbf{a}_2 | \dots | \mathbf{a}_k].

    Any vector \mathbf{p} in the subspace U (the column space of A) can be written as a linear combination of the basis vectors, \mathbf{p} = A\hat{\mathbf{x}} for some coefficient vector \hat{\mathbf{x}} \in \mathbb{R}^k. Similar to the line case, the error vector \mathbf{b} - \mathbf{p} = \mathbf{b} - A\hat{\mathbf{x}} must be orthogonal to the subspace U. This means it must be orthogonal to every basis vector of U:

    \mathbf{a}_i^T (\mathbf{b} - A\hat{\mathbf{x}}) = 0 \quad \text{for } i=1, \dots, k

    We can write these k equations compactly in matrix form:

    A^T(\mathbf{b} - A\hat{\mathbf{x}}) = \mathbf{0}

    Distributing A^T, we obtain the normal equations:

    A^T\mathbf{b} - A^TA\hat{\mathbf{x}} = \mathbf{0} \implies A^TA\hat{\mathbf{x}} = A^T\mathbf{b}

    Since the columns of A are linearly independent, the matrix A^TA is invertible. We can solve for the coefficient vector \hat{\mathbf{x}}:

    \hat{\mathbf{x}} = (A^TA)^{-1}A^T\mathbf{b}

    The projected vector is \mathbf{p} = A\hat{\mathbf{x}}. Substituting the expression for \hat{\mathbf{x}}:

    \mathbf{p} = A((A^TA)^{-1}A^T\mathbf{b}) = (A(A^TA)^{-1}A^T)\mathbf{b}

    This gives us the general formula for the projection matrix.

    📐 General Projection Matrix
    P = A(A^TA)^{-1}A^T

    Variables:

      • A is an n \times k matrix whose columns form a basis for the k-dimensional subspace.

      • The columns of A must be linearly independent for (A^TA)^{-1} to exist.


    When to use: To project vectors onto the column space of any matrix A with linearly independent columns.
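The general formula can be checked numerically in NumPy. This is a minimal sketch; the basis vectors below are an illustrative assumption, not taken from the text:

```python
import numpy as np

# Basis of a 2-dimensional subspace of R^3 (columns of A; illustrative choice)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

# P = A (A^T A)^{-1} A^T — requires linearly independent columns
P = A @ np.linalg.inv(A.T @ A) @ A.T

# Defining properties of an orthogonal projection: symmetric and idempotent
assert np.allclose(P, P.T)
assert np.allclose(P @ P, P)

# Vectors already in Col(A) are left unchanged by P
v = A @ np.array([2.0, -1.0])   # an element of the column space
assert np.allclose(P @ v, v)
```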

    A significant simplification occurs if the basis for the subspace is orthonormal. Let the columns of a matrix Q = [\mathbf{q}_1 | \mathbf{q}_2 | \dots | \mathbf{q}_k] form an orthonormal basis for the subspace. By definition of orthonormality, \mathbf{q}_i^T \mathbf{q}_j = 0 for i \neq j and \mathbf{q}_i^T \mathbf{q}_i = 1. This implies that the matrix Q^TQ is the k \times k identity matrix, I_k.

    Q^TQ = I_k

    Substituting Q for A in the general formula:

    P = Q(Q^TQ)^{-1}Q^T = Q(I_k)^{-1}Q^T = QIQ^T = QQ^T

    This simplified form is extremely useful and frequently tested in GATE.

    📐 Projection Matrix for Orthonormal Basis
    P = QQ^T

    Variables:

      • Q is an n \times k matrix whose columns form an orthonormal basis for the subspace.


    When to use: When the basis vectors for the subspace are mutually orthogonal and have unit length. This formula is computationally much simpler.

    Worked Example:

    Problem: Find the projection matrix onto the subspace of \mathbb{R}^3 spanned by the orthonormal vectors \mathbf{q}_1 and \mathbf{q}_2, where:

    \mathbf{q}_1 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{bmatrix} \quad \text{and} \quad \mathbf{q}_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

    Solution:

    Step 1: Form the matrix Q with the given orthonormal vectors as its columns.

    Q = \begin{bmatrix} 1/\sqrt{2} & 0 \\ 1/\sqrt{2} & 0 \\ 0 & 1 \end{bmatrix}

    Step 2: Apply the formula P = QQ^T.

    P = \begin{bmatrix} 1/\sqrt{2} & 0 \\ 1/\sqrt{2} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}

    Step 3: Perform the matrix multiplication.

    P = \begin{bmatrix} (1/\sqrt{2})(1/\sqrt{2}) + 0 & (1/\sqrt{2})(1/\sqrt{2}) + 0 & (1/\sqrt{2})(0) + 0 \\ (1/\sqrt{2})(1/\sqrt{2}) + 0 & (1/\sqrt{2})(1/\sqrt{2}) + 0 & (1/\sqrt{2})(0) + 0 \\ (0)(1/\sqrt{2}) + (1)(0) & (0)(1/\sqrt{2}) + (1)(0) & (0)(0) + (1)(1) \end{bmatrix}

    Step 4: Simplify the resulting matrix.

    P = \begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix}

    Answer:

    \boxed{P = \begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix}}
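The worked example is easy to verify numerically. A quick NumPy check (not something you would do in the exam, but useful while studying):

```python
import numpy as np

# Orthonormal basis vectors from the worked example, as columns of Q
s = 1.0 / np.sqrt(2.0)
Q = np.array([[s, 0.0],
              [s, 0.0],
              [0.0, 1.0]])

P = Q @ Q.T

# Matches the hand-computed answer
expected = np.array([[0.5, 0.5, 0.0],
                     [0.5, 0.5, 0.0],
                     [0.0, 0.0, 1.0]])
assert np.allclose(P, expected)
```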

    ---

    3. Core Algebraic Properties of Projection Matrices

    An orthogonal projection matrix P possesses two defining algebraic properties.

  • Symmetry: P^T = P.

    Let us prove this using the general formula P = A(A^TA)^{-1}A^T:

    \begin{aligned} P^T & = (A(A^TA)^{-1}A^T)^T \\
    & = (A^T)^T ((A^TA)^{-1})^T A^T \\
    & = A ((A^TA)^T)^{-1} A^T \\
    & = A (A^TA)^{-1} A^T \\
    & = P\end{aligned}

  • Idempotence: P^2 = P.

    This property has a clear geometric meaning: projecting a vector that is already in the subspace does not change it. If we project \mathbf{b} to get \mathbf{p}, projecting \mathbf{p} again will still yield \mathbf{p}. Thus, P(P\mathbf{b}) = P\mathbf{b}, which implies P^2\mathbf{b} = P\mathbf{b} for all \mathbf{b}, so P^2 = P.

    Let us prove this algebraically for P = A(A^TA)^{-1}A^T:

    \begin{aligned} P^2 & = (A(A^TA)^{-1}A^T)(A(A^TA)^{-1}A^T) \\
    & = A(A^TA)^{-1}(A^TA)(A^TA)^{-1}A^T \\
    & = A(A^TA)^{-1} I A^T \\
    & = A(A^TA)^{-1}A^T \\
    & = P\end{aligned}

    Must Remember

    The two defining properties of an orthogonal projection matrix P are:

    • It is symmetric (P^T = P).

    • It is idempotent (P^2 = P).

    Any matrix satisfying these two conditions is an orthogonal projection matrix.
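These two conditions give a mechanical test for whether a given matrix is an orthogonal projection. A small NumPy sketch (the test matrices are illustrative; `P` is the matrix from the worked example above):

```python
import numpy as np

def is_orthogonal_projection(M, tol=1e-10):
    """True iff M is symmetric (M^T = M) and idempotent (M^2 = M)."""
    M = np.asarray(M, dtype=float)
    return np.allclose(M, M.T, atol=tol) and np.allclose(M @ M, M, atol=tol)

P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])

print(is_orthogonal_projection(P))          # True
print(is_orthogonal_projection(np.eye(3)))  # True: I projects onto all of R^3
print(is_orthogonal_projection(2 * P))      # False: (2P)^2 = 4P != 2P
```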

    ---

    4. Eigenvalues and Eigenspaces

    The properties of a projection matrix are elegantly reflected in its eigenvalues and eigenvectors. Let \lambda be an eigenvalue of P with corresponding eigenvector \mathbf{v} \neq \mathbf{0}.

    P\mathbf{v} = \lambda\mathbf{v}

    Now, let us apply the matrix P again:

    P(P\mathbf{v}) = P(\lambda\mathbf{v})
    P^2\mathbf{v} = \lambda(P\mathbf{v})

    Using the idempotence property P^2 = P and the eigenvalue definition P\mathbf{v} = \lambda\mathbf{v}:

    P\mathbf{v} = \lambda(\lambda\mathbf{v})
    \lambda\mathbf{v} = \lambda^2\mathbf{v}

    Since \mathbf{v} is a non-zero vector, we can conclude:

    \lambda = \lambda^2 \implies \lambda^2 - \lambda = 0 \implies \lambda(\lambda - 1) = 0

    This shows that the only possible eigenvalues for a projection matrix are \lambda = 0 and \lambda = 1.

    The eigenspaces associated with these eigenvalues have a direct geometric interpretation:
    * Eigenvalue \lambda = 1: The eigenvectors for \lambda = 1 are vectors that satisfy P\mathbf{v} = 1 \cdot \mathbf{v} = \mathbf{v}. These are precisely the vectors that are already in the subspace U onto which we are projecting. Thus, the eigenspace for \lambda = 1 is the column space of P, which is the subspace U.
    * Eigenvalue \lambda = 0: The eigenvectors for \lambda = 0 are vectors that satisfy P\mathbf{v} = 0 \cdot \mathbf{v} = \mathbf{0}. These are the vectors that are mapped to the zero vector by the projection. Geometrically, these are the vectors orthogonal to the subspace U. Thus, the eigenspace for \lambda = 0 is the null space of P, which is the orthogonal complement of the column space, U^\perp.
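The binary eigenvalue spectrum is easy to confirm numerically. A NumPy sketch using the matrix from the worked example (which projects onto a 2-dimensional subspace of \mathbb{R}^3):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])

# eigvalsh is appropriate because P is symmetric; eigenvalues come out sorted
eigenvalues = np.linalg.eigvalsh(P)
print(np.round(eigenvalues, 10))   # only 0s and 1s appear

# Sum of eigenvalues = multiplicity of eigenvalue 1 = dimension of the subspace
assert np.isclose(eigenvalues.sum(), 2.0)
```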

    ---

    5. Rank, Trace, and Determinant

    The spectral properties of projection matrices lead to important relationships between their rank, trace, and determinant.

    * Rank: The rank of a matrix is the dimension of its column space. For a projection matrix P that projects onto a subspace U, we have \operatorname{rank}(P) = \dim(U). The rank is also equal to the number of non-zero eigenvalues. Since the only non-zero eigenvalue is 1, the rank is the multiplicity of the eigenvalue \lambda = 1.

    * Trace: The trace of a square matrix is the sum of its diagonal elements, which is also equal to the sum of its eigenvalues. For a projection matrix P \in \mathbb{R}^{n \times n} with rank k:

    \operatorname{tr}(P) = \sum_{i=1}^n \lambda_i = \underbrace{(1 + 1 + \dots + 1)}_{k \text{ times}} + \underbrace{(0 + 0 + \dots + 0)}_{n-k \text{ times}} = k

    💡 Exam Shortcut

    The rank of a projection matrix is equal to its trace.

    \operatorname{rank}(P) = \operatorname{tr}(P)

    This is a very useful property for quickly determining the dimension of the subspace of projection in GATE questions.
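The shortcut can be checked in a couple of lines. A NumPy sketch using the rank-1 projection onto a line (the direction vector is an illustrative choice):

```python
import numpy as np

# Projection onto the line through a = (1, 2, -2): rank is 1 by construction
a = np.array([[1.0], [2.0], [-2.0]])
P = (a @ a.T) / float(a.T @ a)

rank = np.linalg.matrix_rank(P)
trace = np.trace(P)
print(rank, round(trace, 10))   # trace equals rank for a projection matrix
```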

    * Determinant: The determinant of a matrix is the product of its eigenvalues.

    \det(P) = \prod_{i=1}^n \lambda_i

    If the projection is onto a proper subspace of \mathbb{R}^n (i.e., U \neq \mathbb{R}^n), then the dimension of the subspace k < n. This means there must be at least one eigenvalue equal to 0. Consequently, the product of the eigenvalues will be 0. The only exception is if P projects onto the entire space \mathbb{R}^n, in which case P is the identity matrix I, and its determinant is 1.

    * Singular Values: For a symmetric positive semi-definite matrix, the singular values are equal to its eigenvalues. An orthogonal projection matrix P is symmetric (P^T = P) and positive semi-definite, since

    \mathbf{x}^TP\mathbf{x} = \mathbf{x}^TP^TP\mathbf{x} = (P\mathbf{x})^T(P\mathbf{x}) = \|P\mathbf{x}\|^2 \ge 0

    Therefore, the singular values of an orthogonal projection matrix are the same as its eigenvalues, which are either 0 or 1.

    ---

    Problem-Solving Strategies

    When faced with a problem involving a matrix that is described as a projection, or has the form A = UU^T where U has orthonormal columns, immediately recall its fundamental properties.

    💡 GATE Strategy

    • Check for Idempotence: If you need to verify whether a matrix M is a projection matrix, the quickest algebraic test is to compute M^2 and check if M^2 = M.

    • Use Eigenvalue Properties: Once you know a matrix P is a projection, you know its eigenvalues can only be 0 or 1. This immediately helps in questions about its determinant (is it 0?), trace (it's an integer), or invertibility (it's singular unless P = I).

    • Relate Rank and Nullity: For a projection P \in \mathbb{R}^{n \times n} onto a subspace U, remember that \operatorname{rank}(P) = \dim(U) and \operatorname{nullity}(P) = \dim(U^\perp). The Rank-Nullity Theorem states \operatorname{rank}(P) + \operatorname{nullity}(P) = n. This allows you to find the dimension of the null space if you know the dimension of the projection space, and vice versa.

    • Trace equals Rank: To find the dimension of the subspace of projection, simply calculate the trace of the matrix. This is often faster than finding the rank by row reduction.

    ---

    Common Mistakes

    ⚠️ Avoid These Errors
      • ❌ Using the formula P = QQ^T when the columns of Q are not orthonormal.
    ✅ If the columns of a matrix A are merely a basis (not orthonormal), you must use the full formula P = A(A^TA)^{-1}A^T.
      • ❌ Assuming any idempotent matrix (M^2 = M) is an orthogonal projection.
    ✅ An idempotent matrix represents a projection, but it is only an orthogonal projection if it is also symmetric (M^T = M). GATE questions typically deal with orthogonal projections.
      • ❌ Confusing the dimension of the ambient space with the dimension of the subspace.
    ✅ A projection matrix P \in \mathbb{R}^{n \times n} acts on vectors in \mathbb{R}^n. Its rank, k, is the dimension of the subspace it projects onto, where k \le n. The nullity will be n - k.
      • ❌ Calculating the determinant to be non-zero for a projection onto a proper subspace.
    ✅ Unless the projection matrix is the identity matrix I, it will have at least one zero eigenvalue, making its determinant zero. A projection matrix is almost always singular.

    ---

    Practice Questions

    :::question type="MCQ" question="Let P \in \mathbb{R}^{4 \times 4} be the matrix that projects vectors onto a 2-dimensional subspace U \subset \mathbb{R}^4. Which of the following statements is necessarily true?" options=["\det(P) = 1","P is invertible","\operatorname{tr}(P) = 2","The nullity of P is 4"] answer="tr(P) = 2" hint="Relate the trace of a projection matrix to the dimension of the subspace it projects onto." solution="
    Step 1: Recall the properties of a projection matrix P. The rank of P is equal to the dimension of the subspace it projects onto.

    \operatorname{rank}(P) = \dim(U)

    Given that U is a 2-dimensional subspace, we have \operatorname{rank}(P) = 2.

    Step 2: A key property of projection matrices is that their trace is equal to their rank.

    \operatorname{tr}(P) = \operatorname{rank}(P)

    Therefore, \operatorname{tr}(P) = 2.

    Step 3: Analyze the other options.

    • Since the rank is 2, which is less than the matrix dimension 4, P is not full rank. A non-full-rank matrix is singular (not invertible) and has a determinant of 0. So, \det(P) = 0 and P is not invertible.

    • By the Rank-Nullity Theorem, \operatorname{rank}(P) + \operatorname{nullity}(P) = 4. Since \operatorname{rank}(P) = 2, the nullity is 4 - 2 = 2.

    • Thus, the only statement that is necessarily true is \operatorname{tr}(P) = 2.

    Answer: \boxed{\operatorname{tr}(P) = 2}
    "
    :::

    :::question type="NAT" question="Consider the vector \mathbf{a} = \begin{bmatrix} 1 \\ 2 \\ -2 \end{bmatrix}. Let P be the projection matrix onto the line spanned by \mathbf{a}. What is the value of the trace of P?" answer="1" hint="The rank of a matrix projecting onto a line is always 1. The trace equals the rank." solution="
    Step 1: Identify the subspace. The projection is onto a line spanned by a single non-zero vector \mathbf{a}. A line is a 1-dimensional subspace.

    Step 2: The rank of a projection matrix is equal to the dimension of the subspace it projects onto.

    \operatorname{rank}(P) = \dim(\text{line}) = 1

    Step 3: The trace of a projection matrix is equal to its rank.

    \operatorname{tr}(P) = \operatorname{rank}(P)

    Therefore, \operatorname{tr}(P) = 1.

    Alternative Calculation:
    We can also compute the matrix P explicitly and find its trace.

    P = \frac{\mathbf{a}\mathbf{a}^T}{\mathbf{a}^T\mathbf{a}}

    \mathbf{a}^T\mathbf{a} = (1)^2 + (2)^2 + (-2)^2 = 1 + 4 + 4 = 9

    \mathbf{a}\mathbf{a}^T = \begin{bmatrix} 1 \\ 2 \\ -2 \end{bmatrix} \begin{bmatrix} 1 & 2 & -2 \end{bmatrix} = \begin{bmatrix} 1 & 2 & -2 \\ 2 & 4 & -4 \\ -2 & -4 & 4 \end{bmatrix}

    P = \frac{1}{9} \begin{bmatrix} 1 & 2 & -2 \\ 2 & 4 & -4 \\ -2 & -4 & 4 \end{bmatrix} = \begin{bmatrix} 1/9 & 2/9 & -2/9 \\ 2/9 & 4/9 & -4/9 \\ -2/9 & -4/9 & 4/9 \end{bmatrix}

    \operatorname{tr}(P) = \frac{1}{9} + \frac{4}{9} + \frac{4}{9} = \frac{9}{9} = 1

    Answer: \boxed{1}
    "
    :::

    :::question type="MSQ" question="Let \mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3 be orthonormal vectors in \mathbb{R}^7. Let the matrix M = \mathbf{q}_1\mathbf{q}_1^T + \mathbf{q}_2\mathbf{q}_2^T. Which of the following statements is/are correct?" options=["The rank of M is 2","M is an invertible matrix","The eigenvalues of M are 0 and 1","M^3 = M"] answer="The rank of M is 2,The eigenvalues of M are 0 and 1,M^3 = M" hint="Recognize that M is a projection matrix onto the subspace spanned by q1 and q2. Then apply the properties of projection matrices." solution="
    Step 1: Identify the matrix M. The matrix M = \mathbf{q}_1\mathbf{q}_1^T + \mathbf{q}_2\mathbf{q}_2^T can be written as M = QQ^T, where Q = [\mathbf{q}_1 \mid \mathbf{q}_2]. Since \mathbf{q}_1 and \mathbf{q}_2 are orthonormal, M is the orthogonal projection matrix onto the subspace spanned by these two vectors.

    Step 2: Analyze the rank. The subspace is spanned by two orthonormal (and thus linearly independent) vectors. Therefore, the dimension of the subspace is 2. The rank of the projection matrix equals this dimension.

    \operatorname{rank}(M) = 2

    So, "The rank of M is 2" is correct.

    Step 3: Analyze invertibility. The matrix M is a 7 \times 7 matrix. Since its rank is 2, which is less than 7, the matrix is singular (not invertible). So, "M is an invertible matrix" is incorrect.

    Step 4: Analyze eigenvalues. Since M is a projection matrix, its eigenvalues can only be 0 or 1. The multiplicity of eigenvalue 1 is the rank (2), and the multiplicity of eigenvalue 0 is n - \operatorname{rank} = 7 - 2 = 5. So, "The eigenvalues of M are 0 and 1" is correct.

    Step 5: Analyze the property M^3 = M. The defining property of a projection matrix is idempotence, i.e., M^2 = M. We can use this to evaluate M^3.

    M^3 = M \cdot M^2 = M \cdot M = M^2 = M

    So, "M^3 = M" is correct.
    "
    :::

    :::question type="NAT" question="Let P = \frac{1}{3} \begin{bmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{bmatrix} be a projection matrix. What is the dimension of the null space of P?" answer="1" hint="First find the rank of the matrix, which is equal to its trace. Then use the Rank-Nullity theorem." solution="
    Step 1: The matrix P is given as a projection matrix. We can find its rank by calculating its trace.

    \operatorname{tr}(P) = \frac{1}{3}(2) + \frac{1}{3}(2) + \frac{1}{3}(2) = \frac{2}{3} + \frac{2}{3} + \frac{2}{3} = \frac{6}{3} = 2

    Step 2: The rank of a projection matrix is equal to its trace.

    \operatorname{rank}(P) = \operatorname{tr}(P) = 2

    Step 3: The matrix P is a 3 \times 3 matrix, so it operates on \mathbb{R}^3. We apply the Rank-Nullity Theorem: \operatorname{rank}(P) + \operatorname{nullity}(P) = n.

    2 + \operatorname{nullity}(P) = 3

    Step 4: Solve for the nullity.

    \operatorname{nullity}(P) = 3 - 2 = 1

    The dimension of the null space is the nullity of the matrix.

    Answer: \boxed{1}
    "
    :::

    ---

    Summary

    Key Takeaways for GATE

    • Definition and Forms: An orthogonal projection matrix P is symmetric (P^T = P) and idempotent (P^2 = P). The two key formulas are P = A(A^TA)^{-1}A^T for a general basis A, and the simplified P = QQ^T for an orthonormal basis Q.

    • Eigenvalues are Binary: The eigenvalues of any projection matrix are exclusively 0 or 1. This has direct consequences for the determinant and singular values.

    • Rank, Trace, and Dimension: The rank of P equals the dimension of the subspace it projects onto. Crucially, this is also equal to the trace of P. \operatorname{rank}(P) = \operatorname{tr}(P) = \dim(\operatorname{Col}(P)).

    • Subspaces: The column space of P, \operatorname{Col}(P), is the subspace onto which vectors are projected (eigenspace for \lambda = 1). The null space of P, \operatorname{Null}(P), is the orthogonal complement of the column space (eigenspace for \lambda = 0).

    ---

    What's Next?

    💡 Continue Learning

    This topic connects to:

      • Least Squares Approximation: The solution to the least squares problem \min \|A\mathbf{x} - \mathbf{b}\|_2^2 is found by orthogonally projecting the vector \mathbf{b} onto the column space of A. The projection matrix is the central operator in this process.

      • Singular Value Decomposition (SVD): SVD provides a decomposition of any matrix A into U\Sigma V^T. The expression A = \sum \sigma_i \mathbf{u}_i \mathbf{v}_i^T can be seen as a sum of rank-one matrices, which have connections to projection-like structures. Understanding projections solidifies the geometric intuition behind SVD.


    Master these connections for comprehensive GATE preparation!

    ---

    💡 Moving Forward

    Now that you understand Projection Matrices, let's explore Partitioned Matrices, which build on these concepts.

    ---

    Part 5: Partitioned Matrices

    Introduction

    In our study of linear algebra, we often encounter large matrices whose manipulation can be computationally intensive and conceptually cumbersome. A powerful technique for simplifying such problems is matrix partitioning, also known as blocking. This method involves dividing a matrix into smaller, more manageable sub-matrices called blocks or cells. By treating these blocks as individual elements, we can perform operations such as addition, multiplication, and inversion in a structured manner.

    This approach is not merely a notational convenience; it often reveals underlying structures within the matrix and can lead to significant computational efficiencies. For the GATE examination, a firm understanding of how to operate on partitioned matrices—particularly multiplication and the calculation of determinants and inverses for special block structures—is essential for solving certain complex problems with elegance and speed. We will explore the fundamental operations and properties associated with these matrices.

    📖 Partitioned (or Block) Matrix

    A partitioned matrix, or a block matrix, is a matrix that is interpreted as being broken down into sections called blocks or sub-matrices. These blocks are themselves matrices, and the original matrix can be written in terms of these blocks.

    For example, a matrix M can be partitioned into four blocks as:

    M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}

    where A, B, C, and D are sub-matrices of appropriate dimensions. The horizontal and vertical lines partitioning the matrix are conceptual, not a formal part of the matrix itself.

    ---

    Key Concepts

    The primary utility of partitioned matrices arises from our ability to perform standard matrix operations at the block level, provided certain dimensional constraints are met.

    1. Addition of Partitioned Matrices

    Two matrices partitioned in the same way can be added block-by-block. Let us consider two matrices, M and N, partitioned conformably:

    M = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \quad \text{and} \quad N = \begin{bmatrix} E & F \\ G & H \end{bmatrix}

    For addition to be defined, the matrices M and N must have the same dimensions, and the corresponding blocks must also have the same dimensions (i.e., A and E are same-sized, B and F, etc.). The sum is then computed as follows:

    M + N = \begin{bmatrix} A+E & B+F \\ C+G & D+H \end{bmatrix}

    This is a direct extension of standard matrix addition.

    2. Multiplication of Partitioned Matrices

    Block matrix multiplication follows a rule analogous to the standard row-by-column matrix multiplication, but with blocks as elements. This is permissible only if the partitioning of the matrices is conformable for multiplication.

    Consider two partitioned matrices M and N as defined above. For the product MN to be defined, the number of columns in each block of a row of M must match the number of rows in the corresponding block of a column of N. More simply, the column partitioning of the first matrix must match the row partitioning of the second matrix.

    📐 Block Matrix Multiplication

    If the partitions are conformable, the product MN is:

    MN = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} E & F \\ G & H \end{bmatrix} = \begin{bmatrix} AE+BG & AF+BH \\ CE+DG & CF+DH \end{bmatrix}

    Variables:

      • A, B, C, D, E, F, G, H are sub-matrices.


    When to use:
    This formula is used when multiplying large matrices that can be conveniently partitioned. All matrix products within the formula (e.g., AE, BG) must be well-defined.

    Worked Example:

    Problem: Let matrices M and N be partitioned as follows:

    M = \left[ \begin{array}{cc|c} 1 & 2 & 1 \\ 0 & 1 & 0 \\ \hline 1 & 0 & 1 \end{array} \right] = \begin{bmatrix} A & B \\ C & D \end{bmatrix}, \quad N = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \hline 2 & 1 \end{array} \right] = \begin{bmatrix} E \\ G \end{bmatrix}

    Compute the product MN using block multiplication.

    Solution:

    Step 1: Identify the blocks and their dimensions.
    The partitioning gives:
    A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}, B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, C = \begin{bmatrix} 1 & 0 \end{bmatrix}, D = \begin{bmatrix} 1 \end{bmatrix}
    E = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, G = \begin{bmatrix} 2 & 1 \end{bmatrix}
    The column partition of M (2 columns, 1 column) matches the row partition of N (2 rows, 1 row), so the multiplication is conformable.

    Step 2: Apply the block multiplication formula.
    The product MN is partitioned as \begin{bmatrix} AE+BG \\ CE+DG \end{bmatrix}.

    Step 3: Calculate the individual block products.

    AE = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}

    BG = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \begin{bmatrix} 2 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 0 & 0 \end{bmatrix}
    CE = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \end{bmatrix}
    DG = \begin{bmatrix} 1 \end{bmatrix} \begin{bmatrix} 2 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 \end{bmatrix}

    Step 4: Combine the results.

    AE+BG = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 2 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 3 & 3 \\ 0 & 1 \end{bmatrix}

    CE+DG = \begin{bmatrix} 1 & 0 \end{bmatrix} + \begin{bmatrix} 2 & 1 \end{bmatrix} = \begin{bmatrix} 3 & 1 \end{bmatrix}

    Step 5: Form the final partitioned matrix.

    MN = \begin{bmatrix} AE+BG \\ CE+DG \end{bmatrix} = \left[ \begin{array}{cc} 3 & 3 \\ 0 & 1 \\ \hline 3 & 1 \end{array} \right]

    Answer:

    \begin{bmatrix} 3 & 3 \\ 0 & 1 \\ 3 & 1 \end{bmatrix}
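A direct NumPy check confirms that the block computation agrees with the ordinary matrix product (a small sketch using the matrices from the worked example):

```python
import numpy as np

M = np.array([[1, 2, 1],
              [0, 1, 0],
              [1, 0, 1]])
N = np.array([[1, 0],
              [0, 1],
              [2, 1]])

# Blocks of M (2+1 column split, 2+1 row split) and of N (2+1 row split)
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]
E, G = N[:2, :], N[2:, :]

# Block formula: MN = [[A E + B G], [C E + D G]]
top = A @ E + B @ G        # rows [3 3] and [0 1]
bottom = C @ E + D @ G     # row [3 1]
MN_blocks = np.vstack([top, bottom])

assert np.array_equal(MN_blocks, M @ N)
```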

    ---

    3. Determinant and Inverse of Block Matrices

    Calculating the determinant and inverse of a general partitioned matrix can be complex. However, for certain special structures, particularly block triangular and block diagonal matrices, the computations simplify considerably.

    A block diagonal matrix is a partitioned matrix where the off-diagonal blocks are zero matrices.

    M = \begin{bmatrix} A & 0 \\ 0 & D \end{bmatrix}

    A block triangular matrix has zero blocks either above or below the main block diagonal.

    M_{upper} = \begin{bmatrix} A & B \\ 0 & D \end{bmatrix} \quad \text{or} \quad M_{lower} = \begin{bmatrix} A & 0 \\ C & D \end{bmatrix}

    For these formulas to be applicable, the diagonal blocks A and D must be square matrices.

    📐 Determinant of a Block Triangular Matrix

    For a block triangular matrix M = \begin{bmatrix} A & B \\ 0 & D \end{bmatrix} or M = \begin{bmatrix} A & 0 \\ C & D \end{bmatrix}, where A and D are square matrices:

    \det(M) = \det(A) \cdot \det(D)

    When to use:
    This is a highly efficient way to compute the determinant of large matrices that exhibit a block triangular structure.
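The determinant shortcut is easy to verify numerically. A NumPy sketch with illustrative square diagonal blocks and an arbitrary off-diagonal block, showing that B has no effect on the result:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det(A) = 1
D = np.array([[3.0, 2.0],
              [4.0, 3.0]])   # det(D) = 1
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])   # arbitrary off-diagonal block

# Assemble the block upper triangular matrix M = [[A, B], [0, D]]
M = np.block([[A, B],
              [np.zeros((2, 2)), D]])

# det(M) = det(A) * det(D), regardless of B
lhs = np.linalg.det(M)
rhs = np.linalg.det(A) * np.linalg.det(D)
assert np.isclose(lhs, rhs)
```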

    📐 Inverse of a Block Diagonal Matrix

    For a block diagonal matrix M = \begin{bmatrix} A & 0 \\ 0 & D \end{bmatrix}, where A and D are invertible square matrices:

    M^{-1} = \begin{bmatrix} A^{-1} & 0 \\ 0 & D^{-1} \end{bmatrix}

    When to use:
    This simplifies the inversion process by breaking it down into inverting smaller, independent blocks.
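The block-wise inversion is cheap to confirm (a NumPy sketch; the blocks are illustrative choices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
D = np.array([[0.5]])

# Block diagonal matrix M = diag(A, D)
M = np.block([[A, np.zeros((2, 1))],
              [np.zeros((1, 2)), D]])

# Inverse computed block-by-block: diag(A^{-1}, D^{-1})
M_inv_blocks = np.block([[np.linalg.inv(A), np.zeros((2, 1))],
                         [np.zeros((1, 2)), np.linalg.inv(D)]])

# Agrees with inverting M as a whole
assert np.allclose(M_inv_blocks, np.linalg.inv(M))
assert np.allclose(M @ M_inv_blocks, np.eye(3))
```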

    Must Remember

    The determinant of a block matrix \begin{bmatrix} A & B \\ C & D \end{bmatrix} is NOT \det(A)\det(D) - \det(B)\det(C) in general. This is a common misconception. The simple product formula only holds if one of the off-diagonal blocks (B or C) is a zero matrix.

    ---

    Problem-Solving Strategies

    💡 GATE Strategy: Look for Structure

    When faced with a large matrix in a GATE problem, always inspect it for a block structure before attempting a full-scale calculation.

    • Identify Zero Blocks: Look for large blocks of zeros. This might indicate a block triangular or block diagonal form, which drastically simplifies determinant and inverse calculations.

    • Check for Conformability: If multiplying partitioned matrices, quickly verify that the inner dimensions of the blocks match. The number of columns in the blocks of the first matrix must equal the number of rows in the corresponding blocks of the second matrix.

    • Simplify with Identity Blocks: If a block is an identity matrix (I) or a zero matrix (0), the block multiplication simplifies significantly, as AI = A, IA = A, A0 = 0, and 0A = 0.

    ---

    Common Mistakes

    ⚠️ Avoid These Errors
      • Incorrect Determinant Formula: Assuming \det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A)\det(D) - \det(B)\det(C). This is almost always incorrect.
    Correct Approach: Use the formula \det(A)\det(D) only for block triangular matrices. For the general case (with A invertible), the formula is \det(A) \det(D - CA^{-1}B), which is more complex and less likely to be tested directly without a simplifying structure.
      • Multiplying Non-Conformable Blocks: Performing block multiplication without checking if the column partitions of the first matrix match the row partitions of the second.
    Correct Approach: Always verify the dimensions of the sub-matrices before multiplying. For a product AE to be valid, the number of columns of A must equal the number of rows of E.

    ---

    Practice Questions

    :::question type="MCQ" question="Let M = \begin{bmatrix} A & B \\ 0 & D \end{bmatrix}, where A = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} and D = \begin{bmatrix} 3 & 2 \\ 4 & 3 \end{bmatrix}. What is the determinant of M?" options=["1", "2", "3", "4"] answer="1" hint="The matrix M is block upper triangular. The determinant of such a matrix is the product of the determinants of its diagonal blocks." solution="
    Step 1: Identify the structure of matrix M.
    M is a block upper triangular matrix with diagonal blocks A and D.

    Step 2: Apply the formula for the determinant of a block triangular matrix.
    The formula is \det(M) = \det(A) \cdot \det(D).

    Step 3: Calculate the determinant of block A.

    \det(A) = (2)(1) - (1)(1) = 2 - 1 = 1

    Step 4: Calculate the determinant of block D.

    \det(D) = (3)(3) - (2)(4) = 9 - 8 = 1

    Step 5: Compute the final determinant.

    \det(M) = \det(A) \cdot \det(D) = 1 \cdot 1 = 1

    Result:
    Answer: \boxed{1}
    "
    :::

    :::question type="NAT" question="Consider the block diagonal matrix P = \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}, where A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} and B = \begin{bmatrix} 0.5 \end{bmatrix}. If P^{-1} = \begin{bmatrix} C & 0 \\ 0 & D \end{bmatrix}, what is the sum of all elements in matrix C?" answer="0" hint="For a block diagonal matrix, the inverse is the block diagonal matrix of the inverses. Find the inverse of block A first." solution="
    Step 1: Recall the formula for the inverse of a block diagonal matrix.
    If P = \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}, then P^{-1} = \begin{bmatrix} A^{-1} & 0 \\ 0 & B^{-1} \end{bmatrix}.
    From the problem statement, C = A^{-1}.

    Step 2: Calculate the inverse of matrix A.
    For a 2 \times 2 matrix \begin{bmatrix} a & b \\ c & d \end{bmatrix}, the inverse is \frac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.
    For A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}:

    \det(A) = (1)(1) - (2)(0) = 1

    A^{-1} = \frac{1}{1} \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix}

    Step 3: Identify matrix C.
    C = A^{-1} = \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix}.

    Step 4: Calculate the sum of all elements in C.
    Sum = 1 + (-2) + 0 + 1 = 0.

    Result:
    Answer: \boxed{0}
    "
    :::
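    The block-diagonal inverse rule can likewise be checked numerically; this NumPy sketch (an illustrative addition, not part of the original solution) inverts P and reads off the block C:

    ```python
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 1.0]])
    B = np.array([[0.5]])

    # Block diagonal matrix P = [[A, 0], [0, B]]
    P = np.block([[A, np.zeros((2, 1))],
                  [np.zeros((1, 2)), B]])

    P_inv = np.linalg.inv(P)

    # The inverse of a block diagonal matrix is block diagonal: C = A^{-1}
    C = P_inv[:2, :2]
    assert np.allclose(C, np.linalg.inv(A))

    # Sum of the entries of C should be 1 + (-2) + 0 + 1 = 0
    print(round(abs(C.sum()), 6))  # 0.0
    ```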

    :::question type="MSQ" question="Let M = \left[ \begin{array}{cc|c} 1 & 0 & 2 \\ 0 & 1 & 3 \\ \hline 4 & 5 & 6 \end{array} \right] = \begin{bmatrix} I & A \\ B & C \end{bmatrix} and N = \left[ \begin{array}{c} 1 \\ 1 \\ \hline 1 \end{array} \right] = \begin{bmatrix} X \\ Y \end{bmatrix}. Which of the following statements are correct?" options=["The block multiplication MN is conformable.", "The top block of the product MN is \begin{bmatrix} 3 \\ 4 \end{bmatrix}.", "The bottom block of the product MN is \begin{bmatrix} 15 \end{bmatrix}.", "The resulting matrix MN is a 3 \times 1 matrix."] answer="A,B,C,D" hint="First, check if the dimensions are conformable for block multiplication. Then, compute the product block by block using the formula \begin{bmatrix} IX + AY \\ BX + CY \end{bmatrix}." solution="
    Statement A: Conformability
    The column partition of M is (2 columns | 1 column).
    The row partition of N is (2 rows | 1 row).
    Since the partitions match, the block multiplication is conformable. Thus, statement A is correct.

    Statements B & C: Block Multiplication
    The product is MN = \begin{bmatrix} I & A \\ B & C \end{bmatrix} \begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} IX + AY \\ BX + CY \end{bmatrix}.
    Let's identify the blocks:
    I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, A = \begin{bmatrix} 2 \\ 3 \end{bmatrix}, B = \begin{bmatrix} 4 & 5 \end{bmatrix}, C = \begin{bmatrix} 6 \end{bmatrix}.
    X = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, Y = \begin{bmatrix} 1 \end{bmatrix}.

    Top Block Calculation: IX + AY

    IX = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}

    AY = \begin{bmatrix} 2 \\ 3 \end{bmatrix} \begin{bmatrix} 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}

    IX + AY = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}

    Thus, statement B is correct.

    Bottom Block Calculation: BX + CY

    BX = \begin{bmatrix} 4 & 5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 4(1)+5(1) \end{bmatrix} = \begin{bmatrix} 9 \end{bmatrix}

    CY = \begin{bmatrix} 6 \end{bmatrix} \begin{bmatrix} 1 \end{bmatrix} = \begin{bmatrix} 6 \end{bmatrix}

    BX + CY = \begin{bmatrix} 9 \end{bmatrix} + \begin{bmatrix} 6 \end{bmatrix} = \begin{bmatrix} 15 \end{bmatrix}

    Thus, statement C is correct.

    Statement D: Resulting Matrix Dimension
    The final matrix is

    MN = \begin{bmatrix} 3 \\ 4 \\ 15 \end{bmatrix}

    This is a 3 \times 1 matrix.
    Thus, statement D is correct.

    All four statements are correct.
    "
    :::
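    The block-by-block product above can be reproduced with NumPy slicing (an illustrative sketch, not part of the original solution):

    ```python
    import numpy as np

    M = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 3.0],
                  [4.0, 5.0, 6.0]])
    N = np.array([[1.0],
                  [1.0],
                  [1.0]])

    # Partition M into [[I, A], [B, C]] and N into [X; Y]
    I_blk, A_blk = M[:2, :2], M[:2, 2:]
    B_blk, C_blk = M[2:, :2], M[2:, 2:]
    X, Y = N[:2, :], N[2:, :]

    # Block formula: MN = [IX + AY; BX + CY]
    top = I_blk @ X + A_blk @ Y
    bottom = B_blk @ X + C_blk @ Y
    blockwise = np.vstack([top, bottom])

    # The block-wise result matches the ordinary matrix product
    assert np.array_equal(blockwise, M @ N)
    print(blockwise.ravel())  # [ 3.  4. 15.]
    ```

    The assertion holds for any conformable partition, which is why block multiplication is a safe computational shortcut.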

    ---

    Summary

    Key Takeaways for GATE

    • Block Multiplication: Operations on partitioned matrices mimic standard matrix operations, but at a block level. For multiplication, ensure the partitions are conformable.

    • Block Triangular Determinant: The determinant of a block triangular matrix is the product of the determinants of its diagonal blocks. This is a crucial shortcut.

    • Block Diagonal Inverse: The inverse of a block diagonal matrix is a block diagonal matrix composed of the inverses of the original diagonal blocks.

    ---

    What's Next?

    💡 Continue Learning

    This topic connects to:

      • Matrix Decompositions (LU, QR): Partitioning is a conceptual foundation for understanding how matrices can be broken down into simpler, structured components. Block LU decomposition is a direct extension of these ideas.

      • Linear Transformations: A block diagonal matrix corresponds to a linear transformation that maps certain subspaces into themselves, effectively decoupling the vector space into independent subspaces. This provides a deeper geometric intuition for their simple structure.


    Master these connections for a more comprehensive understanding of matrix theory in GATE preparation.

    ---

    Chapter Summary

    📖 Specialized Matrices and Properties - Key Takeaways

    In our examination of specialized matrices, we have uncovered properties that are fundamental to both theoretical understanding and computational efficiency. For success in the GATE examination, it is imperative that the student master the following core concepts:

    • The Determinant as a Diagnostic Tool: The determinant of a square matrix A, denoted \det(A), is non-zero if and only if the matrix is invertible and its columns (or rows) are linearly independent. We have also established its connection to the matrix's eigenvalues, \lambda_i, through the relation \det(A) = \prod_{i=1}^{n} \lambda_i.

    • Orthogonal Matrices and Isometry: An n \times n matrix Q is orthogonal if its transpose is its inverse, i.e., Q^T Q = I. This defining property implies that its columns form an orthonormal basis for \mathbb{R}^n. Orthogonal matrices represent rigid transformations (rotations and reflections) that preserve lengths and angles, a fact encapsulated by the property \|Qx\| = \|x\| for any vector x. Consequently, their determinant is always \pm 1.

    • Idempotent Matrices and Eigenvalues: A matrix A is idempotent if A^2 = A. This seemingly simple algebraic property has profound implications for its spectral properties: its eigenvalues can only be 0 or 1. This leads to the crucial result that for an idempotent matrix, the rank is equal to the trace: \operatorname{rank}(A) = \operatorname{trace}(A).

    • Projection Matrices: An orthogonal projection matrix P is a symmetric (P^T = P) idempotent (P^2 = P) matrix. It maps a vector onto a specific subspace. The matrix that projects vectors onto the column space of a matrix A (which must have linearly independent columns) is given by the fundamental formula P = A(A^T A)^{-1} A^T.

    • Partitioned Matrices: The technique of partitioning matrices allows for simplified computation. For a block triangular matrix, the determinant is the product of the determinants of the diagonal blocks. For instance, for M = \begin{bmatrix} A & B \\ O & D \end{bmatrix}, we have shown that \det(M) = \det(A) \cdot \det(D).
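    These properties are straightforward to spot-check numerically. The sketch below (NumPy; the rotation angle and test vector are illustrative choices, not from the notes) verifies the orthogonal-matrix takeaways:

    ```python
    import numpy as np

    theta = 0.7  # an arbitrary rotation angle
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # Orthogonality: Q^T Q = I
    assert np.allclose(Q.T @ Q, np.eye(2))

    # Isometry: ||Qx|| = ||x|| for any vector x
    x = np.array([3.0, 4.0])
    assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))

    # The determinant of an orthogonal matrix is +1 or -1
    assert np.isclose(abs(np.linalg.det(Q)), 1.0)
    print("all orthogonal-matrix properties hold")
    ```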

    ---

    Chapter Review Questions

    :::question type="MCQ" question="Let P be a non-zero n \times n matrix that is both orthogonal and idempotent. Which of the following statements is necessarily true about P?" options=["P must be the identity matrix, I","P must be the zero matrix, O","P can be any non-zero projection matrix","P is singular"] answer="A" hint="Use the defining properties of both matrix types. An orthogonal matrix is always invertible." solution="
    A matrix P is idempotent if it satisfies the relation P^2 = P.
    A matrix P is orthogonal if it satisfies the relation P^T P = I, which implies that P is invertible and its inverse is P^{-1} = P^T.

    Since P is orthogonal, it is invertible. We can therefore pre-multiply the idempotent equation P^2 = P by P^{-1}:

    P^{-1} (P^2) = P^{-1} P

    By the associative property of matrix multiplication, the left side is (P^{-1} P) P = I \cdot P = P, while the right side is P^{-1} P = I. Therefore:

    P = I

    Thus, the only non-zero matrix that is simultaneously orthogonal and idempotent is the identity matrix. Option A is the correct conclusion.
    Result:
    Answer: \boxed{\text{A}}
    "
    :::

    :::question type="NAT" question="A vector v \in \mathbb{R}^3 is projected onto the subspace spanned by the linearly independent vectors u_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} and u_2 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}. If P is the orthogonal projection matrix for this subspace, what is the value of the determinant of P?" answer="0" hint="Recall the relationship between the rank, eigenvalues, and determinant of a projection matrix. Direct computation of P is not required." solution="
    The matrix P projects vectors from the 3-dimensional space \mathbb{R}^3 onto a 2-dimensional subspace (a plane spanned by u_1 and u_2).

    Method 1: Using Properties of Projection Matrices

  • The rank of a projection matrix is equal to the dimension of the subspace onto which it projects. Here, the subspace is spanned by two linearly independent vectors, so its dimension is 2. Therefore, \operatorname{rank}(P) = 2.

  • The eigenvalues of any projection matrix can only be 0 or 1.

  • The number of eigenvalues equal to 1 is the rank of the matrix. Since \operatorname{rank}(P) = 2, the matrix P has two eigenvalues equal to 1.

  • Since P is a 3 \times 3 matrix, it must have 3 eigenvalues in total. The remaining eigenvalue must be 0.

  • The determinant of a matrix is the product of its eigenvalues:

    \det(P) = \lambda_1 \cdot \lambda_2 \cdot \lambda_3 = 1 \cdot 1 \cdot 0 = 0

    Therefore, the determinant of P is 0.

    Method 2: Direct Calculation (for verification)

    Let A = \begin{bmatrix} u_1 & u_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{bmatrix}.
    The projection matrix is P = A(A^T A)^{-1} A^T.

    A^T A = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}

    (A^T A)^{-1} = \frac{1}{2(2)-1(1)} \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix} = \frac{1}{3} \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}

    Completing this product would give P explicitly. Note, however, that because the factors A and A^T are non-square, \det(P) cannot be obtained by multiplying individual determinants. We have already established that P is singular (its rank is 2, which is less than its dimension 3), and therefore its determinant must be 0.
    Result:
    Answer: \boxed{0}
    "
    :::
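    Both methods can be confirmed by computing P directly with NumPy (an illustrative sketch, not part of the original solution):

    ```python
    import numpy as np

    # Columns spanning the subspace, taken from the question
    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [0.0, 1.0]])

    # Orthogonal projection onto the column space of A
    P = A @ np.linalg.inv(A.T @ A) @ A.T

    assert np.allclose(P, P.T)    # symmetric
    assert np.allclose(P @ P, P)  # idempotent

    print(np.linalg.matrix_rank(P))         # 2
    print(round(abs(np.linalg.det(P)), 6))  # 0.0
    ```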

    :::question type="MCQ" question="Consider the 5 \times 5 block lower triangular matrix M = \begin{bmatrix} A & O \\ C & D \end{bmatrix}, where A is a 3 \times 3 idempotent matrix with \operatorname{rank}(A)=2, and D is a 2 \times 2 orthogonal matrix. What is the value of \det(M)?" options=["0","1","-1","Cannot be determined"] answer="A" hint="The determinant of a block triangular matrix is the product of the determinants of its diagonal blocks. Relate the rank of the idempotent matrix A to its eigenvalues." solution="
    The matrix M is a block lower triangular matrix. The determinant of such a matrix is the product of the determinants of its diagonal blocks:

    \det(M) = \det(A) \cdot \det(D)

    We must now determine \det(A) and \det(D).

    Analysis of Matrix A:

    • A is a 3 \times 3 idempotent matrix, which means A^2 = A.

    • The eigenvalues of an idempotent matrix can only be 0 or 1.

    • For an idempotent matrix, the rank is equal to the trace, which is also equal to the number of eigenvalues that are 1.

    • We are given \operatorname{rank}(A) = 2. Therefore, A has two eigenvalues equal to 1.

    • Since A is a 3 \times 3 matrix, it has three eigenvalues in total. The third eigenvalue must be 0.

    • The determinant of A is the product of its eigenvalues: \det(A) = 1 \times 1 \times 0 = 0.

    Analysis of Matrix D:

    • D is an orthogonal matrix. The determinant of any orthogonal matrix is either +1 or -1.

    Calculating det(M):
    Now we can compute the determinant of M:

    \det(M) = \det(A) \cdot \det(D) = 0 \cdot (\pm 1) = 0

    Thus, the determinant of M is 0.
    Result:
    Answer: \boxed{0}
    "
    :::
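    A concrete instance makes the reasoning tangible. In this NumPy sketch (the specific matrices A, C, and D are illustrative choices, not given in the question), A is idempotent with rank 2 and D is a rotation:

    ```python
    import numpy as np

    # A: a 3x3 idempotent matrix of rank 2 (projection onto the xy-plane)
    A = np.diag([1.0, 1.0, 0.0])
    assert np.allclose(A @ A, A)
    assert np.linalg.matrix_rank(A) == 2

    # D: a 2x2 orthogonal matrix (rotation), so |det(D)| = 1
    theta = 0.3
    D = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # C is arbitrary: it does not affect det(M)
    C = np.ones((2, 3))

    # Block lower triangular M = [[A, 0], [C, D]]
    M = np.block([[A, np.zeros((3, 2))],
                  [C, D]])

    # det(M) = det(A) * det(D) = 0 * (±1) = 0
    print(round(abs(np.linalg.det(M)), 6))  # 0.0
    ```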

    ---

    What's Next?

    💡 Continue Your GATE Journey

    Having completed Specialized Matrices and Properties, you have established a firm foundation for more advanced topics in Linear Algebra. The concepts discussed herein do not exist in isolation but form the building blocks for a deeper understanding of vector spaces and transformations.

    Connections to Previous Learning:
    This chapter builds directly upon the foundational concepts of matrix algebra, vector spaces, rank, and invertibility. We have now enriched our understanding of the determinant, moving from a purely computational tool to an indicator of crucial matrix properties, such as the \pm 1 determinant of an orthogonal matrix.

    Future Chapters Building on These Concepts:

      • Eigenvalues and Eigenvectors: Our use of eigenvalue properties to analyze idempotent and projection matrices was a preview of this critical topic. The next chapter will formalize the study of eigenvalues and eigenvectors, and the special matrices from this chapter will serve as recurring, illustrative examples.

      • Linear Transformations: We will soon see that orthogonal matrices correspond to geometric rotations and reflections, while projection matrices perform geometric projections. This chapter provides the algebraic basis for understanding the geometry of linear maps.

      • Systems of Linear Equations and Least Squares: The concept of an orthogonal projection is the theoretical core of the method of least squares, a powerful technique for finding the "best fit" solution to overdetermined systems of linear equations (Ax = b) that have no exact solution.

      • Matrix Decompositions: The property of orthogonality is central to advanced and powerful techniques such as QR Factorization and Singular Value Decomposition (SVD), which have wide-ranging applications in engineering and data science.

    🎯 Key Points to Remember

    • Master the core concepts in Specialized Matrices and Properties before moving to advanced topics
    • Practice with previous year questions to understand exam patterns
    • Review short notes regularly for quick revision before exams

