Linear Algebra: Vector Spaces and Systems of Equations

Matrices and Systems of Linear Equations

Comprehensive study notes on Matrices and Systems of Linear Equations for GATE DA preparation. This chapter covers key concepts, formulas, and examples needed for your exam.

Overview

In this chapter, we shall explore the fundamental relationship between matrices and systems of linear equations. A system of linear equations, which may involve numerous variables, can be compactly and elegantly represented by a single matrix equation of the form A\mathbf{x} = \mathbf{b}. This representation is more than a mere notational convenience; it allows us to leverage the rich algebraic structure of matrices to systematically analyze and solve such systems. The properties of the coefficient matrix A directly reveal profound insights into the nature of the solution set, determining whether a solution is unique, non-existent, or one of infinitely many.

A mastery of the concepts presented herein is indispensable for success in the GATE examination. The methods we will study form the algorithmic backbone for solving a significant portion of problems in linear algebra and its applications across various engineering disciplines. We will move beyond rote calculation to develop a conceptual understanding of how the structure of a matrix dictates the behavior of the system it represents. Our focus will be on the practical techniques and theoretical underpinnings required to solve problems with both accuracy and efficiency, a critical skill assessed in a competitive examination environment.

---

Chapter Contents

| # | Topic | What You'll Learn |
|---|-------|-------------------|
| 1 | Gaussian Elimination | Systematically solving linear systems via row operations. |
| 2 | Rank and Nullity | Analyzing the fundamental subspaces of a matrix. |
| 3 | Matrix Representation | Encoding linear transformations as structured arrays. |

---

Learning Objectives

By the End of This Chapter

After completing this chapter, you will be able to:

  • Apply the Gaussian elimination algorithm to solve systems of linear equations and find the inverse of a matrix.

  • Determine the rank and nullity of a matrix and understand their relationship via the rank-nullity theorem.

  • Analyze the consistency and nature of solutions for a system of linear equations using the concept of rank.

  • Represent a linear transformation with respect to a given basis.

---

We now turn our attention to Gaussian Elimination...

Part 1: Gaussian Elimination

Introduction

Gaussian elimination stands as a cornerstone algorithm in linear algebra, providing a systematic and computationally efficient method for solving systems of linear equations. Its fundamental purpose is to transform a given system, represented by an augmented matrix, into an equivalent, simpler system that can be solved with relative ease. This transformation is achieved by methodically applying a set of elementary row operations to reduce the matrix to what is known as row echelon form.

The procedure is not merely a tool for solving for unknown variables; its applications extend to other critical matrix computations relevant to the GATE examination, including the determination of a matrix's rank, the calculation of its determinant, and finding its inverse. A thorough understanding of this algorithm, including its operational steps and its computational complexity, is indispensable for tackling a wide range of problems in linear algebra and its applications in data science. We shall explore the mechanics of this method, analyze its efficiency, and discuss its implications for determining the nature of solutions to linear systems.

📖 Augmented Matrix

For a system of linear equations expressed in matrix form as A\mathbf{x} = \mathbf{b}, where A is an m \times n coefficient matrix, \mathbf{x} is an n \times 1 column vector of variables, and \mathbf{b} is an m \times 1 column vector of constants, the augmented matrix is a combined m \times (n+1) matrix denoted by [A | \mathbf{b}]. This matrix contains all the essential information of the system.

---

The Algorithm: Forward Elimination and Back Substitution

The process of Gaussian elimination is elegantly divided into two distinct phases. The first phase, Forward Elimination, aims to simplify the system by introducing zeros below the main diagonal of the coefficient matrix. The second phase, Back Substitution, involves solving the simplified upper triangular system for the unknown variables.

1. Phase I: Forward Elimination

The primary objective of forward elimination is to convert the augmented matrix [A | \mathbf{b}] into an equivalent matrix [U | \mathbf{c}] where U is in row echelon form. This is accomplished using a sequence of elementary row operations.

The elementary row operations are:

  • Swapping: Interchanging two rows (R_i \leftrightarrow R_j).

  • Scaling: Multiplying a row by a non-zero scalar (R_i \rightarrow kR_i, where k \neq 0).

  • Replacement: Replacing one row with the sum of itself and a multiple of another row (R_i \rightarrow R_i + kR_j).

The algorithm proceeds by selecting a pivot element in the first column (typically a_{11}) and using the replacement operation to eliminate all entries below it. This process is then repeated for the submatrix formed by excluding the first row and first column, and so on, until the entire matrix is in row echelon form.



Schematic of one elimination step: with pivot p = a_{11} in row 1 and entries x below it in the first column, the operations R_2 \rightarrow R_2 - (x/p)R_1 and R_3 \rightarrow R_3 - (x/p)R_1 replace those entries with zeros; the process then repeats on the remaining submatrix.

    📖 Row Echelon Form (REF)

    A matrix is in row echelon form if it satisfies the following conditions:

    • All rows consisting entirely of zeros are at the bottom of the matrix.

    • For each non-zero row, the first non-zero entry (called the pivot or leading entry) is to the right of the pivot of the row above it.

    • All entries in a column below a pivot are zeros.

    Worked Example (Forward Elimination):

Problem: Transform the following augmented matrix into row echelon form.

\left[ \begin{array}{ccc|c} 1 & 2 & 1 & 8 \\ 2 & 2 & 2 & 12 \\ 3 & 5 & 4 & 25 \end{array} \right]

Solution:

Step 1: Use the pivot in the first row (a_{11} = 1) to create zeros in the first column of rows 2 and 3. We perform the operations R_2 \rightarrow R_2 - 2R_1 and R_3 \rightarrow R_3 - 3R_1.

\left[ \begin{array}{ccc|c} 1 & 2 & 1 & 8 \\ 2 - 2(1) & 2 - 2(2) & 2 - 2(1) & 12 - 2(8) \\ 3 - 3(1) & 5 - 3(2) & 4 - 3(1) & 25 - 3(8) \end{array} \right]
= \left[ \begin{array}{ccc|c} 1 & 2 & 1 & 8 \\ 0 & -2 & 0 & -4 \\ 0 & -1 & 1 & 1 \end{array} \right]

Step 2: Now, consider the submatrix. The next pivot is in the second row, second column (a'_{22} = -2). Use this pivot to create a zero below it. We perform the operation R_3 \rightarrow R_3 - \frac{1}{2}R_2.

\left[ \begin{array}{ccc|c} 1 & 2 & 1 & 8 \\ 0 & -2 & 0 & -4 \\ 0 - \frac{1}{2}(0) & -1 - \frac{1}{2}(-2) & 1 - \frac{1}{2}(0) & 1 - \frac{1}{2}(-4) \end{array} \right]
= \left[ \begin{array}{ccc|c} 1 & 2 & 1 & 8 \\ 0 & -2 & 0 & -4 \\ 0 & 0 & 1 & 3 \end{array} \right]

Result: The matrix is now in row echelon form.
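The forward-elimination steps above can be sketched in a few lines of NumPy. This is a minimal illustration without partial pivoting (it assumes every pivot it encounters is non-zero); the function name `forward_eliminate` is ours, not a library routine.

```python
import numpy as np

def forward_eliminate(aug):
    """Reduce an augmented matrix [A|b] to row echelon form.

    Minimal sketch: no partial pivoting, assumes non-zero pivots.
    """
    M = aug.astype(float).copy()
    m, n = M.shape
    row = 0
    for col in range(n - 1):          # last column holds b
        if row >= m:
            break
        pivot = M[row, col]
        if pivot == 0:
            continue                  # production code would swap rows here
        for r in range(row + 1, m):
            # replacement operation: R_r -> R_r - (M[r,col]/pivot) * R_row
            M[r] -= (M[r, col] / pivot) * M[row]
        row += 1
    return M

aug = np.array([[1, 2, 1, 8],
                [2, 2, 2, 12],
                [3, 5, 4, 25]])
print(forward_eliminate(aug))
```

Running this on the worked example reproduces the echelon form derived by hand.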

    2. Phase II: Back Substitution

Once the augmented matrix is in row echelon form, [U | \mathbf{c}], the system of equations is upper triangular and can be solved by starting from the last equation and substituting the results back into the equations above.

    Worked Example (Back Substitution):

Problem: Solve the system represented by the row echelon form matrix obtained previously.

\left[ \begin{array}{ccc|c} 1 & 2 & 1 & 8 \\ 0 & -2 & 0 & -4 \\ 0 & 0 & 1 & 3 \end{array} \right]

    Solution:

The system of equations corresponding to this matrix is:

x_1 + 2x_2 + x_3 = 8

-2x_2 = -4

x_3 = 3

Step 1: Solve the last equation for x_3.

x_3 = 3

Step 2: Substitute the value of x_3 into the second-to-last equation and solve for x_2. (In this case, the second equation only depends on x_2.)

-2x_2 = -4
x_2 = \frac{-4}{-2} = 2

Step 3: Substitute the values of x_2 and x_3 into the first equation and solve for x_1.

x_1 + 2(2) + (3) = 8
x_1 + 4 + 3 = 8
x_1 + 7 = 8
x_1 = 1

Result: The unique solution to the system is

\mathbf{x} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}
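Back substitution is equally short in code. A sketch, assuming a non-singular upper triangular U (all diagonal entries non-zero); `back_substitute` is an illustrative helper, not a standard API:

```python
import numpy as np

def back_substitute(U, c):
    """Solve U x = c for upper triangular U with non-zero diagonal,
    working from the last equation upward."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-solved terms, then divide by the pivot
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[1.0,  2, 1],
              [0,   -2, 0],
              [0,    0, 1]])
c = np.array([8.0, -4, 3])
print(back_substitute(U, c))  # the solution x = (1, 2, 3) found above
```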

---

    Computational Complexity Analysis

    For competitive examinations like GATE, understanding the computational cost of an algorithm is crucial. The complexity is typically measured by counting the number of floating-point arithmetic operations (multiplications, divisions, additions, and subtractions).

General Case: Dense n \times n Matrix

Let us consider an n \times n system. The forward elimination phase is the most computationally intensive part. To eliminate the entries below the pivot in the first column, we perform (n-1) row operations. Each row operation involves one division (to find the multiplier) and n multiplications and n additions across the row. This is repeated for each of the n-1 pivots.

The number of operations can be approximated by the sums:

• Multiplications/Divisions: \sum_{k=1}^{n-1} (n-k)(n-k+1) \approx \frac{n^3}{3}

• Additions/Subtractions: \sum_{k=1}^{n-1} (n-k)(n-k) \approx \frac{n^3}{3}

The total number of operations for forward elimination is therefore proportional to n^3, which we denote as O(n^3).

For back substitution, we solve for x_n, then x_{n-1}, and so on. Solving for x_i requires (n-i) multiplications, (n-i) subtractions, and one division.

• Total operations: \sum_{i=1}^{n} (2(n-i)+1) \approx n^2.

The complexity of back substitution is thus O(n^2).

Since the forward elimination phase dominates, the overall complexity of Gaussian elimination for a general n \times n matrix is O(n^3).

Special Case: Upper Triangular Matrix

A critical observation, often tested in GATE, concerns the complexity for special matrix structures. Consider an n \times n matrix that is already in upper triangular form.

In this scenario, no elements exist below the main diagonal, so the entire forward elimination phase is redundant and can be skipped. The algorithm proceeds directly to the back substitution phase.

As we established, the complexity of back substitution is O(n^2). Therefore, performing "Gaussian elimination" on an already upper triangular matrix to solve a system A\mathbf{x} = \mathbf{b} reduces to back substitution alone, and its complexity is O(n^2).

📐 Complexity of Gaussian Elimination

\text{Complexity} = \begin{cases} O(n^3) & \text{for a general } n \times n \text{ matrix} \\ O(n^2) & \text{for an upper triangular } n \times n \text{ matrix} \end{cases}

Variables:

  • n = The dimension of the square matrix.

When to use: To estimate the computational cost of solving a system of linear equations. The special case is a common variation in competitive exams.
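The operation counts above can be checked directly. The sketch below evaluates the exact multiplication/division count for forward elimination and compares it with the n^3/3 approximation (the helper name is ours):

```python
def elimination_mul_count(n):
    """Exact multiplications + divisions in forward elimination:
    sum over pivots k = 1..n-1 of (n-k)(n-k+1)."""
    return sum((n - k) * (n - k + 1) for k in range(1, n))

for n in (10, 100, 1000):
    exact = elimination_mul_count(n)
    approx = n ** 3 / 3
    print(n, exact, round(exact / approx, 4))  # ratio approaches 1 as n grows
```

For n = 10 the exact count is 330 against an estimate of about 333, and the ratio tends to 1 for large n, confirming the O(n^3) behaviour.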

    ---

    Problem-Solving Strategies

    💡 Identifying System Consistency

The final row echelon form of the augmented matrix provides complete information about the nature of the solution set. Let the rank of the coefficient matrix A be r.

  • Unique Solution: The system is consistent and has a unique solution if \operatorname{rank}(A) = \operatorname{rank}([A|\mathbf{b}]) = n, the number of variables. There are no free variables.

  • No Solution: The system is inconsistent and has no solution if \operatorname{rank}(A) < \operatorname{rank}([A|\mathbf{b}]). This occurs when the echelon form has a row of the form [0 \ 0 \ \dots \ 0 \ | \ c] where c \neq 0, which corresponds to the impossible equation 0 = c.

  • Infinitely Many Solutions: The system is consistent and has infinitely many solutions if \operatorname{rank}(A) = \operatorname{rank}([A|\mathbf{b}]) = r < n. In this case, there are n-r free variables, which can be chosen arbitrarily, leading to an infinite number of solutions.
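These three cases can be checked mechanically with NumPy's `numpy.linalg.matrix_rank`. The classifier below is a hypothetical helper written for illustration; it implements exactly the rank tests listed above:

```python
import numpy as np

def classify(A, b):
    """Classify A x = b by comparing rank(A), rank([A|b]), and n."""
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, b])
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    n = A.shape[1]
    if r < r_aug:
        return "no solution"
    return "unique solution" if r == n else "infinitely many solutions"

print(classify([[1, 2], [3, 4]], [5, 6]))  # invertible A -> unique solution
print(classify([[1, 2], [2, 4]], [3, 7]))  # rank(A) < rank([A|b]) -> no solution
print(classify([[1, 2], [2, 4]], [3, 6]))  # r < n -> infinitely many solutions
```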

    ---

    Common Mistakes

    ⚠️ Common Errors in Gaussian Elimination
  • Arithmetic Slips: Simple errors in addition or multiplication, especially with negative numbers, are the most frequent source of incorrect answers.
Correct approach: Write down each step clearly and double-check your calculations, particularly the row replacement operations.
  • Operating on the Wrong Row: Applying a replacement operation like R_i \rightarrow R_i + kR_j but using an already-modified version of R_j from a previous step within the same elimination stage.
Correct approach: In a single stage (e.g., clearing all entries below pivot a_{kk}), always use the same state of the pivot row R_k for all calculations.
  • Forgetting the Augmented Part: Performing row operations only on the coefficient matrix A and forgetting to apply them to the vector \mathbf{b}.
Correct approach: Remember that every operation must be applied to the entire row of the augmented matrix [A|\mathbf{b}].

    ---

    Practice Questions

    :::question type="MCQ" question="Consider the augmented matrix for a system of linear equations:

\left[ \begin{array}{ccc|c} 2 & -4 & 6 & 8 \\ 1 & -1 & 5 & 9 \\ -1 & 3 & -2 & -4 \end{array} \right]
    . After performing the first step of forward elimination to create zeros below the first pivot, what is the entry in the second row, third column?" options=["2","3","-2","8"] answer="2" hint="The first pivot is 2. Perform the operation R2R2(1/2)R1R_2 \rightarrow R_2 - (1/2)R_1. Calculate the new value for the element at position (2,3)." solution="
    Step 1: Identify the pivot and the target row. The pivot is a11=2a_{11} = 2. We want to modify Row 2.

    Step 2: Determine the row operation. The operation is R2R2a21a11R1R_2 \rightarrow R_2 - \frac{a_{21}}{a_{11}}R_1.

    R2R212R1R_2 \rightarrow R_2 - \frac{1}{2}R_1

    Step 3: Apply this operation to the third element of Row 2. The original element is a23=5a_{23} = 5. The corresponding element in Row 1 is a13=6a_{13} = 6.

    a23=a2312a13a'_{23} = a_{23} - \frac{1}{2}a_{13}
    a23=512(6)a'_{23} = 5 - \frac{1}{2}(6)
    a23=53=2a'_{23} = 5 - 3 = 2

    Result: The new entry in the second row, third column is 2.
    Answer: \boxed{2}
    "
    :::

    :::question type="NAT" question="A system of equations is reduced to the following row echelon form:

\left[ \begin{array}{ccc|c} 1 & -2 & 3 & 9 \\ 0 & 1 & 3 & 5 \\ 0 & 0 & 2 & 4 \end{array} \right]
    . What is the value of the variable x2x_2?" answer="-1" hint="This requires back substitution. First, solve for x3x_3 from the last row. Then, substitute this value into the equation from the second row to find x2x_2." solution="
    Step 1: Write down the equations from the row echelon form.

    x12x2+3x3=9x_1 - 2x_2 + 3x_3 = 9
    x2+3x3=5x_2 + 3x_3 = 5
    2x3=42x_3 = 4

    Step 2: Solve for x3x_3 using the third equation.

    2x3=4    x3=22x_3 = 4 \implies x_3 = 2

    Step 3: Substitute the value of x3x_3 into the second equation to solve for x2x_2.

    x2+3(2)=5x_2 + 3(2) = 5
    x2+6=5x_2 + 6 = 5
    x2=56=1x_2 = 5 - 6 = -1

    Result: The value of x2x_2 is -1.
    Answer: \boxed{-1}
    "
    :::

    :::question type="MSQ" question="The augmented matrix of a system of linear equations is reduced to the form

\left[ \begin{array}{cccc|c} 1 & 2 & 3 & 4 & 5 \\ 0 & 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{array} \right]
    . Which of the following statements is/are correct?" options=["The system is inconsistent.","The system has infinitely many solutions.","The system has 2 free variables.","The rank of the coefficient matrix is 3."] answer="The system has infinitely many solutions.,The system has 2 free variables." hint="Analyze the rank\operatorname{rank} of the coefficient matrix and the augmented matrix. The number of free variables is the total number of variables minus the rank\operatorname{rank}." solution="
    Analysis:
  • The coefficient matrix has 2 non-zero rows, so its rank\operatorname{rank} is r=2r=2.

  • The augmented matrix also has 2 non-zero rows, so its rank\operatorname{rank} is also 2. Since rank(A)=rank([Ab])\operatorname{rank}(A) = \operatorname{rank}(\left[A|\mathbf{b}\right]), the system is consistent. This eliminates the 'inconsistent' option.

  • The number of variables is n=4n=4 (corresponding to the four columns in A).

  • Since r<nr < n (2<42 < 4), the system has infinitely many solutions.

  • The number of free variables is nr=42=2n - r = 4 - 2 = 2. The pivot columns are column 1 and column 3. The non-pivot columns, column 2 (x2x_2) and column 4 (x4x_4), correspond to the free variables.

  • The rank\operatorname{rank} of the coefficient matrix is 2, not 3.
  • Therefore, the correct statements are that the system has infinitely many solutions and has 2 free variables.
    Answer: \boxed{\text{The system has infinitely many solutions.,The system has 2 free variables.}}
    "
    :::

    :::question type="MCQ" question="What is the order of computational complexity for solving a system of linear equations Ax=bA\mathbf{x}=\mathbf{b} where AA is an n×nn \times n lower triangular matrix?" options=["O(n)O(n)","O(n2)O(n^2)","O(n3)O(n^3)","O(logn)O(\log n)"] answer="O(n2)O(n^2)" hint="A lower triangular system is similar to an upper triangular one, but the solving process is reversed. Analyze the complexity of this 'forward substitution' process." solution="
    Step 1: A system with a lower triangular matrix LL has the form:

    l11x1=b1l21x1+l22x2=b2\begin{aligned}l_{11}x_1 & = b_1 \\
    l_{21}x_1 + l_{22}x_2 & = b_2 \\
    & \vdots\end{aligned}

    Step 2: This system can be solved directly without an elimination phase. The process is called forward substitution. We first solve for x1x_1, then substitute its value into the second equation to solve for x2x_2, and so on.

    Step 3: Let's analyze the complexity of this process.

    • To find x1x_1: 1 division.

    • To find x2x_2: 1 multiplication, 1 subtraction, 1 division.

    • To find xkx_k: (k1)(k-1) multiplications, (k1)(k-1) subtractions, 1 division.


    Step 4: The total number of operations is approximately k=1n2(k1)\sum_{k=1}^{n} 2(k-1), which is proportional to n2n^2.

    Result: The complexity is O(n2)O(n^2), analogous to the back substitution process for upper triangular matrices.
    Answer: \boxed{O(n^2)}
    "
    :::

    ---

    Summary

    Key Takeaways for GATE

    • Two-Phase Process: Gaussian elimination consists of forward elimination (to create a row echelon form) and back substitution (to solve the resulting upper triangular system).

    • Computational Cost: The complexity for solving a general n×nn \times n system is O(n3)O(n^3). This cost is dominated by the forward elimination phase.

    • Critical Special Case: For an already upper or lower triangular matrix, the elimination phase is not needed. The system is solved directly by back or forward substitution, respectively, with a much lower complexity of O(n2)O(n^2). This is a frequent concept in competitive exams.

    • Solution Nature: The row echelon form of the augmented matrix definitively determines if the system has a unique solution, no solution, or infinitely many solutions by comparing the ranks of the coefficient and augmented matrices.

    ---

    What's Next?

    💡 Continue Learning

    This topic connects to:

      • LU Decomposition: This is a direct factorization of a matrix AA into a lower triangular matrix LL and an upper triangular matrix UU, such that A=LUA = LU. The matrices LL and UU are, in fact, byproducts of the Gaussian elimination process.

      • Finding the Matrix Inverse: An extension of Gaussian elimination, known as the Gauss-Jordan method, continues the elimination process to create zeros above the pivots as well, resulting in a diagonal matrix. This method is a standard way to compute the inverse of a matrix, A1A^{-1}.

      • Rank and Null Space of a Matrix: The number of pivots (non-zero rows) in the row echelon form is the rank of the matrix. The structure of the solution to Ax=0A\mathbf{x} = \mathbf{0}, found using Gaussian elimination, defines the null space of the matrix.

    ---

    💡 Moving Forward

    Now that you understand Gaussian Elimination, let's explore Rank and Nullity which builds on these concepts.

    ---

    Part 2: Rank and Nullity

    Introduction

    In the study of linear algebra, the concepts of rank and nullity are of central importance. They provide a quantitative measure of the fundamental properties of a matrix and the linear transformation it represents. The rank of a matrix can be understood as a measure of its "non-degeneracy," quantifying the dimension of the output space of the transformation. Conversely, the nullity measures the dimension of the input space that is mapped to the zero vector, representing the "degeneracy" of the transformation.

    Together, these concepts are encapsulated in the Rank-Nullity Theorem, a cornerstone result that establishes a profound relationship between the dimensions of the domain, the image (column space), and the kernel (null space) of a linear map. For the GATE examination, a firm grasp of rank and nullity is indispensable for analyzing systems of linear equations, determining matrix invertibility, and understanding the structure of vector spaces associated with matrices. We shall explore these concepts, their properties, and their direct applications to problem-solving.

    📖 Rank of a Matrix

The rank of a matrix A, denoted \operatorname{rank}(A) or \rho(A), is defined as the maximum number of linearly independent column vectors in the matrix. Equivalently, it is the maximum number of linearly independent row vectors. The rank also corresponds to the dimension of the column space (or image) of the matrix, as well as the dimension of its row space.

    📖 Nullity of a Matrix

The nullity of a matrix A, denoted \operatorname{nullity}(A), is the dimension of the null space (or kernel) of the matrix. The null space of A is the set of all vectors \mathbf{x} such that A\mathbf{x} = \mathbf{0}.

---

    Key Concepts

    1. The Rank-Nullity Theorem

    The Rank-Nullity Theorem provides a fundamental relationship between the rank and nullity of a matrix. It connects the dimensions of the two primary subspaces associated with any matrix: the column space and the null space.

📐 Rank-Nullity Theorem

For a matrix A of dimensions m \times n:

\operatorname{rank}(A) + \operatorname{nullity}(A) = n

Variables:

  • \operatorname{rank}(A) = The rank of matrix A.

  • \operatorname{nullity}(A) = The nullity of matrix A.

  • n = The number of columns in matrix A.

When to use: This theorem is crucial for finding the dimension of the null space when the rank is known, or vice versa. It is fundamental in problems involving the number of free variables in a system of linear equations.




Diagram (schematic): the matrix A maps the domain \mathbb{R}^n to the codomain \mathbb{R}^m; the null space (kernel), of dimension \operatorname{nullity}(A), is sent to the zero vector \mathbf{0}, while the column space (image), of dimension \operatorname{rank}(A), is the set of all outputs.

    Worked Example:

Problem: Find the rank and nullity of the matrix A = \begin{pmatrix} 1 & 2 & 1 & 3 \\ 1 & 3 & 2 & 4 \\ 2 & 5 & 3 & 7 \end{pmatrix}.

Solution:

We will use Gaussian elimination to find the row echelon form of the matrix. The number of non-zero rows will be the rank.

Step 1: Start with the given matrix.

A = \begin{pmatrix} 1 & 2 & 1 & 3 \\ 1 & 3 & 2 & 4 \\ 2 & 5 & 3 & 7 \end{pmatrix}

Step 2: Apply row operations to create zeros below the first pivot.
Apply R_2 \rightarrow R_2 - R_1 and R_3 \rightarrow R_3 - 2R_1.

\sim \begin{pmatrix} 1 & 2 & 1 & 3 \\ 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{pmatrix}

Step 3: Apply a row operation to create a zero below the second pivot.
Apply R_3 \rightarrow R_3 - R_2.

\sim \begin{pmatrix} 1 & 2 & 1 & 3 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}

Step 4: Identify the rank.
The matrix is now in row echelon form. There are two non-zero rows.

\operatorname{rank}(A) = 2

Step 5: Apply the Rank-Nullity Theorem.
The matrix A is of size 3 \times 4, so m = 3 and n = 4.

\operatorname{rank}(A) + \operatorname{nullity}(A) = n
2 + \operatorname{nullity}(A) = 4
\operatorname{nullity}(A) = 4 - 2 = 2

Answer: The rank of the matrix is 2, and the nullity is 2.
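The same answer can be confirmed numerically with NumPy: `matrix_rank` counts the pivots for us, and the nullity follows from the theorem.

```python
import numpy as np

A = np.array([[1, 2, 1, 3],
              [1, 3, 2, 4],
              [2, 5, 3, 7]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank   # rank-nullity: nullity = n - rank
print(rank, nullity)          # 2 2
```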

    ---

    2. Rank and Systems of Linear Equations

    The concept of rank is fundamental to determining the nature of solutions for a system of linear equations, Ax=bA\mathbf{x} = \mathbf{b}.

    Consistency Condition

A system of linear equations A\mathbf{x} = \mathbf{b} is consistent (i.e., has at least one solution) if and only if the rank of the coefficient matrix A is equal to the rank of the augmented matrix [A|\mathbf{b}]:

\operatorname{rank}(A) = \operatorname{rank}[A|\mathbf{b}]

If \operatorname{rank}(A) < \operatorname{rank}[A|\mathbf{b}], the system is inconsistent and has no solution.

Let us consider a consistent system where \operatorname{rank}(A) = \operatorname{rank}[A|\mathbf{b}] = r, and let n be the number of variables (columns of A).

  • Unique Solution: The system has a unique solution if and only if r = n. In this case, there are no free variables. For a square matrix (m = n), this occurs when A is invertible (full rank).

  • Infinite Solutions: The system has infinitely many solutions if and only if r < n. The number of free variables, which determines the dimensionality of the solution set, is n - r. This quantity is precisely the nullity of A.

This analysis is critical for understanding questions that probe the existence and uniqueness of solutions for different matrices and vectors, as seen in GATE.

| Case | Matrix A | \operatorname{rank}(A) = r | Condition | Nature of Solutions for A\mathbf{x} = \mathbf{b} |
| :--- | :--- | :--- | :--- | :--- |
| 1 | Square (n \times n) | r = n | Invertible | Unique solution for every \mathbf{b}. |
| 2 | Square (n \times n) | r < n | Singular | No solution for some \mathbf{b}, infinite solutions for other \mathbf{b}. |
| 3 | Wide (m \times n, m < n) | r \le m < n | - | No solution or infinite solutions; never a unique solution. |
| 4 | Tall (m \times n, m > n) | r = n | Full column rank | Unique solution or no solution. |
| 5 | Tall (m \times n, m > n) | r < n | Rank deficient | Infinite solutions or no solution. |

    ---

    3. Properties of Rank and Nullity

    Understanding the properties of rank is essential for solving more abstract problems.

Let A be an m \times n matrix and B be an n \times p matrix.

• Bounds: 0 \le \operatorname{rank}(A) \le \min(m, n).
• Transpose: \operatorname{rank}(A) = \operatorname{rank}(A^T).
• Matrix Product (Sylvester's Inequality): \operatorname{rank}(A) + \operatorname{rank}(B) - n \le \operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B)). The upper bound is the one most frequently used.
• Invertibility: An n \times n matrix A is invertible if and only if \operatorname{rank}(A) = n. This is equivalent to \operatorname{nullity}(A) = 0 or \det(A) \ne 0.
• Matrix Powers: For a square matrix A, we have \operatorname{rank}(A) \ge \operatorname{rank}(A^2) \ge \operatorname{rank}(A^3) \ge \dots; the ranks are non-increasing. A particularly important result, relevant to GATE questions, is that if \operatorname{rank}(A^k) = \operatorname{rank}(A^{k+1}) for some k \ge 1, then \operatorname{rank}(A^k) = \operatorname{rank}(A^j) for all j > k.
• Relation between Column Spaces: The column space of AB, C(AB), is a subspace of the column space of A, C(A). Similarly, the null space of B, N(B), is a subspace of the null space of AB, N(AB).
• A Special Condition: A condition like A^3 = A implies a stable structure for the column space. If \mathbf{y} \in C(A), then \mathbf{y} = A\mathbf{x} for some \mathbf{x}. The condition A^3 = A gives \mathbf{y} = A\mathbf{x} = A^3\mathbf{x} = A^2(A\mathbf{x}) = A^2\mathbf{y}. This means every vector in the column space of A is also in the column space of A^2. Since we always have C(A^2) \subseteq C(A), this forces the equality C(A^2) = C(A), which in turn implies \operatorname{rank}(A^2) = \operatorname{rank}(A).
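Several of these properties can be spot-checked numerically. A small sketch (the specific matrices below are arbitrary examples chosen here, not from the text):

```python
import numpy as np

A = np.array([[1.0, 2, 0, 1],
              [0,   1, 1, 0],
              [1,   3, 1, 1],   # row 3 = row 1 + row 2, so A is rank deficient
              [0,   0, 0, 1]])
B = np.array([[1.0, 0, 0, 0],
              [0,   1, 0, 0],
              [0,   0, 0, 0],   # B is also rank deficient
              [0,   0, 0, 1]])

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)

# Sylvester's inequality: rA + rB - n <= rank(AB) <= min(rA, rB)
assert rA + rB - 4 <= rAB <= min(rA, rB)

# rank(A) >= rank(A^2) >= rank(A^3): ranks of powers are non-increasing
ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(A, k)) for k in (1, 2, 3)]
assert ranks[0] >= ranks[1] >= ranks[2]

print(rA, rB, rAB, ranks)
```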
    ---

    4. Rank, Nullity, and Eigenvalues

    The nullity of a matrix is directly connected to its eigenvalues.

Nullity and Eigenvalue 0

The nullity of a square matrix A is greater than zero if and only if \lambda = 0 is an eigenvalue of A:

\operatorname{nullity}(A) > 0 \iff \lambda = 0 \text{ is an eigenvalue of } A

Furthermore, \operatorname{nullity}(A) is the geometric multiplicity of the eigenvalue \lambda = 0. A non-zero nullity implies the matrix is singular (non-invertible).

This connection is frequently tested. For instance, if a problem provides information that implies a matrix must have an eigenvalue of 0, we can immediately conclude that its nullity is at least 1, its rank is less than n, and the homogeneous system A\mathbf{x} = \mathbf{0} has infinitely many solutions.

Eigenvalues of Polynomials of Matrices:
If A has an eigenvalue \lambda with corresponding eigenvector \mathbf{v}, and P(t) is a polynomial, then P(A) has the eigenvalue P(\lambda) with the same eigenvector \mathbf{v}.
For example, if B = A^3 - 2A^2 + A and \lambda is an eigenvalue of A, then \mu = \lambda^3 - 2\lambda^2 + \lambda is an eigenvalue of B.

A Common GATE Scenario:
If the sum of elements in each row of a matrix A is a constant k, then k is an eigenvalue of A, and the vector \mathbf{v} = [1, 1, \dots, 1]^T is the corresponding eigenvector.
To see this, consider the product A\mathbf{v}. The i-th entry of the resulting vector is \sum_{j=1}^{n} A_{ij}v_j = \sum_{j=1}^{n} A_{ij}(1), which is the sum of the elements in the i-th row of A. If this sum is k for all rows, then A\mathbf{v} = k\mathbf{v}.
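The row-sum fact is easy to verify numerically; the matrix below is an arbitrary example whose rows each sum to 3:

```python
import numpy as np

A = np.array([[2.0, 1, 0],
              [0,   1, 2],
              [1,   1, 1]])   # each row sums to 3

v = np.ones(3)
print(A @ v)                  # [3. 3. 3.], i.e. A v = 3 v, so 3 is an eigenvalue

eigvals = np.linalg.eigvals(A)
assert np.isclose(eigvals, 3).any()   # 3 appears among the computed eigenvalues
```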

    ---


    Problem-Solving Strategies

    💡 GATE Strategy: Finding Rank and Nullity

    • Use Gaussian Elimination: The most reliable method to find the rank of a matrix is to reduce it to its Row Echelon Form. The number of non-zero rows (or pivots) is the rank.

    • Check for Singularity: For a square matrix, first check if it is singular (det(A)=0\det(A) = 0). If it is, then rank(A)<n\operatorname{rank}(A) < n and nullity(A)>0\operatorname{nullity}(A) > 0. This is often faster than full row reduction.

    • Analyze Homogeneous Systems (Ax=0A\mathbf{x}=\mathbf{0}): The question of whether Ax=0A\mathbf{x}=\mathbf{0} has non-trivial solutions is a question about nullity. It has non-trivial (infinite) solutions if and only if nullity(A)>0\operatorname{nullity}(A) > 0.

    • Leverage Matrix Properties: For abstract problems, do not compute. Use properties like rank(AB)min(rank(A),rank(B))\operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B)) or eigenvalue properties to deduce the rank or nullity. If a problem states "the sum of each row is 1," immediately conclude that λ=1\lambda = 1 is an eigenvalue.
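The first two strategies can be combined in a few lines of NumPy (the matrix is an illustrative assumption); note that np.linalg.matrix_rank determines the rank numerically rather than by explicit row reduction:

```python
import numpy as np

A = np.array([[2.0, 4.0],
              [1.0, 2.0]])        # second row is half the first

# Singularity check first: det(A) = 0 means rank(A) < n.
assert abs(np.linalg.det(A)) < 1e-12

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank        # Rank-Nullity theorem
print(rank, nullity)  # 1 1
```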

    ---

    Common Mistakes

    ⚠️ Avoid These Errors
      • ❌ Confusing columns and rows in the Rank-Nullity Theorem. The theorem is rank(A)+nullity(A)=n\operatorname{rank}(A) + \operatorname{nullity}(A) = n, where nn is the number of columns.
      • ❌ Assuming a square system Ax=bA\mathbf{x}=\mathbf{b} has a unique solution. This is only true if AA is non-singular (rank(A)=n\operatorname{rank}(A) = n). If AA is singular, it will have either no solution or infinite solutions.
      • ❌ Stating that an underdetermined system (m<nm < n) has a unique solution. An underdetermined system can never have a unique solution; its solutions, if they exist, are always infinite.
      • ❌ Forgetting that the homogeneous system Ax=0A\mathbf{x}=\mathbf{0} is always consistent because x=0\mathbf{x}=\mathbf{0} is always a solution. The key question is whether non-trivial solutions exist.

    ---

    Practice Questions

    :::question type="MCQ" question="Let AA be a 5×55 \times 5 real matrix such that A2=AA^2 = A (an idempotent matrix). Which of the following statements is NOT always true?" options=["rank(A)+rank(IA)=5\operatorname{rank}(A) + \operatorname{rank}(I-A) = 5", "The null space of AA is a subset of the column space of (IA)(I-A)", "AA is invertible", "The only possible eigenvalues of AA are 00 and 11"] answer="AA is invertible" hint="Consider the properties of idempotent matrices. What happens if you choose AA to be the zero matrix? Also, analyze the equation Ax=λxA\mathbf{x}=\lambda\mathbf{x} by multiplying by AA again." solution="
    Step 1: Analyze the condition A2=AA^2 = A. This can be rewritten as A(AI)=0A(A-I) = 0.

    Step 2: Analyze the eigenvalues. Let λ\lambda be an eigenvalue of AA with eigenvector x\mathbf{x}.

    Ax=λxA\mathbf{x} = \lambda\mathbf{x}

    Multiplying by AA, we get
    A2x=A(λx)=λ(Ax)=λ(λx)=λ2xA^2\mathbf{x} = A(\lambda\mathbf{x}) = \lambda(A\mathbf{x}) = \lambda(\lambda\mathbf{x}) = \lambda^2\mathbf{x}

    Since A2=AA^2 = A, we have
    Ax=A2xA\mathbf{x} = A^2\mathbf{x}

    Therefore,
    λx=λ2x\lambda\mathbf{x} = \lambda^2\mathbf{x}

    which implies
    (λ2λ)x=0(\lambda^2 - \lambda)\mathbf{x} = 0

    Since x\mathbf{x} is an eigenvector, it is non-zero, so we must have
    λ2λ=0\lambda^2 - \lambda = 0

    or
    λ(λ1)=0\lambda(\lambda-1)=0

    The only possible eigenvalues are λ=0\lambda=0 and λ=1\lambda=1. So, option D is always true.

    Step 3: Analyze invertibility. If AA is the zero matrix, A2=0=AA^2 = 0 = A, so the condition is satisfied. The zero matrix is not invertible. If AA is the identity matrix II, then I2=II^2 = I, and II is invertible. Since AA can be singular, the statement 'AA is invertible' is not always true. Option C is therefore the required answer.

    Step 4: Analyze the other options for completeness.
    For an idempotent matrix, it is a standard result that Rn=C(A)N(A)\mathbb{R}^n = C(A) \oplus N(A), where C(A)C(A) is the column space and N(A)N(A) is the null space.
    Also, it can be shown that N(A)=C(IA)N(A) = C(I-A) and N(IA)=C(A)N(I-A) = C(A).
    The rank is the dimension of the column space. So, rank(A)=dim(C(A))\operatorname{rank}(A) = \dim(C(A)) and rank(IA)=dim(C(IA))=dim(N(A))=nullity(A)\operatorname{rank}(I-A) = \dim(C(I-A)) = \dim(N(A)) = \operatorname{nullity}(A).
    By the Rank-Nullity theorem, rank(A)+nullity(A)=5\operatorname{rank}(A) + \operatorname{nullity}(A) = 5.
    Substituting, we get rank(A)+rank(IA)=5\operatorname{rank}(A) + \operatorname{rank}(I-A) = 5. So, option A is always true.
    From N(A)=C(IA)N(A) = C(I-A), it is clear that the null space of AA is not just a subset but is equal to the column space of (IA)(I-A). So, option B is always true.

    Result: The statement that is not always true is that AA is invertible.
    Answer: \boxed{A \text{ is invertible}}
    "
    :::

    :::question type="NAT" question="Let AA be a 4×54 \times 5 matrix with rank(A)=3\operatorname{rank}(A) = 3. If the system Ax=bA\mathbf{x}=\mathbf{b} is consistent, what is the dimension of the solution space for the corresponding homogeneous system Ax=0A\mathbf{x}=\mathbf{0}?" answer="2" hint="The dimension of the solution space of a homogeneous system is its nullity. Use the Rank-Nullity theorem." solution="
    Step 1: Identify the given parameters.
    The matrix AA has dimensions m×nm \times n, where m=4m=4 and n=5n=5.
    The rank of the matrix is given: rank(A)=3\operatorname{rank}(A) = 3.

    Step 2: Understand the question.
    The 'dimension of the solution space for the homogeneous system Ax=0A\mathbf{x}=\mathbf{0}' is the definition of the nullity of matrix AA.

    Step 3: Apply the Rank-Nullity Theorem.
    The theorem states: rank(A)+nullity(A)=n\operatorname{rank}(A) + \operatorname{nullity}(A) = n (number of columns).

    Step 4: Substitute the values and solve for nullity.

    3+nullity(A)=53 + \operatorname{nullity}(A) = 5

    nullity(A)=53\operatorname{nullity}(A) = 5 - 3

    nullity(A)=2\operatorname{nullity}(A) = 2

    Result: The dimension of the solution space is 2.
    Answer: \boxed{2}
    "
    :::

    :::question type="MSQ" question="Let AR3×4A \in \mathbb{R}^{3 \times 4}, BR4×3B \in \mathbb{R}^{4 \times 3}, and CR3×3C \in \mathbb{R}^{3 \times 3}. Let rank(A)=3\operatorname{rank}(A)=3 and rank(B)=2\operatorname{rank}(B)=2. Which of the following statements is/are necessarily TRUE?" options=["The system Ax=bA\mathbf{x}=\mathbf{b} has infinite solutions for every bR3\mathbf{b} \in \mathbb{R}^3.","The system By=dB\mathbf{y}=\mathbf{d} can have a unique solution for some dR4\mathbf{d} \in \mathbb{R}^4.","rank(CA)=3\operatorname{rank}(CA) = 3 for every CR3×3C \in \mathbb{R}^{3 \times 3}.","rank(AB)\operatorname{rank}(AB) is at most 2."] answer="A,D" hint="For A, consider the dimensions and rank of A. For B, consider the dimensions of B. For C and D, use the property rank(XY)min(rank(X),rank(Y))\operatorname{rank}(XY) \le \min(\operatorname{rank}(X), \operatorname{rank}(Y))." solution="
    Analysis of Option A:
    AA is a 3×43 \times 4 matrix with rank(A)=3\operatorname{rank}(A)=3. The number of rows is m=3m=3. Since rank(A)=m\operatorname{rank}(A)=m, the columns of AA span the entire codomain R3\mathbb{R}^3. This means that for any vector bR3\mathbf{b} \in \mathbb{R}^3, the system Ax=bA\mathbf{x}=\mathbf{b} is consistent. The number of variables is n=4n=4. Since rank(A)=3<n=4\operatorname{rank}(A) = 3 < n=4, there must be nrank(A)=43=1n - \operatorname{rank}(A) = 4-3=1 free variable. Therefore, the system has infinite solutions for every b\mathbf{b}. Statement A is TRUE.

    Analysis of Option B:
    BB is a 4×34 \times 3 matrix. The system is By=dB\mathbf{y}=\mathbf{d}, where yR3\mathbf{y} \in \mathbb{R}^3. The number of variables is n=3n=3. For a unique solution, we need rank(B)=n=3\operatorname{rank}(B) = n = 3. However, it is given that rank(B)=2\operatorname{rank}(B)=2. Since rank(B)<n\operatorname{rank}(B) < n, the system, if consistent, must have infinite solutions. It can never have a unique solution. Statement B is FALSE.

    Analysis of Option C:
    CC is a 3×33 \times 3 matrix and AA is 3×43 \times 4, so the product CACA is defined and has size 3×43 \times 4. By the rank-of-a-product property, rank(CA)min(rank(C),rank(A))\operatorname{rank}(CA) \le \min(\operatorname{rank}(C), \operatorname{rank}(A)). If CC is invertible, then rank(CA)=rank(A)=3\operatorname{rank}(CA) = \operatorname{rank}(A) = 3, but if CC is the zero matrix, then CA=0CA = 0 and rank(CA)=0\operatorname{rank}(CA) = 0. Since nothing is known about rank(C)\operatorname{rank}(C), the equality rank(CA)=3\operatorname{rank}(CA) = 3 does not hold for every CC. Statement C is FALSE.

    Analysis of Option D:
    We are considering the product ABAB. AA is 3×43 \times 4 and BB is 4×34 \times 3. The product ABAB is a 3×33 \times 3 matrix.
    Using the rank property:

    rank(AB)min(rank(A),rank(B))\operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B))

    rank(AB)min(3,2)\operatorname{rank}(AB) \le \min(3, 2)

    rank(AB)2\operatorname{rank}(AB) \le 2

    This means the rank of ABAB is at most 2. Statement D is TRUE.

    Conclusion: Statements A and D are necessarily true.
    Answer: \boxed{A,D}
    "
    :::

    ---

    Summary

    Key Takeaways for GATE

    • The Rank-Nullity Theorem is Central: For any m×nm \times n matrix AA, rank(A)+nullity(A)=n\operatorname{rank}(A) + \operatorname{nullity}(A) = n. Always remember that nn is the number of columns.

    • Rank Governs Solutions to Ax=bA\mathbf{x}=\mathbf{b}: The system is consistent iff rank(A)=rank([Ab])\operatorname{rank}(A) = \operatorname{rank}([A|\mathbf{b}]). If consistent, the solution is unique if rank(A)=n\operatorname{rank}(A) = n (number of variables) and infinite if rank(A)<n\operatorname{rank}(A) < n.

    • Nullity and Eigenvalue 0 are Linked: A square matrix AA is singular if and only if nullity(A)>0\operatorname{nullity}(A) > 0, which is true if and only if λ=0\lambda=0 is an eigenvalue of AA. This implies the system Ax=0A\mathbf{x}=\mathbf{0} has infinite solutions.

    • Rank of a Product: The rank of a product of matrices is less than or equal to the minimum of their individual ranks: rank(AB)min(rank(A),rank(B))\operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B)).
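The product-rank inequality can be spot-checked on matrices of the sizes used earlier in this chapter (3×43 \times 4 and 4×34 \times 3); the random seed and the construction forcing rank(B)=2\operatorname{rank}(B)=2 are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))          # generically rank 3

cols = rng.standard_normal((4, 2))
# Third column is the sum of the first two, so rank(B) = 2.
B = np.column_stack([cols[:, 0], cols[:, 1], cols[:, 0] + cols[:, 1]])

r = np.linalg.matrix_rank
assert r(A) == 3 and r(B) == 2
assert r(A @ B) <= min(r(A), r(B))       # rank(AB) <= 2
print(r(A @ B))
```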

    ---


    What's Next?

    💡 Continue Learning

    This topic connects to:

      • Eigenvalues and Eigenvectors: As we have seen, the nullity is the geometric multiplicity of the eigenvalue 00. Understanding eigenvalues provides a deeper insight into the properties of a matrix, including its rank.

      • Vector Spaces: Rank and nullity are the dimensions of the fundamental subspaces associated with a matrix (column space and null space). A thorough understanding of vector spaces, basis, and dimension is essential.

      • Systems of Linear Equations: Rank is the ultimate tool for classifying the solution sets of linear systems. Mastering this connection is crucial for a wide range of application-based problems.


    Master these connections for comprehensive GATE preparation!

    ---

    💡 Moving Forward

    Now that you understand Rank and Nullity, let's explore Matrix Representation which builds on these concepts.

    ---

    Part 3: Matrix Representation

    Introduction

    In the study of linear algebra, linear transformations are fundamental objects that describe structure-preserving maps between vector spaces. While these transformations can be defined abstractly, their true computational power is unlocked when we can represent them in a concrete, numerical form. The matrix representation of a linear transformation provides this bridge from the abstract to the concrete.

    By choosing an ordered basis for both the domain and codomain vector spaces, we can associate a unique matrix with any given linear transformation. This matrix encapsulates all the information about the transformation. Consequently, the abstract operation of applying a transformation to a vector becomes the familiar and straightforward process of matrix-vector multiplication. This concept is not merely a notational convenience; it is a cornerstone of computational linear algebra, enabling us to analyze and solve complex problems involving linear systems, differential equations, and data analysis.

    📖 Matrix Representation of a Linear Transformation

    Let VV and WW be finite-dimensional vector spaces. Let B={v1,v2,,vn}\mathcal{B} = \{v_1, v_2, \dots, v_n\} be an ordered basis for VV and C={w1,w2,,wm}\mathcal{C} = \{w_1, w_2, \dots, w_m\} be an ordered basis for WW. For a linear transformation T:VWT: V \to W, the matrix representation of T with respect to the bases B\mathcal{B} and C\mathcal{C} is the m×nm \times n matrix, denoted [T]BC[T]_{\mathcal{B}}^{\mathcal{C}}, whose jj-th column is the coordinate vector of T(vj)T(v_j) with respect to the basis C\mathcal{C}. That is,

    [T]BC=[[T(v1)]C[T(v2)]C[T(vn)]C][T]_{\mathcal{B}}^{\mathcal{C}} = \begin{bmatrix} | & | & & | \\ [T(v_1)]_{\mathcal{C}} & [T(v_2)]_{\mathcal{C}} & \cdots & [T(v_n)]_{\mathcal{C}} \\ | & | & & | \end{bmatrix}

    where [T(vj)]C[T(v_j)]_{\mathcal{C}} is the coordinate vector of the image of the jj-th basis vector of VV in the basis C\mathcal{C} of WW.

    ---

    Key Concepts

    1. Constructing the Matrix Representation

    The definition itself provides a direct procedure for constructing the matrix. The core task is to determine how the transformation TT acts on each basis vector of the domain VV. The resulting vectors, which lie in the codomain WW, are then expressed in terms of the basis vectors of WW. The coefficients of these linear combinations form the columns of the matrix representation.

    This construction establishes a fundamental relationship: for any vector vVv \in V, the coordinate vector of its image T(v)T(v) can be found by multiplying the matrix representation of TT by the coordinate vector of vv.

    📐 Transformation via Matrix Multiplication
    [T(v)]C=[T]BC[v]B[T(v)]_{\mathcal{C}} = [T]_{\mathcal{B}}^{\mathcal{C}} [v]_{\mathcal{B}}

    Variables:

      • [v]B[v]_{\mathcal{B}}: The coordinate vector of vVv \in V with respect to the basis B\mathcal{B}.

      • [T(v)]C[T(v)]_{\mathcal{C}}: The coordinate vector of the transformed vector T(v)WT(v) \in W with respect to the basis C\mathcal{C}.

      • [T]BC[T]_{\mathcal{B}}^{\mathcal{C}}: The matrix representation of TT with respect to bases B\mathcal{B} and C\mathcal{C}.


    When to use: This formula is the primary application of the matrix representation. It converts the action of a linear transformation into a matrix multiplication problem, which is computationally efficient.

    The following diagram illustrates this commutative relationship. Moving from vv to T(v)T(v) in the abstract vector space (top path) is equivalent to moving from [v]B[v]_{\mathcal{B}} to [T(v)]C[T(v)]_{\mathcal{C}} via matrix multiplication in the coordinate space (bottom path).





    [Diagram: applying TT in the abstract spaces (top path, VWV \to W) and multiplying by [T]BC[T]_{\mathcal{B}}^{\mathcal{C}} in the coordinate spaces (bottom path, RnRm\mathbb{R}^n \to \mathbb{R}^m) give the same result; the vertical arrows are the coordinate maps CoordB\text{Coord}_{\mathcal{B}} and CoordC\text{Coord}_{\mathcal{C}}.]

    Worked Example:

    Problem: Let T:R2R3T: \mathbb{R}^2 \to \mathbb{R}^3 be a linear transformation defined by T(x,y)=(x+2y,3xy,y)T(x, y) = (x+2y, 3x-y, y). Find the matrix representation of TT with respect to the standard basis B={e1,e2}\mathcal{B} = \{e_1, e_2\} for R2\mathbb{R}^2 and the standard basis C={f1,f2,f3}\mathcal{C} = \{f_1, f_2, f_3\} for R3\mathbb{R}^3.

    Here, e1=(1,0)e_1 = (1, 0), e2=(0,1)e_2 = (0, 1), and f1=(1,0,0)f_1 = (1, 0, 0), f2=(0,1,0)f_2 = (0, 1, 0), f3=(0,0,1)f_3 = (0, 0, 1).

    Solution:

    Step 1: Apply the transformation TT to the first basis vector of the domain, e1e_1.

    T(e1)=T(1,0)=(1+2(0),3(1)0,0)=(1,3,0)T(e_1) = T(1, 0) = (1+2(0), 3(1)-0, 0) = (1, 3, 0)

    Step 2: Express the result T(e1)T(e_1) as a linear combination of the basis vectors of the codomain, C\mathcal{C}. The coordinate vector [T(e1)]C[T(e_1)]_{\mathcal{C}} will form the first column of our matrix.

    (1,3,0)=1(1,0,0)+3(0,1,0)+0(0,0,1)=1f1+3f2+0f3(1, 3, 0) = 1 \cdot (1, 0, 0) + 3 \cdot (0, 1, 0) + 0 \cdot (0, 0, 1) = 1f_1 + 3f_2 + 0f_3
    [T(e1)]C=[130][T(e_1)]_{\mathcal{C}} = \begin{bmatrix} 1 \\ 3 \\ 0 \end{bmatrix}

    Step 3: Apply the transformation TT to the second basis vector of the domain, e2e_2.

    T(e2)=T(0,1)=(0+2(1),3(0)1,1)=(2,1,1)T(e_2) = T(0, 1) = (0+2(1), 3(0)-1, 1) = (2, -1, 1)

    Step 4: Express the result T(e2)T(e_2) as a linear combination of the basis vectors of the codomain, C\mathcal{C}. The coordinate vector [T(e2)]C[T(e_2)]_{\mathcal{C}} will form the second column of our matrix.

    (2,1,1)=2(1,0,0)1(0,1,0)+1(0,0,1)=2f11f2+1f3(2, -1, 1) = 2 \cdot (1, 0, 0) - 1 \cdot (0, 1, 0) + 1 \cdot (0, 0, 1) = 2f_1 - 1f_2 + 1f_3
    [T(e2)]C=[211][T(e_2)]_{\mathcal{C}} = \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix}

    Step 5: Assemble the matrix [T]BC[T]_{\mathcal{B}}^{\mathcal{C}} using the coordinate vectors from Step 2 and Step 4 as its columns.

    [T]BC=[[T(e1)]C[T(e2)]C]=[123101][T]_{\mathcal{B}}^{\mathcal{C}} = \begin{bmatrix} [T(e_1)]_{\mathcal{C}} & [T(e_2)]_{\mathcal{C}} \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & -1 \\ 0 & 1 \end{bmatrix}

    Answer:

    [123101]\boxed{\begin{bmatrix} 1 & 2 \\ 3 & -1 \\ 0 & 1 \end{bmatrix}}
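The worked example above can be reproduced mechanically: build the matrix column by column from the images of the standard basis vectors, then confirm that matrix-vector multiplication agrees with applying TT directly (a sketch; the test vector is an arbitrary assumption):

```python
import numpy as np

def T(x, y):
    # T(x, y) = (x + 2y, 3x - y, y), as in the worked example
    return np.array([x + 2 * y, 3 * x - y, y])

# Columns are the images of the standard basis vectors e1, e2.
M = np.column_stack([T(1.0, 0.0), T(0.0, 1.0)])
print(M)

v = np.array([2.0, 5.0])          # arbitrary test vector (assumption)
assert np.allclose(M @ v, T(*v))  # [T(v)] = [T][v] in standard coordinates
```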

    ---


    Problem-Solving Strategies

    💡 Systematic Construction

    When finding a matrix representation, always follow a systematic, column-by-column approach:

    • Take the first basis vector from the domain, v1v_1.

    • Compute its image, T(v1)T(v_1).

    • Find the coordinates of T(v1)T(v_1) with respect to the codomain basis. This gives you the first column of the matrix.

    • Repeat this process for each basis vector of the domain (v2,v3,,vnv_2, v_3, \dots, v_n) to find the subsequent columns.

    This methodical process minimizes errors and ensures all components are correctly placed.

    ---

    Common Mistakes

    ⚠️ Avoid These Errors
      • Mixing up Bases: Using the domain's basis B\mathcal{B} to find the coordinates of T(vj)T(v_j) instead of the codomain's basis C\mathcal{C}.
    ✅ Always remember: the transformation takes you from VV to WW. The result T(vj)T(v_j) lives in WW, so its coordinates must be found with respect to the basis of WW, which is C\mathcal{C}.
      • Incorrect Column Order: The order of the columns in the matrix must match the order of the basis vectors in the domain basis B\mathcal{B}. The jj-th column must correspond to the image of the jj-th basis vector, vjv_j.
    ✅ Adhere strictly to the order given in the problem for the basis B={v1,v2,,vn}\mathcal{B} = \{v_1, v_2, \dots, v_n\}.

    ---

    Practice Questions

    :::question type="MCQ" question="Let T:P2R2T: \mathbb{P}_2 \to \mathbb{R}^2 be a linear transformation defined by T(p(x))=(p(0),p(1))T(p(x)) = (p(0), p'(1)), where P2\mathbb{P}_2 is the space of polynomials of degree at most 2. Let B={1,x,x2}\mathcal{B} = \{1, x, x^2\} be the standard basis for P2\mathbb{P}_2 and C={(1,0),(0,1)}\mathcal{C} = \{(1,0), (0,1)\} be the standard basis for R2\mathbb{R}^2. What is the entry in the second row and third column of the matrix [T]BC[T]_{\mathcal{B}}^{\mathcal{C}}?" options=["0", "1", "2", "-1"] answer="2" hint="The third column is determined by the image of the third basis vector of the domain, x2x^2. Find T(x2)T(x^2) and then its coordinates with respect to the standard basis of R2\mathbb{R}^2." solution="
    Step 1: Identify the third basis vector of the domain P2\mathbb{P}_2.
    The basis is B={1,x,x2}\mathcal{B} = \{1, x, x^2\}, so the third basis vector is v3=p(x)=x2v_3 = p(x) = x^2.

    Step 2: Apply the transformation TT to this basis vector.
    For p(x)=x2p(x) = x^2, we have p(0)=02=0p(0) = 0^2 = 0. The derivative is p(x)=2xp'(x) = 2x, so p(1)=2(1)=2p'(1) = 2(1) = 2.
    Therefore, T(x2)=(p(0),p(1))=(0,2)T(x^2) = (p(0), p'(1)) = (0, 2).

    Step 3: Find the coordinate vector of T(x2)T(x^2) with respect to the codomain basis C={(1,0),(0,1)}\mathcal{C} = \{(1,0), (0,1)\}.

    (0,2)=0(1,0)+2(0,1)(0, 2) = 0 \cdot (1,0) + 2 \cdot (0,1)

    The coordinate vector is
    [T(x2)]C=[02][T(x^2)]_{\mathcal{C}} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}

    Step 4: This coordinate vector forms the third column of the matrix [T]BC[T]_{\mathcal{B}}^{\mathcal{C}}.
    The matrix has the form

    [02]\begin{bmatrix} \cdot & \cdot & 0 \\ \cdot & \cdot & 2 \end{bmatrix}

    Result: The entry in the second row and third column is 2.
    Answer: \boxed{2}
    "
    :::

    :::question type="NAT" question="Consider the linear transformation T:R2R2T: \mathbb{R}^2 \to \mathbb{R}^2 that reflects vectors across the line y=xy=x. Let B={(1,2),(0,1)}\mathcal{B} = \{(1, 2), (0, 1)\} be a basis for the domain and C={(1,0),(0,1)}\mathcal{C} = \{(1, 0), (0, 1)\} (the standard basis) be the basis for the codomain. Calculate the value of the entry in the first row, first column of the matrix [T]BC[T]_{\mathcal{B}}^{\mathcal{C}}." answer="2" hint="The transformation that reflects (a,b)(a,b) across y=xy=x is T(a,b)=(b,a)T(a,b)=(b,a). Apply this to the first basis vector of B\mathcal{B} and find its coordinates with respect to C\mathcal{C}." solution="
    Step 1: Define the action of the transformation TT.
    A reflection across the line y=xy=x maps a point (a,b)(a,b) to (b,a)(b,a). So, T(a,b)=(b,a)T(a,b) = (b,a).

    Step 2: Identify the first basis vector of the domain, v1v_1.
    From B={(1,2),(0,1)}\mathcal{B} = \{(1, 2), (0, 1)\}, we have v1=(1,2)v_1 = (1, 2).

    Step 3: Apply the transformation TT to v1v_1.

    T(v1)=T(1,2)=(2,1)T(v_1) = T(1, 2) = (2, 1)

    Step 4: Find the coordinate vector of T(v1)T(v_1) with respect to the codomain basis C={(1,0),(0,1)}\mathcal{C} = \{(1,0), (0,1)\}.
    Since C\mathcal{C} is the standard basis, the coordinates are simply the components of the vector.

    T(v1)=(2,1)=2(1,0)+1(0,1)T(v_1) = (2, 1) = 2 \cdot (1,0) + 1 \cdot (0,1)

    So,
    [T(v1)]C=[21][T(v_1)]_{\mathcal{C}} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}

    Step 5: This vector is the first column of the matrix [T]BC[T]_{\mathcal{B}}^{\mathcal{C}}.
    The entry in the first row, first column is the first component of this vector.

    Result: The value is 2.
    Answer: \boxed{2}
    "
    :::

    ---

    Summary

    Key Takeaways for GATE

    • A linear transformation T:VWT: V \to W can be represented by a matrix [T]BC[T]_{\mathcal{B}}^{\mathcal{C}} once ordered bases B\mathcal{B} for VV and C\mathcal{C} for WW are chosen.

    • The jj-th column of the matrix [T]BC[T]_{\mathcal{B}}^{\mathcal{C}} is the coordinate vector of T(vj)T(v_j) with respect to the basis C\mathcal{C}, where vjv_j is the jj-th basis vector of B\mathcal{B}.

    • The core relationship is

    [T(v)]C=[T]BC[v]B[T(v)]_{\mathcal{C}} = [T]_{\mathcal{B}}^{\mathcal{C}} [v]_{\mathcal{B}}

    This converts the abstract transformation into a concrete matrix multiplication. Mastery of the construction process is essential.

    ---


    What's Next?

    💡 Continue Learning

    This topic serves as a foundation for more advanced concepts in linear algebra:

      • Change of Basis: Understanding how the matrix representation [T][T] changes when you change the basis of the domain or codomain is a critical next step. This leads to the formula A=P1APA' = P^{-1}AP, where PP is the change-of-basis matrix.

      • Eigenvalues and Eigenvectors: The matrix representation of a linear operator T:VVT: V \to V (where the domain and codomain are the same) is crucial for finding its eigenvalues and eigenvectors. The analysis of these properties is simplified by first finding the matrix representation with respect to a convenient basis.
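The change-of-basis formula mentioned above can be sanity-checked numerically: similar matrices A=P1APA' = P^{-1}AP share their eigenvalues. The matrices below are illustrative assumptions:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # invertible change-of-basis matrix (assumption)

A_prime = np.linalg.inv(P) @ A @ P

# Similar matrices have identical eigenvalues.
assert np.allclose(sorted(np.linalg.eigvals(A_prime).real),
                   sorted(np.linalg.eigvals(A).real))
print(sorted(np.linalg.eigvals(A_prime).real))
```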


    Master these connections for a comprehensive understanding of linear algebra for GATE.

    ---

    Chapter Summary

    📖 Matrices and Systems of Linear Equations - Key Takeaways

    In this chapter, we have developed a systematic approach to analyzing and solving systems of linear equations using the framework of matrix algebra. The concepts introduced here are fundamental to a wide range of engineering problems. It is imperative for the student to internalize the following key principles:

    • Matrix Representation: Any system of mm linear equations in nn variables can be concisely represented in the matrix form Ax=bA\mathbf{x} = \mathbf{b}, where AA is the m×nm \times n coefficient matrix, x\mathbf{x} is the n×1n \times 1 column vector of variables, and b\mathbf{b} is the m×1m \times 1 column vector of constants.

    • Gaussian Elimination: The primary computational tool for solving systems of linear equations is Gaussian elimination, which uses elementary row operations to transform the augmented matrix [Ab][A|\mathbf{b}] into its row echelon form. This process reveals the fundamental structure of the system and its solution set.

    • Rank and Solution Consistency: The existence of solutions is determined by comparing the rank of the coefficient matrix, rank(A)\operatorname{rank}(A), with the rank of the augmented matrix, rank([Ab])\operatorname{rank}([A|\mathbf{b}]).

    - If rank(A)rank([Ab])\operatorname{rank}(A) \neq \operatorname{rank}([A|\mathbf{b}]), the system is inconsistent and has no solution.
    - If rank(A)=rank([Ab])=r\operatorname{rank}(A) = \operatorname{rank}([A|\mathbf{b}]) = r, the system is consistent and has at least one solution.

    • Types of Consistent Solutions: For a consistent system where rank(A)=r\operatorname{rank}(A) = r and the number of variables is nn:

    - If r=nr = n, there exists a unique solution.
    - If r<nr < n, there are infinitely many solutions. The solution set will have nrn-r free parameters.

    • The Rank-Nullity Theorem: For any m×nm \times n matrix AA, the sum of its rank and its nullity (the dimension of the null space) is equal to the number of columns: rank(A)+nullity(A)=n\operatorname{rank}(A) + \operatorname{nullity}(A) = n.

    • Homogeneous Systems: A system of the form Ax=0A\mathbf{x} = \mathbf{0} is always consistent, as it possesses the trivial solution x=0\mathbf{x} = \mathbf{0}. It has non-trivial solutions if and only if rank(A)<n\operatorname{rank}(A) < n. The set of all solutions to Ax=0A\mathbf{x} = \mathbf{0} forms a vector space known as the null space of AA, whose dimension is the nullity, nrank(A)n - \operatorname{rank}(A).
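These consistency and counting rules are straightforward to check in code; the system below reuses the coefficient pattern from this chapter, with the duplicated row chosen as an assumption to force a rank deficiency:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0]])   # third row duplicates the second

n = A.shape[1]
rank = np.linalg.matrix_rank(A)
nullity = n - rank                 # Rank-Nullity theorem

# rank < n, so Ax = 0 has non-trivial solutions.
assert rank < n
print(rank, nullity)  # 2 1
```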

    ---

    Chapter Review Questions

    :::question type="MCQ" question="Consider the system of linear equations:

    x+y+z=6x+2y+3z=10x+2y+λz=μ\begin{align*}
    x + y + z & = 6 \\
    x + 2y + 3z & = 10 \\
    x + 2y + \lambda z & = \mu
    \end{align*}

    For which of the following values of λ\lambda and μ\mu does the system have infinitely many solutions?" options=["λ=3,μ=10\lambda = 3, \mu = 10","λ=3,μ10\lambda = 3, \mu \neq 10","λ3,μ=10\lambda \neq 3, \mu = 10","λ3,μ10\lambda \neq 3, \mu \neq 10"] answer="A" hint="For infinitely many solutions, the system must be consistent, and the rank of the coefficient matrix must be less than the number of variables. Use Gaussian elimination on the augmented matrix." solution="
    We begin by writing the augmented matrix [Ab][A|\mathbf{b}] for the given system:
    [Ab]=[11161231012λμ][A|\mathbf{b}] = \left[
    \begin{array}{ccc|c}
    1 & 1 & 1 & 6 \\
    1 & 2 & 3 & 10 \\
    1 & 2 & \lambda & \mu\end{array}
    \right]

    Now, we apply Gaussian elimination to reduce the matrix to its row echelon form.
    Perform the row operations R2R2R1R_2 \leftarrow R_2 - R_1 and R3R3R1R_3 \leftarrow R_3 - R_1:
    [1116012401λ1μ6]\left[
    \begin{array}{ccc|c}
    1 & 1 & 1 & 6 \\
    0 & 1 & 2 & 4 \\
    0 & 1 & \lambda-1 & \mu-6\end{array}
    \right]

    Next, perform the operation R3R3R2R_3 \leftarrow R_3 - R_2:
    [1116012400(λ1)2(μ6)4]=[1116012400λ3μ10]\left[
    \begin{array}{ccc|c}
    1 & 1 & 1 & 6 \\
    0 & 1 & 2 & 4 \\
    0 & 0 & (\lambda-1)-2 & (\mu-6)-4\end{array}
    \right]
    =
    \left[
    \begin{array}{ccc|c}
    1 & 1 & 1 & 6 \\
    0 & 1 & 2 & 4 \\
    0 & 0 & \lambda-3 & \mu-10\end{array}
    \right]

    For the system to have infinitely many solutions, it must be consistent and the rank of the coefficient matrix AA must be less than the number of variables (n=3n=3).
    From the echelon form, we see that rank(A)\operatorname{rank}(A) will be less than 3 if and only if the last row of the coefficient part is all zeros. This requires:
    λ3=0    λ=3\lambda - 3 = 0 \implies \lambda = 3

    When λ=3\lambda=3, the rank of AA is 2.
    For the system to be consistent, we must have rank(A)=rank([Ab])\operatorname{rank}(A) = \operatorname{rank}([A|\mathbf{b}]). This means the last entry in the augmented part must also be zero.
    μ10=0    μ=10\mu - 10 = 0 \implies \mu = 10

    Thus, for infinitely many solutions, we need λ=3\lambda = 3 and μ=10\mu = 10. In this case, rank(A)=rank([Ab])=2<3\operatorname{rank}(A) = \operatorname{rank}([A|\mathbf{b}]) = 2 < 3.
    "
    :::

    :::question type="NAT" question="Let AA be a 4×54 \times 5 real matrix. The set of all solutions to the homogeneous system Ax=0A\mathbf{x} = \mathbf{0} is spanned by the two linearly independent vectors [1,1,0,0,0]T[1, 1, 0, 0, 0]^T and [0,0,1,1,0]T[0, 0, 1, 1, 0]^T. What is the rank of the matrix AA?" answer="3" hint="The set of solutions to Ax=0A\mathbf{x}=\mathbf{0} is the null space of AA. The dimension of this space (the nullity) is given by the number of linearly independent vectors that span it. Apply the Rank-Nullity Theorem." solution="
    The problem states that the solution space of the homogeneous system A\mathbf{x} = \mathbf{0} is spanned by two linearly independent vectors. This solution space is, by definition, the null space of the matrix A.

    The dimension of the null space is called the nullity of A, denoted \operatorname{nullity}(A), and equals the number of vectors in any basis for that space. Since the two given spanning vectors are linearly independent, they form a basis for the null space.
    Therefore,

    \operatorname{nullity}(A) = 2

    The matrix A is a 4 \times 5 matrix, which means it has n=5 columns.

    We now apply the Rank-Nullity Theorem, which states that for any m \times n matrix:

    \operatorname{rank}(A) + \operatorname{nullity}(A) = n

    Substituting the known values:
    \operatorname{rank}(A) + 2 = 5

    Solving for the rank of A:
    \operatorname{rank}(A) = 5 - 2 = 3

    The rank of the matrix A is 3.
    "
    :::
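The problem never specifies A itself, only its null space. As a sanity check, the sketch below constructs one possible 4×5 matrix with exactly that null space (the particular rows are an assumption for illustration) and confirms rank(A) = 3 with NumPy.

```python
import numpy as np

# One possible 4x5 matrix (an assumption -- the problem does not give A)
# whose null space is spanned by [1,1,0,0,0]^T and [0,0,1,1,0]^T.
A = np.array([[1, -1, 0,  0, 0],
              [0,  0, 1, -1, 0],
              [0,  0, 0,  0, 1],
              [1, -1, 1, -1, 1]])   # row 4 = row 1 + row 2 + row 3

v1 = np.array([1, 1, 0, 0, 0])
v2 = np.array([0, 0, 1, 1, 0])

# Both spanning vectors lie in the null space of A.
assert np.all(A @ v1 == 0) and np.all(A @ v2 == 0)

# Rank-Nullity: rank(A) = n - nullity(A) = 5 - 2 = 3.
print(np.linalg.matrix_rank(A))  # 3
```

Any other matrix with the same null space would give the same answer, since the rank is forced by the theorem.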

    :::question type="MCQ" question="Let A be an m \times n matrix with rank r, where r < n. The number of linearly independent solutions to the homogeneous system of equations A\mathbf{x} = \mathbf{0} is:" options=["r","n-r","n","m-r"] answer="B" hint="The number of linearly independent solutions to a homogeneous system corresponds to the dimension of its null space. Recall the fundamental theorem that connects rank, nullity, and the number of columns of a matrix." solution="
    The question asks for the number of linearly independent solutions to the homogeneous system A\mathbf{x} = \mathbf{0}. This is equivalent to asking for the dimension of the null space of the matrix A, which is defined as the nullity of A.

    We are given:

    • The matrix A has dimensions m \times n, so the number of columns is n.

    • The rank of the matrix is \operatorname{rank}(A) = r.

    • It is also given that r < n.

    The Rank-Nullity Theorem provides the relationship between the rank, nullity, and the number of columns (n) of a matrix:
    \operatorname{rank}(A) + \operatorname{nullity}(A) = n

    We can rearrange this equation to solve for the nullity:
    \operatorname{nullity}(A) = n - \operatorname{rank}(A)

    Substituting the given rank r:
    \operatorname{nullity}(A) = n - r

    The nullity is the dimension of the solution space of the homogeneous system, which equals the number of free variables and hence the number of linearly independent solution vectors.
    Since r < n, the nullity n-r is at least 1, confirming the existence of non-trivial solutions.
    Therefore, the number of linearly independent solutions is n-r.
    "
    :::
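The relation nullity = n − r can be observed directly by computing a null-space basis. A minimal sketch, using SymPy for exact arithmetic and an illustrative 3×4 matrix of rank 2 (not taken from the question):

```python
import sympy as sp

# A 3x4 matrix (m=3, n=4) with rank r=2, chosen for illustration.
A = sp.Matrix([[1, 2, 0, 1],
               [0, 0, 1, 1],
               [1, 2, 1, 2]])   # row 3 = row 1 + row 2

r, n = A.rank(), A.cols
basis = A.nullspace()           # basis of the solution space of Ax = 0
print(r, n, len(basis))         # 2 4 2  -> nullity = n - r
```

The number of basis vectors returned by `nullspace()` is exactly n − r, matching option B.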

    ---

    What's Next?

    💡 Continue Your GATE Journey

    Having completed our study of Matrices and Systems of Linear Equations, you have established a firm foundation for several advanced topics in Linear Algebra. The methods of Gaussian elimination and the concepts of rank and nullity are not merely computational tools; they are the language used to describe the fundamental properties of linear transformations and vector spaces.

    Key connections to upcoming chapters include:

      • Vector Spaces: We have seen that the solution set of a homogeneous system A\mathbf{x} = \mathbf{0} forms the null space of matrix A. This is a canonical example of a vector subspace. In the next chapter, we will formalize the concepts of vector spaces, subspaces, basis, and dimension. The rank of a matrix will be re-contextualized as the dimension of its column space and row space, while its nullity is the dimension of its null space.
      • Eigenvalues and Eigenvectors: The central problem in eigenvalue analysis is solving the equation A\mathbf{v} = \lambda\mathbf{v}. This is rewritten as (A - \lambda I)\mathbf{v} = \mathbf{0}, which is a homogeneous system of linear equations. To find the eigenvectors for a given eigenvalue \lambda, one must find a basis for the null space of the matrix (A - \lambda I). Thus, the techniques mastered in this chapter are a direct prerequisite for finding eigenvectors.
      • Determinants: For a square matrix A, a non-zero determinant (\det(A) \neq 0) is equivalent to the matrix having full rank. This, as we now know, implies that the system A\mathbf{x} = \mathbf{b} has a unique solution for any \mathbf{b}. The study of determinants will provide another powerful tool for analyzing the invertibility of matrices and the nature of solutions to linear systems.
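The eigenvector connection described above can be sketched concretely: finding eigenvectors is nothing more than computing the null space of (A − λI). A minimal illustration in SymPy, using a 2×2 matrix chosen for this example (not from the text), whose eigenvalues are 1 and 3:

```python
import sympy as sp

# Illustrative symmetric matrix; its characteristic polynomial
# (2-x)^2 - 1 has roots 1 and 3.
A = sp.Matrix([[2, 1],
               [1, 2]])

lam = 3                          # one eigenvalue of A
M = A - lam * sp.eye(2)          # form (A - lambda*I)
vecs = M.nullspace()             # eigenvectors = null space of (A - lambda*I)
print(vecs[0].T)                 # Matrix([[1, 1]])
```

Checking `A * vecs[0] == lam * vecs[0]` confirms that the null-space vector is indeed an eigenvector, which is exactly the bridge from this chapter to the next.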

    🎯 Key Points to Remember

    • Master the core concepts in Matrices and Systems of Linear Equations before moving to advanced topics
    • Practice with previous year questions to understand exam patterns
    • Review short notes regularly for quick revision before exams
