Artificial Intelligence Problem Solving and Search

Search Algorithms

Comprehensive study notes on Search Algorithms for GATE DA preparation. This chapter covers key concepts, formulas, and examples needed for your exam.

Search Algorithms

Overview

The systematic exploration of a problem's state space to find a solution is one of the most fundamental paradigms in Artificial Intelligence. This process, known as search, provides a powerful framework for problem-solving, from navigating a route between two cities to solving complex logical puzzles. In this chapter, we shall explore the foundational algorithms that enable an intelligent agent to methodically find a path from an initial state to a desired goal state. A thorough understanding of these techniques is not merely an academic exercise; it is essential for building systems that can reason, plan, and make decisions in a structured manner.

Our study is organized into three principal categories of search. We begin with Uninformed Search strategies, which explore the search space systematically without any domain-specific knowledge beyond the problem's definition. Subsequently, we advance to Informed Search, where we introduce the concept of a heuristic function—an estimate of the cost to reach a goal—to guide the search more efficiently toward a solution. Finally, we will examine Adversarial Search, a specialized category designed for competitive, multi-agent environments such as games, where the agent must make optimal decisions while accounting for the actions of an opponent.

For the GATE examination, proficiency in search algorithms is paramount. Questions frequently test not only the procedural execution of these algorithms but also a deep conceptual understanding of their properties. Candidates are expected to analyze and compare algorithms based on their completeness, optimality, time complexity (e.g., O(b^d)), and space complexity. Mastery of this chapter will equip you with the analytical tools necessary to deconstruct complex problems and evaluate the trade-offs inherent in different search strategies, a skill that is consistently assessed in the examination.

---

Chapter Contents

| # | Topic | What You'll Learn |
|---|-------|-------------------|
| 1 | Uninformed Search | Systematic exploration without domain-specific knowledge. |
| 2 | Informed Search | Using heuristic functions to guide search. |
| 3 | Adversarial Search | Optimal decision-making in competitive environments. |

---

Learning Objectives

By the End of This Chapter

After completing this chapter, you will be able to:

  • Differentiate between various uninformed search strategies and analyze their performance metrics.

  • Explain the role of heuristics in informed search and apply algorithms like A* and Greedy Best-First Search.

  • Formulate game-playing problems and apply the Minimax algorithm with Alpha-Beta pruning to find optimal moves.

  • Model computational problems by defining appropriate states, actions, transition models, and goal tests.

---

We now turn our attention to Uninformed Search...

Part 1: Uninformed Search

Introduction

In the domain of Artificial Intelligence, a fundamental problem is that of search: finding a sequence of actions that leads from an initial state to a desired goal state. Search algorithms provide the mechanism for exploring the state space to find such a solution. We begin our study with the most basic class of these algorithms, known as Uninformed Search, or blind search.

These methods are termed "uninformed" because they are provided with no domain-specific knowledge beyond the problem definition itself. They do not possess any information about the cost or distance to the goal, nor do they have a preference for which non-goal state is more promising than another. The search strategy is therefore systematic but blind, exploring the state space based on the order and structure of the nodes. Despite their simplicity, these algorithms form the foundational bedrock upon which more sophisticated, informed search techniques are built. For the GATE examination, a thorough understanding of their mechanics, properties, and comparative performance is essential.

📖 Search Problem

A formal search problem can be defined by six components:

  • State Space ( S ): The set of all possible states the environment can be in.

  • Initial State ( s_0 ): The state from which the agent begins, with s_0 \in S.

  • Actions ( A(s) ): The set of actions available to the agent in a given state s.

  • Transition Model ( T(s, a) ): A function that returns the state resulting from performing action a in state s. This is also known as the successor function.

  • Goal Test ( G(s) ): A function that determines whether a given state s is a goal state.

  • Path Cost ( C(p) ): A function that assigns a numeric cost to a path p. In uninformed search, we often assume a uniform step cost, where each step has a cost of 1.
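These components map naturally onto code. Below is a minimal illustrative sketch (the class name, the toy graph, and the method names are our own, not from any standard library):

```python
class SearchProblem:
    """A minimal formal search problem: route-finding on a toy graph.

    Nodes are states; each action is identified with the successor it
    leads to, and every step has uniform cost 1.
    """
    def __init__(self, graph, initial, goal):
        self.graph = graph          # defines the (implicit) state space S
        self.initial = initial      # initial state s_0
        self.goal = goal

    def actions(self, s):           # A(s): actions available in state s
        return self.graph.get(s, [])

    def result(self, s, a):         # T(s, a): here an action IS its successor
        return a

    def goal_test(self, s):         # G(s)
        return s == self.goal

    def step_cost(self, s, a):      # uniform step cost
        return 1

# A tiny illustrative instance.
problem = SearchProblem({'S': ['A', 'B'], 'A': ['G'], 'B': []}, 'S', 'G')
```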

---

Key Concepts

The core of any search algorithm is the systematic exploration of the state space graph. This is managed by maintaining a collection of discovered but not-yet-explored nodes, often called the fringe or frontier. The specific data structure used to manage this fringe dictates the nature of the search. We will also maintain a set of already explored nodes, often called the closed list or explored set, to prevent redundant exploration in graph search.

Let us consider the following state space graph for our worked examples. The start state is 'S' and the goal state is 'G'.






[State space graph for the worked examples — nodes S, A, B, C, D, G with directed edges (as used in the traces below): S → A, S → B, A → C, A → D, B → D, C → G, D → G.]
For all examples, we will assume that when a node is expanded, its successors are added to the fringe in alphabetical order.

1. Breadth-First Search (BFS)

Breadth-First Search explores the state space layer by layer. It expands the shallowest unexpanded node first. This behavior is achieved by implementing the fringe as a First-In, First-Out (FIFO) queue. We place newly generated successor nodes at the back of the queue, and the node to be expanded next is always taken from the front.

BFS is guaranteed to find the shallowest goal state, which, for a uniform step cost, corresponds to the optimal solution (the path with the fewest edges).

📐 BFS Properties
  • Completeness: Yes (if the branching factor is finite)
  • Optimality: Yes (if all step costs are equal)
  • Time Complexity: O(b^d)
  • Space Complexity: O(b^d)

Variables:

    • b = branching factor (maximum number of successors of any node)

    • d = depth of the shallowest goal node


When to use: When you need to find the shortest path in an unweighted graph and memory is not a major constraint.
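The queue-based mechanism described above can be sketched in Python as follows (illustrative code, not from the source; the example graph is the one used in the worked traces):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first graph search. Returns (expansion_order, path).

    Successors are assumed to be listed in the desired tie-breaking
    order (alphabetical here), matching the chapter's convention.
    """
    frontier = deque([start])          # FIFO queue
    parent = {start: None}             # doubles as the "seen" set
    expanded = []
    while frontier:
        node = frontier.popleft()      # shallowest unexpanded node first
        expanded.append(node)
        if node == goal:               # goal test on expansion (dequeue)
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return expanded, path[::-1]
        for succ in graph.get(node, []):
            if succ not in parent:     # skip fringe/explored duplicates
                parent[succ] = node
                frontier.append(succ)
    return expanded, None

# The chapter's example graph (edges taken from the worked traces).
GRAPH = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['D'],
         'C': ['G'], 'D': ['G']}
```

On this graph, `bfs(GRAPH, 'S', 'G')` expands S, A, B, C, D, G and returns the path S → A → C → G, matching the worked example below.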

Worked Example: BFS

Problem: Find a path from state 'S' to state 'G' in the example graph using BFS. List the sequence of nodes expanded.

Solution:

We maintain a queue for the fringe and a set for the explored nodes.

Initial State:

  • Fringe (Queue): `[S]`

  • Explored: `{}`


Step 1: Dequeue 'S' and expand it. Add 'S' to Explored. Successors are 'A', 'B'. Add them to the queue.
  • Node Expanded: `S`

  • Fringe (Queue): `[A, B]`

  • Explored: `{S}`


Step 2: Dequeue 'A' and expand it. Add 'A' to Explored. Successors are 'C', 'D'. Add them to the queue.
  • Node Expanded: `A`

  • Fringe (Queue): `[B, C, D]`

  • Explored: `{S, A}`


Step 3: Dequeue 'B' and expand it. Add 'B' to Explored. Successor is 'D'. 'D' is already in the fringe, so we typically do not add it again in a graph search. (Even if we did, it would be explored later).
  • Node Expanded: `B`

  • Fringe (Queue): `[C, D]`

  • Explored: `{S, A, B}`


Step 4: Dequeue 'C' and expand it. Add 'C' to Explored. Successor is 'G'. Add 'G' to the queue.
  • Node Expanded: `C`

  • Fringe (Queue): `[D, G]`

  • Explored: `{S, A, B, C}`


Step 5: Dequeue 'D' and expand it. Add 'D' to Explored. Successor is 'G'. 'G' is already in the fringe.
  • Node Expanded: `D`

  • Fringe (Queue): `[G]`

  • Explored: `{S, A, B, C, D}`


Step 6: Dequeue 'G'. Goal test is successful. The search terminates.
  • Node Expanded: `G`

  • Fringe (Queue): `[]`

  • Explored: `{S, A, B, C, D, G}`


Answer: The sequence of expanded nodes is S, A, B, C, D, G. The path found is S -> A -> C -> G.

---

---

2. Depth-First Search (DFS)

Depth-First Search explores the state space by going as deep as possible along each branch before backtracking. This is achieved by implementing the fringe as a Last-In, First-Out (LIFO) stack. When a node is expanded, its successors are pushed onto the stack, and the node to be expanded next is always popped from the top of the stack.

DFS is not optimal; it may find a longer path to a goal before it finds a shorter one. Furthermore, in infinite state spaces or graphs with cycles, a pure tree-search DFS may never terminate. Using an explored list (graph search) prevents infinite loops in finite graphs with cycles.

📐 DFS Properties
  • Completeness: No (fails in infinite spaces or cycles without an explored list); Yes in finite spaces with an explored list
  • Optimality: No
  • Time Complexity: O(b^m)
  • Space Complexity: O(bm)

Variables:

    • b = branching factor

    • m = maximum depth of the state space


When to use: When the search space is very deep, solutions are expected to be plentiful, and memory is a significant constraint.
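The stack-based mechanism can be sketched analogously (illustrative code, not from the source; the example graph is the one used in the worked traces):

```python
def dfs(graph, start, goal):
    """Depth-first graph search with an explored set.

    Successors are pushed in alphabetical order, so the LAST one
    pushed sits on top of the LIFO stack and is expanded first,
    reproducing the chapter's trace.
    """
    stack = [start]                    # LIFO fringe
    parent = {start: None}
    explored = set()
    expanded = []
    while stack:
        node = stack.pop()             # most recently pushed node first
        if node in explored:
            continue
        explored.add(node)
        expanded.append(node)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return expanded, path[::-1]
        for succ in graph.get(node, []):
            if succ not in explored:
                parent[succ] = node
                stack.append(succ)
    return expanded, None

# The chapter's example graph (edges taken from the worked traces).
GRAPH = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['D'],
         'C': ['G'], 'D': ['G']}
```

On this graph, `dfs(GRAPH, 'S', 'G')` expands S, B, D, G and returns the path S → B → D → G, matching the worked example below.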

Worked Example: DFS

Problem: Find a path from state 'S' to state 'G' in the example graph using DFS. List the sequence of nodes expanded.

Solution:

We maintain a stack for the fringe and a set for the explored nodes.

Initial State:

  • Fringe (Stack): `[S]`

  • Explored: `{}`


Step 1: Pop 'S' and expand it. Add 'S' to Explored. Successors are 'A', 'B'. Push them onto the stack (alphabetical order: A then B, so B is on top).
  • Node Expanded: `S`

  • Fringe (Stack): `[A, B]` (Top -> B)

  • Explored: `{S}`


Step 2: Pop 'B' and expand it. Add 'B' to Explored. Successor is 'D'. Push 'D' onto the stack.
  • Node Expanded: `B`

  • Fringe (Stack): `[A, D]` (Top -> D)

  • Explored: `{S, B}`


Step 3: Pop 'D' and expand it. Add 'D' to Explored. Successor is 'G'. Push 'G' onto the stack.
  • Node Expanded: `D`

  • Fringe (Stack): `[A, G]` (Top -> G)

  • Explored: `{S, B, D}`


Step 4: Pop 'G'. Goal test is successful. The search terminates.
  • Node Expanded: `G`


Answer: The sequence of expanded nodes is S, B, D, G. The path found is S -> B -> D -> G. Observe that this path is longer than the one found by BFS, and fewer nodes were expanded to find a solution.

---

3. Iterative Deepening Depth-First Search (IDDFS)

Iterative Deepening Search (IDS) is a hybrid strategy that combines the space-efficiency of DFS with the completeness and optimality (for unit costs) of BFS. It works by performing a series of depth-limited searches (DLS). It first runs DLS with a depth limit of 0, then 1, then 2, and so on, until a goal is found.

Although it seems wasteful because it re-generates nodes in upper levels of the search tree multiple times, the overhead is not prohibitive. The number of nodes at a given level grows exponentially with depth, so the majority of the work is done at the deepest level of the search, which is only generated once.

Must Remember

IDDFS is often the preferred uninformed search method when the search space is large and the depth of the solution is unknown. It guarantees finding the shallowest goal while maintaining a memory footprint of only O(bd).
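The depth-limited-search loop described above can be sketched as follows (illustrative code, not from the source; the example graph is the one used throughout this chapter):

```python
def depth_limited(graph, node, goal, limit, path):
    """Recursive depth-limited search; returns a goal path or None."""
    if node == goal:
        return path
    if limit == 0:
        return None                      # cutoff reached
    for succ in graph.get(node, []):
        if succ not in path:             # avoid cycles along the current path
            result = depth_limited(graph, succ, goal, limit - 1, path + [succ])
            if result is not None:
                return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: run DLS with limits 0, 1, 2, ...

    Finds the shallowest goal (like BFS) while using only
    linear memory (like DFS).
    """
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

# The chapter's example graph (edges taken from the worked traces).
GRAPH = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['D'],
         'C': ['G'], 'D': ['G']}
```

On this graph, `iddfs(GRAPH, 'S', 'G')` returns `['S', 'A', 'C', 'G']` — the shallowest goal path, the same one BFS finds.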

---

Problem-Solving Strategies

When faced with a search problem in the GATE exam, a systematic approach is critical. The problem statement often contains subtle but crucial constraints, such as tie-breaking rules or whether previously visited states are re-explored.

💡 GATE Strategy: Tracing on Paper

For any problem asking you to trace BFS or DFS, do not attempt to solve it mentally. Always use your scratch pad to explicitly maintain the data structures.

  • Identify the Fringe Data Structure: Is it BFS (Queue) or DFS (Stack)?

  • Set up columns: Create columns for `Step`, `Node Expanded`, `Fringe`, and `Explored List`.

  • Follow the Algorithm: At each step, remove a node from the front (Queue) or top (Stack) of the fringe. Add it to the explored list.

  • Generate Successors: Find its children/successors.

  • Check and Add: For each successor, check if it's in the fringe or explored list. If not, add it to the fringe (back of Queue, top of Stack). Pay close attention to any specified ordering for adding successors (e.g., alphabetical, ascending numerical order).

  • Repeat: Continue until the goal is removed from the fringe.

This methodical process minimizes errors and clearly shows the search's progression, which is exactly what is being tested.

---

Common Mistakes

A frequent source of error in GATE is a misunderstanding of the fundamental mechanics of the search algorithms or misinterpretation of the problem statement.

⚠️ Avoid These Errors
    • Confusing the Fringe Data Structure: Using a stack for BFS or a queue for DFS. This is the most fundamental error and leads to a completely incorrect search path.
BFS uses a FIFO Queue. DFS uses a LIFO Stack.
    • Ignoring Tie-Breaking Rules: GATE questions often state the rule explicitly (e.g., "expanded in the ascending order of numbers"). If the problem specifies a rule (alphabetical, numerical), you MUST follow it when adding successors to the fringe. Ignoring it will alter the search path and the number of expanded nodes.
Always check for and apply any stated tie-breaking rules.
    • Forgetting the Explored List: In a graph with cycles, failing to keep track of explored nodes can lead a DFS into an infinite loop or cause BFS to perform redundant work. Most GATE problems imply a graph search (no re-expansion).
Assume a graph search with an explored list unless the problem explicitly states it is a tree search.

---

Practice Questions

:::question type="MCQ" question="Consider a state space where the start state is 2. The successor function for a state numbered n returns two states: 2n and 2n+1. The goal state is 11. Assume that when multiple nodes are available for expansion, the one with the lower number is chosen first. Which algorithm expands fewer nodes to find the goal?" options=["BFS expands fewer nodes than DFS.", "DFS expands fewer nodes than BFS.", "Both expand the same number of nodes.", "Neither algorithm can find the goal."] answer="DFS expands fewer nodes than BFS." hint="Trace both BFS and DFS step-by-step, maintaining the fringe and explored list. The tie-breaking rule is to expand the lower-numbered node first." solution="
BFS Trace:

  • Start: Fringe=[2], Explored={}

  • Expand 2: Fringe=[4, 5], Explored={2}

  • Expand 4: Fringe=[5, 8, 9], Explored={2, 4}

  • Expand 5: Fringe=[8, 9, 10, 11], Explored={2, 4, 5}

  • Expand 8: Fringe=[9, 10, 11, 16, 17], Explored={2, 4, 5, 8}

  • Expand 9: Fringe=[10, 11, 16, 17, 18, 19], Explored={2, 4, 5, 8, 9}

  • Expand 10: Fringe=[11, 16, 17, 18, 19, 20, 21], Explored={2, 4, 5, 8, 9, 10}

  • Expand 11: Goal found.

  • BFS Expanded Nodes: 2, 4, 5, 8, 9, 10, 11 (Total: 7)


DFS Trace (assuming successors 2n and 2n+1 are pushed onto the stack in increasing order, so 2n+1 is on top and expanded first):
  • Start: Fringe (Stack)=[2], Explored={}

  • Expand 2: Pop 2. Successors: 4, 5. Push 4, then 5. Fringe=[4, 5]. Explored={2}. (5 is on top)

  • Expand 5: Pop 5. Successors: 10, 11. Push 10, then 11. Fringe=[4, 10, 11]. Explored={2, 5}. (11 is on top)

  • Expand 11: Pop 11. Goal found.

  • DFS Expanded Nodes: 2, 5, 11 (Total: 3)

Therefore, DFS expands fewer nodes.
Answer: \boxed{DFS expands fewer nodes than BFS.}
"
:::

:::question type="NAT" question="In the state space graph provided in the worked example (with start S and goal G), how many nodes are in the fringe (queue) immediately after node 'B' is expanded by a Breadth-First Search?" answer="2" hint="Follow the step-by-step trace of BFS. After expanding 'B', list the contents of the queue." solution="
Step 1: Initial Fringe: `[S]`
Step 2: Expand S. Fringe becomes `[A, B]`
Step 3: Expand A. Fringe becomes `[B, C, D]`
Step 4: Expand B. Successor is D. Since D is already in the fringe, it is not added again. The fringe becomes `[C, D]`.
The number of nodes in the fringe is 2.
Answer: \boxed{2}
"
:::

:::question type="MSQ" question="Which of the following statements about Iterative Deepening Depth-First Search (IDDFS) are true?" options=["IDDFS has the same space complexity as Breadth-First Search.", "IDDFS is guaranteed to find an optimal solution if all step costs are equal.", "IDDFS combines the memory benefits of DFS with the completeness of BFS (for finite branching factor).", "In the worst case, the time complexity of IDDFS is significantly higher than that of BFS."] answer="B,C" hint="Analyze the properties of IDDFS in terms of completeness, optimality, time, and space complexity relative to BFS and DFS." solution="

  • A is false: IDDFS has a space complexity of O(bd), which is linear like DFS, not exponential like BFS (O(b^d)). This is its primary advantage.

  • B is true: Because IDDFS explores the search space level by level (similar to BFS), it is guaranteed to find the shallowest goal. If step costs are uniform, this corresponds to the optimal path.

  • C is true: This statement accurately describes the main benefit of IDDFS. It uses space proportional to the depth of the search (like DFS) but is complete (like BFS).

  • D is false: While IDDFS does re-generate nodes, the overhead is not asymptotically significant. For a tree, the time complexity of both BFS and IDDFS is O(b^d). The constant factors for IDDFS are slightly larger, but it is not considered 'significantly higher' in terms of asymptotic complexity.

Answer: \boxed{B,C}
"
:::

---

Summary

Key Takeaways for GATE

  • BFS vs. DFS is Core: The fundamental difference is the fringe data structure: FIFO Queue for BFS, LIFO Stack for DFS. This dictates the entire search behavior. Be prepared to trace and compare them on a given graph.

  • Properties Matter: Know the properties of each algorithm by heart. BFS is complete and optimal (unit costs) but memory-intensive. DFS is space-efficient but incomplete and non-optimal without modifications.

  • Read the Question Carefully: Pay extremely close attention to details like tie-breaking rules, the definition of the successor function, and whether it's a graph search (with an explored list) or a tree search. These details determine the correct answer.

  • IDDFS is the Best of Both: For large state spaces where the solution depth is unknown, IDDFS provides the completeness and optimality of BFS with the modest memory requirements of DFS.

---

What's Next?

💡 Continue Learning

Uninformed search strategies provide a crucial foundation, but their inefficiency in large state spaces highlights the need for more intelligent methods. This topic connects directly to:

    • Informed (Heuristic) Search: The next logical step is to study algorithms like A* Search and Greedy Best-First Search. These algorithms use a heuristic function—an estimate of the cost to the goal—to guide the search more efficiently, dramatically reducing the number of states that need to be explored.
    • Complexity Analysis: The time and space complexities (O(b^d), O(bm)) are not just theoretical concepts. They explain why BFS can fail due to memory limitations and why DFS might be preferred in certain scenarios. A strong grasp of Big-O notation is essential.
Mastering uninformed search is the first step toward understanding the full spectrum of problem-solving techniques in Artificial Intelligence.

---

💡 Moving Forward

Now that you understand Uninformed Search, let's explore Informed Search which builds on these concepts.

---

Part 2: Informed Search

Introduction

In our study of search algorithms, we have previously examined uninformed or "blind" search strategies. Such methods, while systematic, are often inefficient as they possess no problem-specific knowledge to guide their exploration of the state space. They methodically explore paths without any preference for one direction or another. We now turn our attention to a more powerful class of algorithms known as informed search, or heuristic search.

Informed search strategies leverage problem-specific knowledge to find solutions more efficiently. The core of this approach lies in the use of a heuristic function, which estimates the "desirability" of expanding a node. This function provides an estimate of the cost from a given node to the nearest goal state. By prioritizing nodes that appear to be closer to the goal, informed search algorithms can often find solutions much more quickly than their uninformed counterparts, significantly reducing the search space that must be explored. The central idea is to expand the most promising node first, a principle embodied in the family of algorithms known as Best-First Search.

📖 Heuristic Function

A heuristic function, denoted h(n), estimates the cost of the cheapest path from the state at node n to a goal state. The function takes a node as input and returns a non-negative real number. If n is a goal node, then we must have h(n) = 0. The heuristic function is problem-specific and relies on domain knowledge.

---

Key Concepts

1. Greedy Best-First Search

Greedy Best-First Search is an informed search algorithm that attempts to expand the node that is judged to be closest to the goal. It evaluates nodes using only the heuristic function, effectively making a "greedy" choice at each step.

The evaluation function for Greedy Best-First Search is simply:

f(n) = h(n)

The algorithm maintains a priority queue of nodes to be explored, ordered by increasing h(n) values. At each step, it expands the node with the lowest heuristic value. While this strategy can be very fast, it is susceptible to being misled by an imperfect heuristic. Since it ignores the cost already incurred to reach a node (g(n)), it may follow a path that seems promising initially but ultimately proves to be a long and costly dead end. Consequently, Greedy Best-First Search is neither optimal nor complete (it can get stuck in loops in infinite state spaces).
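This greedy behavior, and how a misleading heuristic defeats it, can be sketched as follows (illustrative code on a toy graph of our own invention, not from the source):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the frontier node with
    the lowest h(n); the accumulated cost g(n) is ignored entirely.
    graph[u] is a dict {successor: edge_cost} (costs never influence
    the ranking)."""
    frontier = [(h[start], start, [start])]   # priority queue keyed on h
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node in explored:
            continue
        explored.add(node)
        if node == goal:
            return path
        for succ in graph.get(node, {}):
            if succ not in explored:
                heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None

# Toy graph (ours, for illustration): the low h(A) lures the search
# onto the expensive path S->A->G (cost 11), although S->B->G costs 3.
TOY_GRAPH = {'S': {'A': 1, 'B': 2}, 'A': {'G': 10}, 'B': {'G': 1}}
TOY_H = {'S': 5, 'A': 1, 'B': 3, 'G': 0}
```

On this graph, `greedy_best_first(TOY_GRAPH, TOY_H, 'S', 'G')` returns S → A → G with cost 11, even though S → B → G costs only 3 — a concrete illustration of why the algorithm is not optimal.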

2. A* Search Algorithm

The A* search algorithm is arguably the most widely known and effective informed search algorithm. It improves upon Greedy Best-First Search by considering both the cost to reach the current node and the estimated cost to get from the current node to the goal. It seeks to find a path with the minimum total cost.

A* evaluates nodes by combining g(n), the cost of the path from the start node to node n, and h(n), the heuristic estimate of the cost from n to the goal.

📐 A* Evaluation Function

The evaluation function for A* search is given by:

f(n) = g(n) + h(n)

Variables:

    • g(n): The actual cost of the path from the initial state to node n.

    • h(n): The estimated cost of the cheapest path from node n to a goal state (the heuristic).

    • f(n): The estimated cost of the cheapest solution path that passes through node n.


When to use: A* is used to find the shortest path in a weighted graph when a heuristic is available. It is optimal and complete if the heuristic function is admissible.

The A* algorithm maintains a priority queue of nodes to visit (the frontier or open list), ordered by the lowest f(n) value. It also keeps track of nodes already visited (the explored set or closed list). At each step, it removes the node with the lowest f(n) value from the frontier, expands it by generating its successors, and adds the successors to the frontier.
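This loop can be sketched with a binary heap (illustrative code, not from the source; the graph and heuristic values are those of the worked example that follows):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* graph search with f(n) = g(n) + h(n).

    graph[u] is a dict {v: edge_cost}; h maps each node to its
    heuristic value. Ties in f are broken FIFO (insertion order), as
    in the worked example. Returns (expansion_order, path, cost).
    """
    counter = 0                        # FIFO tie-breaker for equal f
    frontier = [(h[start], counter, 0, start, [start])]  # (f, order, g, node, path)
    best_g = {start: 0}
    expanded = []
    while frontier:
        f, _, g, node, path = heapq.heappop(frontier)
        if g > best_g.get(node, float('inf')):
            continue                   # stale entry: a cheaper path was found
        expanded.append(node)
        if node == goal:               # goal test on EXPANSION, not generation
            return expanded, path, g
        for succ, cost in graph.get(node, {}).items():
            new_g = g + cost
            if new_g < best_g.get(succ, float('inf')):
                best_g[succ] = new_g   # keep only the cheapest known path
                counter += 1
                heapq.heappush(frontier,
                               (new_g + h[succ], counter, new_g, succ, path + [succ]))
    return expanded, None, float('inf')

# Edge costs and heuristic values from the worked example.
GRAPH = {'S': {'A': 1, 'B': 2}, 'A': {'C': 8}, 'B': {'C': 3}, 'C': {'G': 5}}
H = {'S': 10, 'A': 9, 'B': 5, 'C': 4, 'G': 0}
```

On this graph, `a_star(GRAPH, H, 'S', 'G')` yields the expansion order S, B, C, A, G and the path S → B → C → G with cost 10, matching the worked example.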

Worked Example:

Problem:
Consider the following state graph where the start node is S and the goal node is G. The numbers on the edges represent the actual path costs, and the heuristic value h(n) for each node is given. Find the sequence of nodes expanded by the A* algorithm.








[Graph for the worked example — heuristic values: h(S)=10, h(A)=9, h(B)=5, h(C)=4, h(G)=0. Edge costs (as used in the trace below): S–A = 1, S–B = 2, A–C = 8, B–C = 3, C–G = 5.]

Solution:

We trace the algorithm's execution, maintaining a priority queue (Frontier) ordered by f(n). The expanded nodes are tracked separately.

Step 1: Initialize the frontier with the start node S.

  • Frontier: { S }
  • Expanded: { }
  • Calculation for S: g(S) = 0, h(S) = 10, so f(S) = 0 + 10 = 10.
  • Frontier state: { (S, f=10) }
Step 2: Expand S (lowest f-value).
  • Node expanded: S
  • Successors: A, B
  • Expanded: { S }
  • Calculation for A: g(A) = g(S) + cost(S, A) = 0 + 1 = 1; h(A) = 9, so f(A) = 1 + 9 = 10.
  • Calculation for B: g(B) = g(S) + cost(S, B) = 0 + 2 = 2; h(B) = 5, so f(B) = 2 + 5 = 7.
  • Frontier state: { (B, f=7), (A, f=10) }
Step 3: Expand B (lowest f-value).
  • Node expanded: B
  • Successors: C
  • Expanded: { S, B }
  • Calculation for C from B: g(C) = g(B) + cost(B, C) = 2 + 3 = 5; h(C) = 4, so f(C) = 5 + 4 = 9.
  • Frontier state: { (C, f=9), (A, f=10) }
Step 4: Expand C (lowest f-value).
  • Node expanded: C
  • Successors: G
  • Expanded: { S, B, C }
  • Calculation for G from C: g(G) = g(C) + cost(C, G) = 5 + 5 = 10; h(G) = 0, so f(G) = 10 + 0 = 10.
  • Frontier state: { (A, f=10), (G, f=10) }
  • We observe a tie in f-values between A and G. The tie-breaking rule can vary, but a common one is FIFO (First-In, First-Out). Since A was added to the frontier before G, we expand A next.
Step 5: Expand A (chosen by tie-breaking).
  • Node expanded: A
  • Successors: C
  • Expanded: { S, B, C, A }
  • Calculation for C from A: g(C) = g(A) + cost(A, C) = 1 + 8 = 9.
  • C has already been reached and expanded via the cheaper path with g(C) = 5. Since the new path via A is more expensive (g = 9), we discard it. The frontier remains unchanged.
  • Frontier state: { (G, f=10) }
Step 6: Expand G (lowest f-value).
  • Node expanded: G
  • G is the goal state. The search terminates.
  • Expanded: { S, B, C, A, G }
Answer: The sequence of nodes expanded by the A* algorithm is S, B, C, A, G.

---

3. Properties of Heuristics

The performance and correctness of A* search are critically dependent on the properties of the heuristic function h(n).

Admissibility

The most important property for ensuring optimality is admissibility.
📖 Admissible Heuristic

A heuristic function h(n) is admissible if, for every node n, it never overestimates the true cost to reach the goal. That is,

0 \le h(n) \le h^*(n)

where h^*(n) is the true cost of the optimal path from node n to a goal state.

If the heuristic function used by A* is admissible, then A* is guaranteed to return an optimal solution. An admissible heuristic is optimistic; it always assumes the cost to the goal is no greater than the true cost.

Consistency

Consistency is a stronger condition than admissibility. It is also known as the monotonicity property.
📖 Consistent Heuristic

A heuristic function h(n) is consistent (or monotonic) if, for every node n and every successor n' of n generated by an action a with cost c(n, a, n'), the following condition holds:

h(n) \le c(n, a, n') + h(n')

This is a form of the triangle inequality. It states that the estimated cost from n cannot be greater than the cost of taking one step to n' plus the estimated cost from n'.

If a heuristic is consistent, it can be proven that it is also admissible. The primary advantage of using a consistent heuristic is that it guarantees that the f-values of the nodes expanded by A* are non-decreasing. This means that when A* selects a node n for expansion, it has already found the optimal path to n, and the node will never need to be re-opened.
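Consistency is easy to verify mechanically: check the inequality on every edge. A small sketch (the helper name is ours), applied to the edge costs and heuristic values from this chapter's A* worked example:

```python
def is_consistent(graph, h):
    """Return True iff h(u) <= c(u, v) + h(v) holds on every edge.
    graph[u] is a dict {v: edge_cost}."""
    return all(h[u] <= cost + h[v]
               for u, succs in graph.items()
               for v, cost in succs.items())

# Edge costs and heuristic from the A* worked example in this chapter.
GRAPH = {'S': {'A': 1, 'B': 2}, 'A': {'C': 8}, 'B': {'C': 3}, 'C': {'G': 5}}
H = {'S': 10, 'A': 9, 'B': 5, 'C': 4, 'G': 0}

# h(S) = 10 > cost(S, B) + h(B) = 2 + 5 = 7, so this heuristic
# violates the triangle inequality on edge S-B.
```

Interestingly, `is_consistent(GRAPH, H)` returns `False`: the worked example's heuristic is admissible yet inconsistent — a concrete reminder that admissibility does not imply consistency.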

Combining Admissible Heuristics

In some problems, we might devise several different admissible heuristics. Consider two admissible heuristics, h_1(n) and h_2(n). We can combine them to create a new, potentially better heuristic.

Let h(n) = \max(h_1(n), h_2(n)).

This new heuristic h(n) is also admissible. Since both h_1(n) \le h^*(n) and h_2(n) \le h^*(n), their maximum must also be less than or equal to h^*(n). Furthermore, this composite heuristic is more informed than (dominates) the individual ones, since its value is greater than or equal to both h_1(n) and h_2(n). A more informed heuristic allows A* to prune the search space more effectively, leading to faster performance.

Conversely, the sum of two admissible heuristics, h_1(n) + h_2(n), is generally not admissible, as the sum can easily overestimate the true cost h^*(n).
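A one-node numeric illustration (the values are invented for the example):

```python
# Suppose the true optimal cost from some node n to the goal is 10,
# and two admissible estimates of it are available.
h_star = 10
h1, h2 = 7, 8                 # both <= h_star, hence admissible

h_max = max(h1, h2)           # 8: still <= h_star -> admissible, and
                              # more informed than either h1 or h2 alone
h_sum = h1 + h2               # 15: exceeds h_star -> NOT admissible

assert h_max <= h_star        # max preserves admissibility
assert h_sum > h_star         # the sum can overshoot the true cost
```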

---

Problem-Solving Strategies

💡 Tracing A* in GATE

When asked to trace the A* algorithm in an exam, a systematic approach is crucial to avoid errors.

  • Create a Table: Use a table to manage the frontier (priority queue). The columns should be: Node, Parent, g(n), h(n), and f(n).

  • Maintain an Expanded List: Keep a separate list of nodes that have been expanded. This will be your final answer if the question asks for the expansion sequence.

  • Expand Minimum f(n): In each step, scan your frontier table for the entry with the lowest f(n) value. Move this node from the frontier to the expanded list.

  • Update Successors: For each successor of the expanded node, calculate its g(n) and f(n).

    - If the successor is new, add it to the frontier table.
    - If the successor is already in the frontier but the new path is cheaper (lower g(n)), update its entry in the table with the new parent and lower costs.

  • Handle Ties: If there is a tie in f(n) values, follow the standard convention (e.g., FIFO or alphabetical order if not specified) consistently.

---

Common Mistakes

⚠️ Avoid These Errors
    • Confusing g(n) and Path Cost: Forgetting that g(n) is the accumulated cost from the start node S, not just the cost from its immediate parent. Always calculate g(successor) = g(parent) + cost(parent, successor).
    • Ignoring Path Updates: If you find a new path to a node already in the frontier, you must check whether the new path is better (has a lower g(n) value). Many students forget this update step, leading to an incorrect expansion order and a suboptimal final path.
    • Assuming Admissibility for All Combinations: Assuming that if h_1 and h_2 are admissible, then any combination like h_1 + h_2 is also admissible. This is incorrect. Only specific combinations, such as \max(h_1, h_2), are guaranteed to preserve admissibility.
    • Stopping at Goal Generation: Terminating the search as soon as a goal node is generated (added to the frontier). A* only guarantees optimality when it terminates upon expanding the goal node (i.e., when the goal node is selected from the frontier). There might be another, cheaper path to the goal still waiting in the frontier.

---

Practice Questions

:::question type="MCQ" question="An A* search algorithm uses a heuristic h(n) that is admissible but not consistent. What is the primary consequence of the lack of consistency?" options=["The algorithm is no longer guaranteed to find an optimal solution.","The algorithm may re-expand nodes that have already been visited.","The algorithm will behave identically to a Greedy Best-First search.","The algorithm will fail to terminate if the state space contains cycles."] answer="The algorithm may re-expand nodes that have already been visited." hint="Consistency ensures that once a node is expanded, the optimal path to it has been found. What happens if this guarantee is removed?" solution="Consistency guarantees that the f-values of expanded nodes are non-decreasing. If a heuristic is admissible but not consistent, the f-values may not be monotonic. This can lead to a situation where A* finds a path to a node n, expands it, and later discovers a cheaper path to n. If this happens, n may need to be added back to the frontier and expanded again to propagate the new, lower cost to its successors. Admissibility alone is sufficient to guarantee optimality, but consistency provides greater efficiency by preventing re-expansion of nodes."
:::

:::question type="NAT" question="In an A* search, a node NN is reachable from the start node SS via two paths. Path 1: SANS \rightarrow A \rightarrow N with total cost g1(N)=12g_1(N)=12. Path 2: SBNS \rightarrow B \rightarrow N with total cost g2(N)=10g_2(N)=10. The heuristic value is h(N)=5h(N)=5. When node NN is first placed on the frontier via Path 1, what is its initial ff-value?" answer="17" hint="The ff-value is calculated using the gg-value of the path through which the node was discovered." solution="The A* evaluation function is f(n)=g(n)+h(n)f(n) = g(n) + h(n). When node NN is first discovered via the path SANS \rightarrow A \rightarrow N, its path cost is given as g1(N)=12g_1(N)=12. The heuristic value is h(N)=5h(N)=5.

Step 1: Apply the A* evaluation formula.

f(N)=g1(N)+h(N)f(N) = g_1(N) + h(N)

Step 2: Substitute the given values.

f(N)=12+5f(N) = 12 + 5

Step 3: Calculate the result.

f(N)=17f(N) = 17

Result: The initial ff-value for node NN is 17. (Later, if the path via BB is discovered, its entry in the frontier would be updated to f(N)=10+5=15f(N) = 10 + 5 = 15).
"
:::

:::question type="MSQ" question="Let h_1(n) and h_2(n) be two different admissible heuristics for a search problem. Which of the following heuristics are also guaranteed to be admissible?" options=["h_3(n) = \frac{h_1(n) + h_2(n)}{2}","h_4(n) = \min(h_1(n), h_2(n))","h_5(n) = h_1(n) \times h_2(n)","h_6(n) = 2 \times h_1(n)"] answer="A,B" hint="An admissible heuristic must never overestimate the true cost h^*(n). Check whether each combination can exceed h^*(n)." solution="Let h^*(n) be the true optimal cost from node n to the goal. Since h_1 and h_2 are admissible, we know 0 \le h_1(n) \le h^*(n) and 0 \le h_2(n) \le h^*(n).

  • A: h_3(n) = \frac{h_1(n) + h_2(n)}{2}: We have
h_1(n) + h_2(n) \le h^*(n) + h^*(n) = 2h^*(n)
Therefore,
\frac{h_1(n) + h_2(n)}{2} \le \frac{2h^*(n)}{2} = h^*(n)
This heuristic is admissible.
  • B: h_4(n) = \min(h_1(n), h_2(n)): Since h_1(n) \le h^*(n) and h_2(n) \le h^*(n), the minimum of the two must also be less than or equal to h^*(n). This heuristic is admissible.
  • C: h_5(n) = h_1(n) \times h_2(n): This is not generally admissible. For example, if h^*(n)=5, we could have h_1(n)=4 and h_2(n)=3. Then h_1(n) \times h_2(n) = 12, which is greater than h^*(n)=5. This overestimates the cost.
  • D: h_6(n) = 2 \times h_1(n): This is not generally admissible. If h^*(n)=5 and h_1(n)=4, then 2 \times h_1(n) = 8, which is greater than h^*(n)=5. This overestimates the cost.
Thus, only options A and B are guaranteed to be admissible."
:::

:::question type="MCQ" question="Consider the state graph below, with start node S and goal node G. Heuristic values are shown next to each node. Which path will Greedy Best-First Search explore to reach G?"







Graph (reconstructed from the figure): directed edges with step costs S→A (1), S→B (10), A→C (20), B→G (1), C→G (1); heuristic values h(S)=12, h(A)=5, h(B)=10, h(C)=6, h(G)=0.

options=["S → A → C → G","S → B → G","S → A → G (path does not exist)","S → B → C → G (path does not exist)"] answer="S → A → C → G" hint="Greedy Best-First search only considers the heuristic value h(n)h(n) and ignores path costs." solution="Greedy Best-First Search expands the node with the lowest h(n)h(n) value.

  • Start at SS. Successors are AA (h=5h=5) and BB (h=10h=10).

  • Expand AA because h(A)<h(B)h(A) < h(B).

  • From AA, the only successor is CC (h=6h=6). The frontier contains {CC, BB}.

  • Expand CC because h(C)<h(B)h(C) < h(B). (Note: even though path cost to CC is huge, Greedy search ignores it).

  • From CC, the successor is GG (h=0h=0).

  • Expand GG. The path found is SACGS \rightarrow A \rightarrow C \rightarrow G. The total cost is 1+20+1=221+20+1=22. The algorithm does not realize the path SBGS \rightarrow B \rightarrow G has a cost of only 10+1=1110+1=11 because it was greedily attracted to node AA."

:::
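The greedy behaviour traced in the solution above can be reproduced in code. This is a minimal sketch; the edge costs and heuristic values are reconstructed from the solution's cost arithmetic:

```python
import heapq

# Edge costs and heuristics reconstructed from the worked example.
graph = {"S": {"A": 1, "B": 10}, "A": {"C": 20}, "B": {"G": 1}, "C": {"G": 1}, "G": {}}
h = {"S": 12, "A": 5, "B": 10, "C": 6, "G": 0}

def greedy_best_first(start, goal):
    """Always expand the frontier node with the smallest h(n); edge costs are ignored."""
    frontier = [(h[start], start, [start])]
    expanded = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in expanded:
            continue
        expanded.add(node)
        for succ in graph[node]:                 # note: the edge cost is never consulted
            heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None

print(greedy_best_first("S", "G"))  # ['S', 'A', 'C', 'G'] (cost 22), not the cheaper S-B-G (cost 11)
```

Because the priority is h(n) alone, the 20-cost edge A→C never influences the expansion order, which is exactly why the algorithm misses the cheaper route through B.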

    ---

    Summary

    Key Takeaways for GATE

    • A* Search Formula: The core of A* is the evaluation function f(n)=g(n)+h(n)f(n) = g(n) + h(n), which balances the actual cost incurred so far (g(n)g(n)) with the estimated cost to the goal (h(n)h(n)).

    • Admissibility Guarantees Optimality: For A* to be optimal (i.e., guaranteed to find the least-cost path), the heuristic function h(n)h(n) must be admissible. An admissible heuristic never overestimates the true cost to the goal (h(n) \le h^*(n)).

    • Consistency for Efficiency: A consistent heuristic, which obeys the triangle inequality h(n) \le c(n,a,n') + h(n'), is a stronger condition than admissibility. It ensures that the ff-values of expanded nodes are non-decreasing, making the search more efficient by preventing re-expansions.

    • Dominance and Combining Heuristics: Given two admissible heuristics h1h_1 and h2h_2, the heuristic h(n)=max(h1(n),h2(n))h(n) = \max(h_1(n), h_2(n)) is also admissible and is more informed (dominates) both. Using a more informed heuristic allows A* to find the solution by expanding fewer nodes.
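The first two takeaways can be illustrated with a compact A* sketch. The graph and heuristic below are assumptions chosen for illustration (the heuristic is admissible and consistent), and the code uses lazy deletion of stale frontier entries rather than in-place frontier updates:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: always expand the frontier entry with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]     # entries: (f, g, node, path)
    best_g = {}                                    # cheapest g seen per expanded node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:                           # optimality holds on *expansion*, not generation
            return path, g
        if node in best_g and best_g[node] <= g:   # stale entry: a cheaper path was already expanded
            continue
        best_g[node] = g
        for succ, cost in graph[node].items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float("inf")

# Illustrative graph; h never overestimates the remaining cost (admissible).
graph = {"S": {"A": 1, "B": 10}, "A": {"C": 20}, "B": {"G": 1}, "C": {"G": 1}, "G": {}}
h = {"S": 6, "A": 5, "B": 1, "C": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # (['S', 'B', 'G'], 11)
```

Unlike Greedy Best-First Search on the same graph, A* correctly prefers the path S → B → G of cost 11, because f(n) = g(n) + h(n) accounts for the expensive A → C edge.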

    ---

    What's Next?

    💡 Continue Learning

    This topic connects to several other important areas in AI and algorithms.

      • Uninformed Search Strategies: Compare the performance, completeness, and optimality of A* with algorithms like Breadth-First Search (BFS), Depth-First Search (DFS), and Uniform Cost Search (UCS). It is crucial to understand that A* with h(n)=0h(n) = 0 for all nn is equivalent to UCS, and BFS is a special case of UCS where all edge costs are equal.
      • Local Search Algorithms: Contrast the systematic, path-based exploration of A* with local search algorithms like Hill Climbing, Simulated Annealing, and Genetic Algorithms. These algorithms do not maintain a search tree but instead operate on a single current state, iteratively improving it. They are often used for optimization problems where the path to the goal is not important.
      • Game Playing: Heuristic evaluation functions are central to adversarial search algorithms like Minimax and Alpha-Beta Pruning, which are used in two-player games like Chess and Go. The evaluation function estimates the "goodness" of a board position.

    ---

    💡 Moving Forward

    Now that you understand Informed Search, let's explore Adversarial Search which builds on these concepts.

    ---

    Part 3: Adversarial Search

    Introduction

    In the domain of artificial intelligence, many problems can be modeled as a search through a state space for a solution. However, a special class of problems involves multiple agents with conflicting goals, a scenario most commonly observed in games like chess, checkers, or Go. This is the realm of adversarial search, where the actions of one agent are designed to counteract the objectives of another. The challenge lies not merely in finding a path to a goal state, but in formulating a strategy or policy that selects the optimal move, assuming the opponent will also play optimally.

    We will explore the foundational algorithm for decision-making in such environments: the Minimax algorithm. This provides a formal method for determining the best move by recursively exploring the game tree. Subsequently, we will turn our attention to a crucial optimization, Alpha-Beta Pruning, which significantly reduces the search space by eliminating branches that cannot possibly influence the final outcome. A thorough understanding of these techniques is essential for solving problems involving strategic decision-making under competitive conditions, a recurring theme in the GATE examination.

    📖 Adversarial Search Problem

    An adversarial search problem is characterized by a competitive multi-agent environment, typically a two-player, zero-sum game, where:

      • There are two agents, commonly referred to as MAX and MIN.

      • MAX's objective is to maximize a utility function score.

      • MIN's objective is to minimize the same utility function score.

      • The agents take turns making moves, altering the state of the game.

    The goal is to find an optimal strategy for one agent, assuming the other agent also plays optimally.

    ---

    Key Concepts

    1. The Minimax Algorithm

    The Minimax algorithm is a decision-making algorithm used in two-player, zero-sum games. It operates by exploring the entire game tree down to its terminal nodes (leaves), which represent the end of the game. A utility function is applied to these terminal nodes to assign a numerical score representing the outcome from MAX's perspective (e.g., +1 for a win, -1 for a loss, 0 for a draw). These values are then propagated back up the tree to determine the optimal move from the current state.

    The core principle is that the MAX player will always choose a move that leads to a state with the maximum possible score, while the MIN player will always choose a move that leads to a state with the minimum possible score.

    Let us consider a node nn in the game tree. The minimax value of this node, denoted V(n)V(n), is computed as follows:

    • If nn is a terminal node, V(n)V(n) is the utility of that state.
    • If nn is a MAX player's turn,
    V(n)=maxssuccessors(n)V(s)V(n) = \max_{s \in \operatorname{successors}(n)} V(s)
    • If nn is a MIN player's turn,
    V(n)=minssuccessors(n)V(s)V(n) = \min_{s \in \operatorname{successors}(n)} V(s)

    The algorithm performs a complete depth-first exploration of the game tree.
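The recursive definition above can be sketched in a few lines. This is a minimal illustration in which a game tree is encoded as nested lists whose leaves are terminal utilities:

```python
def minimax(node, is_max):
    """Minimax value of a node; nested lists encode the game tree, leaves are utilities."""
    if isinstance(node, (int, float)):           # terminal node: return its utility
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Tree of the worked example that follows: A (MAX) -> B, C (MIN) -> D..G (MAX) -> terminals.
tree = [[[3, 12], [8, 5]], [[2, 4], [6, 9]]]
print(minimax(tree, True))  # 8
```

The call alternates `is_max` at each ply, mirroring the turn-taking of MAX and MIN, and performs exactly the complete depth-first exploration described above.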




    Game tree (reconstructed from the figure): root A (MAX) has children B and C (MIN); B's children are D and E (MAX), and C's children are F and G (MAX). Terminal utilities, left to right: under D: 3, 12; under E: 8, 5; under F: 2, 4; under G: 6, 9.

    Worked Example:

    Problem: Determine the optimal move for the MAX player at the root node A using the Minimax algorithm for the tree shown above.

    Solution:

    Step 1: Compute values for the lowest level of MAX nodes (D, E, F, G).

    For node D, its children have utilities 3 and 12. Since D is a MAX node:

    V(D)=max(3,12)=12V(D) = \max(3, 12) = 12

    For node E, its children have utilities 8 and 5. Since E is a MAX node:

    V(E)=max(8,5)=8V(E) = \max(8, 5) = 8

    For node F, its children have utilities 2 and 4. Since F is a MAX node:

    V(F)=max(2,4)=4V(F) = \max(2, 4) = 4

    For node G, its children have utilities 6 and 9. Since G is a MAX node:

    V(G)=max(6,9)=9V(G) = \max(6, 9) = 9

    Step 2: Compute values for the MIN nodes (B, C) using the values from their children (D, E, F, G).

    For node B, its children are D and E. Since B is a MIN node:

    V(B)=min(V(D),V(E))=min(12,8)=8V(B) = \min(V(D), V(E)) = \min(12, 8) = 8

    For node C, its children are F and G. Since C is a MIN node:

    V(C)=min(V(F),V(G))=min(4,9)=4V(C) = \min(V(F), V(G)) = \min(4, 9) = 4

    Step 3: Compute the value for the root MAX node (A) using the values from its children (B, C).

    For node A, its children are B and C. Since A is a MAX node:

    V(A)=max(V(B),V(C))=max(8,4)=8V(A) = \max(V(B), V(C)) = \max(8, 4) = 8

    Step 4: Determine the optimal move.

    The value of the root node A is 8. This value comes from child B. Therefore, the optimal move for MAX at the root is to move to state B.

    Answer: The minimax value of the root is 8.

    ---

    2. Alpha-Beta Pruning

    The primary drawback of the Minimax algorithm is its time complexity. For a game tree with a branching factor of bb and a maximum depth of dd, the complexity is O(bd)O(b^d). This is computationally infeasible for all but the simplest games. Alpha-Beta pruning is an optimization technique that reduces the number of nodes evaluated by the Minimax algorithm without affecting the final decision.

    The algorithm maintains two values, alpha (α\alpha) and beta (β\beta), which represent the lower and upper bounds on the possible minimax value.

    📖 Alpha (α\alpha)

    Alpha (α\alpha) is the best value (highest score) that the MAX player can guarantee at the current point in the search or any point higher up the path. It represents a lower bound on the final score. Initially, α\alpha is set to -\infty.

    📖 Beta (β\beta)

    Beta (β\beta) is the best value (lowest score) that the MIN player can guarantee at the current point in the search or any point higher up the path. It represents an upper bound on the final score. Initially, β\beta is set to ++\infty.

    These values are passed down the tree during the search. A MAX node can update α\alpha, and a MIN node can update β\beta.

    📐 Alpha-Beta Pruning Condition
    αβ\alpha \ge \beta

    Variables:

      • α\alpha: The best value found so far for MAX along the path.

      • β\beta: The best value found so far for MIN along the path.


    When to use: During a depth-first traversal of the game tree. If at any node the condition αβ\alpha \ge \beta is met, the remaining, unexplored children of that node can be pruned. This is because the current branch is already worse for the player higher up the tree than an alternative that player has already secured.

    Explanation of Pruning:

    • Pruning at a MIN node: If a MIN node's current value (which can only decrease) becomes less than or equal to α\alpha, the MAX player (higher up the tree) who set that α\alpha value will never choose this path. MAX already has a choice that guarantees a score of at least α\alpha, so there is no need to explore this branch further, as MIN will ensure a score no better than the current one.

    • Pruning at a MAX node: If a MAX node's current value (which can only increase) becomes greater than or equal to β\beta, the MIN player (higher up the tree) who set that β\beta value will never allow play to reach this node. MIN already has a choice that guarantees a score of at most β\beta.
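These update and cut-off rules translate directly into code. A minimal sketch over nested-list game trees (the tree literal matches the worked example that follows):

```python
def alphabeta(node, is_max, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning; nested lists encode the tree, leaves are utilities."""
    if isinstance(node, (int, float)):
        return node                              # terminal node: return its utility
    if is_max:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)            # MAX updates alpha (lower bound)
            if alpha >= beta:                    # MIN above already has a better option
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)                  # MIN updates beta (upper bound)
        if alpha >= beta:                        # MAX above already has a better option
            break
    return value

# Tree of the worked example below: A (MAX) -> B, C (MIN) -> D..G (MAX) -> terminals.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, 1]]]
print(alphabeta(tree, True))  # 5
```

Note that each recursive call receives its own copies of α and β (they are passed down, never up); a node only ever returns its computed value to its parent.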


    Worked Example:

    Problem: Apply Alpha-Beta pruning to the game tree below. The search proceeds from left to right. Identify which nodes are pruned.




    Game tree (reconstructed from the figure): root A (MAX) has children B and C (MIN); B's children are D and E (MAX), and C's children are F and G (MAX). Terminal utilities, left to right: under D: 3, 5; under E: 6, 9; under F: 1, 2; under G: 0, 1.







    Solution:

    We trace the execution, keeping track of α\alpha and β\beta at each node.

    Step 1: Start at root A (MAX).

    • Initialize VA=V_A = -\infty, α=\alpha = -\infty, β=+\beta = +\infty.

    • Descend to child B (MIN). Pass down α,β\alpha, \beta.


    Step 2: At node B (MIN).
    • Initialize VB=+V_B = +\infty. Receives α=,β=+\alpha = -\infty, \beta = +\infty.

    • Descend to child D (MAX). Pass down α,β\alpha, \beta.


    Step 3: At node D (MAX).
    • Initialize VD=V_D = -\infty. Receives α=,β=+\alpha = -\infty, \beta = +\infty.

    • Evaluate first child (utility 3). VDV_D becomes 3. MAX node updates its own α\alpha. Local α\alpha becomes 3.

    • Evaluate second child (utility 5). VDV_D becomes max(3,5)=5\max(3, 5) = 5.

    • D returns value 5 to B.


    Step 4: Back at node B (MIN).
    • Receives 5 from D. VBV_B becomes min(+,5)=5\min(+\infty, 5) = 5. MIN node updates its own β\beta. Local β\beta becomes 5.

    • Descend to child E (MAX). Pass down B's current range: α=,β=5\alpha = -\infty, \beta = 5.


    Step 5: At node E (MAX).
    • Initialize VE=V_E = -\infty. Receives α=,β=5\alpha = -\infty, \beta = 5.

    • Evaluate first child (utility 6). VEV_E becomes 6. Local α\alpha becomes 6.

    • Check pruning condition: Is local αβ\alpha \ge \beta? Is 656 \ge 5? Yes.

    • Pruning occurs. The remaining children of E (the node with value 9) are not evaluated. E can return immediately. The value it returns is 6, although this value is not used by B.

    • B received 5 from D, and now sees that path E will result in a value of at least 6. Since B is a MIN node, it will prefer the path through D with value 5. So B's final value is 5.


    Step 6: Back at node A (MAX).
    • Receives 5 from B. VAV_A becomes max(,5)=5\max(-\infty, 5) = 5. MAX node updates its α\alpha. A's α\alpha is now 5.

    • Descend to child C (MIN). Pass down A's current range: α=5,β=+\alpha = 5, \beta = +\infty.


    Step 7: At node C (MIN).
    • Initialize VC=+V_C = +\infty. Receives α=5,β=+\alpha = 5, \beta = +\infty.

    • Descend to child F (MAX). Pass down α=5,β=+\alpha = 5, \beta = +\infty.


    Step 8: At node F (MAX).
    • Initialize VF=V_F = -\infty. Receives α=5,β=+\alpha = 5, \beta = +\infty.

    • Evaluate first child (utility 1). VFV_F becomes 1.

    • Evaluate second child (utility 2). VFV_F becomes max(1,2)=2\max(1, 2) = 2.

    • F returns value 2 to C.


    Step 9: Back at node C (MIN).
    • Receives 2 from F. VCV_C becomes min(+,2)=2\min(+\infty, 2) = 2. C updates its local β\beta to 2.

    • Check pruning condition: Is parent's α\alpha \ge my β\beta? Is 525 \ge 2? Yes.

    • Pruning occurs. The MIN player at C knows it can achieve a score of 2 (or less). The MAX player at A already has a move that guarantees a score of 5. MAX will never choose the move to C. Therefore, the rest of C's children (node G and its subtree) are pruned.

    • C can return its current value of 2.


    Step 10: Back at node A (MAX).
    • Receives 2 from C. VAV_A becomes max(5,2)=5\max(5, 2) = 5.

    • All children of A have been explored. The final value is 5.


    Answer: The nodes pruned are the terminal node with value 9 (child of E) and the entire subtree under node G. The final minimax value is 5.

    ---

    Problem-Solving Strategies

    💡 Optimal Ordering for Pruning

    The effectiveness of Alpha-Beta pruning is highly dependent on the order in which nodes are examined. To maximize pruning:

      • At a MAX node, explore the children that are likely to have the highest values first. This will set a high α\alpha value early, increasing the chances of pruning branches at subsequent MIN nodes.

      • At a MIN node, explore the children that are likely to have the lowest values first. This will set a low β\beta value early, increasing the chances of pruning branches at subsequent MAX nodes.


    In a perfectly ordered tree, Alpha-Beta pruning reduces the time complexity from O(bd)O(b^d) to O(bd/2)O(b^{d/2}), effectively doubling the searchable depth.
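The effect of move ordering can be measured directly by counting how many leaves alpha-beta actually evaluates. The two trees below are illustrative assumptions: they contain the same leaf values, but one lists the strongest move first at every node while the other reverses every child list:

```python
def alphabeta_count(node, is_max, alpha=float("-inf"), beta=float("inf"), counter=None):
    """Alpha-beta over a nested-list tree, counting how many leaves get evaluated."""
    if isinstance(node, (int, float)):
        counter[0] += 1                          # record one leaf evaluation
        return node
    value = float("-inf") if is_max else float("inf")
    for child in node:
        v = alphabeta_count(child, not is_max, alpha, beta, counter)
        if is_max:
            value = max(value, v)
            alpha = max(alpha, value)
        else:
            value = min(value, v)
            beta = min(beta, value)
        if alpha >= beta:                        # prune the remaining siblings
            break
    return value

good = [[[5, 3], [9, 6]], [[1, 0], [2, 1]]]      # best child first at every node
bad = [[[1, 2], [0, 1]], [[6, 9], [3, 5]]]       # same leaves, every child list reversed
for name, tree in [("good ordering", good), ("bad ordering", bad)]:
    counter = [0]
    value = alphabeta_count(tree, True, counter=counter)
    print(f"{name}: value={value}, leaves evaluated={counter[0]} of 8")
```

Both orderings return the same minimax value of 5, but the good ordering evaluates only 5 of the 8 leaves while the reversed ordering evaluates all 8, illustrating the gap between the O(b^{d/2}) best case and the O(b^d) worst case.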

    ---

    Common Mistakes

    ⚠️ Avoid These Errors
      • Confusing α\alpha and β\beta roles: Updating β\beta at a MAX node or α\alpha at a MIN node.
    Correct approach: Remember MAX updates α\alpha (its best-guaranteed lower bound) and MIN updates β\beta (its best-guaranteed upper bound).
      • Incorrectly passing values: Passing a node's local, updated α\alpha or β\beta value back up to its parent.
    Correct approach: A node only passes its final, computed minimax value up to its parent. The α\alpha and β\beta values are passed down to children to define the search window.
      • Global α\alpha and β\beta: Thinking there is only one global α\alpha and β\beta for the entire tree.
    Correct approach: α\alpha and β\beta are specific to a path from the root to the current node. Different branches of the tree will have different α\alpha and β\beta values during the search.

    ---

    Practice Questions

    :::question type="MCQ" question="In the Alpha-Beta pruning algorithm, α\alpha represents the minimum score that the maximizing player is assured of, and β\beta represents the maximum score that the minimizing player is assured of. Pruning occurs when a condition between α\alpha and β\beta is met. Which of the following statements is correct?" options=["A MAX node updates α\alpha and pruning occurs if αβ\alpha \le \beta","A MIN node updates α\alpha and pruning occurs if αβ\alpha \ge \beta","A MAX node updates α\alpha and pruning occurs if αβ\alpha \ge \beta","A MIN node updates β\beta and pruning occurs if βα\beta \ge \alpha"] answer="A MAX node updates α\alpha and pruning occurs if αβ\alpha \ge \beta" hint="Recall which player is associated with which bound and what the pruning condition signifies." solution="The value α\alpha is the best score (highest) guaranteed for the MAX player, so MAX nodes update it. The value β\beta is the best score (lowest) guaranteed for the MIN player, so MIN nodes update it. The fundamental condition for pruning is when the lower bound for MAX (α\alpha) is greater than or equal to the upper bound for MIN (β\beta), i.e., αβ\alpha \ge \beta. Therefore, the correct statement is that a MAX node updates α\alpha and pruning occurs if αβ\alpha \ge \beta.
    Answer: A MAX node updates α and pruning occurs if αβ\boxed{\text{A MAX node updates } \alpha \text{ and pruning occurs if } \alpha \ge \beta}"
    :::

    :::question type="NAT" question="Consider the given game tree where the first level is a MAX node. The terminal nodes have utility values. Using the Alpha-Beta pruning algorithm and exploring children from left to right, what is the final value computed for the root node?"


    Tree (reconstructed from the figure): root A (MAX) has two children B and C (MIN). B's terminal children have utilities 7, 8, 5; C's terminal children have utilities 6, 4, 9.

    answer="5" hint="Trace the algorithm step-by-step, passing alpha and beta values down the tree and checking for pruning at each step." solution="
    Step 1: Root A (MAX) starts with α=,β=+\alpha=-\infty, \beta=+\infty. It explores child B (MIN).
    Step 2: Node B (MIN) receives α=,β=+\alpha=-\infty, \beta=+\infty. It explores its children.
    Step 3: B evaluates its first child (7). B is a MIN node, so its value is at most 7. It sets its local β=7\beta=7.
    Step 4: B evaluates its second child (8). Its value is min(7,8)=7\min(7, 8) = 7. No change to β\beta.
    Step 5: B evaluates its third child (5). Its value is min(7,5)=5\min(7, 5) = 5. B updates its local β=5\beta=5.
    Step 6: B has explored all children and returns its final value, 5, to A.
    Step 7: Root A (MAX) receives 5. Its value is at least 5. It updates its α=5\alpha=5.
    Step 8: A explores its second child C (MIN), passing down α=5,β=+\alpha=5, \beta=+\infty.
    Step 9: Node C (MIN) receives α=5,β=+\alpha=5, \beta=+\infty. It explores its first child (6).
    Step 10: C's value is at most 6. It updates its local β=6\beta=6.
    Step 11: C evaluates its second child (4). Its value is min(6,4)=4\min(6, 4) = 4. It updates its local β=4\beta=4.
    Step 12: C now checks for pruning. The condition is αβ\alpha \ge \beta. Here, 545 \ge 4. The condition is met.
    Step 13: The rest of C's children (the node with value 9) are pruned. C returns its current value, 4.
    Step 14: A receives 4 from C. A's value is max(5,4)=5\max(5, 4) = 5.
    Answer: 5\boxed{5}
    "
    :::

    :::question type="MSQ" question="Which of the following statements about the Alpha-Beta pruning algorithm are correct?" options=["The algorithm may not find the optimal minimax value if the tree is not searched in an optimal order.","The number of nodes explored depends on the order in which children are evaluated.","Alpha-Beta pruning can be applied to game trees of any depth.","The value of α\alpha at any MAX node can never decrease."] answer="B,C,D" hint="Consider the core properties of the algorithm: its correctness, its dependency on ordering, its applicability, and the monotonic nature of its bounds." solution="

    • Option A is incorrect. Alpha-Beta pruning is a correct optimization of Minimax. It is guaranteed to return the exact same minimax value as the full Minimax algorithm, regardless of the move ordering. The ordering only affects the efficiency (how many nodes are pruned), not the correctness of the result.

    • Option B is correct. The efficiency of Alpha-Beta pruning is highly dependent on move ordering. An optimal ordering (best moves first) prunes a maximal number of nodes, while a pessimal ordering (worst moves first) may result in no pruning at all.

    • Option C is correct. The algorithm is recursive and works on the principle of bounds. It can be applied to a game tree of any finite depth. For very deep trees, it is often combined with a cutoff depth and an evaluation function.

    • Option D is correct. The value of α\alpha represents the best score (maximum) found so far for the MAX player. As the search progresses, MAX will only update α\alpha if it finds a path leading to an even better (higher) score. Therefore, α\alpha is non-decreasing. Similarly, β\beta is non-increasing.

    Answer: B,C,D\boxed{\text{B,C,D}}
    "
    :::

    :::question type="MCQ" question="In a game tree, a MIN node MM has an (α,β)(\alpha, \beta) window of (5,12)(5, 12) passed to it from its parent MAX node. Node MM evaluates its first child, which returns a value of 4. What is the immediate consequence?" options=["The (α,β)(\alpha, \beta) window for MM's subsequent children becomes (5,4)(5, 4).","Node MM immediately returns the value 4 and its other children are pruned.","Node MM updates its local β\beta to 4 and continues evaluating its other children.","Node MM updates its local α\alpha to 5 and its local β\beta to 4."] answer="Node MM immediately returns the value 4 and its other children are pruned." hint="After evaluating a child, a node updates its bounds and checks the pruning condition." solution="
    Step 1: The MIN node MM receives the window α=5,β=12\alpha=5, \beta=12.
    Step 2: It evaluates its first child and gets a value of 4.
    Step 3: As a MIN node, its objective is to find the minimum value. Its current best value is 4. It updates its local β\beta value to 4.
    Step 4: It checks the pruning condition: Is αβ\alpha \ge \beta? Here, we check if 545 \ge 4.
    Step 5: The condition is true. This means that the MIN node MM has found a move that leads to a score of 4. However, its parent MAX node already has an option that guarantees a score of at least 5 (the value of α\alpha). The MAX player will never choose the path through MM.
    Step 6: Therefore, there is no need to explore the remaining children of MM. The subtree is pruned, and MM can immediately return its current value of 4.
    Answer: Node M immediately returns the value 4 and its other children are pruned.\boxed{\text{Node } M \text{ immediately returns the value 4 and its other children are pruned.}}
    "
    :::

    ---

    Summary

    Key Takeaways for GATE

    • Minimax is Foundational: The Minimax algorithm provides the optimal move for a player in a two-player, zero-sum game by assuming the opponent also plays optimally. MAX nodes maximize utility, and MIN nodes minimize it.

    • Alpha-Beta is an Optimization: Alpha-Beta pruning drastically improves the efficiency of Minimax by eliminating branches of the game tree that cannot influence the final decision. It always yields the same result as a full Minimax search.

    • Understand α\alpha and β\beta: α\alpha is the best score (highest value) guaranteed for MAX on the path so far (a lower bound). β\beta is the best score (lowest value) guaranteed for MIN on the path so far (an upper bound). MAX nodes update α\alpha; MIN nodes update β\beta.

    • Master the Pruning Condition: The core of the algorithm is the condition αβ\alpha \ge \beta. When this condition is met at a node, its remaining sub-branches can be safely pruned.

    ---

    What's Next?

    💡 Continue Learning

    Adversarial search is a specific type of search algorithm. Understanding it well provides a strong foundation for related topics:

      • Heuristic Search Algorithms (A*, Best-First Search): While adversarial search deals with opponents, heuristic search deals with finding optimal paths in a single-agent context using estimates (heuristics). The concept of using bounds (α,β\alpha, \beta) is conceptually related to using heuristic estimates (h(n)h(n)) to guide a search.

      • Game Theory: Adversarial search is the algorithmic implementation of core game theory concepts like zero-sum games and optimal strategies. A deeper look into game theory can provide a more formal understanding of why algorithms like Minimax work.


    Mastering these connections will provide a more comprehensive and robust understanding of search and decision-making in Artificial Intelligence for the GATE exam.

    ---

    Chapter Summary

    In this chapter, we have undertaken a comprehensive exploration of the fundamental algorithms that underpin problem-solving in Artificial Intelligence. We began by classifying search strategies into two primary categories: uninformed and informed search. Our discussion of uninformed search covered foundational algorithms such as Breadth-First Search (BFS) and Depth-First Search (DFS), for which we analyzed the critical properties of completeness, optimality, and time/space complexity. We then advanced to informed, or heuristic, search strategies. The cornerstone of this section was the A* algorithm, where we rigorously examined the roles of the cost function g(n)g(n) and the heuristic function h(n)h(n). The concepts of admissibility and consistency were established as the necessary conditions for guaranteeing the optimality of A*. Finally, we shifted our focus to multi-agent environments with an introduction to adversarial search. We detailed the Minimax algorithm as the theoretical basis for decision-making in two-player, zero-sum games and introduced Alpha-Beta pruning as a powerful optimization that significantly reduces the search space without affecting the final outcome. It is clear from our discussion that the choice of a search algorithm is a critical design decision, involving a trade-off between computational resources and the quality of the solution required.

    📖 Search Algorithms - Key Takeaways

    • Uninformed vs. Informed Search: The primary distinction lies in the use of domain-specific knowledge. Uninformed (or blind) search algorithms like BFS and DFS systematically explore the search space without any information about the cost or distance to the goal. Informed (or heuristic) search algorithms like A* and Greedy Best-First Search use an evaluation function to guide the search towards more promising states.

    • Algorithm Evaluation Criteria: For the GATE examination, every search algorithm must be evaluated on four key metrics:

    - Completeness: Is the algorithm guaranteed to find a solution if one exists?
    - Optimality: Is the algorithm guaranteed to find the best (e.g., lowest cost) solution?
    - Time Complexity: How long does the algorithm take as a function of the problem size (typically branching factor bb and solution depth dd)?
    - Space Complexity: How much memory does the algorithm require?

    • A* Search and Heuristics: The A* algorithm is central to informed search. Its evaluation function is f(n)=g(n)+h(n)f(n) = g(n) + h(n), where g(n)g(n) is the exact cost from the start node to node nn, and h(n)h(n) is the estimated cost from nn to the goal. A* is complete and optimal if the heuristic h(n)h(n) is admissible (i.e., it never overestimates the true cost to the goal, h(n) \le h^*(n)).

    • Complexity is Critical: For a uniform tree with branching factor bb and solution depth dd, the worst-case time and space complexity of most uninformed search algorithms is exponential, often on the order of O(bd)O(b^d). Understanding this limitation is crucial, as space complexity is frequently the more restrictive factor in practice.

    • Adversarial Search (Minimax): This algorithm is used for decision-making in two-player games. It recursively explores the game tree, assuming both players play optimally. The MAX player aims to maximize the utility score, while the MIN player aims to minimize it. The value of a node is propagated up from the terminal states.

    • Alpha-Beta Pruning: This is an optimization of the Minimax algorithm. It prunes branches of the game tree that cannot possibly influence the final decision. The algorithm maintains two values, alpha (the best value found so far for MAX) and beta (the best value found so far for MIN), to determine which subtrees can be safely ignored. It does not alter the result of Minimax but can drastically reduce the number of nodes evaluated.

    ---

    Chapter Review Questions

    :::question type="MCQ" question="Consider the game tree below, where the root is a MAX node. The values at the leaf nodes represent the utility for the MAX player. Which of the following nodes will be pruned if Alpha-Beta pruning is applied? Assume children are visited from left to right." options=["D", "J", "F", "L"] answer="J" hint="Trace the Minimax algorithm while keeping track of the alpha (for MAX) and beta (for MIN) values. A prune occurs when alpha becomes greater than or equal to beta." solution="
    Let's trace the Alpha-Beta pruning algorithm, assuming children are visited from left to right.
    We maintain \alpha (best value for MAX found so far on the path to the root) and \beta (best value for MIN found so far on the path to the root).

  • Root A (MAX node): Initialize \alpha = -\infty, \beta = +\infty.
  • Explore Branch B (MIN node): A passes (\alpha = -\infty, \beta = +\infty) to B.

  • - B explores E: E returns 3. B's current value is \le 3. Update B's \beta = 3.
    - B explores F: F returns 5. B's current value is \min(3, 5) = 3.
    - B explores G: G returns 2. B's current value is \min(3, 2) = 2.
    - B returns 2 to A.
    - At A, update \alpha = \max(-\infty, 2) = 2.

  • Explore Branch C (MIN node): A passes (\alpha = 2, \beta = +\infty) to C.

  • - C explores H: H returns 9. C's current value is \le 9. Update C's \beta = 9.
    - C explores I: I returns 1. C's current value is \min(9, 1) = 1. Update C's \beta = 1.
    - Pruning Check at C: At this point, for node C, we have \beta = 1. The \alpha value inherited from A is \alpha = 2. Since \alpha \ge \beta (2 \ge 1), MAX (at A) already has a guaranteed score of 2 from branch B. MIN (at C) has found a move (to I) that yields a score of 1. Since MAX will never choose a branch that yields 1 when it can get 2, the rest of C's children can be pruned.
    - Therefore, Node J is pruned.
    - C returns 1 to A.
    - At A, update \alpha = \max(2, 1) = 2.

  • Explore Branch D (MIN node): A passes (\alpha = 2, \beta = +\infty) to D.

  • - D explores L: L returns 8. D's current value is \le 8. Update D's \beta = 8.
    - D explores M: M returns 4. D's current value is \min(8, 4) = 4. Update D's \beta = 4.
    - D explores N: N returns 2. D's current value is \min(4, 2) = 2. Update D's \beta = 2.
    - Pruning Check at D: At this point, for node D, we have \beta = 2. The \alpha value inherited from A is \alpha = 2. Since \alpha \ge \beta (2 \ge 2), any subsequent children of D would be pruned. However, N is the last child of D, so no further pruning occurs within this branch.
    - D returns 2 to A.
    - At A, update \alpha = \max(2, 2) = 2.

    The final value for MAX is 2. The only node among the options that was pruned is J.

    Answer: \boxed{J}
    "
    :::
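The trace above can be checked mechanically. The sketch below reconstructs the leaf utilities from the solution; J's utility is never stated in the text, so an arbitrary stand-in value is used (it is pruned before being read either way):

```python
# Leaf utilities reconstructed from the solution trace. J's utility is not
# given in the text, so 7 is an arbitrary stand-in (it is pruned unread).
tree = {"B": {"E": 3, "F": 5, "G": 2},
        "C": {"H": 9, "I": 1, "J": 7},
        "D": {"L": 8, "M": 4, "N": 2}}

visited = []  # leaves actually evaluated, in left-to-right order

def min_value(children, alpha, beta):
    """MIN node: take the smallest child utility, pruning when alpha >= beta."""
    value = float("inf")
    for name, utility in children.items():
        visited.append(name)
        value = min(value, utility)
        beta = min(beta, value)
        if alpha >= beta:   # MAX already has a move at least this good elsewhere
            break           # prune the remaining children
    return value

def max_root(subtrees):
    """MAX root: take the best MIN-node value, tightening alpha as we go."""
    alpha = float("-inf")
    for children in subtrees.values():
        alpha = max(alpha, min_value(children, alpha, float("inf")))
    return alpha

root_value = max_root(tree)
```

Running this evaluates every leaf except J and yields the root value 2, matching the trace.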

    :::question type="NAT" question="A state space is represented as a tree where every internal node has a branching factor of 4. If the shallowest goal node is at depth 3 (with the root at depth 0), what is the maximum number of nodes that must be generated by a Breadth-First Search (BFS) before it finds the solution? Assume the goal is the last node to be explored at its depth level." answer="85" hint="BFS explores the tree level by level. Calculate the total number of nodes at depths 0, 1, 2, and 3." solution="
    Breadth-First Search (BFS) performs a complete level-by-level exploration of the search tree. The number of nodes at any given depth k in a tree with a uniform branching factor b is b^k.

    The total number of nodes generated by BFS to find a goal at depth d (assuming it's the last node at that level) is the sum of all nodes from the root (depth 0) up to and including depth d.

    The formula for the total nodes generated is:

    N = \sum_{k=0}^{d} b^k

    Given parameters:

    • Branching factor, b = 4

    • Depth of the goal node, d = 3


    We calculate the number of nodes at each level:
    • Depth 0 (root): 4^0 = 1 node

    • Depth 1: 4^1 = 4 nodes

    • Depth 2: 4^2 = 16 nodes

    • Depth 3: 4^3 = 64 nodes


    The maximum number of nodes generated is the sum of nodes at all these levels:
    N = 1 + 4 + 16 + 64 = 85

    Alternatively, we can use the formula for the sum of a geometric series:

    N = \frac{b^{d+1} - 1}{b - 1} = \frac{4^{3+1} - 1}{4 - 1} = \frac{4^4 - 1}{3} = \frac{256 - 1}{3} = \frac{255}{3} = 85

    Thus, a maximum of 85 nodes will be generated.
    Answer: \boxed{85}
    "
    :::
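The geometric-series count in the solution above is easy to sanity-check in code; `bfs_nodes` is a hypothetical helper name used only for illustration:

```python
def bfs_nodes(b, d):
    """Worst-case nodes generated by BFS: sum of b^k for k = 0..d."""
    return sum(b ** k for k in range(d + 1))

total = bfs_nodes(4, 3)  # 1 + 4 + 16 + 64
```

The closed form (b^(d+1) - 1) / (b - 1) gives the same count, confirming the level-by-level sum.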

    :::question type="MCQ" question="Which of the following statements about search algorithm properties is FALSE?" options=["An A* search with an admissible but inconsistent heuristic is guaranteed to find an optimal solution, but may re-open nodes on the closed list.", "Depth-First Search is not complete in state spaces with infinite paths or cycles.", "Greedy Best-First Search is an optimal algorithm because it always expands the node that is heuristically closest to the goal.", "Uniform Cost Search is a special case of A* search where the heuristic function h(n) is always zero."] answer="C" hint="Consider whether minimizing the heuristic value at each step guarantees minimizing the total path cost." solution="
    Let us analyze each statement to determine its validity.

    • (A) An A* search with an admissible but inconsistent heuristic is guaranteed to find an optimal solution, but may re-open nodes on the closed list. This statement is TRUE. Admissibility (h(n) \le h^*(n)) is the sole condition required to guarantee the optimality of A*. Consistency (a stronger condition) guarantees that once a node is expanded, a better path to it will not be found later, thus avoiding the need to re-open closed nodes. If a heuristic is admissible but not consistent, optimality is maintained, but efficiency may decrease due to re-opening nodes.
    • (B) Depth-First Search is not complete in state spaces with infinite paths or cycles. This statement is TRUE. DFS explores a single path as deeply as possible before backtracking. If it goes down an infinite path, it will never backtrack to explore other paths, and thus will never find a solution that may exist on a different branch.
    • (C) Greedy Best-First Search is an optimal algorithm because it always expands the node that is heuristically closest to the goal. This statement is FALSE. Greedy Best-First Search uses the evaluation function f(n) = h(n). It expands the node that appears to be closest to the goal. However, this approach can lead it down a path that seems promising initially but turns out to be a long and costly dead end. It does not account for the cost already incurred (g(n)), and is therefore both incomplete (can get stuck in loops) and not optimal.
    • (D) Uniform Cost Search is a special case of A* search where the heuristic function h(n) is always zero. This statement is TRUE. The evaluation function for A* is f(n) = g(n) + h(n). If we set h(n) = 0 for all nodes n, the function becomes f(n) = g(n). This is precisely the evaluation function for Uniform Cost Search, which always expands the node with the lowest path cost from the start.
    Therefore, the false statement is (C).
    Answer: \boxed{C}
    "
    :::

    :::question type="NAT" question="In a grid-based pathfinding problem, an agent can move one step at a time to any of its 8 adjacent cells (including diagonals). The cost of a horizontal or vertical move is 1, and the cost of a diagonal move is \sqrt{2}. The goal is at coordinates (15, 20). What is the Chebyshev distance heuristic value for a node located at coordinates (8, 12)?" answer="8" hint="The Chebyshev distance (or L_\infty distance) between two points (x_1, y_1) and (x_2, y_2) is defined as \max(|x_1 - x_2|, |y_1 - y_2|). This heuristic is admissible for a grid where diagonal movement is allowed." solution="
    The Chebyshev distance is an admissible heuristic for grids where 8-directional movement is permitted. It calculates the minimum number of moves required to reach the goal if movement is unrestricted, which corresponds to the maximum difference in either the x or y coordinates.

    The formula for the Chebyshev distance D_\infty between a node n at (x_n, y_n) and the goal g at (x_g, y_g) is:

    h(n) = D_\infty = \max(|x_n - x_g|, |y_n - y_g|)

    Given:

    • Node coordinates: (x_n, y_n) = (8, 12)

    • Goal coordinates: (x_g, y_g) = (15, 20)


    First, we calculate the absolute differences in the coordinates:
    • Difference in x-coordinates: |8 - 15| = |-7| = 7

    • Difference in y-coordinates: |12 - 20| = |-8| = 8


    Next, we take the maximum of these two values:
    h(n) = \max(7, 8) = 8

    The Chebyshev distance heuristic value is 8. This represents the fact that at least 8 moves are required to cover the distance of 8 units in the y-direction, during which the distance of 7 units in the x-direction can also be covered simultaneously via diagonal moves.
    Answer: \boxed{8}
    "
    :::
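As a quick check, the heuristic computed above is a one-liner; `chebyshev` is a hypothetical helper name used only for illustration:

```python
def chebyshev(node, goal):
    """L-infinity distance: the max of the absolute coordinate differences.
    Admissible for 8-directional grids whenever every move costs at least 1."""
    (xn, yn), (xg, yg) = node, goal
    return max(abs(xn - xg), abs(yn - yg))

h_value = chebyshev((8, 12), (15, 20))
```

Diagonal moves cover one unit in both axes at once, which is why only the larger coordinate gap bounds the minimum number of moves.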

    ---

    What's Next?

    💡 Continue Your GATE Journey

    Having completed Search Algorithms, you have established a firm foundation for several advanced topics in Artificial Intelligence. The principles of state-space traversal, optimality, and complexity analysis are recurring themes.

    Key connections:

      • Relation to Previous Learning: This chapter builds directly upon your knowledge of Data Structures (queues for BFS, stacks for DFS, priority queues for A*) and Design and Analysis of Algorithms (asymptotic notation for complexity analysis). The search strategies we have discussed are practical applications of graph traversal algorithms.
      • Foundation for Future Chapters:
    - Logic and Knowledge Representation: The search for a proof in a logical system is a form of state-space search, where states are logical sentences and operators are inference rules.
    - Planning: Automated planning can be viewed as a large-scale search problem. Planning algorithms search through a space of world states to find a sequence of actions that leads to a goal state.
    - Constraint Satisfaction Problems (CSPs): CSPs utilize a specialized form of search, namely backtracking search, which is a variant of DFS. The local search techniques we briefly mentioned are also highly relevant here.
    - Machine Learning: The concept of searching for an optimal solution in a vast space is central to machine learning. For instance, finding the optimal set of weights in a neural network is a search problem (solved via optimization algorithms like gradient descent), and reinforcement learning involves searching for an optimal policy in a state space.

    🎯 Key Points to Remember

    • Master the core concepts in Search Algorithms before moving to advanced topics
    • Practice with previous year questions to understand exam patterns
    • Review short notes regularly for quick revision before exams
