Computer Organization and Architecture: Memory and I/O Systems (Updated: Mar 2026)

I/O Interface

Comprehensive study notes on I/O Interface for GATE CS preparation. This chapter covers key concepts, formulas, and examples needed for your exam.


Overview

In our study of computer organization, we have thus far concentrated on the internal workings of the central processing unit and the memory hierarchy. However, a computer system's utility is ultimately defined by its ability to interact with the external world. This interaction is facilitated by a diverse array of peripheral devices—from keyboards and displays to disk drives and network cards. The fundamental challenge lies in reconciling the high speed of the CPU and main memory with the comparatively slow and asynchronous nature of these I/O devices. This chapter is dedicated to the study of the I/O interface, the critical hardware and software components that mediate communication between the processor and the periphery.

We shall investigate the various mechanisms and protocols designed to manage the flow of data and control signals, ensuring that information is transferred reliably and efficiently. An understanding of these mechanisms is not merely a theoretical exercise; it is of paramount importance for the GATE examination. Questions frequently probe the performance trade-offs inherent in different I/O strategies, requiring a quantitative analysis of CPU utilization, data transfer rates, and system throughput. A thorough grasp of the concepts presented herein will equip the aspirant to solve complex problems related to system performance and design.

---

Chapter Contents

| # | Topic | What You'll Learn |
|---|-------|-------------------|
| 1 | I/O Modes | Techniques for managing I/O data transfers. |

---

Learning Objectives

By the End of This Chapter

After completing this chapter, you will be able to:

  • Differentiate between Programmed I/O, Interrupt-driven I/O, and Direct Memory Access (DMA).

  • Analyze the performance implications and CPU overhead associated with each I/O transfer mode.

  • Calculate data transfer rates, CPU utilization, and synchronization requirements for I/O subsystems.

  • Explain the role of I/O controllers, such as the DMA Controller (DMAC), in facilitating data transfers.

---

We now turn our attention to I/O Modes...

Part 1: I/O Modes

Introduction

The central processing unit (CPU) and main memory operate at electronic speeds, executing millions of instructions per second. In stark contrast, input/output (I/O) devices, being largely electromechanical, operate at significantly slower speeds. This fundamental speed disparity presents a critical challenge in computer system design. To manage this mismatch and ensure efficient system operation, various methods for coordinating data transfer between the CPU, memory, and peripheral devices have been developed. These methods are broadly classified as I/O modes.

We shall explore the three principal modes of I/O data transfer: Programmed I/O, Interrupt-driven I/O, and Direct Memory Access (DMA). Understanding the mechanisms, advantages, and disadvantages of each mode is essential for analyzing system performance, a topic frequently examined in the GATE. Our discussion will focus on the operational principles and quantitative analysis of these techniques, providing the necessary foundation for solving complex problems related to I/O performance.

📖 I/O Mode

An I/O Mode, or I/O technique, is a method used to manage and synchronize the transfer of data between the processor, main memory, and peripheral (I/O) devices. The choice of mode dictates how the CPU is involved in the data transfer process and significantly impacts overall system performance and CPU utilization.

---

Key Concepts

1. Programmed I/O

Programmed I/O is the most straightforward method of data transfer. In this mode, the CPU has direct and continuous control over the I/O operation. To transfer data, the CPU executes a program that initiates, directs, and terminates the I/O task.

The core of this technique involves the CPU repeatedly checking the status of the I/O device to determine if it is ready for the next data transfer. This process of periodic status checking is known as polling. Because the CPU remains in a loop waiting for the I/O device, this approach is also referred to as busy-waiting.

The operational sequence is as follows:

  • The CPU issues a command to the I/O module.

  • The CPU polls the status register of the I/O module to check if the operation is complete or if the device is ready.

  • If the device is not ready, the CPU continues to poll, effectively being "stuck" in a waiting loop.

  • Once the device is ready, the CPU performs the data read or write operation.

  • This process is repeated for every unit of data (e.g., byte or word).

The principal disadvantage of Programmed I/O is its profound inefficiency. The CPU spends a substantial amount of time in the polling loop, unable to perform other computational tasks. Consequently, CPU utilization is extremely low, making this mode suitable only for very simple or low-speed applications where CPU time is not a critical resource.
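The busy-wait loop can be sketched as a short simulation (the `SimulatedDevice` class is illustrative; a real driver would read a hardware status register instead):

```python
# Minimal simulation of programmed I/O with busy-waiting.
# A real driver would poll a memory-mapped status register; here a
# simulated device becomes ready only after a fixed number of polls.

class SimulatedDevice:
    """Stands in for an I/O module with a status register and a data register."""
    def __init__(self, polls_until_ready, data):
        self.polls_until_ready = polls_until_ready
        self.data = data

    def status_ready(self):
        # Each status check decrements the countdown; ready when it hits zero.
        if self.polls_until_ready > 0:
            self.polls_until_ready -= 1
            return False
        return True

def programmed_io_read(device):
    """CPU issues the command, then busy-waits (polls) until the device is ready."""
    polls = 0
    while not device.status_ready():   # busy-wait loop: CPU does nothing else here
        polls += 1
    return device.data, polls          # finally perform the data read

dev = SimulatedDevice(polls_until_ready=3, data=0x41)
value, polls = programmed_io_read(dev)
print(value, polls)  # 65 3
```

Every cycle spent in the `while` loop is a cycle the CPU could have used for useful work, which is exactly the inefficiency quantified in the worked example below.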

Worked Example:

Problem: A processor polls a low-speed device every 50 ms. Each poll consumes 500 CPU cycles. The processor clock speed is 2 GHz. If the device provides new data once every second, calculate the fraction of CPU time consumed solely by polling in one second.

Solution:

Step 1: Calculate the number of polls per second.
The processor polls every 50 ms.

$$\text{Number of polls per second} = \frac{1\ \text{second}}{50\ \text{ms}} = \frac{1000\ \text{ms}}{50\ \text{ms}} = 20\ \text{polls}$$

Step 2: Calculate the total CPU cycles spent on polling per second.
Each poll takes 500 cycles.

$$\text{Total polling cycles per second} = 20\ \text{polls} \times 500\ \text{cycles/poll} = 10000\ \text{cycles}$$

Step 3: Calculate the total number of CPU cycles available per second.
The processor clock speed is 2 GHz.

$$\text{Total available cycles per second} = 2 \times 10^9\ \text{cycles}$$

Step 4: Compute the fraction of CPU time spent on polling.

$$\text{Fraction} = \frac{\text{Total polling cycles}}{\text{Total available cycles}} = \frac{10000}{2 \times 10^9} = \frac{1}{2 \times 10^5} = 5 \times 10^{-6}$$

Answer: The fraction of CPU time consumed by polling is $\boxed{5 \times 10^{-6}}$.
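The same arithmetic generalizes to a small helper (the function name is illustrative):

```python
def polling_overhead_fraction(poll_interval_s, cycles_per_poll, clock_hz):
    """Fraction of CPU time spent polling: (polls/sec * cycles/poll) / clock rate."""
    polls_per_second = 1.0 / poll_interval_s
    polling_cycles_per_second = polls_per_second * cycles_per_poll
    return polling_cycles_per_second / clock_hz

# Parameters from the worked example: 50 ms interval, 500 cycles/poll, 2 GHz clock.
frac = polling_overhead_fraction(50e-3, 500, 2e9)
print(frac)  # 5e-06
```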

---

2. Interrupt-Driven I/O

    To overcome the inefficiency of Programmed I/O, the mechanism of interrupts was introduced. In Interrupt-driven I/O, the CPU does not wait for the I/O device to be ready. Instead, the CPU initiates an I/O command and then proceeds to execute other instructions. When the I/O device is ready to transfer data, it signals the CPU by sending an interrupt request signal.

    Upon receiving an interrupt, the CPU suspends its current execution, saves its context (most importantly, the Program Counter), and transfers control to a special routine called the Interrupt Service Routine (ISR) or interrupt handler. The ISR performs the necessary data transfer. Once the ISR is complete, the CPU restores the saved context and resumes the execution of the interrupted program.

    The Interrupt Handling Cycle:
    A crucial sequence of events occurs when an interrupt is recognized:

  • Finish Current Instruction: The CPU always completes the execution of the instruction it is currently processing. An interrupt is typically not serviced in the middle of an instruction.

  • Save Processor State: The contents of the Program Counter (PC) are saved onto the system stack. This saved address is the return address for the interrupted program. Other critical registers may also be saved.

  • Load ISR Address: The PC is loaded with the starting address of the appropriate Interrupt Service Routine. The system then begins executing the ISR.


Figure: the user program executes until an interrupt occurs; the CPU transfers control to the ISR, which services the I/O; the user program then resumes execution.
    Vectored vs. Non-Vectored Interrupts:
    How the CPU finds the ISR address differentiates two types of interrupts:

    • Non-Vectored (Polling) Interrupts: The CPU, upon interruption, branches to a generic ISR. This routine then polls all I/O devices to identify which one raised the interrupt. This adds overhead.

    • Vectored Interrupts: The interrupting device itself provides information, typically an index or a part of the address (an "interrupt vector"), which directs the CPU to the specific ISR for that device. This is significantly faster as it eliminates the polling step.


    Interrupt-driven I/O offers a substantial improvement in CPU utilization over Programmed I/O. However, the CPU is still responsible for managing the data transfer itself (i.e., moving data between the I/O module and memory), which can consume considerable processing time for large data volumes.
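The save-PC / load-ISR sequence and the vectored lookup described above can be sketched as a small simulation (the vector table contents and ISR names are illustrative, not tied to any real ISA):

```python
# Sketch of vectored interrupt handling: save the old PC on a stack,
# jump to the ISR found via the device-supplied vector, then restore.

stack = []                                       # system stack for saved state
vector_table = {0: "timer_isr", 1: "disk_isr"}   # vector number -> ISR entry point

def handle_interrupt(pc, vector):
    stack.append(pc)                   # step 2: save return address (old PC)
    isr = vector_table[vector]         # vectored: no polling of devices needed
    log = [f"enter {isr}"]             # step 3: PC loaded with ISR start address
    # ... the ISR body would move data between the I/O module and memory ...
    log.append(f"return to PC={stack.pop()}")  # restore saved PC, resume program
    return log

trace = handle_interrupt(pc=0x1000, vector=1)
print(trace)  # ['enter disk_isr', 'return to PC=4096']
```

A non-vectored system would replace the dictionary lookup with a loop that queries each device in turn, which is precisely the extra overhead vectored interrupts avoid.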

---

3. Direct Memory Access (DMA)

    For high-speed, bulk data transfers, even Interrupt-driven I/O can be a bottleneck. Direct Memory Access (DMA) is the most advanced I/O mode, designed to solve this problem by offloading the entire data transfer process from the CPU.

    A specialized hardware component, the DMA Controller (DMAC), takes over the responsibility of transferring data directly between an I/O device and main memory. The CPU is only involved at the beginning and end of the transfer.

    The DMA transfer process is as follows:

  • Initiation: The CPU programs the DMAC with the necessary information: the starting memory address, the number of words to be transferred, the I/O device address, and the direction of transfer (read/write).

  • Transfer: The CPU then continues with other work. The DMAC takes control of the system bus and manages the data transfer directly between the I/O device and memory.

  • Completion: Once the entire block of data has been transferred, the DMAC sends an interrupt signal to the CPU to notify it that the operation is complete.

The DMAC needs to use the system bus (address, data, and control lines) to perform its task. Since the CPU also needs the bus, a mechanism for bus arbitration is required. This leads to three primary modes of DMA operation.
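The three-step initiate/transfer/complete sequence can be sketched as a small simulation (the `SimulatedDMAC` class and its register-style parameters are illustrative):

```python
# Sketch of a DMA block transfer: the CPU programs the DMAC once,
# the DMAC copies the whole block, then raises a single completion interrupt.

class SimulatedDMAC:
    def __init__(self):
        self.interrupt_raised = False

    def program(self, src, memory, start_addr, count):
        # Step 1 (CPU): load starting address, word count, and source device.
        self.src, self.memory = src, memory
        self.start_addr, self.count = start_addr, count

    def run_transfer(self):
        # Step 2 (DMAC): move the entire block with no CPU involvement per word.
        for i in range(self.count):
            self.memory[self.start_addr + i] = self.src[i]
        # Step 3 (DMAC): signal completion to the CPU with one interrupt.
        self.interrupt_raised = True

device_buffer = [10, 20, 30, 40]   # data sitting in the I/O device
main_memory = [0] * 8              # destination region in main memory

dmac = SimulatedDMAC()
dmac.program(device_buffer, main_memory, start_addr=2, count=4)
dmac.run_transfer()
print(main_memory, dmac.interrupt_raised)  # [0, 0, 10, 20, 30, 40, 0, 0] True
```

Note that the CPU appears only at `program()` (initiation) and when the completion interrupt is handled, which is the defining property of DMA.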

    DMA Transfer Modes:

    * Burst Mode (or Block Transfer Mode): The DMAC is granted exclusive access to the system bus and transfers the entire block of data in one contiguous burst. During this time, the CPU is idle, unable to access the bus. This mode provides the highest possible data transfer rate but temporarily halts the CPU. It is ideal for time-critical, bulk data transfers.
    * Cycle Stealing Mode: The DMAC gains control of the bus for just one bus cycle—enough time to transfer a single unit of data (e.g., a word). After the transfer, it relinquishes the bus to the CPU. The DMAC repeatedly "steals" cycles from the CPU. This interleaves CPU execution with DMA transfers, slowing down the CPU but not halting it completely.
    * Transparent Mode: The DMAC only transfers data when the CPU is not using the bus (e.g., during instruction decoding phases where bus access is not required). This mode has zero impact on CPU execution speed but results in the lowest data transfer rate, as the DMAC must wait for opportunities.

    📐 DMA Data Transfer Rate (Cycle Stealing)

    $$\text{Data Transfer Rate (bits/sec)} = \text{Processor Frequency} \times \frac{\%\ \text{Cycles Stolen}}{100} \times \text{Data per Cycle}$$

    Variables:

      • Processor Frequency: Clock speed of the CPU in Hertz (Hz).

      • %Cycles Stolen: The percentage of CPU cycles used for DMA transfers.

      • Data per Cycle: The amount of data transferred in one stolen cycle, which must be converted to bits for the final rate.


    When to use: For NAT problems in GATE asking to calculate the transfer rate of a device using cycle stealing DMA.
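The formula translates directly into a small helper for checking NAT answers (the function name is illustrative):

```python
def dma_rate_bps(clock_hz, percent_cycles_stolen, bits_per_cycle):
    """Cycle-stealing DMA transfer rate in bits per second."""
    return clock_hz * (percent_cycles_stolen / 100.0) * bits_per_cycle

# 5 MHz clock, 2% of cycles stolen, 16 bits transferred per stolen cycle.
rate = dma_rate_bps(5e6, 2, 16)
print(rate / 1000)  # 1600.0  (kbps)
```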

    Worked Example:

    Problem: A 5 MHz processor is interfaced with a device via a DMA controller. The DMA controller uses cycle stealing to transfer 16 bits of data in one CPU cycle. If 2% of the processor cycles are stolen for DMA, what is the data transfer rate of the device in kilobits per second (kbps)?

    Solution:

    Step 1: Identify the given parameters.
    Processor Frequency = $5\ \text{MHz} = 5 \times 10^6\ \text{Hz}$
    Percentage of cycles stolen = $2\%$
    Data transferred per cycle = $16\ \text{bits}$

    Step 2: Apply the data transfer rate formula.

    $$\text{Rate} = \text{Processor Frequency} \times \frac{\%\ \text{Cycles Stolen}}{100} \times \text{Data per Cycle}$$

    Step 3: Substitute the values into the formula.

    $$\text{Rate} = (5 \times 10^6\ \text{cycles/sec}) \times \frac{2}{100} \times (16\ \text{bits/cycle})$$

    Step 4: Compute the result.

    $$\text{Rate} = (5 \times 10^6) \times 0.02 \times 16 = 100{,}000 \times 16 = 1{,}600{,}000\ \text{bits/sec}$$

    Step 5: Convert the rate to kilobits per second (kbps).

    $$\text{Rate} = \frac{1{,}600{,}000}{1000}\ \text{kbps} = 1600\ \text{kbps}$$

    Answer: The data transfer rate is $\boxed{1600\ \text{kbps}}$.

    ---

    Problem-Solving Strategies

    💡 GATE Strategy: Choosing the Right I/O Mode

    When a question asks to compare I/O modes or select the best one for a scenario, use these heuristics:

      • Highest Throughput / Bulk Data Transfer: The answer is almost always DMA, specifically Burst Mode DMA. DMA is designed for moving large blocks of data efficiently (e.g., from a hard disk to memory).
      • Improved CPU Utilization (over polling): Interrupt-driven I/O is the primary answer. It frees the CPU from busy-waiting.
      • Lowest CPU Utilization: Programmed I/O is the answer. The CPU is tied up for the entire duration.
      • Quantitative Overhead Comparison: For problems comparing the time spent in polling vs. interrupts, always calculate the total time consumed by each method over a fixed interval (usually one second).
    - Polling Time: (Polls/sec) × (Time/poll) + (Events/sec) × (Processing Time/event)
    - Interrupt Time: (Interrupts/sec) × (Time/interrupt service)
      • DMA Calculations: Pay extremely close attention to units. Processor speed is in Hz (cycles/sec), data size is often in bytes but the rate is asked in bits/sec. Always perform the conversion from bytes to bits (1 byte = 8 bits).
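The polling-versus-interrupt overhead comparison above can be checked numerically (the figures in the example call are illustrative):

```python
def polling_time_per_sec(polls_per_sec, time_per_poll, events_per_sec=0, proc_time=0):
    """CPU time per second spent on polling plus processing the events that arrive."""
    return polls_per_sec * time_per_poll + events_per_sec * proc_time

def interrupt_time_per_sec(interrupts_per_sec, time_per_service):
    """CPU time per second spent servicing interrupts."""
    return interrupts_per_sec * time_per_service

# Illustrative: poll 1000x/sec at 10 us each vs. 100 interrupts/sec at 25 us each.
# Times are kept in microseconds so the results stay exact integers.
t_poll_us = polling_time_per_sec(1000, 10)
t_intr_us = interrupt_time_per_sec(100, 25)
print(t_poll_us, t_intr_us)  # 10000 2500
```

Here the interrupt scheme costs a quarter of the polling scheme's CPU time per second, the typical pattern when events are rarer than the polling rate needed to catch them.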

    ---

    Common Mistakes

    ⚠️ Avoid These Errors
      • Confusing Programmed I/O and Interrupt-driven I/O efficiency: Stating that Programmed I/O has better CPU utilization.
    Correct Approach: Programmed I/O has the worst CPU utilization due to busy-waiting. Interrupt-driven I/O is significantly more efficient because the CPU can perform other tasks while waiting for the I/O device.
      • Incorrectly ordering the interrupt handling sequence: Assuming the PC is saved after the new ISR address is loaded.
    Correct Approach: The CPU must first finish the current instruction, then save the current PC (the return address), and only then load the PC with the ISR's starting address.
      • Equating Cycle Stealing and Burst Mode Throughput: Assuming cycle stealing is as fast as burst mode for large transfers.
    Correct Approach: For bulk data transfer, burst mode has a higher effective throughput. It monopolizes the bus, avoiding the overhead of repeatedly acquiring and relinquishing the bus that occurs in cycle stealing.
      • Forgetting unit conversions in DMA problems: Calculating the data rate in bytes/sec when the question asks for bits/sec.
    Correct Approach: Always double-check the required units in the question (e.g., bps, kbps, Mbps). Remember to multiply by 8 when converting from bytes to bits.

    ---

    Practice Questions

    :::question type="MCQ" question="Which of the following statements accurately describes the relationship between different I/O modes?" options=["Programmed I/O offers the highest data transfer throughput for large files.","In cycle stealing DMA, the CPU is completely halted during the entire data block transfer.","Interrupt-driven I/O generally provides better CPU utilization than programmed I/O.","Vectored interrupts require the CPU to poll devices to identify the source of the interrupt."] answer="Interrupt-driven I/O generally provides better CPU utilization than programmed I/O." hint="Analyze the CPU's involvement in each mode. Busy-waiting is the key factor for utilization." solution="

    • Option A is incorrect. DMA (Burst Mode) offers the highest throughput for large files.

    • Option B is incorrect. In cycle stealing, the CPU is only paused for one bus cycle at a time, not for the entire block transfer. This description fits Burst Mode.

    • Option C is correct. Interrupt-driven I/O allows the CPU to execute other tasks while the I/O device is busy, whereas Programmed I/O forces the CPU into a busy-wait loop, leading to poor utilization.

    • Option D is incorrect. This describes non-vectored interrupts. Vectored interrupts allow the device to directly provide the ISR address, eliminating the need for polling.

    "
    :::

    :::question type="NAT" question="A system uses interrupt-driven I/O to handle a data stream from a sensor. An interrupt occurs every 200 μs. The interrupt service routine takes 10 μs to execute. Calculate the percentage of CPU time spent on handling these interrupts." answer="5" hint="First, find the number of interrupts that occur per second. Then, calculate the total time spent servicing these interrupts in one second. Express this as a percentage of the total time (1 second)." solution="
    Step 1: Calculate the number of interrupts per second.

    $$\text{Time between interrupts} = 200\ \mu s = 200 \times 10^{-6}\ s$$

    $$\text{Number of interrupts per second} = \frac{1}{200 \times 10^{-6}} = \frac{10^6}{200} = 5000\ \text{interrupts/sec}$$

    Step 2: Calculate the total time spent in the ISR per second.
    Total ISR time per second = (Number of interrupts/sec) × (Time per ISR)

    $$\text{Total ISR time per second} = 5000 \times 10\ \mu s = 50000\ \mu s = 50000 \times 10^{-6}\ s = 0.05\ s$$

    Step 3: Calculate the percentage of CPU time.

    $$\text{Percentage} = \frac{\text{Total ISR time per second}}{\text{Total time in one second}} \times 100 = \frac{0.05\ s}{1\ s} \times 100 = 5\%$$

    Result:
    Answer: \boxed{5}
    "
    :::

    :::question type="NAT" question="A computer system has a 400 MHz processor. A DMA controller is used to transfer 32-bit words from a device to memory using cycle stealing. The DMA transfer takes 1 CPU cycle per word. If 1.25% of the CPU cycles are stolen for this purpose, the data transfer rate of the device is __________ Megabits per second (Mbps)." answer="160" hint="Use the DMA data transfer rate formula. Be careful with units: MHz, 32-bit words, and the final answer in Mbps." solution="
    Step 1: Identify the given parameters.
    Processor Frequency = $400\ \text{MHz} = 400 \times 10^6\ \text{Hz}$
    Percentage of cycles stolen = $1.25\%$
    Data transferred per cycle = $32\ \text{bits}$

    Step 2: Apply the data transfer rate formula.

    $$\text{Rate (bps)} = \text{Processor Frequency} \times \frac{\%\ \text{Cycles Stolen}}{100} \times \text{Data per Cycle}$$

    Step 3: Substitute the values.

    $$\text{Rate} = (400 \times 10^6\ \text{cycles/sec}) \times \frac{1.25}{100} \times (32\ \text{bits/cycle})$$

    Step 4: Compute the result in bits per second.

    $$\text{Rate} = (400 \times 10^6) \times 0.0125 \times 32 = 5{,}000{,}000 \times 32 = 160{,}000{,}000\ \text{bps}$$

    Step 5: Convert the rate to Megabits per second (Mbps).

    $$\text{Rate} = \frac{160{,}000{,}000}{10^6}\ \text{Mbps} = 160\ \text{Mbps}$$

    Result:
    Answer: \boxed{160}
    "
    :::

    :::question type="MSQ" question="Which of the following statements about Direct Memory Access (DMA) are correct?" options=["The CPU is not involved in the initiation or termination of a DMA transfer.","In burst mode DMA, the DMAC relinquishes the bus to the CPU after every word transfer.","Cycle stealing mode generally causes less interference with CPU execution compared to burst mode.","DMA is most suitable for transferring large blocks of data between I/O devices and main memory."] answer="C,D" hint="Consider the role of the CPU and the bus arbitration mechanism for each DMA mode." solution="

    • A is incorrect. The CPU is responsible for initiating the DMA transfer by programming the DMAC and is notified of its completion via an interrupt. It is not involved in the data transfer itself.

    • B is incorrect. This describes cycle stealing mode. In burst mode, the DMAC holds the bus for the entire block transfer.

    • C is correct. Cycle stealing interleaves DMA and CPU operations, causing the CPU to slow down but not halt. Burst mode completely halts the CPU for the duration of the transfer, causing greater interference.

    • D is correct. The primary advantage and use case for DMA is to achieve high-throughput for bulk data transfers, significantly improving system performance for such tasks.

    "
    :::

    ---

    Summary

    Key Takeaways for GATE

    • Three Modes, Three Levels of CPU Involvement:

    * Programmed I/O: Maximum CPU involvement (busy-waiting). Lowest CPU utilization.
    * Interrupt-driven I/O: Moderate CPU involvement (initiates I/O, handles ISR). Greatly improved CPU utilization.
    * DMA: Minimum CPU involvement (initiates and terminates only). The DMAC handles the transfer. Best for bulk data.

    • Interrupt Handling is a Fixed Sequence: The processor must (1) finish the current instruction, (2) save the program counter (and other state), and (3) load the ISR address. Vectored interrupts are faster than non-vectored because they skip the device polling step.

    • DMA is About Throughput: For large data blocks, DMA is superior. Burst mode offers the highest throughput by monopolizing the bus, while cycle stealing offers a compromise by interleaving CPU and DMA activity. Be prepared to calculate the data transfer rate for cycle stealing mode.

    ---

    What's Next?

    💡 Continue Learning

    This topic connects to:

      • Memory Hierarchy: Understand how DMA interacts with caches. A DMA transfer typically bypasses the CPU cache and writes directly to main memory, which can lead to cache coherency issues that the system hardware must resolve.

      • Operating Systems: The concepts of interrupts and ISRs are fundamental to how an OS manages hardware, handles system calls, and performs context switching. The study of I/O modes in COA provides the hardware foundation for the I/O management functions of an OS.


    Master these connections for comprehensive GATE preparation!

    ---

    Chapter Summary

    In this chapter, we have conducted a thorough examination of the fundamental modes of I/O data transfer, which form the bridge between the central processing unit and peripheral devices. We began with the most basic method, Programmed I/O, and progressed to more sophisticated and efficient techniques, culminating in Direct Memory Access. The choice of an appropriate I/O mode is a critical design decision, balancing hardware complexity, CPU overhead, and the data transfer rate requirements of the peripheral.

    📖 I/O Interface - Key Takeaways
      • Programmed I/O is the simplest mode, where the CPU has direct and continuous control over the I/O operation. It is characterized by polling or "busy-waiting," which makes it highly inefficient for all but the simplest, slowest devices, as it fully occupies the CPU.
      • Interrupt-Driven I/O significantly improves upon Programmed I/O by allowing the CPU to perform other tasks while the I/O module is busy. The CPU is only involved when an I/O device signals its readiness via an interrupt. This introduces the overhead of context switching and executing an Interrupt Service Routine (ISR).
      • Direct Memory Access (DMA) provides the highest performance by offloading the entire data transfer process from the CPU. A dedicated DMA Controller (DMAC) manages the transfer of a block of data directly between the I/O device and main memory, requiring CPU intervention only at the beginning and end of the transfer.
      • DMA and Bus Arbitration: DMA operates by becoming the "bus master." In a process known as cycle stealing, the DMAC temporarily gains control of the system bus from the CPU to transfer a single word of data, causing the CPU to wait for one or more clock cycles.
      • I/O Addressing Schemes: Also distinguish between Memory-Mapped I/O, where I/O device registers are part of the main memory address space and can be accessed by any memory-reference instruction, and Isolated I/O (or I/O-Mapped I/O), which uses a separate address space for I/O and requires special instructions like `IN` and `OUT`.
      • The Performance Trade-Off: There exists a clear hierarchy: Programmed I/O involves maximum CPU overhead with minimal hardware. Interrupt-Driven I/O offers a balance. DMA minimizes CPU overhead at the cost of a more complex hardware controller (the DMAC). The selection is dictated by the device's data rate and the system's performance requirements.

    ---

    Chapter Review Questions

    :::question type="MCQ" question="A computer system uses DMA for data transfer from a hard disk to main memory. The CPU is executing a program when the DMA transfer is initiated. Which of the following statements most accurately describes the CPU's state during the transfer of a large block of data?" options=["The CPU is halted and only resumes after the entire block transfer is complete.","The CPU continues to execute the program, but may be temporarily paused for one or more clock cycles for each word transferred by the DMA controller.","The CPU is responsible for executing a special data transfer instruction for every word moved from the disk buffer to main memory.","The CPU is interrupted by the DMA controller after each word is transferred, and the CPU executes an ISR to place the word into memory."] answer="B" hint="Consider the mechanism of 'cycle stealing' and how DMA is designed to minimize CPU involvement." solution="The primary advantage of DMA is to free the CPU from the task of data transfer.

  • The CPU initiates the DMA transfer by providing the DMA Controller (DMAC) with the starting memory address, the number of words to transfer, and the direction of transfer.

  • Once initiated, the CPU resumes executing its program.

  • When the I/O device is ready to transfer a word, the DMAC requests control of the system bus from the CPU. This is known as cycle stealing.

  • The CPU relinquishes the bus, pausing its execution for one or more memory cycles. During this time, the DMAC transfers the word directly to memory.

  • After the word transfer, the DMAC releases the bus, and the CPU resumes its program execution from where it left off.

  • This process repeats for every word in the block. The CPU is only interrupted once, after the entire block has been transferred.

    Therefore, the CPU continues execution but is periodically paused. Option (A) is incorrect as the CPU is not halted for the entire duration. Option (C) describes Programmed I/O. Option (D) describes Interrupt-driven I/O, not DMA."
    :::

    :::question type="NAT" question="A hard disk transfers data at a rate of 64 MB/s. The data is transferred one 32-bit word at a time. The system's processor is running at 400 MHz and a memory cycle takes 25 ns. If DMA is employed for the transfer, what percentage of CPU time is consumed by the DMA controller for this I/O operation? (Answer up to one decimal place)" answer="40" hint="First, calculate how many words are transferred per second. Then, determine the total time the bus is occupied by the DMA controller in one second." solution="
    Step 1: Calculate the number of words transferred per second.

    The data transfer rate is 64 MB/s.

    $$\text{Rate} = 64 \times 10^6\ \text{bytes/second}$$

    The word size is 32 bits, which is 4 bytes.

    $$\text{Words/sec} = \frac{64 \times 10^6}{4} = 16 \times 10^6\ \text{words/sec}$$

    Step 2: Calculate the total time the bus is occupied by DMA per second.

    Each word transfer requires one memory cycle for the DMA controller to access the bus.
    Time for one memory cycle = $25\ \text{ns} = 25 \times 10^{-9}$ seconds.

    Total time stolen by DMA per second = (Number of words/sec) × (Time per memory cycle)

    $$T_{stolen} = (16 \times 10^6\ \text{words/sec}) \times (25 \times 10^{-9}\ \text{sec/word}) = 400 \times 10^{-3}\ \text{seconds} = 0.4\ \text{seconds}$$

    Step 3: Calculate the percentage of CPU time consumed.

    $$\%\ \text{CPU Time} = \frac{\text{Total time DMA uses the bus in one second}}{\text{Total time in one second}} \times 100 = \frac{0.4\ \text{s}}{1\ \text{s}} \times 100 = 40\%$$

    Result:
    Answer: \boxed{40}
    "
    :::

    :::question type="MCQ" question="Consider a system that uses interrupt-driven I/O. Which of the following factors contributes the MOST to the overhead associated with this I/O mode, thereby limiting its maximum data transfer rate?" options=["The time taken by the I/O device to place data in its buffer.","The time required to save the processor's current state and restore it after the interrupt.","The time spent by the CPU polling the status register of the I/O device.","The bandwidth of the system bus connecting the CPU and the I/O module."] answer="B" hint="Think about what the CPU must do every single time an interrupt is triggered, beyond just moving the data." solution="In interrupt-driven I/O, an interrupt is generated for each unit of data (e.g., a byte or word). For every single interrupt, the CPU must perform a sequence of actions that constitute overhead:

  • Complete the current instruction.

  • Push the program counter and processor status register (and potentially other registers) onto the stack. This is saving the state.

  • Load the program counter with the starting address of the appropriate Interrupt Service Routine (ISR).

  • Execute the ISR to process the data.

  • Pop the saved state from the stack back into the processor's registers. This is restoring the state.

  • Resume the interrupted program.

    The time taken for state saving and restoration (Option B) is pure overhead that must be incurred for every single data unit. This time directly limits how frequently interrupts can be serviced, and thus places a ceiling on the maximum data transfer rate.

    • Option (A) is the device latency, not the CPU overhead.
    • Option (C) describes polling, which is characteristic of Programmed I/O, not Interrupt-driven I/O.
    • Option (D), while a physical limitation, is a general constraint on all I/O, whereas the state-saving/restoration is the defining overhead specific to the interrupt mechanism itself."
    :::

    :::question type="NAT" question="An I/O device uses interrupt-driven I/O to transfer data to a processor. An interrupt is generated for every byte of data. The processor spends 4 \mu s on context switching (saving and restoring the state) and the interrupt service routine (ISR) takes 16 \mu s to execute. What is the maximum possible data transfer rate in \text{KB/s}? (Assume 1 \text{ KB} = 1000 \text{ Bytes})" answer="50" hint="The maximum rate is determined by the total time it takes to process a single interrupt. The processor cannot accept a new interrupt until the previous one is fully handled." solution="
    Step 1: Calculate the total time required to service one interrupt.

    The total time to handle one interrupt is the sum of the context switching time and the ISR execution time.

    \text{T}_{total} = \text{T}_{context\_switch} + \text{T}_{ISR}

    \text{T}_{total} = 4 \mu s + 16 \mu s = 20 \mu s

    Step 2: Determine the maximum number of interrupts that can be serviced per second.

    Since each interrupt takes 20 \mu s (20 \times 10^{-6} \text{ s}), the maximum frequency of interrupts is the reciprocal of this time.

    \text{Max Interrupts/sec} = \frac{1}{\text{T}_{total}} = \frac{1}{20 \times 10^{-6} \text{ s}}

    \text{Max Interrupts/sec} = \frac{10^6}{20} = 50,000 \text{ interrupts/sec}

    Step 3: Calculate the maximum data transfer rate.

    The problem states that one interrupt is generated for every byte of data. Therefore, the maximum data transfer rate in bytes per second is equal to the maximum number of interrupts per second.

    \text{Max Rate (B/s)} = 50,000 \text{ B/s}

    Step 4: Convert the rate to KB/s.

    The question specifies using the conversion 1 \text{ KB} = 1000 \text{ Bytes}.

    \text{Max Rate (KB/s)} = \frac{50,000 \text{ B/s}}{1000 \text{ B/KB}} = 50 \text{ KB/s}

    "
    :::
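The interrupt-rate calculation above is easy to verify numerically. The following is a minimal Python sketch of the same arithmetic (variable names are mine, not from the problem statement): total service time per interrupt, interrupts per second, and the resulting transfer rate at one byte per interrupt.

```python
# Maximum interrupt-driven transfer rate, one interrupt per byte.
context_switch_us = 4                    # save + restore processor state, in microseconds
isr_us = 16                              # ISR execution time, in microseconds
t_total_us = context_switch_us + isr_us  # 20 us to fully service one interrupt
interrupts_per_sec = 1e6 / t_total_us    # 10^6 / 20 = 50,000 interrupts (bytes) per second
rate_kb_s = interrupts_per_sec / 1000    # using 1 KB = 1000 bytes, as the problem specifies
print(rate_kb_s)                         # prints 50.0
```

Note that the CPU can do nothing else at this rate: it is saturated servicing interrupts, which is precisely why DMA is preferred for high-speed block transfers.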

    ---

    What's Next?

    💡 Continue Your GATE Journey

    Having completed our study of the I/O Interface and its transfer modes, you have established a firm foundation for understanding how a computer system interacts with the outside world. This chapter is not an isolated topic; rather, it is a crucial link between the processor and other key subsystems.

    Key connections:

      • Previous Chapters: Our discussion built directly upon the principles of the System Bus and CPU Architecture. We have now seen the practical application of bus arbitration (for DMA) and the CPU's instruction cycle, observing how it can be suspended by external interrupt requests. The distinction between memory-mapped and isolated I/O reinforces our understanding of system address spaces.
      • Upcoming Chapters: The concepts mastered here are prerequisite for a deeper understanding of several advanced topics:
    - Memory Hierarchy: DMA is the primary mechanism for transferring data between secondary storage (like disks) and main memory. Understanding its operation is essential when studying caching, virtual memory, and the performance implications of page faults.
    - Operating Systems: This chapter provides the hardware perspective for core OS concepts. The OS is responsible for managing all I/O devices. Interrupt handling, the implementation of ISRs, and the configuration of DMA transfers are all fundamental tasks managed by the operating system's kernel and device drivers.
    - Pipelining and Performance: External events like interrupts and DMA cycle stealing can disrupt the flow of a pipelined processor. They introduce stalls (or bubbles) into the pipeline, which directly impacts the processor's throughput (Instructions Per Cycle). A thorough grasp of I/O mechanisms is necessary to analyze these performance effects.

    🎯 Key Points to Remember

    • Master the core concepts in I/O Interface before moving to advanced topics
    • Practice with previous year questions to understand exam patterns
    • Review short notes regularly for quick revision before exams
