Cao Solved Paper 2023

The document is a solved paper for Computer Architecture and Organisation, covering various topics such as the differences between microprocessors and microcontrollers, definitions of complements, computer organization vs architecture, RISC, cache memory, pipeline conflicts, subroutines, memory hierarchy, and Booth's algorithm for multiplication. It includes detailed explanations, examples, and diagrams to illustrate concepts. Additionally, it discusses Flynn's taxonomy and key components like the Memory Address Register (MAR).


Computer Architecture and

Organisation
RTU SOLVED PAPER

2023

ER SAHIL KA GYAN
PART A 10 QUESTIONS= 20 MARKS

Q.1 Is there any difference between microprocessor and microcontroller? Explain
with example.

Microprocessors and microcontrollers differ mainly in functionality and degree of integration:

● Microprocessor: A general-purpose CPU on a single chip; memory and I/O peripherals are connected externally. It is designed for high-speed, diverse processing tasks. For example, a microprocessor can handle USB 3.0 or Gigabit Ethernet without requiring a secondary processor.

● Microcontroller: A compact integrated circuit designed for a specific task, combining a processor, memory, and input/output peripherals on a single chip. Microcontrollers often require an additional processor for demanding tasks such as high-speed data connectivity.

Example: A microprocessor is used in a desktop computer, while a microcontroller is commonly used in embedded systems such as washing machines or microwave ovens.

Q.2 Define (r-1)'s complement and r's complement using an example.

Ans.

● (r-1)'s Complement: The (r-1)'s complement of a number is obtained by subtracting each digit of the number from r-1.
Example: For binary, r = 2. The (r-1)'s complement (1's complement) of 001 is 110.

● r's Complement: The r's complement is obtained by adding 1 to the (r-1)'s complement.
Example: For binary, the 2's complement of 001 is 111 (1's complement 110, plus 1).
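As an illustrative sketch (the function names are my own, not from any standard library), both complements can be computed digit by digit for any base r:

```python
def r_minus_1_complement(digits, r):
    """(r-1)'s complement: subtract each digit from r-1."""
    return [(r - 1) - d for d in digits]

def r_complement(digits, r):
    """r's complement: the (r-1)'s complement plus 1."""
    comp = r_minus_1_complement(digits, r)
    carry = 1
    for i in range(len(comp) - 1, -1, -1):  # propagate the +1 from the LSB
        carry, comp[i] = divmod(comp[i] + carry, r)
        if carry == 0:
            break
    return comp

# Binary (r = 2): 1's complement of 001 is 110, 2's complement is 111
print(r_minus_1_complement([0, 0, 1], 2))  # [1, 1, 0]
print(r_complement([0, 0, 1], 2))          # [1, 1, 1]
```

The same functions work for decimal: the 9's complement of 546 is 453 and the 10's complement is 454.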

Q.3 Distinguish among computer organization and computer architecture.

S.No. | Computer Architecture                         | Computer Organization
1.    | Describes what the computer does.             | Describes how it does it.
2.    | Deals with functional behavior of computer systems. | Deals with structural relationships and implementation.
3.    | Focuses on high-level design.                 | Focuses on low-level design and hardware implementation.
4.    | Indicates the overall hardware design.        | Indicates performance and efficiency of hardware components.

Q.4 Explain RISC.

RISC (Reduced Instruction Set Computer) is a type of microprocessor architecture


that uses a small, highly optimized set of instructions. The goal of RISC is to
simplify the instruction set so that the processor can execute instructions more
quickly, typically achieving higher performance in terms of MIPS (Millions of
Instructions Per Second).
Since fewer instructions are used, the processor operates more efficiently, but it
requires more instructions for complex tasks compared to CISC (Complex Instruction
Set Computer).

Q.5 Explain the use of cache memory.

Cache memory is a small, high-speed memory located close to the CPU that stores frequently accessed data and instructions to speed
up processing. The primary role of cache memory is to reduce the time taken to access data from the main memory, thereby increasing
the overall performance of the computer. Cache memory is faster but more expensive and has less capacity than main memory.

Use:

● Improved Performance: By reducing the time the CPU takes to access data.

● Optimized Memory Access: It stores a copy of frequently used data and instructions, making retrieval quicker.

Q.6 What are the different conflicts that arise in pipeline? How do you remove the
conflicts? Describe.

Conflicts in pipelines are categorized into:

1. Structural Hazards: Arise when resources like registers or functional units are insufficient to handle multiple instructions
simultaneously.
Solution: Use additional hardware units or functional units.

2. Data Hazards: Occur when one instruction depends on the data of a previous instruction that has not yet completed.
Solution: Forwarding (bypassing) or instruction reordering.

3. Control Hazards: Arise from branch instructions that affect the flow of execution.
Solution: Use branch prediction techniques and delay slots.

Q.7 Describe subroutine.

A subroutine (also known as a function or procedure) is a block of code that is designed to perform a specific task. It is a modular piece
of code that can be called and executed from various places within a program. Subroutines are used to promote code reuse, reduce
redundancy, and make programs easier to manage and debug.

Key Characteristics of a Subroutine:

1. Encapsulation of Code:
A subroutine encapsulates a set of operations into a single unit. This unit can then be executed whenever needed, without
repeating the same code multiple times. It helps in breaking down complex programs into smaller, manageable pieces.
2. Modularity:
Subroutines help in breaking large programs into smaller, self-contained modules, making it easier to update or debug a specific
section of the program without affecting the entire system.
3. Reusability:
Once written, subroutines can be reused multiple times within the program, or even in other programs, without needing to rewrite
the same code. This is especially helpful in reducing the overall program size and avoiding code duplication.
4. Control Flow:
A subroutine allows control to be transferred to it from the main program or from another subroutine. Once the subroutine
completes its task, control is returned to the point where the subroutine was called.
Q.8 Draw and explain the memory hierarchy in a digital computer.

Memory hierarchy in a computer system is designed to balance cost, speed, and size. The hierarchy consists of different levels, each with
varying access speeds, costs, and sizes.

● Level 0: CPU Registers (fastest, smallest, most expensive)

● Level 1: Cache Memory (SRAM)

● Level 2: Main Memory (DRAM)

● Level 3: Magnetic Disk

● Level 4: Magnetic Tape (slowest, largest, least expensive)

Diagram of Memory Hierarchy:

CPU Registers   <- Fastest, Smallest
Cache Memory
Main Memory
Magnetic Disk
Magnetic Tape   <- Slowest, Largest

Q.9 Perform the 2's complement subtraction of smaller number (101011) from
large number (111001).

Step 1: Find the 2's complement of 101011:

Invert the bits → 010100
Add 1 → 010101

Step 2: Add this to 111001 (the larger number):

111001 + 010101 = 1 001110

Step 3: A final carry of 1 is produced, which indicates that the result is positive. Discard the carry.

Final result: 001110 (decimal 14; check: 111001 = 57, 101011 = 43, and 57 − 43 = 14).
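The carry-discarding procedure can be checked with a short sketch (the function name and the 6-bit width are illustrative):

```python
def twos_complement_subtract(minuend, subtrahend, bits=6):
    """Subtract by adding the 2's complement and discarding the end carry."""
    m = int(minuend, 2)
    s = int(subtrahend, 2)
    comp = ((~s) + 1) & ((1 << bits) - 1)  # 2's complement of the subtrahend
    total = m + comp
    carry = total >> bits                  # end carry of 1 means a positive result
    result = total & ((1 << bits) - 1)     # discard the carry
    return format(result, f'0{bits}b'), carry

print(twos_complement_subtract("111001", "101011"))  # ('001110', 1) → 14
```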

Q.10 What are the basic differences among a branch instruction, a call subroutine
instruction, and a program interrupt?

Branch Instruction: Alters the flow of control unconditionally or conditionally, based on the
program’s logic.

Call Subroutine Instruction: Transfers control to a specific subroutine, storing the return address
so the program can continue after the subroutine finishes.

Program Interrupt: Temporarily halts the program to handle external events like I/O operations or
exceptions and returns control once the event is processed.

PART B 5/7 QUESTIONS= 20 MARKS

Q.11: Explain the Fetch Cycle with Diagram

✅ Definition:

The Fetch Cycle is the first phase of the instruction cycle, in which the CPU retrieves the next instruction to be executed from main memory. The instruction is fetched using the address stored in the Program Counter (PC).

🧠 Purpose:

To load the instruction into the Instruction Register (IR) for decoding and execution.

Steps of Fetch Cycle:

At the start, the Program Counter (PC) holds the memory address of the next instruction.

🔹 Step-by-Step Process:

🔸 Step 1 (t1):

● The address in the Program Counter (PC) is copied to the Memory Address Register (MAR).
● This step prepares the memory to locate the instruction.

Micro-operation:
t1: MAR ← (PC)

🔸 Step 2 (t2):

● The address in the MAR is placed on the address bus.


● The Control Unit issues a READ signal.
● The instruction at that memory location is fetched into the Memory Buffer Register (MBR).
● Simultaneously, the PC is incremented by 1 to point to the next instruction.

Micro-operations:
t2: MBR ← Memory[MAR]
t2: PC ← PC + 1

🔸 Step 3 (t3):

● The content of MBR is copied into the Instruction Register (IR).

Micro-operation:
t3: IR ← MBR
Time Unit | Operation
t1        | MAR ← PC
t2        | MBR ← Memory[MAR], PC ← PC + 1
t3        | IR ← MBR

🧩 Diagram: Fetch Cycle (Register View)

[PC] → [MAR] → (Memory Address Bus)
                   ↓
               [Memory]
                   ↓ (Control Unit sends READ)
                   ↓ (Data Bus)
             [MBR] → [IR]

📈 Flowchart of Complete Instruction Cycle:

+-------------------+
|       Start       |
+-------------------+
          ↓
+-------------------+
| Fetch Instruction |
+-------------------+
          ↓
+--------------------+
| Decode Instruction |
+--------------------+
          ↓
+---------------------+
| Execute Instruction |
+---------------------+
          ↓
+---------------------+
| Interrupt? (Yes/No) |
+---------------------+
   ↓Yes            ↓No
+------------------+  +------------+
| Handle Interrupt |  | Next Cycle |
+------------------+  +------------+

✅ Conclusion:

The fetch cycle plays a crucial role in the CPU's operation by retrieving instructions from memory before they can be decoded and executed. This process ensures sequential execution and is tightly regulated by the clock cycles of the processor.
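The three micro-operations t1 to t3 can be sketched as a toy register-level simulation (the instruction strings and memory contents are invented for illustration):

```python
# Toy fetch-cycle simulation:
#   t1: MAR ← PC   t2: MBR ← Memory[MAR], PC ← PC + 1   t3: IR ← MBR
memory = {0: "LOAD R1, 500", 1: "ADD R1, R2"}  # illustrative instruction memory
cpu = {"PC": 0, "MAR": None, "MBR": None, "IR": None}

def fetch(cpu, memory):
    cpu["MAR"] = cpu["PC"]           # t1: MAR ← PC
    cpu["MBR"] = memory[cpu["MAR"]]  # t2: MBR ← Memory[MAR]
    cpu["PC"] += 1                   # t2: PC ← PC + 1 (overlapped with the read)
    cpu["IR"] = cpu["MBR"]           # t3: IR ← MBR
    return cpu["IR"]

print(fetch(cpu, memory))  # LOAD R1, 500
print(cpu["PC"])           # 1
```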
Q.12 Multiply (-37) × (21) using Booth’s Algorithm and
Show All Steps

✅ Booth’s Multiplication Algorithm:

Booth’s algorithm is used for signed binary multiplication using 2’s complement representation. It reduces the number of arithmetic
operations by encoding the multiplier.

🔹 Given:

● Multiplicand (M) = -37
● Multiplier (Q) = 21
● Using 8-bit registers

🔸 Step 1: Convert to Binary (8-bit 2's complement):

Value | Decimal | Binary (8-bit 2's complement)
M     | -37     | 11011011
Q     | 21      | 00010101

Let:

● M (Multiplicand) = 11011011
● Q (Multiplier) = 00010101
● A (Accumulator) = 00000000 (8 bits, initialized to zero)
● Q(-1) = 0 (1 bit)
● Initial state [A] [Q] [Q(-1)] = 00000000 00010101 0
● Total width = 8 + 8 + 1 = 17 bits

🔹 Step 2: Apply Booth’s Algorithm (Repeat n = 8 times)

Booth's decision table (based on Q0 and Q-1):

● 10 → Subtract M (A = A - M)
● 01 → Add M (A = A + M)
● 00 or 11 → No arithmetic
● Then Arithmetic Right Shift (ARS) of [A, Q, Q(-1)]

Cycle | A (Accumulator) | Q (Multiplier) | Q₋₁ | Operation Performed

Init  | 00000000 | 00010101 | 0 | Initial values

1     | 00100101 | 00010101 | 0 | A = A − M (−M = 00100101)
      | 00010010 | 10001010 | 1 | Arithmetic Right Shift (ARS)

2     | 11101101 | 10001010 | 1 | A = A + M (M = 11011011)
      | 11110110 | 11000101 | 0 | ARS

3     | 00011011 | 11000101 | 0 | A = A − M
      | 00001101 | 11100010 | 1 | ARS

4     | 11101000 | 11100010 | 1 | A = A + M
      | 11110100 | 01110001 | 0 | ARS

5     | 00011001 | 01110001 | 0 | A = A − M
      | 00001100 | 10111000 | 1 | ARS

6     | 11100111 | 10111000 | 1 | A = A + M
      | 11110011 | 11011100 | 0 | ARS

7     | 11110011 | 11011100 | 0 | No operation (Q₀Q₋₁ = 00)
      | 11111001 | 11101110 | 0 | ARS

8     | 11111001 | 11101110 | 0 | No operation (Q₀Q₋₁ = 00)
      | 11111100 | 11110111 | 0 | ARS

🧾 Final Result

● Final A = 11111100

● Final Q = 11110111

● Combined result = 1111110011110111 (16-bit)

● 2's complement → Decimal = −777 (since the 2's complement of this pattern is 0000001100001001 = 777)
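The trace can be verified with a generic sketch of Booth's algorithm (registers are modeled with masked Python integers; this is not a hardware-accurate model):

```python
def booth_multiply(m, q, bits=8):
    """Signed multiplication via Booth's algorithm with arithmetic right shifts."""
    mask = (1 << bits) - 1
    M = m & mask
    A, Q, Q_1 = 0, q & mask, 0
    for _ in range(bits):
        q0 = Q & 1
        if (q0, Q_1) == (1, 0):
            A = (A - M) & mask              # A = A − M
        elif (q0, Q_1) == (0, 1):
            A = (A + M) & mask              # A = A + M
        # Arithmetic right shift of the combined [A, Q, Q−1] register
        combined = (A << (bits + 1)) | (Q << 1) | Q_1
        sign = A >> (bits - 1)              # replicate A's sign bit
        combined = (combined >> 1) | (sign << (2 * bits))
        A = (combined >> (bits + 1)) & mask
        Q = (combined >> 1) & mask
        Q_1 = combined & 1
    result = (A << bits) | Q                # interpret [A, Q] as a signed value
    if result >> (2 * bits - 1):
        result -= 1 << (2 * bits)
    return result

print(booth_multiply(-37, 21))  # -777
```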

Q.13 Describe the Flynn Model and Explain the
Components

📚 Flynn's Taxonomy (1966):

Flynn’s Classification is a model to categorize computer architectures based on the number of instruction and data streams they
handle. It classifies into four major types:

Type | Full Form                           | Instruction Stream | Data Stream | Example Systems
SISD | Single Instruction, Single Data     | 1                  | 1           | Traditional Von Neumann CPU
SIMD | Single Instruction, Multiple Data   | 1                  | Many        | GPUs, Vector Processors
MISD | Multiple Instruction, Single Data   | Many               | 1           | Rare / Experimental systems
MIMD | Multiple Instruction, Multiple Data | Many               | Many        | Multi-core CPUs, Clusters

🔍 Detailed Explanation:

✅ 1. SISD:

● Traditional sequential systems
● One instruction on one data item at a time

SISD → | Instruction | →→→→→ | Data |

✅ 2. SIMD:

● Parallel data processing with the same instruction
● Useful in image processing and scientific computation

SIMD → | Instruction | →→→→→ | Data1, Data2... |

✅ 3. MISD:

● Multiple instructions on a single data stream
● Not practical; used only in redundant computation (e.g., space shuttles)

MISD → | Instr1 | Instr2... | →→→→→ | Data |

✅ 4. MIMD:

● Each processor works on its own instruction and data
● Most commonly used in multi-core and distributed systems

MIMD → | Instr1 | Instr2... | →→→→→ | Data1 | Data2... |

Q.14: Write Short Notes On –

(a) Memory Address Register (MAR):

🧠 The Memory Address Register (MAR) is a CPU register that stores the address in memory from which data is to be fetched or to
which data is to be stored. It plays a critical role in memory operations during program execution.

🔹 Key Functions of MAR:

● ✅ Data Fetching:
MAR holds the address of the memory location from which data or instruction is to be fetched by the CPU.

● ✅ Instruction Execution:
During execution, MAR helps locate the required instruction or data from memory.

● ✅ Data Storing:
When writing data, MAR stores the destination memory address.

🔐 Role in Cybersecurity:

● 🔒 Memory Isolation: Prevents unauthorized access by ensuring that processes access only their allocated memory space.
● 🔄 Data Wiping: Used in secure data overwrite operations for permanent deletion.
● 🔐 Data Encryption: Helps retrieve the address of keys, ciphertext, etc., securely during encryption/decryption.

📌 Summary: MAR ensures correct addressing during memory operations and contributes to secure and efficient CPU-memory
communication.

(b) Program Counter (PC):

🧠 The Program Counter (PC) is a special-purpose register that holds the address of the next instruction to be fetched from memory for
execution. It is also known as the Instruction Pointer (IP) or Instruction Address Register (IAR).

🔹 Features of Program Counter:

● 🧾 Tracks Instruction Sequence:


PC always points to the memory address of the next instruction in the sequence.
● ➕ Auto-Increment:
After fetching an instruction, the PC automatically increments to point to the next instruction.
● 🔁 Reset Behavior:
On system reset/restart, the PC is loaded with a fixed reset address (commonly 0x0000), so program execution starts from a known location.

🧭 Working Example:

Program Memory Layout:

Address | Instruction
0x00    | Instruction 0
0x01    | Instruction 1
0x02    | Instruction 2
0x03    | Instruction 3
0x04    | Example Instruction

Program Memory
+----------------------+
| Addr: 0x04           | ← PC points here
| Inst: Example Inst   | → Sent to Instruction Register
+----------------------+

As each instruction is fetched and executed, the PC updates to the address of the next instruction, ensuring smooth sequential flow.

📌 Summary:

The Program Counter controls the execution flow by pointing to the next instruction and is essential for
instruction sequencing and control in the CPU.

Q.15: Explain Paging and Segmentation with
Examples

✅ A. Paging

Definition:
Paging is a memory management technique where logical memory is divided into
fixed-size blocks called pages and physical memory into fixed-size blocks called
frames. The size of pages and frames is kept equal to avoid external fragmentation.

Frame Contains

🔹 Example of Paging:

● Process Size: 4 Bytes
● Page Size: 1 Byte
● Pages: P0, P1, P2, P3
● Main Memory Frames Allocation:

Frame | Contains
F0    | P1
F1    | P0
F2    | P2
F3    | P3

🔹 Logical to Physical Address Translation:

● Logical Address = (Page Number, Page Offset)
● Physical Address = (Frame Number × Page Size) + Page Offset

🛠 This translation is done by the MMU (Memory Management Unit) using a Page Table.

Page No | Frame No
0       | 1
1       | 0
2       | 2
3       | 3

✅ Advantages of Paging:

● No external fragmentation
● Enables non-contiguous memory allocation
● Efficient swapping

❌ Disadvantages of Paging:

● May cause internal fragmentation


● Increased memory access time (due to two memory lookups)
● Overhead of maintaining page tables
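The page-table translation described above can be sketched in code. A 1 KB page size is assumed here for illustration (the example's 1-byte pages work identically), with the page-to-frame mapping taken from the example table:

```python
PAGE_SIZE = 1024                        # assumed illustrative page size (bytes)
page_table = {0: 1, 1: 0, 2: 2, 3: 3}   # page number → frame number

def translate_page(logical_addr, table, page_size=PAGE_SIZE):
    """Split the logical address into (page, offset) and map page → frame."""
    page, offset = divmod(logical_addr, page_size)
    frame = table[page]
    return frame * page_size + offset   # physical = frame base + offset

print(translate_page(0, page_table))     # page 0 → frame 1 → 1024
print(translate_page(1500, page_table))  # page 1, offset 476 → frame 0 → 476
```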

✅ B. Segmentation

Definition:
Segmentation is a memory management scheme where a program is divided into logical segments based on its functional components such
as code, data, stack, etc.

Unlike paging, segments are of variable size and represent logical divisions in the program.

Example:

Segment No | Segment Name | Size | Base Address
0          | Code         | 2 KB | 1000
1          | Data         | 3 KB | 3000
2          | Stack        | 1 KB | 6000

Logical Address = (Segment Number, Offset)
Physical Address = Base Address of Segment + Offset

🧠 Segment Table:

Segment No | Base | Limit
0          | 1000 | 2048
1          | 3000 | 3072
2          | 6000 | 1024

This translation is performed by the MMU using a Segment Table and the STBR (Segment Table Base Register).

Logical View                 Physical View
+---------+                  +------------+
| Code    | → Base 1000 →    | Code       |
| Data    | → Base 3000 →    | Data       |
| Stack   | → Base 6000 →    | Stack      |
+---------+                  +------------+
Feature           | Paging                 | Segmentation
Division Type     | Fixed-size pages       | Variable-size segments
Fragmentation     | Internal               | External
Address Structure | Page Number + Offset   | Segment Number + Offset
Logical View      | Not preserved          | Preserves logical program view

✅ Conclusion:

Both paging and segmentation are essential memory management schemes. Paging improves
memory utilization, while segmentation enhances logical organization. Many modern systems
use a combined approach (segmented paging) for efficiency.

Q.16: Procedure for Addition and Subtraction of
Fixed-Point Numbers
✅ Fixed-Point Representation Overview:

Fixed-point numbers have a fixed number of digits before and after the radix (decimal) point. In computer systems, they are
commonly stored in binary form, using sign-magnitude, 1's complement, or 2's complement representation (commonly 2's
complement for arithmetic).

A. Fixed-Point Addition Procedure:


Step-by-step Process:

1. Check sign bits of both operands.


2. If signs are the same, perform normal binary addition.
3. If signs differ, perform subtraction (larger – smaller) and assign the sign of the larger magnitude.
4. Check for overflow, especially when:
○ Adding two positive numbers results in a negative
○ Adding two negative numbers results in a positive

B. Fixed-Point Subtraction Procedure:

Step-by-step Process:

1. Convert the number to be subtracted (the subtrahend) to its 2's complement.
2. Add the 2's complement of the subtrahend to the minuend.
3. The result is the subtraction value.
4. Discard any overflow (carry) bit, if it exists, to maintain the bit width.

C. Flowchart for Fixed-Point Addition/Subtraction:
D. Example:
Assume a 4-bit 2's complement system:

● A = 0101 (5), B = 0011 (3)

➡ Addition: 0101 + 0011 = 1000 → −8. Adding two positive numbers produced a negative result, so overflow has occurred (5 + 3 = 8 does not fit in 4-bit 2's complement).

● A = 0101 (5), B = 0011 (3)

➡ Subtraction: 2's complement of B = 1101; 0101 + 1101 = (1)0010. Discarding the carry gives 0010 → 2 (correct).
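Under the same 4-bit 2's complement assumptions, the addition and subtraction procedures, including the overflow check, can be verified in code (function names are my own):

```python
BITS = 4
MASK = (1 << BITS) - 1

def to_signed(x):
    """Interpret a 4-bit pattern as a signed 2's complement value."""
    return x - (1 << BITS) if x >> (BITS - 1) else x

def add(a, b):
    s = (a + b) & MASK
    # Overflow: both operands share a sign that differs from the result's sign
    overflow = (a >> 3) == (b >> 3) != (s >> 3)
    return to_signed(s), overflow

def subtract(a, b):
    return add(a, ((~b) + 1) & MASK)  # add the 2's complement of b

print(add(0b0101, 0b0011))       # (-8, True): 5 + 3 overflows in 4 bits
print(subtract(0b0101, 0b0011))  # (2, False): 5 − 3 = 2
```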


Q.17: Virtual to Real Address Translation in Segmented
Memory System
Segmented Memory System Overview:

Segmentation is a memory management technique in which logical memory is divided into variable-sized segments such as code,
data, and stack. Each segment is referenced by a segment number and an offset (also called displacement).

Virtual Address:

A virtual address in segmented memory consists of two parts:

● Segment Number (s)

● Offset (d) within that segment

Virtual Address = (Segment Number, Offset)

Translation Process:

To translate a virtual address into a real (physical) address, the system uses a Segment Table. Each process has its own segment
table maintained by the operating system.

Each segment table entry contains:

● Base address: Starting physical address of the segment


● Limit: Length (size) of the segment

Steps of Translation:

1. Fetch Segment Table Entry:
Use the segment number s to index into the segment table and get:
○ Base[s]: Starting physical address of segment s
○ Limit[s]: Maximum size of segment s

2. Check Validity:
Ensure that the offset d is less than the limit of the segment.
If d ≥ Limit[s], a segmentation fault is raised.

3. Calculate Physical Address:
Add the offset to the base address: Physical Address = Base[s] + Offset (d)
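The three steps can be sketched directly (the segment-table values are illustrative):

```python
# Segment table: segment number → (base, limit); values are illustrative
segment_table = {2: (4000, 256)}

def translate_segment(seg, offset, table):
    base, limit = table[seg]           # Step 1: fetch the segment-table entry
    if offset >= limit:                # Step 2: bounds check
        raise MemoryError("segmentation fault")
    return base + offset               # Step 3: physical = base + offset

print(translate_segment(2, 120, segment_table))  # 4120
```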

Diagram: Virtual to Physical Address Translation

Virtual Address
---------------------
| Segment | Offset |
|   No.   |   d    |
---------------------
          ↓
Segment Table
-------------------------
| Segment | Base | Limit |
-------------------------
|    s    | 1000 |  500  |
-------------------------
          ↓
Check: Is d < 500? ✔
          ↓
Physical Address = 1000 + d

Example:

Let's say:

● Virtual Address = (Segment = 2, Offset = 120)
● Segment Table Entry for Segment 2:
  ○ Base = 4000
  ○ Limit = 256

✅ Since Offset (120) < Limit (256), the address is valid.

🔁 Physical Address = 4000 + 120 = 4120

Advantages of Segmentation:

● Supports modular programming (code, data, stack)

● Protection via bounds checking

● Ease of sharing and dynamic linking
PART C 3/5 QUESTIONS= 30 MARKS

Q.18 Describe role of addressing modes used in computer architecture. Illustrate
direct and indirect addressing mode with suitable example. Demonstrate arithmetic
micro operation and draw diagram of 4-bit full adder.

Role of Addressing Modes in Computer Architecture:

Addressing modes define how the effective address of the operand is calculated. They are crucial in instruction execution because
they determine where and how to access operands in memory or registers.

Functions of Addressing Modes:

1. Flexibility in accessing data from memory, registers, or constants.

2. Support for pointers, arrays, and structures.

3. Efficient use of CPU instructions.

4. Reduced code size by reusing instructions with different addressing modes.

Types of Addressing Modes (with examples):

1. Direct Addressing Mode:

● Definition: The effective address is explicitly specified in the instruction.

Example:

Instruction: LOAD R1, 500


Meaning: Load the contents of memory location 500 into register R1.

2. Indirect Addressing Mode:

● Definition: The instruction specifies a memory location that contains the effective address.

Example:

Instruction: LOAD R1, @500


Meaning: Memory location 500 holds the address of the actual operand.

Suppose M[500] = 800, then R1 ← M[800].
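A minimal sketch of the two modes, assuming the M[500] = 800, M[800] = 42 contents from the example (registers and memory are simplified to dictionaries; the value 42 is my own illustration):

```python
memory = {500: 800, 800: 42}  # M[500] holds an address; M[800] holds the operand
registers = {}

def load_direct(reg, addr):
    registers[reg] = memory[addr]          # R ← M[addr]

def load_indirect(reg, addr):
    registers[reg] = memory[memory[addr]]  # R ← M[M[addr]]

load_direct("R1", 500)
print(registers["R1"])   # 800 (the content of location 500)
load_indirect("R1", 500)
print(registers["R1"])   # 42 (the operand at the address stored in 500)
```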

Arithmetic Micro-operations:
These operations deal with basic arithmetic tasks executed at the register level within the CPU.

Common Arithmetic Micro-operations:

Operation                   | Symbolic Representation | Description
Addition                    | R3 ← R1 + R2            | Add contents of R1 and R2, store result in R3
Subtraction                 | R3 ← R1 − R2            | Subtract R2 from R1, store result in R3
1's Complement              | R2 ← R2'                | Replace R2 by its 1's complement
2's Complement              | R2 ← R2' + 1            | Replace R2 by its 2's complement
Subtraction using 2's Comp. | R3 ← R1 + R2' + 1       | Equivalent to R1 − R2
Increment                   | R1 ← R1 + 1             | Increase contents of R1 by one
Decrement                   | R1 ← R1 − 1             | Decrease contents of R1 by one


4-Bit Binary Full Adder:
A 4-bit binary adder is made by cascading four 1-bit full adders. Each full adder adds
corresponding bits of two numbers along with the carry from the previous stage.

Diagram of 4-bit Full Adder:

          A3 B3      A2 B2      A1 B1      A0 B0
           ↓ ↓        ↓ ↓        ↓ ↓        ↓ ↓
         +-----+    +-----+    +-----+    +-----+
C4 ←─────| FA  |←C3─| FA  |←C2─| FA  |←C1─| FA  |←── Cin = 0
         +-----+    +-----+    +-----+    +-----+
            ↓          ↓          ↓          ↓
            S3         S2         S1         S0

Where:

● FA = Full Adder

● Inputs: A(3–0), B(3–0), Cin = 0

● Outputs: S(3–0) = A + B, Cout (C4) = Final Carry

Full Adder Equations:

● Sum (Sᵢ): Sᵢ = Aᵢ ⊕ Bᵢ ⊕ Cᵢ

● Carry-out (Cᵢ₊₁): Cᵢ₊₁ = Aᵢ·Bᵢ + (Aᵢ ⊕ Bᵢ)·Cᵢ
Q.19 (a) Explain arithmetic pipeline with a suitable
example. Draw diagram also.

Arithmetic Pipeline:

An arithmetic pipeline is used to perform arithmetic operations like addition, multiplication, and division in a sequential segmented
manner. Instead of completing one arithmetic operation before starting the next, pipelining allows multiple operations to overlap in
execution, thus improving the overall throughput.

Arithmetic pipelines are divided into segments (stages). Each stage performs a part of the total computation. Once a stage finishes its
part, it passes the result to the next stage and begins processing the next input.

Example: Floating-Point Addition

Consider a floating-point addition operation that is divided into the following stages:

1. Exponent Comparison
2. Mantissa Alignment
3. Addition
4. Normalization
5. Rounding
Pipeline Diagram:
Time → Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8
------------------------------------------------------------------------------------------
Task 1 → Stage 1 → Stage 2 → Stage 3 → Stage 4 → Stage 5 → - → - → -
Task 2 → - → Stage 1 → Stage 2 → Stage 3 → Stage 4 → Stage 5 → - → -
Task 3 → - → - → Stage 1 → Stage 2 → Stage 3 → Stage 4 → Stage 5 → -
Task 4 → - → - → - → Stage 1 → Stage 2 → Stage 3 → Stage 4 → Stage 5

Q.19 (b) Factors affecting the performance of pipelining processor-based systems:


1. Timing Variations:

Not all stages in the pipeline take the same time to execute. Variations cause stalling or bubbling, reducing the efficiency.

2. Data Hazards:

Occur when instructions that are close in the pipeline access the same data. For example, if instruction I2 needs the result of instruction I1, it must wait
until I1 completes.

3. Branching:

Branch or jump instructions cause uncertainty in the flow of instructions. The processor might fetch the wrong instruction and waste cycles.
4. Interrupts:

External events (like I/O or exceptions) can interrupt the pipeline flow and force it to flush partially executed instructions.

5. Data Dependency:

When an instruction depends on the result of a previous instruction that is still in the pipeline, it creates read-after-write (RAW) hazards,
causing pipeline stalls.

Numerical Problem:
Given:

● Time taken by non-pipelined system per task (tn) = 100 ns
● Number of tasks (n) = 200
● Pipeline has k = 6 segments, each with clock cycle tp = 20 ns

Solution:

● Non-pipelined time = n × tn = 200 × 100 = 20,000 ns
● Pipelined time = (k + n − 1) × tp = (6 + 200 − 1) × 20 = 4,100 ns
● Speedup = 20,000 / 4,100 ≈ 4.88
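With the standard formulas T_seq = n·t_n and T_pipe = (k + n − 1)·t_p, the given numbers can be checked in code:

```python
n, tn = 200, 100  # number of tasks, non-pipelined time per task (ns)
k, tp = 6, 20     # pipeline segments, clock cycle per segment (ns)

t_seq = n * tn                 # 200 × 100 = 20000 ns
t_pipe = (k + n - 1) * tp      # (6 + 200 − 1) × 20 = 4100 ns
speedup = t_seq / t_pipe

print(t_seq, t_pipe, round(speedup, 2))  # 20000 4100 4.88
```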

Q.20 (a) Explain cache coherency and why it's
necessary? Explain different approaches for cache
coherency.
Cache Coherency:

In modern computer systems, cache memory is used to temporarily store frequently accessed data to speed up processing. However, in
multiprocessor systems, where multiple CPUs have their own local caches, the same memory location might be cached in more than
one processor's cache.

Cache Coherency ensures that all the processors in the system have a consistent view of memory. If one processor updates a memory
location, that update must be visible to other processors. Without cache coherence, a processor might work on outdated data, leading to
data inconsistency and unpredictable program behavior.

Why Cache Coherency is Necessary:

● To maintain data consistency among caches and main memory.


● To prevent stale data from being used by one processor when another has updated it.
● To ensure correct program execution in multiprocessor environments.
● To improve system stability and reliability.

Approaches for Cache Coherency:

1. Directory-Based Protocols:

● A central directory keeps track of which caches have copies of each memory block.
● When a processor wants to read/write data, it queries the directory to check the status.
● Suitable for large-scale multiprocessors.

2. Snooping-Based Protocols:

● All processors monitor (or "snoop") on a shared communication bus.


● When a processor performs a write, it broadcasts this information so that other caches can invalidate or update their copies.
● Common in bus-based systems.

3. Write Invalidate Protocol:

● When a processor writes to a cache block, it sends an invalidate message to all other caches.
● Other processors then invalidate their copy of that block.
● Reduces the number of writes across the system.

4. Write Update (Write Broadcast) Protocol:

● Instead of invalidating, the writing processor broadcasts the new value to all caches that have a copy.
● All caches update their copy with the new value.
● Ensures faster propagation of updated data.

5. MESI Protocol (Modified, Exclusive, Shared, Invalid):

● Each cache block can be in one of the four states.


● Ensures accurate control over data access and modification.
● Widely used in modern multiprocessor systems.
Q.20 (b) Construction of Memory System Using 1K × 4 RAM Chips
1. Constructing a 1K × 4 RAM Memory Bank:

● A single 1K × 4 RAM chip means it can store 1024 words, with each word being 4 bits wide.
● To build a memory bank of size 1K × 4, we already meet the required specification with just one chip.

✅ Required chips = 1

2. Constructing a 4K × 4 RAM Memory Bank:

● A 4K × 4 memory bank means 4096 words with 4 bits per word.


● Each chip provides 1024 words, so we divide the total required memory by the chip capacity:

Number of RAM chips = 4096 / 1024 = 4


✅ Required chips = 4

These 4 chips are connected in such a way that they handle different address ranges (using additional address lines like A10 and A11 for
chip selection).
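The chip-count arithmetic generalises to other bank sizes (a trivial sketch; the function name is my own):

```python
def chips_required(bank_words, bank_width, chip_words=1024, chip_width=4):
    """Chips needed = (word-count ratio) × (bit-width ratio), for a 1K × 4 chip."""
    return (bank_words // chip_words) * (bank_width // chip_width)

print(chips_required(1024, 4))  # 1  (1K × 4 bank)
print(chips_required(4096, 4))  # 4  (4K × 4 bank)
```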

Memory Bank Size RAM Chip Size Total Chips Required

1K × 4 1K × 4 1

4K × 4 1K × 4 4

Q.21 Differentiate between Hardwired Control Unit
and Micro-programmed Control Unit with their
Diagram
Introduction:

In computer architecture, the Control Unit (CU) is responsible for directing the operation of the processor by generating control signals.
There are two primary types of control units: Hardwired Control Unit (HCU) and Micro-programmed Control Unit (MCU). These units
differ in their design, operation, and complexity.

1. Hardwired Control Unit (HCU):

The Hardwired Control Unit uses fixed logic circuits to generate control signals. The operation of the control unit is determined by the
hardware, and the instructions are decoded into control signals directly through combinational logic circuits like PLAs (Programmable
Logic Arrays) or state machines.

Diagram of Hardwired Control Unit:

+-------------------+      +--------------------+
|   Instruction     |      |    Instruction     |
|    Register       |      |      Decoder       |
|  (Op-code, INS)   |      |                    |
+-------------------+      +--------------------+
        |                            |
        |                            v
+-------------------+      +--------------------+
|   Step Counter    |<-----|   Control Signal   |
|   (T₀, T₁, T₂)    |      |     Generator      |
+-------------------+      +--------------------+
        |                            |
        v                            |
  +------------+           +-----------------+
  |   Clock    |           | Control Signals |
  +------------+           +-----------------+
                                     |
                                     v
                        +----------------------+
                        |  ALU, Memory or I/O  |
                        |    or Other Units    |
                        +----------------------+

Explanation of Hardwired Control Unit:

1. Instruction Register: Holds the instruction currently in execution, containing the operation code (Op-code) and addressing mode.

2. Instruction Decoder: Decodes the instruction from the instruction register to generate the corresponding control signals.

3. Control Signal Generator: Generates the control signals based on the decoded instructions.

4. Step Counter: Keeps track of the steps involved in instruction execution (e.g., fetch, decode, execute).

5. Clock: Provides the timing signals that control the flow of operations.

6. Control Signals: These signals control the operation of the ALU, memory, and I/O devices.
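The fixed mapping from (op-code, timing step) to control signals can be illustrated with a minimal sketch. The dictionary below stands in for the combinational logic; the op-codes and signal names are invented for illustration, not taken from any real instruction set:

```python
# Hardwired control: a fixed lookup standing in for combinational logic.
# (op_code, step) -> set of asserted control signals; names are illustrative.
CONTROL_LOGIC = {
    ("LOAD", 0): {"MAR<-PC", "READ"},
    ("LOAD", 1): {"IR<-MDR", "PC+1"},
    ("LOAD", 2): {"ACC<-MDR"},
    ("ADD",  0): {"MAR<-PC", "READ"},
    ("ADD",  1): {"IR<-MDR", "PC+1"},
    ("ADD",  2): {"ACC<-ACC+MDR"},
}

def control_signals(op_code, step):
    """Return the signals asserted at a given timing step (T0, T1, ...)."""
    return CONTROL_LOGIC.get((op_code, step), set())

# The step counter (T0..T2) drives signal generation each clock tick:
for t in range(3):
    print(f"T{t}:", sorted(control_signals("ADD", t)))
```

Because the mapping is "wired in", changing it means changing the hardware, which is why hardwired units are fast but inflexible.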

2. Micro-programmed Control Unit (MCU):

The Micro-programmed Control Unit uses a control memory to store microprograms, which are sequences of microinstructions. Each
microinstruction corresponds to a specific control signal. The microprogrammed control unit decodes the instruction and generates control signals
through these stored microinstructions.

Diagram of Micro-programmed Control Unit:

+-------------------------+
|  Instruction Register   | <--- Instruction (Op-code)
+-------------------------+
            |
            v
+-------------------------+
| Microinstruction Memory | <--- Microprogram (stored in control memory)
+-------------------------+
            |
            v
+-------------------------+
|     Control Signal      | <--- Generates control signals for the hardware
|       Generator         |
+-------------------------+
            |
            v
+-------------------------+
|   ALU, Memory or I/O    |
|    or Other Units       |
+-------------------------+

Explanation of Micro-programmed Control Unit:

1. Instruction Register: Holds the instruction being executed, which is passed to the microinstruction memory.

2. Microinstruction Memory: A special memory (control memory) that stores a sequence of microinstructions (each corresponding to a set of control signals).

3. Microinstruction Decoder: Decodes the current instruction into a sequence of microinstructions.

4. Control Signal Generator: Uses the microinstructions to generate control signals.

5. Control Signals: These signals direct the ALU, memory, and I/O units.
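The control-memory idea can be sketched as a tiny microprogram interpreter. The microinstruction format (a signal list plus a next-address field) and the signal names are invented for illustration:

```python
# Control memory: each address holds one microinstruction, here a list of
# control signals plus the address of the next microinstruction to fetch.
CONTROL_MEMORY = {
    0: (["MAR<-PC", "READ"], 1),
    1: (["IR<-MDR", "PC+1"], 2),
    2: (["ACC<-ACC+MDR"], None),   # None ends the microprogram
}

def run_microprogram(start=0):
    """Walk the control memory from `start`, emitting signals in sequence."""
    addr, trace = start, []
    while addr is not None:
        signals, addr = CONTROL_MEMORY[addr]
        trace.append(signals)
    return trace

print(run_microprogram())
```

Changing the machine's behaviour here means rewriting entries in `CONTROL_MEMORY`, not rewiring logic, which is exactly why micro-programmed units are flexible but slower (every step is a control-memory fetch).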

Feature             Hardwired Control Unit                Micro-programmed Control Unit

Design Complexity   Simple, uses combinational logic      Complex, uses microinstructions and memory

Execution Speed     Faster                                Slower (due to microprogramming overhead)

Flexibility         Inflexible, difficult to modify       Flexible, easy to modify and expand

Control Signals     Generated by hardware circuits        Generated by microprogram (control memory)

Modifications       Difficult and costly to modify        Easier to modify by changing the microprogram

Best Used For       Fixed instruction sets and            Complex instruction sets and
                    simpler designs                       general-purpose processors

Conclusion:

● Hardwired Control Units are faster and simpler, suitable for fixed and simpler systems. However, they are less flexible and
difficult to modify.
● Micro-programmed Control Units are more flexible, capable of handling complex instruction sets, and easier to modify,
but they are generally slower due to the additional layer of microprogramming.
Q.22 (a) Draw and Explain the Diagram of a DMA Controller. Why are the Read/Write Lines of DMA Bidirectional?
🔷 DMA (Direct Memory Access) Controller

A DMA controller is a dedicated hardware component used to transfer data between memory and I/O devices without involving the CPU. It improves overall system performance by freeing the CPU from managing data transfer tasks.

📌 Block Diagram of DMA Controller

+-------------------------------+
|        DMA Controller         |
+-------------------------------+
| Address Register              | --> Memory Address
|-------------------------------|
| Count Register                | --> Word Count
|-------------------------------|
| Control Register              | --> Mode, Direction, Status
|-------------------------------|
| Data Buffer                   | <--> Data Bus
|-------------------------------|
| DMA Request Lines (DRQ)       | <-- From I/O Devices
|-------------------------------|
| DMA Acknowledge (DACK)        | --> To I/O Devices
|-------------------------------|
| Read/Write Control Logic      | <--> Read/Write Signals
+-------------------------------+

🧠 Function of Key Components:

Component Function

Address Register Holds the memory address where data is to be read/written.

Count Register Keeps track of the number of bytes/words to transfer.

Control Register Sets the direction (read/write), mode (block, burst), and enables interrupts.

Data Buffer Temporarily stores the data being transferred.

DMA Request (DRQ) Activated by an I/O device to request a data transfer.

DMA Acknowledge (DACK) Sent by DMA controller to confirm the transfer to the I/O device.

Control Logic Manages timing and control signals, including bidirectional R/W lines.
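The register behaviour in the table can be sketched as a toy block transfer from a device into memory. The register names follow the table; the function and its arguments are hypothetical, meant only to show the address register incrementing and the count register counting down to zero:

```python
def dma_transfer(memory, device_data, address_reg, count_reg):
    """Toy DMA write cycle: copy count_reg words from a device into memory,
    incrementing the address register and decrementing the count register."""
    buffered = list(device_data[:count_reg])   # data buffer stage
    while count_reg > 0:
        memory[address_reg] = buffered[len(buffered) - count_reg]
        address_reg += 1                       # next memory address
        count_reg -= 1                         # transfer done when count hits 0
    return memory

mem = [0] * 8
print(dma_transfer(mem, [7, 8, 9], address_reg=2, count_reg=3))
# -> [0, 0, 7, 8, 9, 0, 0, 0]
```

Reversing the direction bit in the control register would run the same loop with memory as the source, which is the software-level analogue of the bidirectional read/write lines discussed next.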

🔄 Why Are DMA Read/Write Lines Bidirectional?

The read and write lines of a DMA controller are bidirectional because the controller must both read from and write to memory
and I/O devices, depending on the transfer direction.

✅ Reasons:

1. Flexibility:
   ○ The same lines can handle both read and write operations.
   ○ No need for separate unidirectional lines.

2. Efficiency:
   ○ Reduces the number of wires/pins.
   ○ Simplifies hardware design.

3. Resource Optimization:
   ○ Saves space in embedded systems or chip-constrained environments.

4. Synchronization:
   ○ Enables better coordination of read/write sequences.
   ○ Reduces data collisions and errors during transfer.


Q.22 (b) What is the Function of IOP? Explain with Block Diagram

🔷 IOP (Input/Output Processor)

An Input/Output Processor (IOP) is a special-purpose processor designed to handle input and output operations. It works
independently from the CPU, reducing CPU load and managing I/O devices efficiently.

🧠 Functions of IOP:

● Controls and manages multiple I/O devices.

● Offloads I/O processing from the CPU.

● Communicates with memory using DMA.

● Interprets I/O commands issued by the CPU.

● Performs error detection and correction.

● Handles interrupt signals from I/O devices.
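The division of labour described above can be sketched as a CPU handing a command to an IOP object that completes the transfer on its own and reports status. The class, the command tuple format, and all names are invented for illustration:

```python
class IOP:
    """Toy I/O processor: accepts a command from the CPU, moves the data
    itself, and records a status the CPU can poll (or take as an interrupt)."""
    def __init__(self, memory):
        self.memory = memory
        self.status = "idle"

    def execute(self, command):
        # Interpret an I/O command issued by the CPU, e.g. a device read.
        op, device_data, addr = command
        if op == "READ":
            for i, word in enumerate(device_data):
                self.memory[addr + i] = word   # DMA-style store to main memory
        self.status = "done"

mem = [0] * 6
iop = IOP(mem)
iop.execute(("READ", [1, 2, 3], 1))   # CPU issues the command, then continues
print(iop.status, mem)                # -> done [0, 1, 2, 3, 0, 0]
```

The key point the sketch shows: once `execute` is handed the command, the CPU is free; only the final status travels back.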

📊 Block Diagram of IOP System

+-----------------------+
|          CPU          |
+-----------------------+
            |
            | Control/Command
            v
+-----------------------+
|          IOP          |
+-----------------------+
|  I/O Command Decoder  |
|  Channel Control      |
|  Buffer               |
+-----------------------+
            |
  -----------------------
 |        DMA Bus        |
  -----------------------
            |
            v
+-----------------------+
|      Main Memory      |
+-----------------------+
            |
   +--------+---------+
   |                  |
+------------------+  +------------------+
|   I/O Device 1   |  |   I/O Device 2   |
+------------------+  +------------------+

Component      Function

CPU            Issues I/O commands to the IOP.

IOP            Receives commands, controls I/O devices, and manages data transfers.

I/O Devices    Devices like disk, printer, keyboard, etc.

Main Memory    Stores data being read from or written to I/O.

DMA Bus        Used by the IOP to transfer data directly to memory.

✅ Benefits of IOP:

● Reduces CPU load.
● Allows parallel execution of I/O and processing tasks.
● Efficient and scalable for complex systems.
● Improves overall system throughput.
✅ Conclusion

● (a) A DMA controller facilitates high-speed data transfer by bypassing the CPU, and its bidirectional read/write lines
ensure efficient and flexible communication.

● (b) An IOP enhances system performance by independently managing I/O operations, allowing the CPU to focus on
computation.

THANK YOU