
Computer Architecture Important Questions:

1. What is Computer Architecture? How is it different from Computer Organization?


Answer:
Computer Architecture:
Computer Architecture is the conceptual design and fundamental operational structure of a
computer system. It focuses on how a computer is designed to work from a programmer’s
perspective.
It includes:
 Instruction Set Architecture (ISA): the set of instructions the CPU can understand (e.g., x86, ARM).
 Data types, addressing modes, and registers available.
 Design of the CPU: how it fetches, decodes, and executes instructions.
 Memory hierarchy: cache, RAM, storage.

Computer Organization:
Computer Organization deals with the implementation details — how everything in the
architecture is actually built and made to work.
It includes:
 Control signals
 Data paths
 Memory technology
 Processor logic
 Circuit-level implementation

Computer organization is thus concerned with the physical realization of the architecture and with how the computer actually performs its operations.

🔍 Key Differences:

Feature           | Computer Architecture                             | Computer Organization
Focus             | High-level design, functionality, and performance | Implementation details and physical aspects
Perspective       | Programmer's view                                 | Hardware engineer's view
Example           | Types of instructions a CPU can execute           | How those instructions are implemented in hardware
Deals with        | Instruction sets, data formats, addressing modes  | Control units, ALUs, buses, memory hardware
Abstraction Level | Abstract / Logical                                | Concrete / Physical

2. Describe the Von Neumann Architecture. What are its limitations?


Answer:
Von Neumann Architecture:
The Von Neumann Architecture is a computer architecture model proposed by John von
Neumann in the 1940s. It forms the basis for most traditional computers. The central idea is that
both data and instructions are stored in the same memory space.
Key Components:
1. Central Processing Unit (CPU):
o Arithmetic Logic Unit (ALU): Performs arithmetic and logical operations.
o Control Unit (CU): Directs the operation of the processor.
2. Memory:
o Stores both instructions (programs) and data.
3. Input / Output Devices:
o Used for communication between the computer and the external world.
4. System Bus:
o A communication pathway used for data transfer between components. Includes:
 Data Bus
 Address Bus
 Control Bus

How It Works:
 The CPU fetches an instruction from memory.
 It decodes and executes it.
 It then moves on to the next instruction in a sequential manner.

Limitations of Von Neumann Architecture:


1. Von Neumann Bottleneck:
o A major performance issue caused by the limited data transfer rate between the
CPU and memory. Because data and instructions share the same bus, the CPU
often has to wait while data is transferred.
2. Sequential Execution:
o Instructions are typically executed one at a time in a sequence, limiting
parallelism and speed.
3. Shared Memory for Instructions and Data:
o Potential for errors, such as unintentionally overwriting instructions.
4. Security Vulnerabilities:
o Since instructions and data are in the same memory, it’s easier for malicious code
to alter program flow.
5. Inefficient for Some Modern Applications:
o Tasks like graphics processing or AI benefit more from parallel processing
architectures (e.g., GPUs, neural processing units), which are not based strictly on
Von Neumann models.

3. What are the key components of a CPU?


Answer:
The key components of a CPU (Central Processing Unit) are the core parts responsible for
executing instructions and processing data. Here's a breakdown of the main components:

i. Arithmetic Logic Unit (ALU)
 Function: Performs arithmetic operations (like addition and subtraction) and logical operations (like AND, OR, NOT).
 Role: It's the "math brain" of the CPU.

ii. Control Unit (CU)
 Function: Directs the flow of data and instructions within the CPU.
 Role: Acts like a traffic controller, fetching, decoding, and executing instructions by coordinating the ALU, memory, and I/O.

iii. Registers
 Function: Small, high-speed storage locations within the CPU.
 Common Registers:
o Program Counter (PC): Holds the address of the next instruction.
o Instruction Register (IR): Holds the current instruction being executed.
o Accumulator: Temporarily holds data during operations.
o General Purpose Registers: Used for temporary data storage during execution.

iv. Cache Memory
 Function: Small, very fast memory located close to the CPU cores.
 Purpose: Stores frequently used data and instructions to reduce access time compared to RAM.
 Levels:
o L1 Cache (smallest and fastest)
o L2 Cache
o L3 Cache (larger, shared across cores)

v. Clock
 Function: Generates a consistent timing signal to synchronize all CPU operations.
 Clock Speed: Measured in GHz (gigahertz); a higher clock speed means more cycles, and therefore potentially more instructions, per second.

vi. Bus Interface Unit (BIU)
 Function: Manages communication between the CPU and other components (memory, I/O) via buses.

Optional (Modern CPUs):
 Multiple Cores: Allow the CPU to process multiple tasks (threads) simultaneously.
 Instruction Pipelines: For parallel instruction processing.
 Integrated Graphics Processing Unit (iGPU): Some CPUs include built-in graphics capabilities.

4. Explain the memory hierarchy. Why is it important?


Answer:

Memory Hierarchy:
The memory hierarchy is a structure that organizes computer memory/storage systems based on
speed, cost, and capacity. It ranges from the fastest and most expensive (used frequently) to the
slowest and cheapest (used infrequently).
Levels of the Memory Hierarchy (Top to Bottom):

Level                | Type                | Speed     | Cost/Bit  | Size       | Volatility
1. Registers         | Inside the CPU      | Fastest   | Very high | Very small | Volatile
2. Cache             | L1, L2, L3          | Very fast | High      | Small      | Volatile
3. Main Memory       | RAM (DRAM)          | Moderate  | Medium    | Medium     | Volatile
4. Secondary Storage | SSD, HDD            | Slower    | Low       | Large      | Non-volatile
5. Tertiary Storage  | Tape, Cloud Storage | Slowest   | Very low  | Very large | Non-volatile
Q. Why Is Memory Hierarchy Important?
1. Performance Optimization:
o Faster memory (like cache) is close to the CPU, allowing quicker data access and
reducing wait time.
2. Cost Efficiency:
o Faster memory is expensive. A hierarchy allows a blend of speed and
affordability by using fast memory for critical tasks and slower, cheaper memory
for bulk storage.
3. Efficient Data Management:
o Frequently accessed data is stored in higher (faster) levels, while rarely accessed
data stays in lower levels — improving overall speed.
4. Scalability:
o As programs grow in size and complexity, the hierarchy allows systems to scale
without a massive cost increase.

5. What is cache memory? What are different cache mapping techniques (direct, associative, and
set-associative)?
Answer:
Cache Memory:
Cache memory is a small, high-speed memory located close to the CPU (often inside it) that
stores frequently accessed data and instructions to speed up processing.
Since accessing data from main memory (RAM) is relatively slow, the cache reduces the
average time to access memory by keeping copies of frequently used data closer to the CPU.
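The benefit is often summarized as the average memory access time (AMAT). As a worked illustration with assumed numbers (not from the source): AMAT = hit time + miss rate × miss penalty. With a 1 ns cache hit time, a 5% miss rate, and a 100 ns penalty for going to RAM, AMAT = 1 + 0.05 × 100 = 6 ns, versus 100 ns if every access went to main memory.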

Q. Why Is It Fast?
 Cache uses SRAM (Static RAM), which is faster (but more expensive) than the DRAM used in main memory.
 It operates at close to CPU speed and drastically improves performance by reducing memory access latency.

🔍 Cache Levels:
 L1: Smallest, fastest, located on the processor core.
 L2: Larger, slightly slower, also often on the chip.
 L3: Even larger, shared between cores, slower than L1/L2 but faster than RAM.
Cache Mapping Techniques:
These are the methods used to place and find data in the cache from main memory.
I. Direct Mapping
Q. How It Works:
 Each block of main memory maps to exactly one cache line.
 Simple and fast, but can cause conflicts if multiple blocks map to the same line.

Formula (see the C sketch below):
Cache Line Index = (Block Address) MOD (Number of Cache Lines)
Pros:
 Simple and cheap to implement.
 Fast access time.

Cons:
 High chance of cache misses if multiple data blocks map to the same line (conflict misses).
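To make the formula concrete, here is a minimal C sketch that splits a byte address into tag, index, and offset for an assumed direct-mapped cache (64-byte lines, 256 lines, i.e., 16 KB; all sizes are illustrative):

#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64u    /* bytes per cache line (assumed) */
#define NUM_LINES 256u   /* number of cache lines (assumed) */

int main(void) {
    uint32_t addr   = 0x0001A2C4u;        /* example byte address */
    uint32_t block  = addr / LINE_SIZE;   /* main-memory block number */
    uint32_t index  = block % NUM_LINES;  /* the one line this block may use */
    uint32_t tag    = block / NUM_LINES;  /* stored to identify which block is cached */
    uint32_t offset = addr % LINE_SIZE;   /* byte position within the line */

    printf("index=%u tag=%u offset=%u\n",
           (unsigned)index, (unsigned)tag, (unsigned)offset);
    return 0;
}

Two blocks that share the same index evict each other, which is exactly the conflict-miss behavior described above.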

II. Fully Associative Mapping


Q. How It Works:
 Any block from main memory can be stored in any cache line.
 Cache lines are searched by comparing tags.

Pros:
 Very low conflict misses.
 More flexible placement.
Cons:
 Expensive and slower to implement (needs hardware to compare tags for all cache lines).

III. Set-Associative Mapping (Hybrid Approach)


Q. How It Works:
 Cache is divided into sets.
 A block maps to one set, but can occupy any line within that set.
 Common types: 2-way, 4-way, 8-way set-associative (the number is the lines per set).

Example (4-way):
 A block maps to exactly one of N sets, but can be placed in any of the 4 lines within that set (see the sketch below).
Pros:
 Balance between speed and flexibility.
 Lower conflict rate than direct mapping.
 Less expensive than fully associative.

Cons:
 Slightly more complex than direct mapping.
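A minimal C sketch of a set-associative lookup, assuming a 4-way cache with 64 sets and 64-byte lines (all sizes illustrative): the index now selects a set rather than a single line, and the tag is compared against every line in that set.

#include <stdbool.h>
#include <stdint.h>

#define WAYS      4u    /* lines per set (4-way, assumed) */
#define NUM_SETS  64u   /* number of sets (assumed) */
#define LINE_SIZE 64u   /* bytes per line (assumed) */

typedef struct {
    bool     valid;
    uint32_t tag;
} CacheLine;

static CacheLine cache[NUM_SETS][WAYS];

/* Returns true on a hit. On a miss, a real cache would choose a
   victim line within the set (e.g., the least recently used one). */
bool lookup(uint32_t addr) {
    uint32_t block = addr / LINE_SIZE;
    uint32_t set   = block % NUM_SETS;
    uint32_t tag   = block / NUM_SETS;

    for (uint32_t way = 0; way < WAYS; way++) {
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return true;   /* tag matched somewhere in the set: hit */
    }
    return false;          /* miss */
}

Direct mapping is the special case WAYS = 1, and fully associative mapping is the special case NUM_SETS = 1.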
6. Difference between SRAM and DRAM.

Feature           | SRAM (Static RAM)                            | DRAM (Dynamic RAM)
Full Form         | Static Random Access Memory                  | Dynamic Random Access Memory
Speed             | Faster                                       | Slower
Cost              | More expensive                               | Cheaper
Density           | Lower (takes more space per bit)             | Higher (more compact)
Power Consumption | Lower (no need for frequent refresh)         | Higher (requires periodic refresh)
Storage Cell      | Uses 6 transistors per bit                   | Uses 1 transistor + 1 capacitor per bit
Data Refreshing   | Not required (holds data as long as powered) | Required (data must be refreshed periodically)
Used In           | Cache memory (L1, L2, L3) inside CPU         | Main memory (RAM modules)
Complexity        | More complex design                          | Simpler design
Volatility        | Volatile                                     | Volatile (loses data when power is off)

7. What is virtual memory and how does paging work?


Answer:
Virtual Memory:
Virtual memory is a memory management technique that gives the illusion of a large,
continuous memory space to programs — even if the physical RAM is smaller.
It allows a system to:
 Run larger applications than physical memory would normally allow.
 Isolate processes for security and stability.
 Use disk space as "extra" memory when RAM runs out.

Virtual memory is managed by the Operating System (OS) using a combination of hardware (the Memory Management Unit, or MMU) and software.

Q. How Does Paging Work?


Paging is one of the main techniques used to implement virtual memory.
🔍 Key Concepts:
 Virtual memory is divided into pages (usually 4 KB each).
 Physical memory (RAM) is divided into frames of the same size.
 The OS keeps a page table to map virtual pages to physical frames.

🔍 Process (Simplified):
1. Program requests data at a virtual address.
2. The MMU checks the page table to find the corresponding physical frame.
3. If the page is in RAM (a page hit), it retrieves it.
4. If the page is not in RAM (a page fault):
o The OS loads the page from the disk (swap space) into RAM.
o If RAM is full, it may evict another page to make room (based on a replacement
algorithm like LRU).
🔍 Page Table:
A data structure that maps virtual page numbers to physical frame numbers (see the C sketch below). Each entry can include:
 Present/absent bit (is it in RAM?)
 Access permissions
 Dirty bit (has it been modified?)
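A toy C sketch of the translation step, assuming 4 KB pages, a single-level page table, and an entry holding just a present bit and a frame number (real page tables are multi-level and carry more fields):

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* 4 KB pages */
#define NUM_PAGES 1024u   /* toy virtual address space */

typedef struct {
    bool     present;   /* present/absent bit: is the page in RAM? */
    uint32_t frame;     /* physical frame number */
} PageTableEntry;

static PageTableEntry page_table[NUM_PAGES];

/* Translate a virtual address. Returns false on a page fault, where
   the OS would load the page from swap into a frame and retry. */
bool translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t page   = vaddr / PAGE_SIZE;   /* virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;   /* byte within the page */

    if (page >= NUM_PAGES || !page_table[page].present)
        return false;                      /* page fault */

    *paddr = page_table[page].frame * PAGE_SIZE + offset;
    return true;                           /* page hit */
}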

Page Replacement Algorithms (when RAM is full):


 FIFO (First-In, First-Out)
 LRU (Least Recently Used; sketched below)
 Optimal (theoretical, for comparison)
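As a sketch of the LRU idea, here is a minimal C victim selector that tracks a last-use timestamp per frame; the frame count and the exact-timestamp approach are illustrative assumptions (real kernels use cheaper approximations such as the clock algorithm):

#include <stdint.h>

#define NUM_FRAMES 8u   /* toy number of physical frames (assumed) */

static uint32_t frame_page[NUM_FRAMES];   /* which page occupies each frame */
static uint64_t last_used[NUM_FRAMES];    /* updated on every access to the frame */

/* Pick the frame whose page was referenced least recently. */
uint32_t choose_victim(void) {
    uint32_t victim = 0;
    for (uint32_t f = 1; f < NUM_FRAMES; f++) {
        if (last_used[f] < last_used[victim])
            victim = f;
    }
    /* The OS evicts frame_page[victim], writing it to disk first
       if its dirty bit is set, then reuses the frame. */
    return victim;
}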

Q. Why Use Paging?


 Efficient memory usage
 Process isolation and protection
 No need for contiguous memory allocation
 Can run programs larger than physical memory

8. Explain RISC vs. CISC architectures. Give examples.

Feature                    | RISC (Reduced Instruction Set Computer)  | CISC (Complex Instruction Set Computer)
Instruction Set            | Small, simple, and fixed-length          | Large, complex, and variable-length
Instruction Execution Time | Usually 1 clock cycle per instruction    | Can take multiple cycles
Focus                      | Speed and efficiency                     | Code density and functionality
Memory Access              | Load/store architecture                  | Instructions can access memory directly
Instruction Decoding       | Simple hardware                          | Complex decoding unit
Code Size                  | Larger (more instructions for tasks)     | Smaller (fewer, more powerful instructions)
Compiler Dependency        | Heavily reliant on compiler optimization | Less dependent on compiler
Power Consumption          | Lower (generally more efficient)         | Higher
Examples                   | ARM, MIPS, RISC-V, SPARC                 | x86, Intel 8086, AMD64

RISC (Reduced Instruction Set Computer):


 Uses a small set of simple instructions.
 Every instruction is designed to execute in a single clock cycle.
 Makes use of many general-purpose registers.
 Load/Store Architecture: Memory access happens only through explicit load and store instructions.
Advantages:
 Faster execution per instruction
 Easier to pipeline
 Simpler hardware, lower power

Examples:
 ARM processors (used in smartphones, tablets)
 RISC-V (open-source ISA, growing in popularity)
 MIPS, SPARC

CISC (Complex Instruction Set Computer):


 Has a large set of instructions, some very complex (e.g., one instruction can do a memory load, arithmetic, and a store).
 Instructions are variable in length and may take multiple cycles.
 Designed to reduce the number of instructions per program (thus saving memory in earlier computing eras).

Advantages:
 More powerful individual instructions
 Compact programs (smaller binaries)
 Less burden on compilers

Examples:
 x86 architecture (used in most desktops and laptops)
 Intel 8086, Pentium, AMD Ryzen
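To make the load/store distinction concrete, here is one C statement together with the instruction sequences a compiler might emit for each style; the mnemonics are generic illustrations, not taken from any specific ISA:

/* Increment a counter held in memory. */
int count = 0;

void increment(void) {
    count = count + 1;
    /* Hypothetical RISC (load/store) sequence:
         LOAD  R1, count    ; memory is read into a register
         ADD   R1, R1, 1    ; arithmetic happens only on registers
         STORE R1, count    ; the result is written back explicitly
       Hypothetical CISC sequence:
         INC   [count]      ; one instruction reads, adds, and writes memory */
}

The RISC version needs three simple instructions (larger code, easier to pipeline); the CISC version needs one complex instruction (denser code, more decoding work).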

9. What is pipelining? What are the types of hazards (data, control, structural)?
Answer:
Pipelining:
Pipelining is a technique where multiple instruction stages are overlapped during execution. It’s
similar to an assembly line in a factory.
Instead of waiting for one instruction to finish entirely before starting the next, pipelining breaks
instruction execution into stages:
1. Fetch
2. Decode
3. Execute
4. Memory Access
5. Write Back

Each stage works simultaneously on different instructions, so multiple instructions are in flight
at once.
Goal:
Increase instruction throughput (instructions per unit time) — not the execution time of a
single instruction.
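As a quick worked illustration (numbers assumed, not from the source): with k pipeline stages and n instructions, an ideal pipeline finishes in k + (n - 1) cycles instead of n × k. For n = 100 instructions on the 5-stage pipeline above, that is 5 + 99 = 104 cycles versus 500 cycles unpipelined, roughly a 4.8× speedup.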
Hazards in Pipelining:
While pipelining improves speed, it also introduces hazards — situations that prevent the next
instruction from executing in the next cycle.

There are three main types:

a) Data Hazards:
Occurs when instructions depend on the results of previous instructions that haven’t
completed yet.

Types:
 RAW (Read After Write) – most common:
o ADD R1, R2, R3 ; R1 = R2 + R3
o SUB R4, R1, R5 ; uses R1 before it's written
 WAR (Write After Read) – rare in simple in-order pipelines.
 WAW (Write After Write) – can happen in out-of-order execution.

Solutions:
 Forwarding/Bypassing
 Stalling (pipeline bubble)

b) Control Hazards (Branch Hazards):


Occurs due to branching/jumping instructions where the next instruction isn’t known
immediately.

Example:
BEQ R1, R2, LABEL
 The CPU doesn’t know which instruction to fetch next until the branch decision is
made.
Solutions:
 Branch prediction
 Delayed branching
 Stalling until decision is known

c) Structural Hazards:
Occurs when hardware resources are insufficient to handle overlapping instructions.

Example:
 If the CPU has only one memory unit and an instruction fetch and a data access occur simultaneously, they conflict.
Solutions:
 Duplicate resources (e.g., separate instruction/data caches)
 Pipeline scheduling
10. What is an instruction cycle? Explain its stages.
Answer:
Instruction Cycle:
An instruction cycle is the complete process by which a computer fetches, decodes, and
executes an instruction.
Each instruction a program runs must go through this cycle to be processed by the CPU.

Stages of the Instruction Cycle:

Most commonly, the instruction cycle has five stages:
a. Fetch:
Goal: Get the instruction from memory.
 The Program Counter (PC) holds the address of the next instruction.
 The CPU sends this address to memory and fetches the instruction.
 The instruction is stored in the Instruction Register (IR).
 The PC is updated to point to the next instruction.

b. Decode:
Goal: Understand what the instruction means.
 The Control Unit (CU) interprets the binary instruction.
 It identifies the operation (opcode) and the operands (data or addresses involved).

c. Execute:
Goal: Perform the operation.
 The Arithmetic Logic Unit (ALU) or other CPU components perform the task.
 This could be an arithmetic or logical operation, a memory access, etc.

d. Memory Access (Optional, for some instructions):
Goal: Read or write data in memory.
 If the instruction involves memory (like a load or store), this is when the CPU reads from or writes to memory.

e. Write Back (Optional, for some instructions):
Goal: Save the result.
 The result of the operation (like from the ALU) is written back to a register or memory location.
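A toy C sketch of the whole cycle as a fetch-decode-execute loop over an invented accumulator machine (the 2-byte instruction format and the opcodes are illustrative assumptions, not a real ISA):

#include <stdint.h>

#define MEM_SIZE 256u

static uint8_t memory[MEM_SIZE];   /* holds both program and data (Von Neumann) */
static uint8_t pc  = 0;            /* Program Counter */
static int32_t acc = 0;            /* Accumulator */

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

void run(void) {
    for (;;) {
        /* Fetch: read the instruction at PC and advance PC. */
        uint8_t opcode  = memory[pc++];
        uint8_t operand = memory[pc++];

        /* Decode + Execute, with Memory Access and Write Back
           happening only for the instructions that need them. */
        switch (opcode) {
        case OP_LOAD:  acc = memory[operand];          break;
        case OP_ADD:   acc += memory[operand];         break;
        case OP_STORE: memory[operand] = (uint8_t)acc; break;
        case OP_HALT:  return;
        }
    }
}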

Cycle vs. Clock Cycle:
 An instruction cycle may take multiple clock cycles to complete, depending on complexity and CPU architecture.
 Pipelining helps to overlap instruction cycles and increase throughput.

11. How does DMA (Direct Memory Access) work?


Answer:
DMA (Direct Memory Access):
DMA is a feature that allows hardware devices (like disk drives, sound cards, or network cards)
to transfer data directly to/from main memory without involving the CPU for each byte or
word of data.
Q. Why Use DMA?
Without DMA:
 The CPU must manually read data from a device and write it to memory (or vice versa), slowing it down and wasting cycles.
With DMA:
 The DMA controller takes over the data transfer process, freeing the CPU to do other tasks and increasing efficiency and performance.

Components Involved:
1. I/O Device (e.g., hard disk, NIC, etc.)
2. Main Memory (RAM)
3. DMA Controller
4. CPU (only initiates and gets notified when done)

Q. How DMA Works – Step-by-Step:


1. CPU initializes DMA:
o CPU programs the DMA controller with:
 Source and destination addresses
 Amount of data to transfer
 Direction (read/write); this programming step is sketched in C after the list
2. DMA controller starts the transfer:
o It directly manages the bus to move data between the I/O device and memory.
3. CPU is free:
o While DMA transfers data, the CPU can continue executing other instructions
(unless there's a bus conflict).
4. DMA completes the transfer:
o It sends an interrupt to the CPU signaling that the operation is done.
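A minimal C sketch of step 1, programming a hypothetical memory-mapped DMA controller; the register layout, base address, and control bits are invented for illustration (every real controller defines its own):

#include <stdint.h>

/* Hypothetical DMA controller registers, memory-mapped at an assumed base. */
typedef struct {
    volatile uint32_t src;     /* source address */
    volatile uint32_t dst;     /* destination address */
    volatile uint32_t count;   /* number of bytes to transfer */
    volatile uint32_t ctrl;    /* bit 0: start the transfer */
} DmaRegs;

#define DMA ((DmaRegs *)0x40001000u)   /* assumed base address */
#define DMA_START (1u << 0)

void dma_copy(uint32_t src, uint32_t dst, uint32_t nbytes) {
    DMA->src   = src;
    DMA->dst   = dst;
    DMA->count = nbytes;
    DMA->ctrl  = DMA_START;
    /* The controller now owns the transfer. The CPU returns to other
       work and learns of completion later through an interrupt. */
}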

Modes of DMA Transfer:

Mode             | Description
Burst Mode       | DMA transfers the entire block in one go; locks the bus during the transfer.
Cycle Stealing   | DMA transfers one byte/word at a time, "stealing" CPU cycles occasionally.
Transparent Mode | DMA transfers only when the CPU is idle; causes no interference but is slower.

Benefits of DMA:
 Increases system throughput
 Reduces CPU load
 Enables efficient handling of large data transfers (like file copying, streaming, etc.)

12. Explain I/O mapped I/O vs. memory-mapped I/O.


Answer:
I/O Communication:
When a CPU wants to send data to or receive data from an I/O device, it needs a way to address and access the device's registers (e.g., status, control, data buffers). The two main methods are:
methods are:
A. I/O-Mapped I/O (Also Called Isolated I/O):
How It Works:
 I/O devices are given a separate address space.
 The CPU uses special instructions to access them:
o Example: IN and OUT instructions in x86.

Characteristics:
 Separate address space for memory and I/O.
 The CPU knows whether it's accessing memory or I/O based on the instruction used.
 Typically fewer address lines are needed for I/O.

Pros:
 Keeps memory and I/O spaces separate, with no overlap.
 May use simpler, faster hardware for basic I/O operations.

Cons:
 Requires special CPU instructions.
 Less flexible (regular load/store instructions can't be used on I/O).

B. Memory-Mapped I/O:
How It Works:
 I/O devices are assigned addresses within the same address space as RAM.
 The CPU accesses I/O just like it accesses memory, using standard instructions (like MOV, LOAD, STORE); see the C sketch below.

Characteristics:
 No separate I/O instructions are needed; regular memory instructions work.
 Requires careful address allocation to avoid overlapping memory and I/O.

Pros:
 Simple and flexible (e.g., the full set of memory instructions can be used on I/O).
 Easier to design for systems with a unified memory and I/O space.
 Can use powerful addressing modes and optimizations.

Cons:
 Reduces the available address space for RAM.
 May need more complex address decoding.
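A minimal C sketch of memory-mapped I/O: a device register is read and written through an ordinary pointer, so a plain store performs the I/O. The UART addresses and bit meanings are assumed for illustration:

#include <stdint.h>

/* Hypothetical UART registers at assumed addresses. 'volatile' stops the
   compiler from caching or reordering accesses, because the device, not
   the program, controls what these locations contain. */
#define UART_STATUS (*(volatile uint8_t *)0x10000000u)
#define UART_DATA   (*(volatile uint8_t *)0x10000004u)
#define TX_READY    (1u << 0)

void putchar_mmio(char c) {
    while (!(UART_STATUS & TX_READY))
        ;                        /* poll until the device can take a byte */
    UART_DATA = (uint8_t)c;      /* an ordinary memory store performs the I/O */
}

With I/O-mapped (isolated) I/O, the same operation would instead need a dedicated instruction such as x86 OUT, which ordinary C pointer dereferences cannot express directly.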

Comparison Table:

Feature             | I/O-Mapped I/O                           | Memory-Mapped I/O
Address Space       | Separate I/O address space               | Shared with memory
Instructions        | Special I/O instructions (e.g., IN, OUT) | Regular memory instructions
Address Size        | Typically smaller                        | Same as memory address size
Hardware Simplicity | Simpler for small systems                | More integrated/flexible
Flexibility         | Less flexible                            | More flexible
