Cso Micro All Units

The document outlines the architecture and functional units of a computer, including the CPU, memory, and input/output devices, emphasizing the fetch-decode-execute cycle. It discusses the Von Neumann architecture, types of control units (hardwired and microprogrammed), and various operations such as arithmetic, logical, and shift operations. Additionally, it addresses the limitations of the Von Neumann architecture, particularly the bottleneck in data transfer rates between the CPU and memory.

A computer has five functional units:
Input unit: Consists of input devices that convert data into binary language. Input devices include keyboards, mice, joysticks, and scanners.
Memory unit: Stores program information.
Arithmetic and logic unit: Also known as the ALU.
Output unit: The fifth functional unit.
Control unit: A functionally independent main part.

Basic operational Concepts:
These five units work together in a cycle called the fetch-decode-execute cycle:
Fetch: The control unit retrieves an instruction from memory.
Decode: The control unit breaks down the instruction into its components (operation, operands) and sends them to the ALU.
Execute: The ALU performs the operation on the operands and stores the result back in memory or sends it to the output unit.

➢ Buffer Registers:
▫ The devices connected to a bus vary widely in their speed of operation.
▫ To synchronize their operational speed, buffer registers are included with the devices to hold the information during transfers.
▫ Buffer registers prevent a high-speed processor from being locked to a slow I/O device during data transfers.

❑Shift micro-operations (continuation of the Shift micro-operations section further below; the page layout split it here):
b) Circular Shift:
▫ This circulates or rotates the bits of a register around the two ends without any loss of data or contents. In this, the serial output of the shift register is connected to its serial input.
▫ "cil" and "cir" are used for circular shift left and circular shift right respectively.
c) Arithmetic Shift:
▫ This shifts a signed binary number to the left or right.
▫ An arithmetic shift left multiplies a signed binary number by 2; an arithmetic shift right divides it by 2.
▫ An arithmetic shift leaves the sign bit unchanged, because a signed number keeps its sign when it is multiplied or divided by 2.
▫ A left arithmetic shift must be checked for overflow.
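The three kinds of shift can be sketched on a small fixed-width register. The 4-bit width, the helper names and the example values below are mine, chosen only to illustrate the definitions above:

```python
WIDTH = 4  # toy 4-bit register

def shl(r):   # logical shift left: 0 enters at the right-most position
    return (r << 1) & 0b1111

def shr(r):   # logical shift right: 0 enters at the left-most position
    return r >> 1

def cil(r):   # circular shift left: the bit leaving on the left re-enters on the right
    return ((r << 1) | (r >> (WIDTH - 1))) & 0b1111

def ashr(r):  # arithmetic shift right: the sign bit is replicated, so the sign is kept
    sign = r & 0b1000
    return (r >> 1) | sign

# 1011 is -5 in 4-bit two's complement
print(format(shl(0b1011), "04b"))   # 0110  (top bit lost: overflow must be checked on left shifts)
print(format(cil(0b1011), "04b"))   # 0111  (no bit lost: it rotated around)
print(format(ashr(0b1011), "04b"))  # 1101  (-3, i.e. -5 divided by 2, sign preserved)
```

The contrast between `shr` and `ashr` on the same input is exactly the logical-vs-arithmetic distinction: one fills with 0, the other with the sign bit.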
❑Von Neumann architecture:
• Von Neumann architecture was first published by John von Neumann in 1945.
• His computer architecture design consists of a Control Unit, an Arithmetic and Logic Unit (ALU), a Memory Unit, Registers and Inputs/Outputs.
• Historically there have been 2 types of computers:
1. Fixed Program Computers – their function is very specific and they could not be reprogrammed, e.g. calculators.
2. Stored Program Computers – these can be programmed to carry out many different tasks; applications are stored on them, hence the name.
• Von Neumann architecture is based on the stored-program computer concept, where instruction data and program data are stored in the same memory. This design is still used in most computers produced today.

Register Transfer:
Definition: The process of moving data between registers within a computer system, under the control of the control unit.
Purpose: To enable data manipulation, storage, and retrieval for various operations.
Key Components:
Registers: Small, high-speed storage units within the processor.
Buses: Data pathways connecting registers and other components.
Control Unit: Orchestrates data flow and register operations.
Register Transfer Language (RTL): Symbolic notation for describing register transfers. Used for:
Modeling hardware behavior at a detailed level.
Designing digital circuits and systems.
Verifying correctness of hardware designs.
Example: R1 ← R2 + R3 (Add the contents of R2 and R3, store the result in R1)
Types of Register Transfer Operations:
Register-to-register transfer: Data moves between registers.
Register-to-memory transfer: Data moves between a register and memory.
Memory-to-register transfer: Data moves from memory to a register.
Arithmetic operations: Performed on data in registers (e.g., addition, subtraction).
Logical operations: Performed on data in registers (e.g., AND, OR, NOT).
Shift operations: Shift data bits within a register (e.g., left shift, right shift).

Micro Programmed Control:
The function of the control unit in a digital computer is to initiate sequences of micro-operations. A control unit can be implemented in two ways:
o Hardwired control
o Microprogrammed control
Hardwired Control:
When the control signals are generated by hardware using conventional logic design techniques, the control unit is said to be hardwired.
The key characteristics are:
High speed of operation
Expensive
Relatively complex
No flexibility for adding new instructions
Examples of CPUs with a hardwired control unit are the Intel 8085, Motorola 6802, Zilog Z80, and most RISC CPUs.
Microprogrammed Control:
Control information is stored in control memory.
Control memory is programmed to initiate the required sequence of micro-operations.
The key characteristics are:
Speed of operation is low compared with hardwired control
Less complex
Less expensive
Flexibility to add new instructions
Examples of CPUs with a microprogrammed control unit are the Intel 8080, Motorola 68000 and most CISC CPUs.

What is Register Transfer Language (RTL)?
A symbolic notation used to describe the micro-operations that move data between registers within a digital system.
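A statement like R1 ← R2 + R3 can be mimicked in a few lines. The dictionary-of-registers model below is only an illustration of the notation, not a hardware description:

```python
# registers modeled as a simple name -> value mapping (illustrative only)
regs = {"R1": 0, "R2": 7, "R3": 5}

# R1 <- R2 + R3 : the destination register is overwritten in a single step
regs["R1"] = regs["R2"] + regs["R3"]

print(regs["R1"])  # 12
```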
RTL provides a concise and precise way to model the hardware-level behavior of a system, independent of any specific hardware implementation. It's like a language that hardware designers use to communicate and document their designs.
RTL Syntax:
Uses symbols to represent registers, operations, and control signals.
Example: R1 ← R2 + R3 (Add the contents of registers R2 and R3, and store the result in R1)
Memory Transfer Operations: Refer to the fundamental processes of reading data from memory and writing data to memory.
Common Notation:
DR ← M[AR] (Read operation): Transfers data from the memory location specified by the address register (AR) to the data register (DR).
M[AR] ← DR (Write operation): Transfers data from the data register (DR) to the memory location specified by AR.

➢ Central Processing Unit (CPU):
▫ The Central Processing Unit (CPU) is the electronic circuit responsible for executing the instructions of a computer program.
▫ It is sometimes referred to as the microprocessor or processor.
▫ The CPU contains the ALU, the CU and a variety of registers.
▪ Registers: Registers are high-speed storage areas in the CPU. All data must be stored in a register before it can be processed.
MAR (Memory Address Register): Holds the memory location of data that needs to be accessed.
MDR (Memory Data Register): Holds data that is being transferred to or from memory.
AC (Accumulator): Where intermediate arithmetic and logic results are stored.
PC (Program Counter): Contains the address of the next instruction to be executed.
CIR (Current Instruction Register): Contains the current instruction during processing.
▪ Arithmetic and Logic Unit (ALU): The ALU allows arithmetic (add, subtract, etc.) and logic (AND, OR, NOT, etc.) operations to be carried out.
▪ Control Unit (CU): The control unit controls the operation of the computer's ALU, memory and input/output devices, telling them how to respond to the program instructions it has just read and interpreted from the memory unit.

What is a microprogrammed control unit (MCU)?
An MCU is a type of control unit that uses a microprogram to control the operation of the processor.
MCUs are generally slower than hardwired control units, but they offer advantages like flexibility and easier programmability.
What is a microprogram?
A microprogram is a set of low-level instructions that specify the exact sequence of micro-operations needed to execute a single machine instruction.
Each micro-operation involves activating specific control signals within the processor to perform tasks like fetching data, decoding instructions, or performing arithmetic operations.
What is control memory?
Control memory is a type of read-only memory (ROM) that stores the microprogram.
During the fetch-decode-execute cycle, the control unit retrieves the appropriate microinstruction from control memory based on the current machine instruction being executed.
The microinstruction then provides the control signals necessary to perform the next micro-operation.
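The idea can be sketched as a small table of microinstructions indexed by an address register. The signal names and this three-step "microprogram" are invented for illustration only:

```python
# control memory: address -> set of control signals asserted in that step (illustrative)
control_memory = [
    {"mem_read", "load_MAR"},   # 0: fetch the operand address
    {"alu_add", "load_AC"},     # 1: add into the accumulator
    {"mem_write"},              # 2: write the result back
]

car = 0            # Control Address Register
signals_fired = []
while car < len(control_memory):
    signals_fired.append(sorted(control_memory[car]))  # "execute" the microinstruction
    car += 1                                           # simplest sequencing: increment the CAR

print(signals_fired[1])  # ['alu_add', 'load_AC']
```

Each loop iteration stands for one micro-operation: the current microinstruction supplies the control signals, and the CAR picks the next one.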
The control unit also provides the timing and control signals required by other computer components.
➢ Memory Unit:
▫ The memory unit is usually primary memory. Primary memory consists of Random Access Memory (RAM) and Read-Only Memory (ROM).
▫ The RAM is used to store data that is currently in use: while the computer is on, data in use is loaded into the RAM. However, since this is volatile (temporary) memory, once the computer is off, all the data that was in the RAM is lost.
▫ The ROM is used to store permanent data and basic instructions such as the BIOS/startup instructions for your computer. This differs from RAM in that the memory is non-volatile (permanent): even when the computer is off, this data is retained in the ROM.
➢ Input/Output Devices:
▫ Input devices: devices that send information into the computer, e.g. keyboard, mouse, microphone, touchscreen.
▫ Output devices: devices that send information out of the computer, e.g. monitor, speaker, printer.
Buses:
 Data is transmitted from one part of a computer to another, connecting all major internal components to the CPU and memory, by means of buses.
 A standard CPU system bus is comprised of a control bus, a data bus and an address bus.
Types of Buses:
Address Bus: Carries the addresses of data (but not the data itself) between the processor and memory.
Data Bus: Carries data between the processor, the memory unit and the input/output devices.
Control Bus: Carries control signals/commands from the CPU (and status signals from other devices) in order to control and coordinate all the activities within the computer.
von Neumann Bottleneck: Limitations of von Neumann Architecture
 It is the computing-system throughput limitation due to the inadequate rate of data transfer between memory and the CPU.
 The VNB causes the CPU to wait and idle for a certain amount of time while low-speed memory is being accessed.
 The VNB is named after John von Neumann, the computer scientist credited with the invention of the bus-based computer architecture.
 To allow faster memory access, various distributed-memory "non-von" systems have been proposed.

Arithmetic micro-operations:
• Some of the basic micro-operations are addition, subtraction, increment and decrement.
➢ Add Micro-Operation:
▫ R3 ← R1 + R2. This statement instructs that the contents of register R1 be added to the contents of register R2 and the sum be transferred to register R3.
➢ Subtract Micro-Operation:
▫ Example: R3 ← R1 + R2' + 1. In the subtract micro-operation, instead of using a minus operator we take the 1's complement of the subtrahend and add 1, i.e. R1 - R2 is equivalent to R3 ← R1 + R2' + 1.
➢ Increment/Decrement Micro-Operation:
▫ Increment and decrement operations are generally performed by adding or subtracting 1 to or from the register respectively: R1 ← R1 + 1, R1 ← R1 - 1.

Symbolic Designation   | Description
R3 ← R1 + R2           | Contents of R1 + R2 transferred to R3.
R3 ← R1 - R2           | Contents of R1 - R2 transferred to R3.
R2 ← (R2)'             | Complement the contents of R2.
R2 ← (R2)' + 1         | 2's complement the contents of R2.
R3 ← R1 + (R2)' + 1    | R1 + the 2's complement of R2 (subtraction).
R1 ← R1 + 1            | Increment the contents of R1 by 1.
R1 ← R1 - 1            | Decrement the contents of R1 by 1.

❑Logic micro-operations:
• These are binary micro-operations performed on the bits stored in the registers. They consider each bit separately and treat it as a binary variable.
• Consider the XOR micro-operation on the contents of two registers R1 and R2: P: R1 ← R1 XOR R2. The statement also includes a control function P. Assume each register has 3 bits, and let the content of R1 be 010 and R2 be 100. The XOR micro-operation gives R1 = 110.

❑Shift micro-operations:
• These are used for serial transfer of data: the contents of the register can be shifted to the left or right. In a shift left operation the serial input transfers a bit into the right-most position; in a shift right operation the serial input transfers a bit into the left-most position.
• There are three types of shifts:
a) Logical Shift:
▫ It transfers 0 through the serial input. The symbol "shl" is used for logical shift left and "shr" for logical shift right: R1 ← shl R1, R1 ← shr R1.
▫ The register symbol must be the same on both sides of the arrow.
b) Circular Shift and c) Arithmetic Shift: described at the top of these notes, where the page layout placed them.

Address sequencing:
Address sequencing is a fundamental concept in computer architecture that refers to the process of determining the next address in control memory where the next microinstruction for executing a machine instruction is stored. It's like following a roadmap to navigate through the microprogram stored in control memory.
Purpose:
Defines the order in which microinstructions are fetched from control memory to execute a machine instruction.
Ensures smooth and efficient execution of the machine instruction by fetching the right microinstructions at the right time.
Capabilities:
Incrementing the Control Address Register (CAR): This is the most basic capability, where the address in the CAR is simply incremented to fetch the next microinstruction in sequence.
Conditional Branching: Based on the outcome of a previous operation (e.g., the value of a status register), the address sequencing can jump to a different location in control memory to fetch the next microinstruction. This allows for conditional execution of different parts of the microprogram.
Unconditional Branching: Similar to conditional branching, but the jump to a different location happens regardless of any condition. It is often used for loops or subroutine calls.
Subroutine Calls and Returns: Address sequencing facilitates calling subroutines, smaller routines that can be used repeatedly within a program. It keeps track of the return address so that after the subroutine executes, control flow can return to the main program.
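These sequencing capabilities can be sketched with a CAR, a zero flag and a one-entry return register (often called SBR). The microprogram layout and the operation names below are invented for illustration:

```python
# each microinstruction: (operation, branch_target) -- an illustrative encoding
#   "next"       : increment the CAR
#   "brz"        : branch to target if the zero flag is set, else fall through
#   "call"/"ret" : subroutine call and return via the SBR
microprogram = {
    0: ("next", None),
    1: ("brz", 5),      # conditional branch
    2: ("call", 7),     # subroutine call
    3: ("next", None),
    7: ("ret", None),   # subroutine body (one step), then return
}

def run(zero_flag):
    car, sbr, trace = 0, None, []
    while car in microprogram:
        op, target = microprogram[car]
        trace.append(car)
        if op == "brz" and zero_flag:
            car = target
        elif op == "call":
            sbr = car + 1       # save the return address
            car = target
        elif op == "ret":
            car = sbr           # return to the saved address
        else:
            car += 1
    return trace

print(run(zero_flag=False))  # [0, 1, 2, 7, 3]: falls through the branch, calls 7, returns to 3
print(run(zero_flag=True))   # [0, 1]: branches to 5, which is undefined here, so sequencing stops
```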
Importance:
Efficient address sequencing is crucial for optimal performance of the processor. Any delays or errors in fetching the correct microinstructions can significantly slow down the execution of the machine instruction.
It enables complex operations by breaking machine instructions down into smaller, manageable microinstructions and fetching them in the correct order.

The design of a control unit involves defining the hardware and logic necessary to sequence and orchestrate the execution of instructions within a processor. It plays a crucial role in directing the flow of data, activating various units in the processor, and ultimately determining the behavior of the computer system.
Components:
Instruction Register (IR): Stores the currently fetched instruction being executed.
Program Counter (PC): Points to the address of the next instruction to be fetched.
Decoder: Decodes the opcode (operation code) in the instruction to determine the operation to be performed.
Control Logic: Generates control signals based on the decoded opcode and other inputs like flags and interrupt signals.
Sequential Logic: Responsible for updating the PC and fetching the next instruction after completion of the current one.
Control Memory (MCU only): Stores the microprogram, a sequence of microinstructions detailing the steps for executing each machine instruction.

Benefits of Vector Processing (continuation of the Vector Processing section below):
Reduced Programming Complexity: Vector instructions can simplify the code for performing repetitive operations on large arrays, making it easier for programmers to express these types of computations.
Challenges of Vector Processing:
Increased Hardware Complexity: Vector processors require specialized hardware, such as vector registers and parallel processing units, which can increase the cost and complexity of the processor.
Not all algorithms benefit: Not all algorithms are well-suited for vector processing. Some algorithms have dependencies between data elements that prevent them from being processed in parallel.
Memory Access Bottleneck: Accessing data in memory can be a bottleneck for vector processing, as fetching large vectors can take longer than the actual operation itself.
Applications of Vector Processing:
Scientific Computing: Vector processing is widely used in scientific computing applications that involve large datasets, such as weather forecasting, climate modeling, and computational fluid dynamics.
Image and Signal Processing: Vector processing is also used in image and signal processing applications, such as filtering, compression, and transformation of images and audio signals.
Machine Learning: Vector processing plays an increasingly important role in machine learning applications that involve training algorithms on large datasets, such as deep learning and image recognition.

Pipelining is used to improve overall performance.
Features of the RISC pipeline:
A RISC pipeline can use many registers to decrease processor-memory traffic and enhance operand referencing.
It keeps the most frequently accessed operands in the CPU registers.
In the RISC pipeline, simplified instructions are used, leaving out complex instructions.
Register-to-memory operations are reduced.
Instructions take a single clock cycle to execute.

What is array Processing:
The term "array processing" can be interpreted in two different ways, depending on the context: 1. processing data within an array, and 2. array processors; both are discussed below.

What is Pipelining:
In computer architecture, pipelining refers to a technique for improving instruction execution speed by overlapping the execution stages of different instructions. Imagine it like an assembly line in a factory, where multiple stages of production happen simultaneously on different products.
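The assembly-line payoff can be quantified: with k stages and n instructions, an ideal pipeline finishes in n + k - 1 cycles instead of n × k. A quick sketch (the stage and instruction counts are chosen arbitrarily):

```python
def cycles(n_instructions, k_stages, pipelined):
    if pipelined:
        # after the first instruction fills the pipe, one instruction completes per cycle
        return n_instructions + k_stages - 1
    # without pipelining, each instruction runs start to finish alone
    return n_instructions * k_stages

# 10 instructions through the classic 5 stages
print(cycles(10, 5, pipelined=False))  # 50
print(cycles(10, 5, pipelined=True))   # 14
```

This ideal count ignores the hazards and stalls discussed below, which is why real speedups fall short of the k-fold maximum.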
Here's how pipelining works:
Instruction Fetch: The processor fetches an instruction from memory.
Instruction Decode: The instruction is decoded to determine the operation to be performed and the operands needed.
Operand Fetch: The operands (data) required for the operation are fetched from memory or registers.
Execution: The ALU (Arithmetic Logic Unit) performs the operation on the operands.
Write Back: The result of the operation is written back to a register or memory location.

Benefits of Pipelining:
Increased processor speed: Pipelining significantly improves instruction throughput, potentially doubling or even tripling the execution speed compared to a non-pipelined processor.
Improved efficiency: Resources are used more effectively, reducing idle time and maximizing the utilization of the processor's components.
Challenges/limitations of Pipelining:
Increased complexity: Pipelined processors require more complex hardware and control logic to manage the overlaps and potential hazards (data dependencies between instructions).
Pipeline hazards: In certain situations, instructions waiting in the pipeline may have to stall or be flushed due to data dependencies, which can reduce the overall speedup.

What is an Instruction Pipeline:
Instruction pipelining is a specific type of pipelining used in computer architecture to improve the speed of instruction execution by dividing the instruction cycle into smaller, overlapping stages that run concurrently. Think of it like an assembly line in a factory, where different parts of the same instruction are processed simultaneously at different "stations" within the processor.

What is an Arithmetic Pipeline:
An arithmetic pipeline is a technique used in computer architecture to improve the performance of arithmetic operations, particularly multiplication and floating-point calculations. It works by dividing the operation into smaller, overlapping stages that can be executed concurrently on different units within the processor. This is similar to how an assembly line in a factory works, where different parts of the same product are processed simultaneously at different stations.

Example (RISC pipeline):
Consider instruction execution on a RISC architecture. In RISC machines, most operations are register-to-register, so an instruction can be performed in two phases:
F: Instruction Fetch, to get the instruction.
E: Execute the instruction on register operands and store the result in a register.
Generally, memory access in RISC is performed through STORE and LOAD operations. For these types of instructions, the following steps are required:
F: Instruction Fetch, to get the instruction.
E: Effective address calculation for the required memory operand.
D: Register-to-memory or memory-to-register data transfer through the bus.

Importance of RISC:
The importance of RISC processors is as follows:
▫ Register-based execution
▫ Fixed-length instructions and a fixed instruction format
▫ Few, powerful instructions
▫ Hardwired control unit
▫ Highly pipelined superscalar architecture
▫ Highly integrated architecture
Advantages of RISC:
▫ The RISC architecture allows developers the freedom to make use of the space on the microprocessor.
▫ RISC allows high-level language compilers to generate efficient code due to the architecture having a simple set of instructions.
▫ RISC processors utilize only a few parameters; since RISC instructions are fixed-length, they are easy to pipeline.
▫ RISC reduces the execution time while increasing the overall operation speed and efficiency.
▫ RISC is relatively simple because it has very few instruction formats, a small number of instructions and a small number of addressing modes.

1. Processing data within an array:
In this context, array processing refers to the act of applying operations to all elements of an array of data simultaneously. This can involve simple operations like addition or multiplication, as well as more complex operations like filtering, sorting, or performing statistical analyses.
Array processing can be done in different ways:
Using traditional loops: This is the most basic approach, where you iterate through each element of the array individually and apply the desired operation. However, this can be slow and inefficient for large arrays.
Using specialized libraries or frameworks: Many programming languages and libraries offer optimized functions for performing common operations on arrays. These functions use vectorization and other techniques to improve performance.
Using parallel processing techniques: If you have multiple processors or cores available, you can distribute the work of processing the array across them. This can significantly improve performance for large arrays.
2. Array processors:
In a different context, "array processing" can also refer to a specific type of hardware processor designed to efficiently handle computations involving large arrays of data. These processors typically have special features like:
Vector registers: These registers can hold multiple data elements from an array, allowing for faster access and manipulation compared to regular registers.
Parallel processing units: These units can perform the same operation on all elements of a vector simultaneously, further improving performance.
Specialized instructions: Array processors often have instructions designed specifically for manipulating arrays, such as vector addition, multiplication, and sorting.
Examples of applications that benefit from array processing include:
Scientific computing: Calculations involving large datasets in areas like weather forecasting, climate modeling, and fluid dynamics.
Image and signal processing: Operations like filtering, compression, and transformation of images and audio signals.
Machine learning: Training algorithms on large datasets for tasks like natural language processing and image recognition.

What is Vector Processing:
Vector processing is a technique used in computer architecture to improve the performance of computations that involve operating on large arrays of data. Instead of processing each element of the array individually, vector processors can apply the same operation to all elements in parallel, significantly boosting speed. Think of it like processing a bunch of apples on a conveyor belt instead of doing them one by one.
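The element-at-a-time vs. all-at-once contrast can be sketched in plain Python. The "vector" form below is simulated with a comprehension; it illustrates the programming model only, not real SIMD hardware:

```python
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]

# scalar style: one element processed per "step"
out_scalar = []
for i in range(len(a)):
    out_scalar.append(a[i] + b[i])

# vector style: one conceptual instruction applied to every element pair
out_vector = [x + y for x, y in zip(a, b)]

print(out_vector)  # [11, 22, 33, 44]
```

On a vector processor the second form would correspond to a single vector-add instruction over vector registers holding all four elements.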
How vector processing works:
Fetch Vector Instruction: The processor fetches an instruction that specifies the operation to be performed on the vector (array) of data.
Load Vector Data: The vector data is loaded from memory into special registers within the processor called vector registers. These registers can hold multiple data elements, unlike regular registers which hold only one.
Execute Vector Operation: The ALU (Arithmetic Logic Unit) performs the specified operation on all elements of the vector data simultaneously, using specialized parallel processing units.
Store Vector Result: The result of the operation is stored back to a vector register or memory location.
Benefits of Vector Processing:
Increased Performance: Vector processing can significantly improve the speed of computations that involve large arrays of data, especially for operations like addition, subtraction, multiplication, and other basic arithmetic operations.
Improved Efficiency: By processing multiple data elements simultaneously, vector processors can utilize the processor's resources more effectively, reducing idle time and maximizing throughput.

RISC architecture:
Reduced Instruction Set Computer is a kind of Instruction Set Architecture with a lower cycles-per-instruction (CPI) count than CISC.
RISC is a load/store architecture: memory is only accessed through specific instructions rather than as a part of most instructions.
RISC architecture is widely used across various platforms, from cellular telephones to the fastest supercomputers.
RISC Pipeline:
Pipelining, a standard feature in RISC processors, is like an assembly line. Because the processor works on different steps of the instruction at the same time, more instructions can be executed in a short time. The steps vary between processors, but they are generally variations of the five stages described under pipelining above (instruction fetch, decode, operand fetch, execute, write back).

An instruction set, sometimes called an Instruction Set Architecture (ISA), is the fundamental collection of instructions that a microprocessor can understand and execute. It's like a dictionary defining the commands the processor can interpret and the operations it can perform. Understanding the instruction set is crucial for programmers and computer architects as it determines the capabilities and limitations of the processor.
Here are some key aspects of microprocessor instruction sets:
Components of an Instruction:
Opcode: This is the code that identifies the specific operation to be performed.
Operands: These are the data elements involved in the operation, such as register numbers or memory addresses.
Addressing modes: These specify how the operands are located in memory or registers.
Flags: These are status bits that indicate the outcome of previous operations, like carry or overflow.
Categories of Instructions:
Arithmetic and logic instructions: These perform basic operations like addition, subtraction, multiplication, and comparisons.
Data transfer instructions: These move data between registers, memory, and input/output devices.
Control flow instructions: These control the flow of execution > Where are operands stored? - registers, memory, stack, accumulator Stack Segment Register (SS): Stack segment holds addresses and data
within a program, including loops, branches, and subroutine > How many explicit operands are there? - 0, 1, 2, or 3 of subroutines. It also holds the contents of registers and memory
calls. How is the operand location specified? - register, immediate, indirect, . locations given in PUSH instruction.
Processor control instructions: These manage the internal Extra Segment Register (ES): Extra segment holds the destination
..
state of the processor, such as setting flags or enabling/disabling addresses of some data of certain string instructions.
interrupts. > What type & size of operands are supported? - byte, int, float, double,
string, vector. . . Instruction Pointer (IP): The instruction pointer in the 8086
microprocessor acts as a program counter. It indicates to the address of
Types of Instruction Sets: > What operations are supported? - add, sub, mul, move, compare . .
the next instruction to be executed.
Complex Instruction Set Architecture (CISC): Offers a large
Execution Unit (EU):
and diverse set of instructions, often tailored for specific Registers Advantages and Disadvantages The EU receives opcode of an instruction from the queue, decodes it
tasks. Examples include x86 and ARM. Advantages:
Examples of CISC processors are AMD, Intel x86, and the System/360. and then executes it. While Execution, unit decodes or executes an
Faster than cache or main memory (no addressing mode) instruction, then the BIU fetches instruction codes from the memory
CISC Architecture:
Deterministic (no misses) and stores them in the queue.
Can replicate (multiple read ports) The BIU and EU operate in parallel independently. This makes
Short identifier (typically 3 to 8 bits) processing faster.
Reduce memory traffic General purpose registers, stack pointer, base pointer and index
Disadvantages: registers, ALU, flag registers (FLAGS), instruction decoder and timing
Need to save and restore on procedure calls and context switch and control unit constitute execution unit (EU). Let's discuss them:
Can’t take the address of a register (for pointers) General Purpose Registers: There are four 16-bit general purpose
registers: AX (Accumulator Register), BX (Base Register), CX (Counter)
Fixed size (can’t store strings or structures efficiently) Compiler must
and DX. Each of these 16-bit registers are further subdivided into 8-bit
manage
registers as shown below:
16-bit register    8-bit high-order    8-bit low-order
AX                 AH                  AL
BX                 BH                  BL
CX                 CH                  CL
DX                 DH                  DL
Introduction of Intel 8086:
• Intel 8086 microprocessor is the enhanced version of the Intel 8085
microprocessor. It was introduced by Intel in 1978.
• The 8086 microprocessor is a 16-bit, N-channel, HMOS
microprocessor, where HMOS stands for "High-speed Metal Oxide
Semiconductor".
• Intel 8086 is built on a single semiconductor chip and packaged in a
40-pin IC package. The type of package is DIP (Dual Inline Package).
• Intel 8086 uses 20 address lines and 16 data lines. It can directly
address up to 2^20 = 1 Mbyte of memory.
Features of CISC Processor:
▫ CISC may take longer than a single clock cycle to execute the • It consists of a powerful instruction set, which provides operation like
code. division and multiplication very quickly.
▫ The length of the code is short, so it requires minimal RAM. • 8086 is designed to operate in two modes, i.e., Minimum and
▫ It provides more accessible programming in assembly language. Maximum mode. Index Register: The following four registers are in the group of pointer
and index registers:
▫ It focuses on creating instructions on hardware rather than
o Stack Pointer (SP)
software because they are faster to develop.
o Base Pointer (BP)
▫ It comprises fewer registers and more addressing modes,
o Source Index (SI)
typically 5 to 20. o Destination Index (DI)
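Each 16-bit general purpose register is the concatenation of its two 8-bit halves (e.g. AX = AH:AL). A quick illustrative Python sketch of splitting and recombining (the sample value is arbitrary):

```python
ax = 0x12AB
ah, al = ax >> 8, ax & 0xFF        # split AX into its 8-bit halves
assert (ah, al) == (0x12, 0xAB)

ax_again = (ah << 8) | al          # recombine AH:AL back into AX
print(hex(ax_again))               # 0x12ab
```

Writing to AL or AH therefore changes the corresponding half of AX in place.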
ALU: It handles all arithmetic and logical operations. Such as addition,
Difference Between RISC And CISC: subtraction, multiplication, division, AND, OR, NOT operations.
Flag Register: It is a 16?bit register which exactly behaves like a flip-flop,
means it changes states according to the result stored in the
accumulator. It has 9 flags and they are divided into 2 groups i.e.
conditional and control flags.
Conditional Flags: This flag represents the result of the last arithmetic
or logical instruction executed. Conditional flags are:
▪ Carry Flag
▪ Auxiliary Flag
Block Diagram of 8086
▪ Parity Flag
▪ Zero Flag
▪ Sign Flag
▪ Overflow Flag
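The conditional flags can be illustrated with a small simulation. This is an illustrative Python sketch (not 8086 code; the helper name add8_flags is invented) showing how the carry, zero, sign, and parity flags come out of an 8-bit ADD; the auxiliary and overflow flags are omitted for brevity:

```python
def add8_flags(a, b):
    """Simulate an 8-bit ADD and return 8086-style conditional flags."""
    total = a + b
    result = total & 0xFF                      # keep the low 8 bits
    flags = {
        "CF": total > 0xFF,                    # carry out of bit 7
        "ZF": result == 0,                     # result is zero
        "SF": (result & 0x80) != 0,            # sign = most significant bit
        "PF": bin(result).count("1") % 2 == 0, # even parity of the low byte
    }
    return result, flags

# 0x80 + 0x80 = 0x100 -> result 0x00 with a carry out
result, flags = add8_flags(0x80, 0x80)
print(hex(result), flags)
```

Here the result wraps to zero, so ZF and CF are both set while SF is clear.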
Control Flags: It controls the operations of the execution unit. Control
flags are:
▪ Trap Flag
▪ Interrupt Flag
▪ Direction Flag
Pins Diagram and Description of 8086:
Reduced Instruction Set Architecture (RISC): Employs a
smaller set of simpler, core instructions that are frequently
used. Examples include MIPS and PowerPC.
8086 contains two independent functional units: a Bus Interface Unit
(BIU) and an Execution Unit (EU).
Bus Interface Unit (BIU):
The segment registers, instruction pointer and 6-byte instruction queue
are associated with the bus interface unit (BIU).
It handles transfer of data and addresses,
It Fetches instruction codes, stores fetched instruction codes in first-in-
Examples of RISC processors are PowerPC, Microchip PIC, SUN's SPARC, first-out register set called a queue,
RISC-V. It Reads data from memory and I/O devices,
Features of RISC Processor : It Writes data to memory and I/O devices,
▫ RISC processors use one clock cycle per instruction (CPI = 1) to execute each It relocates addresses of operands since it gets un-relocated operand
addresses from EU. The EU tells the BIU from where to fetch instructions
instruction in a computer. Each CPI also comprises the methods
or where to read data.
for fetching, decoding, and executing computer instructions.
It has the following functional parts:
▫ Multiple registers in RISC processors allow them to hold Instruction Queue: When EU executes instructions, the BIU gets 6-bytes
instructions, respond quickly, and interact with of the next instruction and stores them in the instruction queue and this
computer memory as little as possible. process is known as instruction prefetch. This process increases the
▫ The RISC processors use the pipelining technique to execute speed of the processor.
multiple parts or stages of instructions to perform more Segment Registers: A segment register contains the addresses of
efficiently. instructions and data in memory which are used by the processor to
AD0-AD15 (Address Data Bus): Bidirectional address/data lines. These
▫ RISC has a simple addressing mode and fixed instruction length access memory locations. It points to the starting address of a memory
are low order address bus. They are multiplexed with data. When these
for the pipeline execution. segment currently being used.
There are 4 segment registers in 8086 as given below: lines are used to transmit memory address, the symbol A is used instead
▫ It uses LOAD and STORE instruction to access the memory
Code Segment Register (CS): Code segment of the memory holds of AD, for example, A0- A15.
location.
instruction codes of a program. A16 - A19 (Output): High order address lines. These are multiplexed
It uses relatively few instructions and few addressing modes. It is
Data Segment Register (DS): The data, variables and constants given in with status signals.
hardwired rather than microprogrammed control.
the program are held in the data segment of the memory. A16/S3, A17/S4: A16 and A17 are multiplexed with segment identifier
signals S3 and S4.
Instruction Set Design Issues
A18/S5: A18 is multiplexed with interrupt status S5.
A19/S6: A19 is multiplexed with status signal S6. ES (Extra Segment Register): Optional segment register, mainly line. The microprocessor remains in the wait state as long as READY line
BHE/S7 (Output): Bus High Enable/Status. During T1, it is low. It enables used for additional data segments. is low. During the wait state, the contents of the address, address/data
the data onto the most significant half of data bus, D8-D15. 8-bit device Flag Register (1 x 16 bits): and control buses are held constant.
Contains various flags indicating the state of the processor after 4. What are the functions of an accumulator?
connected to upper half of the data bus use BHE signal. It is multiplexed
an instruction is executed, such as carry, zero, parity, etc The accumulator is the register associated with the ALU operations and
with status signal S7. S7 signal is available during T3 and T4.
RD (Read): For read operation. It is an output signal. It is active when sometimes I/O operations. It is an integral part of ALU. It holds one of the
Interrupt Mechanism in 8086 Microprocessor data to be processed by ALU. It also temporarily stores the result of the
LOW. The 8086 microprocessor utilizes interrupts to handle operation performed by the ALU.
Ready (Input): The addressed memory or I/O sends acknowledgment asynchronous events while executing the main program. This
5. What is meant by polling?
through this pin. When HIGH, it denotes that the peripheral is ready to allows peripheral devices, external signals, or internal conditions
Polling or device polling is a process which identifies the device that has
transfer data. to temporarily halt the current program and prioritize the urgent
interrupted the microprocessor.
RESET (Input): System reset. The signal is active HIGH. event before resuming the original execution.
Key Components: 6. What is meant by interrupt?
CLK (input): Clock 5, 8 or 10 MHz. Interrupt is an external signal that causes a microprocessor to jump to
Interrupt Requests (IRQs): Signals sent by devices or the CPU
INTR: Interrupt Request. itself signifying the need for immediate attention. The 8086 has a specific subroutine.
NMI (Input): Non-maskable interrupt request. two main IRQ pins: INTR and NMI. 7. Define instruction cycle, machine cycle and T-state?
TEST (Input): Wait for test control. When LOW the microprocessor Interrupt Service Routine (ISR): A dedicated sub-program Instruction cycle is defined as the time required completing the
continues execution otherwise waits. designed to handle the specific event triggered by the interrupt. execution of an instruction. Machine cycle is defined as the time
VCC: Power supply +5V dc. Interrupt Vector Table (IVT): A data structure holding
GND: Ground. information about available interrupts, including the memory acknowledging an external request. T cycle is defined as one subdivision
address of the corresponding ISR for each interrupt number. of the operation performed in one clock period.
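On the 8086 the table of interrupt vectors sits at the bottom of memory: the entry for interrupt type n occupies 4 bytes at physical address 4*n, holding the ISR offset (IP) then its segment (CS), each little-endian. A minimal Python sketch of that lookup (the memory contents here are invented for illustration):

```python
def vector_address(int_type):
    """Each 8086 interrupt vector entry is 4 bytes: IP (2) then CS (2)."""
    return int_type * 4

# A tiny fake memory image holding the vector for interrupt type 0x21.
memory = {}
base = vector_address(0x21)                # 0x21 * 4 = 0x84
memory[base]     = 0x00                    # IP low byte
memory[base + 1] = 0x10                    # IP high byte -> IP = 0x1000
memory[base + 2] = 0x00                    # CS low byte
memory[base + 3] = 0xF0                    # CS high byte -> CS = 0xF000

ip = memory[base] | (memory[base + 1] << 8)
cs = memory[base + 2] | (memory[base + 3] << 8)
print(f"INT 21h vector at {base:#x}: CS:IP = {cs:04X}:{ip:04X}")
```

When the interrupt is accepted, the processor pushes the flags and CS:IP, then jumps to the CS:IP read from this table.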
Operating Modes of 8086:
8. Explain the signals HOLD, READY and SID.
There are two operating modes of operation for Intel 8086, namely the Benefits of Interrupts: HOLD indicates that a peripheral such a DMA controller is requesting
minimum mode and the maximum mode. Enhance responsiveness to external events and improve the use of address bus, data bus and control bus.
When only one 8086 CPU is to be used in a microprocessor system, the multitasking capabilities. READY is used to delay the microprocessor read or write cycles until a
8086 is used in the Minimum mode of operation. Prioritize urgent tasks without halting the entire program slow responding peripheral is ready to accept or send data.
In a multiprocessor system 8086 operates in the Maximum mode. execution.
SID is used to accept serial data bit by bit.
Pin Description for Minimum Mode: Efficiently handle asynchronous events without polling, reducing
9. What is interfacing?
processor overhead.
In this minimum mode of operation, the pin MN/MX is connected to 5V An interface is a shared boundary between the devices which involves
D.C. supply i.e. MN/MX = VCC. Addressing modes in the 8086 microprocessor sharing information. Interfacing is the process of making two different
The description about the pins from 24 to 31 for the minimum mode define how the operand for an instruction is located in memory. systems communicate with each other.
is as follows: By understanding these modes, you can effectively write 10. What is memory mapping?
INTA (Output): Pin number 24 interrupts acknowledgement. On assembly code for the 8086. The assignment of memory address to various registers in a memory
receiving interrupt signal, the processor issues an interrupt key addressing modes: chip is called as memory mapping.
acknowledgment signal. It is active LOW. 1. Immediate Addressing:
The operand is directly encoded within the instruction itself. Assembly Language
ALE (Output): Pin no. 25. Address latch enable. It goes HIGH during T1.
Example: MOV AX, 5 - This moves the immediate value 5 into Assembly language, often abbreviated as ASM, is a low-level
The microprocessor 8086 sends this signal to latch the address into the the AX register. programming language that bridges the gap between the
Intel 8282/8283 latch. 2. Register Addressing: hardware of a computer and the high-level languages
DEN (Output): Pin no. 26. Data Enable. When Intel 8287/8286 octal bus The operand is directly represented by a register. programmers typically use. Unlike its more beginner-friendly
transceiver is used this signal. It is active LOW. Example: ADD BX, DX - This adds the contents of the DX counterparts like Python or Java, assembly language instructions
DT/R (output): Pin No. 27 data Transmit/Receives. When Intel register to the BX register. directly correspond to the machine code understood by the CPU.
8287/8286 octal bus transceiver is used this signal controls the direction 3. Direct Addressing: This means that assembly code provides fine-grained control over
The operand's address is explicitly encoded within the instruction the hardware, but at the cost of being much more difficult to write
of data flow through the transceiver. When it is HIGH, data is sent out.
using a 16-bit offset. and understand for humans.
When it is LOW, data is received. example of a simple assembly language instruction:
Example: MOV AX, [1000] - This moves the data at memory
M/IO (Output): Pin no. 28, Memory or I/O access. When this signal is address 1000 into the AX register. MOV AX, BX
HIGH, the CPU wants to access memory. When this signal is LOW, the 4. Register Indirect Addressing:
CPU wants to access I/O device. The operand's address is stored in a specific register. Why Use Assembly Language?
WR (Output): Pin no. 29, Write. When this signal is LOW, the CPU Example: MOV AX, [BX] - This moves the data at the Performance: Assembly code can be highly optimized for
memory address stored in the BX register into the AX register. specific hardware, leading to significantly faster execution
performs memory or I/O write operation.
5. Indexed Addressing: compared to high-level languages. This makes it attractive for
HLDA (Output): Pin no. 30, Hold Acknowledgment. It is sent by the performance-critical applications like operating systems, device
processor when it receives HOLD signal. It is active HIGH signal. When The operand's address is calculated by adding a register value
(SI or DI) to a base address. drivers, and embedded systems.
HOLD is removed HLDA goes LOW. Direct Hardware Access: Assembly language allows
Example: MOV CX, [BX + SI] - This moves the data at the
HOLD (Input): Pin no. 31, Hold. When another device in microcomputer memory address stored in BX + SI into the CX register. programmers to directly interact with the hardware components of
system wants to use the address and data bus, it sends HOLD request 6. Based Addressing: a computer, giving them fine-grained control over things like
to CPU through this pin. It is an active HIGH signal. The operand's address is calculated by adding a base register memory management and peripheral devices.
value (BP) to an offset. Understanding Computer Architecture: Learning assembly
Pin Description for Maximum Mode:
Example: MOV DX, [300 + BP] - This moves the data at the language can provide a deeper understanding of how computers
In the maximum mode of operation, the pin MN/MX is made LOW. It is
work at the fundamental level, which can be valuable for software
grounded. The description about the pins from 24 to 31 is as follows: memory address 300 + BP into the DX register.
engineers and hardware developers.
QS1, QS0 (Output): Pin numbers 24, 25, Instruction Queue Status. 7. Based Indexed Addressing:
However, assembly language also has its drawbacks:
Combines both based and indexed addressing, adding a base
S0, S1, S2 (Output): Pin numbers 26, 27, 28 Status Signals. These signals Complexity: As mentioned earlier, assembly language is much
register (BP), an index register (SI or DI), and an offset.
are connected to the bus controller of Intel 8288. This bus controller more difficult to write and understand than high-level
Example: MOV AX, [BX + SI + 50] - This moves the data at
generates memory and I/O access control signals. languages. This makes it less accessible to beginners and
the memory address BX + SI + 50 into the AX register.
LOCK (Output): Pin no. 29. It is an active LOW signal. When this signal requires specialized knowledge of the specific CPU architecture
8. Inter-segment Addressing:
being used.
is LOW, all interrupts are masked and no HOLD request is granted. In a Accesses data in a different segment than the current one using
Error-prone: The low-level nature of assembly language makes
multiprocessor system all other processors are informed through this segment registers.
it more prone to errors, as even small mistakes can have
signal that they should not ask the CPU for relinquishing the bus control. Requires additional instructions and prefix bytes for effective
significant consequences. Debugging assembly code can be a
RQ/GT1, RQ/GT0 (Bidirectional): Pin numbers 30, 31, Local Bus Priority access.
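The based-indexed example MOV AX, [BX + SI + 50] can be checked numerically. This illustrative Python sketch (register values are arbitrary, not 8086 code) computes the 16-bit effective address and the 20-bit physical address the BIU would form from DS:

```python
def effective_address(bx, si, disp):
    """Based-indexed mode: EA = base + index + displacement, 16-bit wrap."""
    return (bx + si + disp) & 0xFFFF

def physical_address(segment, offset):
    """8086 BIU: 20-bit physical address = segment * 16 + offset."""
    return ((segment << 4) + offset) & 0xFFFFF

bx, si, disp, ds = 0x1000, 0x0020, 50, 0x2000
ea = effective_address(bx, si, disp)        # 0x1000 + 0x20 + 0x32 = 0x1052
pa = physical_address(ds, ea)               # 0x20000 + 0x1052 = 0x21052
print(f"EA = {ea:04X}, physical = {pa:05X}")
```

The same segment*16 + offset rule applies to every addressing mode; only the effective-address formula changes.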
challenging task.
Control. Other processors ask the CPU by these lines to release the local Platform-specific: Assembly language instructions are specific
bus. to the underlying CPU architecture. This means that code written
In the maximum mode of operation signals WR, ALE, DEN, DT/R etc. are for one processor won't necessarily work on another, limiting its
portability.
not available directly from the processor. These signals are available Instruction Set Categories of 8086:
from the controller 8288. 1. Data Transfer Instructions:
Move data between registers, memory, and I/O ports.
Examples: MOV, PUSH, POP, XCHG, IN, OUT
Register Structure of 8086:
2. Arithmetic Instructions:
The 8086 microprocessor uses a segmented memory Perform mathematical operations like
architecture and has several specific registers with designated addition, subtraction, multiplication, division, increment, decreme
functions. Here's a breakdown of the Register Structure of 8086: nt, and comparison.
General-Purpose Registers (8 x 16 bits): Examples: ADD, SUB, MUL, DIV, INC, DEC, CMP
AX (Accumulator): Main arithmetic and data register, often used 3. Bit Manipulation Instructions: Assembler: The Bridge Between Humans and Computers
for temporary storage. Work with individual bits within bytes or words. Imagine a translator who can turn your everyday words into the
AL (Lower byte): Directly accessed for byte operations. Examples: AND, OR, XOR, NOT, SHL, SHR, ROL, ROR, TEST intricate, low-level language a computer understands. That's
AH (Upper byte): Used for higher-order operations and specific 4. String Instructions: essentially what an assembler does! It's a special program that
instructions. Handle operations on strings (sequences of bytes or words). takes instructions written in assembly language, a language
BX (Base Register): Used for addressing data in memory using Examples: MOVS, CMPS, SCAS, LODS, STOS closer to the inner workings of the computer, and translates them
the base pointer. 5. Program Execution Transfer Instructions: into machine code, the binary language directly understood by the
CX (Counter Register): Used for loop counters and string Control the flow of program execution. processor.
operations. Examples: JMP, CALL, RET, LOOP, JCXZ some key assembler directives used in 8086 assembly
DX (Data Register): Used for data manipulation and I/O 6. Processor Control Instructions: language:
operations. Manage the processor's state and operations. Segment Directives:
SP (Stack Pointer): Points to the top of the stack, used for storing Examples: HLT, NOP, STC, CLC, CMC, STD, CLD, STI, CLI SEGMENT and ENDS: Define the beginning and end of a
return addresses and temporary data. 7. Flag Manipulation Instructions: memory segment (code, data, stack, extra).
BP (Base Pointer): Used for addressing data in memory relative Set or clear the status flags in the flag register. ASSUME: Informs the assembler about the intended segment
to the stack segment. Examples: STC, CLC, CMC, STD, CLD, STI, CLI register for a given memory reference.
SI (Source Index): Used for indexed addressing of data in Procedure Directives:
memory. PROC and ENDP: Define the start and end of a procedure (a
1. What is an opcode?
DI (Destination Index): Used for indexed addressing of data in reusable block of code).
The part of the instruction that specifies the operation to be performed
memory during string operations. Macro Directives:
is called the operation code or opcode.
Segment Registers (4 x 16 bits): MACRO and ENDM: Define a macro, which is a template for
CS (Code Segment Register): Points to the beginning of the 2. What is an operand?
generating multiple instructions with different parameters.
current code segment. The data on which the operation is to be performed is called as an
Data Definition Directives:
DS (Data Segment Register): Points to the beginning of the operand. DB (Define Byte): Allocates one or more bytes of memory and
current data segment. 3. What is meant by wait state? optionally initializes them with values.
SS (Stack Segment Register): Points to the beginning of the This state is used by slow peripheral devices. The peripheral devices can DW (Define Word): Allocates one or more words (2 bytes each)
current stack segment. transfer the data to or from the microprocessor by using READY input of memory.
DD (Define Doubleword): Allocates one or more doublewords (4 efficient. Imagine a bucket with a tight lid that retains the water often the information needed by the processor is readily
bytes each) of memory. without needing refills. available in the cache and how often it needs to be fetched from
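The data definition directives reserve and initialize storage, and on the 8086 multi-byte values are stored little-endian (low byte first). An illustrative Python sketch, using the standard struct module, of the bytes DB/DW/DD would emit for sample (arbitrary) values:

```python
import struct

db = struct.pack("<B", 0x41)          # DB 41h       -> 1 byte
dw = struct.pack("<H", 0x1234)        # DW 1234h     -> 2 bytes, low byte first
dd = struct.pack("<I", 0x12345678)    # DD 12345678h -> 4 bytes, low byte first

print(db.hex(), dw.hex(), dd.hex())
```

Note how DW 1234h lands in memory as 34h then 12h.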
Pointer Directives: Key Differences: the slower main memory.
PTR: Specifies the size of a memory operand (byte ptr, word Speed:
ptr, dword ptr). SRAM: Significantly faster than DRAM due to its static storage Hit Ratio:
Assembly Control Directives: mechanism. Access times can be 10 times faster or even more. Represents the percentage of memory accesses that the cache
ORG: Sets the origin of a segment, specifying a starting address DRAM: Slower due to the need for refreshing, with access times successfully fulfills.
in memory. typically in the range of nanoseconds compared to picoseconds Indicates how often the information requested by the processor
END: Marks the end of the assembly code. for SRAM. is already stored in the cache, saving time and improving
Model Directives: Power Consumption: performance.
.MODEL: Specifies the memory model used SRAM: Consumes more power than DRAM because the flip-flops A higher hit ratio signifies a more efficient cache, meaning the
(small, medium, large, compact) to determine how segments are constantly draw current to maintain the data. processor often finds what it needs close at hand.
organized. DRAM: More power-efficient because of the simpler capacitor- Miss Ratio:
based storage and the need for refreshing only occasionally. Represents the percentage of memory accesses that result in a
macro Cost: miss, meaning the requested information is not found in the
In assembly language, a MACRO is a powerful tool that allows SRAM: More expensive than DRAM due to the complexity of the cache.
you to create reusable code templates, effectively reducing code flip-flop circuits. Requires the processor to fetch the data from the slower main
repetition and improving readability. DRAM: Less expensive because of the simpler capacitor storage memory, leading to a performance penalty.
Here's how macros work: and denser chip design. A lower miss ratio is desirable, as it minimizes the need for
Definition: Applications: slower memory access and improves overall system
You define a macro using the MACRO directive, followed by its SRAM: Used in situations where speed is critical, such as cache performance.
name and optional parameters. memory in processors, embedded controllers, and high- Calculation:
The body of the macro contains the code template you want to performance networking devices. Hit Ratio = (Number of cache hits) / (Total number of memory
reuse. DRAM: Used for large-capacity main memory in computers and accesses)
The ENDM directive marks the end of the macro definition. various devices due to its lower cost and adequate speed for most Miss Ratio = (Number of cache misses) / (Total number of
Expansion: applications. memory accesses)
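The hit-ratio and miss-ratio formulas can be stated directly in Python (the access counts below are made-up sample numbers):

```python
def cache_ratios(hits, misses):
    """Hit ratio and miss ratio from raw access counts."""
    total = hits + misses
    return hits / total, misses / total

# Sample workload: 90 hits and 10 misses out of 100 accesses.
hit_ratio, miss_ratio = cache_ratios(hits=90, misses=10)
print(f"hit ratio = {hit_ratio:.2f}, miss ratio = {miss_ratio:.2f}")
```

The two ratios always sum to 1; raising the hit ratio is the whole point of a cache.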
When you invoke the macro in your code (by using its name and
providing any necessary arguments), the assembler expands it Cache memory Physical Address:
inline. Cache memory is a small, fast memory that sits closer to the Imagine this: your house has a unique street address, like 123
It substitutes the provided arguments for the corresponding processor than the main memory (RAM). It acts as a temporary Main Street. This is the physical address in the real world.
parameters in the macro body. storage area for frequently accessed data and instructions, Similarly, in a computer's memory, each byte of data has a
The expanded code is then assembled as if you had written it out bridging the speed gap between the processor and main unique physical address, a specific location in the memory chip.
manually. memory. This address is directly understandable by the hardware and
Example: Think of it as a personal assistant to the processor: tells it where to find the data.
This macro defines a template for printing two strings to the The processor requests data or instructions. Physical addresses are typically long strings of binary digits (0s
console. To use it: The cache memory, being closer and faster, checks if it has the and 1s) and can be difficult to remember or work with for
PrintString "Hello,", "world!" needed information already stored. humans.
If it does, it quickly provides it to the processor, saving time and Logical Address:
Advantages improving performance. Now, think about how you typically refer to your house. Instead
It allows complex jobs to run in a simpler way. If not, it fetches the data from the slower main memory, stores it of using the complex street address, you might use a more
It is memory efficient, as it requires less memory. in cache for future use, and then delivers it to the processor. relatable name, like "My home" or "123 House." This is similar to
It is faster in speed, as its execution time is less. a logical address in a computer.
It is mainly hardware-oriented. Key characteristics of cache memory: It's a symbolic or relative address that is easier for humans to
Smaller in size: Typically ranges from a few kilobytes to several understand and use in programs.
It requires less instruction to get the result.
megabytes, compared to gigabytes of main memory. The program accesses data using logical addresses, and then
It is used for critical jobs.
Much faster: Access times are often 10-100 times faster than the operating system or a special hardware unit called a Memory
It is not required to keep track of memory locations. main memory. Management Unit (MMU) translates these logical addresses into
It is well suited to low-level embedded systems. Costlier: Due to its speed and design, it's more expensive per the corresponding physical addresses.
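A toy sketch of the translation the MMU performs, assuming simple fixed-size pages (the page size, table contents, and addresses here are invented for illustration):

```python
PAGE_SIZE = 4096  # assume 4 KiB pages

# Toy page table: logical page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    """Split a logical address into page + offset, then map page -> frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]            # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1, offset 0x234 -> frame 2 -> 0x2234
```

A lookup that misses the table models a page fault: the operating system would then bring the page in from disk and retry.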
Disadvantages byte than main memory.
It takes a lot of time and effort to write the code for the same. Multi-level: Modern processors often have multiple levels of Virtual memory
It is very complex and difficult to understand. cache (L1, L2, L3) for even better performance. it is an ingenious memory management technique that allows a
The syntax is difficult to remember. computer to use more memory than it physically has. It's like an
It has a lack of portability of program between different computer Benefits of cache memory: illusionist's trick that makes a small stage appear much larger,
architectures. Reduces access time to data and instructions: Speeds up enabling programs to run smoothly even when they require
It needs more size or memory of the computer to run the long programs program execution and overall system responsiveness. extensive memory resources.
written in Assembly Language. Improves overall system performance: Makes a significant Here's how it works:
difference in tasks that require frequent data access, like 1. Illusion of Vast Memory:
Memory Devices: gaming, video editing, and web browsing. The operating system creates an illusion of a vast, contiguous
1. RAM (Random Access Memory): Reduces power consumption: By minimizing the need to access virtual memory space for each program.
Stores data that can be read and written to at any time. main memory, it conserves energy. This virtual space is much larger than the actual physical
Volatile, meaning data is lost when power is turned off. Locality of reference memory (RAM) available.
Used for temporary storage of data used by the processor during It is a fundamental concept in computer science that describes It's like having a personal library with seemingly endless
program execution. the tendency of a program to repeatedly access a relatively shelves, even if you only have a small room for books.
2. ROM (Read-Only Memory): small set of memory locations within a specific timeframe. This 2. Mapping and Translation:
Stores permanent data that can only be read, not written to. phenomenon plays a crucial role in optimizing memory access The operating system keeps track of which parts of the virtual
Non-volatile, meaning data is retained even when power is turned and improving system performance. memory are currently in physical RAM and which parts reside on
off. There are two main types of locality of reference: a slower storage device, such as a hard disk or SSD.
Used for storing programs and essential system data. 1. Temporal locality: This refers to the tendency of a It uses a special hardware unit called the Memory Management
1. EPROM (Erasable Programmable Read-Only program to reuse specific data or instructions Unit (MMU) to translate virtual addresses used by programs into
Memory): repeatedly over a short period. Think of it like physical addresses for actual memory locations.
Similar to ROM in functionality, but data can be erased using revisiting frequently used pages in a book. For This process is like a librarian managing a vast
ultraviolet light and reprogrammed. example, a loop in a program will access the same collection, seamlessly bringing books from storage to reading
Offers flexibility for development and prototyping. instructions and data elements many times until rooms as needed.
the loop finishes. 3. Seamless Swapping:
Data Access: 2. Spatial locality: This describes the tendency of a When a program needs data that's not currently in RAM, the
RAM: Read and write access, allowing for frequent data changes. program to access memory locations near recently operating system automatically swaps out less-used data to the
Like a two-way street, data can flow both in and out. accessed ones. Imagine browsing through storage device and swaps in the required data from storage to
ROM: Read-only access, typically containing pre-programmed chapters in a book consecutively instead of RAM.
instructions or data. Think of a one-way street, information only jumping around randomly. For example, accessing This happens behind the scenes, without the program's
flows out. an array element often leads to accessing nearby knowledge, maintaining the illusion of unlimited memory.
EPROM: Read-only access after programming, but data can be elements within the same array shortly afterward. Imagine the librarian effortlessly exchanging books on shelves to
erased and reprogrammed using a special device. Imagine a one- Cache mapping make room for new requests, without readers ever noticing the
way street with a dedicated eraser that allows you to rewrite the it is the strategy used to determine where data from main swaps.
information flowing out. memory is placed within the cache memory. It's like organizing a Benefits of Virtual Memory:
3. Typical Usage: small, efficient workspace to maximize productivity. Increased memory capacity: Allows programs to run even if
RAM: Holds temporary data used by the processor during Here are the three primary cache mapping techniques: they exceed the physical RAM size.
program execution, like open applications and files. It's the 1. Direct Mapping: Improved multitasking: Enables multiple programs to run
workhorse for active tasks. Each main memory block can only be placed in one specific concurrently without exhausting memory.
ROM: Stores essential system software like the BIOS and basic block of the cache. Simplified memory management: Programmers can focus on
input/output routines (BIOS). It's the foundation for booting up and Simple and fast, but can lead to conflicts when multiple blocks logical memory addresses, leaving physical memory
basic functionality. map to the same cache block. management to the operating system.
EPROM: Used for development and prototyping when frequent Imagine a bookshelf with fixed slots for certain genres: each Enhanced security: Can isolate programs from each
updates to firmware or programs are needed. It's the flexible book can only go in its designated spot. other, preventing unauthorized memory access.
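Direct mapping can be stated as a formula: cache line = block number mod number of lines, with the remaining high bits kept as a tag. An illustrative Python sketch (the cache size and block numbers are arbitrary):

```python
NUM_LINES = 8  # assume a tiny cache with 8 lines

def direct_map(block):
    """Direct mapping: each memory block has exactly one possible line."""
    line = block % NUM_LINES   # which cache line the block must use
    tag = block // NUM_LINES   # identifies which block currently occupies it
    return line, tag

# Blocks 3 and 11 collide: both map to line 3 (a conflict miss).
print(direct_map(3), direct_map(11))
```

The tag is what the cache compares on each access to decide hit or miss.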
option for testing and adjustments. 2. Associative Mapping: Cache Memory:
4. Cost and Speed: Any main memory block can be placed in any block of the Focus: Speed up data access for the processor.
RAM: Generally cheaper but has slower read/write speeds cache. Size: Very small (kilobytes to megabytes).
compared to ROM and EPROM. Offers more flexibility and reduces conflicts, but requires more Location: Closer to the processor than main memory (RAM).
ROM: More expensive but boasts faster speeds than RAM. complex hardware to search for data. Content: Stores frequently accessed data and instructions from
EPROM: Moderate cost with read/write speeds between RAM Think of a desk with multiple drawers, where you can put any main memory.
and ROM. However, the erasing and reprogramming process can item in any drawer, allowing for more efficient organization. Access Time: Much faster than main memory (nanoseconds
be time-consuming. 3. Set-Associative Mapping: vs. nanoseconds to nanoseconds).
A compromise between direct and associative mapping. Function: Acts like a temporary workspace, keeping frequently
Storage Mechanism: Cache is divided into sets, and each main memory block can be used information readily available for the processor, reducing
DRAM (Dynamic Random Access Memory): placed in any block within a specific set. access times and boosting performance.
Uses capacitors to store data. However, these capacitors leak Balances flexibility with implementation complexity, reducing
charge over time, requiring periodic refreshing to maintain the conflicts while maintaining a simpler hardware design. Difference between virtual memory & cache memory:
data. It's like a leaky bucket that needs constant refilling to keep Picture a multi-shelf unit with multiple compartments on each Virtual Memory:
the water level stable. shelf: items can be placed in any compartment within a Focus: Expand the available memory beyond physical RAM
SRAM (Static Random Access Memory): Utilizes flip-flops designated shelf, providing both structure and adaptability. limitations.
(circuits built with transistors) to store data. These circuits latch In the context of cache memory, hit ratio and miss ratio are Size: Much larger than cache memory (gigabytes to terabytes).
the data and don't need refreshing, making them faster and more crucial metrics for evaluating its effectiveness. They tell you how
Location: No dedicated hardware; utilizes main memory and
storage devices (hard disk, SSD).
Content: Allows programs to use more memory than physically
available by storing less-used parts on storage devices.
Access Time: Slower than main memory and significantly
slower than cache memory (milliseconds when data must be
fetched from disk vs. nanoseconds for cache).
Function: Creates the illusion of a larger memory
space, enabling programs to run smoothly even if their memory
requirements exceed physical RAM, improving multitasking and
overall system performance.
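As a rough illustration of the cache-mapping schemes and the hit/miss metrics discussed above, here is a minimal Python sketch. The cache size, set count, and access pattern are made-up example values, not from the notes; real caches do this with tags and comparators in hardware.

```python
# Hypothetical cache geometry for the example.
NUM_BLOCKS = 8          # total cache blocks
NUM_SETS = 4            # e.g. 2-way set-associative: 8 blocks / 2 ways

def direct_mapped_index(block_addr):
    # Direct mapping: each memory block has exactly one possible cache block.
    return block_addr % NUM_BLOCKS

def set_associative_set(block_addr):
    # Set-associative: each memory block maps to one set,
    # and may occupy any block (way) within that set.
    return block_addr % NUM_SETS

# Fully associative mapping has no index at all: any block can go
# anywhere, so every cache block's tag must be searched.

# Hit/miss bookkeeping for a tiny direct-mapped simulation.
cache = [None] * NUM_BLOCKS          # which memory block each slot holds
hits = misses = 0
for block_addr in [0, 8, 0, 16, 0, 8]:   # 0, 8, 16 all collide at index 0
    idx = direct_mapped_index(block_addr)
    if cache[idx] == block_addr:
        hits += 1
    else:
        misses += 1
        cache[idx] = block_addr      # replace the conflicting block

print(hits, misses)                  # this pattern never hits: 0 6
print(f"hit ratio = {hits / (hits + misses):.2f}")
```

The deliberately conflicting access pattern shows why direct mapping can perform poorly: blocks 0, 8, and 16 evict each other even though the other seven cache slots stay empty, which is exactly the conflict problem that associative and set-associative mapping reduce.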

TLB:
A TLB, or Translation Lookaside Buffer, is a special type of
memory cache used in computer systems to speed up memory
access. Think of it as a shortcut or cheat sheet for the processor
to find frequently used memory locations much faster.
Here's how it works:
Mapping Memory: Every memory location in your computer has
a unique address, similar to a house address. These addresses
can be long and complex, making them difficult for the processor
to work with directly.
Translation and Storage: The TLB acts as a
middleman, storing recently used address translations in a
small, high-speed cache. These translations map virtual
addresses (the addresses used by programs) to their
corresponding physical addresses (the actual locations in RAM).
Faster Access: When the processor needs to access data, it
first checks the TLB. If the needed address translation is found
(a "TLB hit"), the physical address is retrieved instantly and the
data can be accessed quickly. This is like checking your address
book for a frequently called number instead of dialing the full
number every time.
Miss and Fallback: If the address translation is not found in the
TLB (a "TLB miss"), the processor must fall back to the slower
process of using the page table, a comprehensive list of all
address translations stored in main memory. This is like using
the phone book when the number isn't in your address book.
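The hit/miss flow above can be sketched with a dict-based page table and a tiny 4-entry TLB. All sizes, the address values, and the simplistic eviction policy are illustrative assumptions; a real TLB is a small hardware cache, not a Python dictionary.

```python
# Hypothetical full page table held in "main memory":
# virtual page number (VPN) -> physical frame number.
page_table = {vpn: vpn + 100 for vpn in range(1024)}

tlb = {}            # small, fast cache of recent translations
TLB_CAPACITY = 4

def lookup(vpn):
    if vpn in tlb:                    # TLB hit: fast path
        return tlb[vpn], "hit"
    frame = page_table[vpn]           # TLB miss: slow page-table walk
    if len(tlb) >= TLB_CAPACITY:      # make room by evicting an entry
        tlb.pop(next(iter(tlb)))      # crude FIFO-style eviction for the sketch
    tlb[vpn] = frame                  # cache the translation for next time
    return frame, "miss"

print(lookup(7))    # first access misses and walks the page table: (107, 'miss')
print(lookup(7))    # repeat access is served from the TLB: (107, 'hit')
```

The second lookup skips the page-table walk entirely, which is the whole point: repeated accesses to the same pages, the common case in real programs, pay only the cheap TLB cost.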
Benefits of Using TLB:
Significantly faster memory access: Finding memory
locations through the TLB can be many times faster than using
the page table, leading to improved program performance and
overall system responsiveness.
Reduces processor workload: The TLB frees up the processor
from performing frequent page table lookups, allowing it to focus
on other tasks.
Improves efficiency: By caching frequently used
translations, the TLB minimizes the need for slower accesses to
the page table in main memory.
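The speed benefit can be put into numbers with the standard effective access time calculation. The latencies and hit ratio below are made-up illustrative figures, not measurements:

```python
tlb_time = 1       # ns to search the TLB (assumed)
mem_time = 100     # ns for one main-memory access (assumed)
hit_ratio = 0.98   # fraction of lookups that hit the TLB (assumed)

# Hit:  TLB search + one memory access for the data itself.
# Miss: TLB search + a page-table read in memory + the data access.
eat = (hit_ratio * (tlb_time + mem_time)
       + (1 - hit_ratio) * (tlb_time + 2 * mem_time))
print(f"effective access time = {eat:.1f} ns")   # 0.98*101 + 0.02*201 = 103.0
```

Even with only a 98% hit ratio, the average access costs about 103 ns instead of the 201 ns every access would cost if the page table had to be consulted each time.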
Types of TLBs:
Instruction TLB (ITLB): Specifically stores translations for
instruction addresses, crucial for program execution.
Data TLB (DTLB): Stores translations for data addresses used
by programs to access various data structures.