
Memory Organization
Memory organization is the fundamental
structure that enables computers to efficiently
store and retrieve data. Understanding this
hierarchy is crucial for optimizing system
performance and making the most of available
resources.

by Saniya Mhatre
Memory Interleaving
• An arithmetic pipeline usually requires two or more
operands to enter the pipeline at the same time.
• Instead of using two memory buses for simultaneous
access, the memory can be partitioned into a number
of modules connected to common address and data
buses.
• A memory module is a memory array together with its
own address and data registers.
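The module-selection scheme above can be sketched with low-order interleaving, where consecutive addresses fall in consecutive modules. The module count here is an assumed value for illustration:

```python
# Low-order memory interleaving: consecutive word addresses map to
# consecutive modules, so a pipeline can fetch several operands in
# parallel. NUM_MODULES = 4 is an assumed configuration.

NUM_MODULES = 4

def module_of(addr: int) -> int:
    """Module selected by the low-order address bits."""
    return addr % NUM_MODULES

def offset_in_module(addr: int) -> int:
    """Word offset inside the selected module's array."""
    return addr // NUM_MODULES

# Four consecutive words land in four different modules,
# so they can be accessed simultaneously.
print([module_of(a) for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

Because each module has its own address and data registers, an access to module 1 can begin while module 0 is still completing its cycle.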
Hierarchical Memory Organization
• Registers: Fastest, but smallest. Used by the processor for immediate data access.
• Cache: Intermediate storage that bridges the gap between the processor and main memory.
• Main Memory: Larger but slower storage that holds the bulk of the program and data.
Cache Memory
• The cache is a small and very fast memory, interposed between
the processor and the main memory.
• Its purpose is to make the main memory appear to the processor
to be much faster than it actually is.
• The cache memory can store a reasonable number of blocks at
any given time, but this number is small compared to the total
number of blocks in the main memory.
• The correspondence between the main memory blocks and those
in the cache is specified by a mapping function.
Cache hits
• The processor does not need to know explicitly about
the existence of the cache.
• It simply issues Read and Write requests using
addresses that refer to locations in the memory. The
cache control circuitry determines whether the requested
word currently exists in the cache.
• If it does, the Read or Write operation is performed on
the appropriate cache location. In this case, a read or
write hit is said to have occurred.
Cache misses
• A Read operation for a word that is not in the cache
constitutes a Read miss. It causes the block of words
containing the requested word to be copied from the
main memory into the cache.
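The hit and miss behaviour described above can be sketched as a minimal direct-mapped cache simulation. The geometry (8 lines, 4-word blocks) is an assumed example, not taken from the slides:

```python
# Minimal direct-mapped cache simulation illustrating read hits and
# read misses. NUM_LINES and BLOCK_SIZE are assumed example values.

NUM_LINES = 8    # number of cache lines
BLOCK_SIZE = 4   # words per block

cache_tags = [None] * NUM_LINES  # tag stored per line; None = empty
hits = misses = 0

def read(addr: int) -> None:
    """Look up a word; on a miss, copy its whole block into the cache."""
    global hits, misses
    block = addr // BLOCK_SIZE
    line = block % NUM_LINES     # direct mapping: one candidate line
    tag = block // NUM_LINES
    if cache_tags[line] == tag:
        hits += 1                # read hit: word already in the cache
    else:
        misses += 1              # read miss: fetch block from main memory
        cache_tags[line] = tag

for a in [0, 1, 2, 3, 0, 64]:    # same block reused, then a new block
    read(a)
print(hits, misses)  # 4 2
```

The first access to address 0 misses and brings in the whole block, so the following accesses to addresses 1 to 3 all hit, matching the block-copy behaviour described above.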



Cache Size vs. Block Size
1. Cache Size: Larger cache size generally improves hit rates, but increases cost and power consumption.
2. Block Size: Larger block size can reduce the number of misses, but may also increase access times.
3. Tradeoffs: Careful balancing of cache size and block size is necessary to optimize system performance.
Mapping Functions
• Direct Mapping: Each block in main memory maps to a specific cache line, limiting flexibility.
• Fully Associative: Blocks can be stored in any cache line, improving hit rates but increasing complexity.
• Set Associative: A compromise between direct and fully associative, offering balanced performance.
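The three mapping functions can be contrasted by how they locate a candidate position for a memory block. The cache geometry below (8 lines, organised as 4 sets of 2 ways for the set-associative case) is an assumed example:

```python
# Where each mapping function allows a main-memory block to be placed.
# Geometry is an assumed example: 8 lines, 2-way set associative.

NUM_LINES = 8
NUM_SETS = 4   # 8 lines / 2 ways per set

def direct_mapped_line(block: int) -> int:
    # Exactly one candidate line per block: limited flexibility.
    return block % NUM_LINES

def set_associative_set(block: int) -> int:
    # The block may occupy any way within one set.
    return block % NUM_SETS

# Fully associative: any line may hold any block, so there is no index
# function at all; the tag must be compared against every line.

print(direct_mapped_line(21))   # 21 % 8 = 5
print(set_associative_set(21))  # 21 % 4 = 1
```

The comparator cost grows with associativity: direct mapping compares one tag, set-associative compares one tag per way, and fully associative compares every tag in the cache.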
Replacement Algorithms
When the cache is full, the replacement policy determines which data to evict to make
room for new data. Common policies include Least Recently Used (LRU), First-In
First-Out (FIFO), and random replacement. The goal is to maximize cache hit rates and
overall system performance.

• FIFO: Evicts the oldest block, first-in first-out.
• LRU: Replaces the least recently used block, providing good performance but at higher implementation cost.
• Random: Selects a random block to replace; simple but less efficient.
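The LRU policy above can be sketched with an ordered dictionary that tracks recency of use. The capacity of 3 blocks is an assumed value:

```python
from collections import OrderedDict

# Sketch of an LRU replacement policy: on a miss with a full cache,
# the least recently used block is evicted. Capacity is an assumed value.

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block number -> data, oldest first

    def access(self, block: int) -> bool:
        """Return True on a hit; evict the LRU block on a full miss."""
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = "data"
        return False

cache = LRUCache(3)
results = [cache.access(b) for b in [1, 2, 3, 1, 4, 2]]
print(results)  # [False, False, False, True, False, False]
```

Note that block 2, not block 3, is evicted when block 4 arrives: the earlier re-access to block 1 made block 2 the least recently used entry.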
Write Policies
• Write Through: Updates are immediately written to both cache and main memory, ensuring data consistency.
• Write Back: Updates are first stored in cache, and only written to main memory when the block is replaced.
• Write Combining: Multiple small writes are combined into a single, larger write to improve efficiency.
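The contrast between write-through and write-back can be sketched with a dirty bit that records which cached blocks have not yet reached main memory. All structures here are illustrative:

```python
# Write-through vs. write-back, sketched with simple dictionaries and
# a dirty set. All names and structures are illustrative assumptions.

main_memory = {}   # address -> value
cache = {}         # address -> value
dirty = set()      # addresses modified in the cache only

def write_through(addr: int, value: int) -> None:
    # Update cache and main memory together: always consistent.
    cache[addr] = value
    main_memory[addr] = value

def write_back(addr: int, value: int) -> None:
    # Update only the cache; remember the block is dirty.
    cache[addr] = value
    dirty.add(addr)

def evict(addr: int) -> None:
    # A dirty block is written to main memory only on replacement.
    if addr in dirty:
        main_memory[addr] = cache[addr]
        dirty.discard(addr)
    cache.pop(addr, None)

write_back(10, 99)
print(main_memory.get(10))  # None: main memory not yet updated
evict(10)
print(main_memory.get(10))  # 99: value written back on eviction
```

Write-back reduces bus traffic when a block is written many times before replacement, at the price of a window where cache and main memory disagree.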
