Module
Cache memory helps your computer quickly access data. Think of it as a small, super-fast storage area that sits
between the processor (CPU) and the main memory (RAM). Its architecture is made up of three main parts:
1. Directory Store
What it does: It’s like a “map” (the cache-tag) that keeps track of where each piece of cached data came from in main memory.
How it works:
o When the processor wants some data, it looks at this map to check if the data is already in the cache.
o If the "address" on the map matches what the processor wants, it’s called a cache hit, and the data is
delivered quickly.
o If it doesn’t match, it’s a cache miss, and the processor fetches the data from main memory.
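The hit/miss check above can be sketched with a tiny lookup table. This is only an illustration: the "map" here is a Python dict keyed by cache-tag, and the names (`directory`, `lookup`) are made up for the sketch, not taken from the notes.

```python
# Sketch of the directory ("map") check: a dict from cache-tag -> cached data.
# A real directory store is hardware, not a dict; this only models the idea.
directory = {0x1A2: "data for block 0x1A2"}  # one block currently cached

def lookup(tag):
    """Return (hit, data): hit is True when the tag is found in the directory."""
    if tag in directory:
        return True, directory[tag]   # cache hit: data delivered quickly
    return False, None                # cache miss: must fetch from main memory

print(lookup(0x1A2))  # a hit
print(lookup(0x3F0))  # a miss
```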
2. Data Section
What it does: This is where the actual data is stored in the cache.
How it works:
o Data is stored in chunks called cache lines. Each cache line holds several pieces of data (e.g., 4 small
blocks of 32 bits each).
o The stated size of the cache counts only this data section; the map (cache-tags) and other extra
information are overhead and are not included in that figure.
Example: If your cache has 256 lines and each line holds 4 data blocks, the cache can store 256 × 4 = 1024
blocks of data.
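The arithmetic in this example can be checked directly. The line count and blocks per line come from the text; the 32-bit (4-byte) block size is taken from the 4 KB example later in the notes.

```python
lines = 256          # cache lines (from the example)
blocks_per_line = 4  # data blocks per line (from the example)
block_bytes = 4      # 32 bits = 4 bytes per block (from the 4 KB example)

total_blocks = lines * blocks_per_line   # 256 x 4 = 1024 blocks
data_bytes = total_blocks * block_bytes  # 1024 x 4 = 4096 bytes = 4 KB
print(total_blocks, data_bytes)          # 1024 4096
```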
3. Status Information
What it does: Tracks whether the stored data is valid or has been changed.
How it works:
o Valid Bit: Shows whether the line currently holds usable data; an empty or invalidated line must
never produce a cache hit.
o Dirty Bit: Shows if the data has been modified since it was loaded. If yes, the cache needs to write
the data back to main memory before discarding or replacing the line.
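A minimal sketch of the dirty-bit rule when a line is evicted. The structure (`line` as a dict, `main_memory` as a dict) and all names are illustrative assumptions, not part of the notes.

```python
def evict(line, main_memory):
    """Discard a cache line, writing it back first if it was modified."""
    if line["dirty"]:
        main_memory[line["tag"]] = line["data"]  # write-back of modified data
        line["dirty"] = False
    line["valid"] = False  # the line no longer holds usable data

ram = {}  # stand-in for main memory
line = {"tag": 0x10, "data": "modified block", "dirty": True, "valid": True}
evict(line, ram)
print(ram)  # the modified block reached main memory before the line was dropped
```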
Cache Controller: This is like a manager that decides how to handle data requests.
o When the processor asks for data, the controller checks the directory store for a match.
o If there’s a match (cache hit), it delivers the data from the cache.
o If not (cache miss), it gets the data from main memory, stores it in the cache, and then gives it to the
processor.
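Putting the steps above together, the controller's read path can be sketched as a toy model. This is not a hardware description; `cache` stands in for the directory plus data section, and `main_memory` stands in for RAM.

```python
cache = {}                        # tag -> data (directory + data section combined)
main_memory = {0x42: "block 42"}  # stand-in for slower main memory

def read(tag):
    """Controller read path: serve a hit from the cache; on a miss, fill the cache first."""
    if tag in cache:
        return cache[tag], "hit"   # match in the directory: deliver from cache
    data = main_memory[tag]        # miss: fetch from main memory
    cache[tag] = data              # store it in the cache for next time
    return data, "miss"

print(read(0x42))  # ('block 42', 'miss') on the first access
print(read(0x42))  # ('block 42', 'hit') on the second
```

Note how the second access to the same tag hits: the miss handler filled the cache, which is exactly what makes repeated accesses fast.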
Example: A 4 KB Cache
1. The cache contains 256 cache lines (4096 bytes ÷ 16 bytes per line).
2. Each cache line holds 4 blocks of data, with each block being 32 bits (4 bytes).
o So, one cache line can hold 4 × 32 bits = 128 bits (16 bytes), and the whole cache holds
256 × 16 bytes = 4096 bytes = 4 KB.
In addition to this data, each cache line has an address (cache-tag) and status bits (like valid and dirty).
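With these numbers, a 32-bit memory address can be split into the fields the cache uses. This sketch assumes a direct-mapped organization and 32-bit addresses, neither of which the notes state explicitly.

```python
line_bytes = 16                  # 4 blocks x 4 bytes per line (from the example)
num_lines = 4096 // line_bytes   # 256 lines in a 4 KB cache

offset_bits = line_bytes.bit_length() - 1  # 4 bits pick a byte within the line
index_bits = num_lines.bit_length() - 1    # 8 bits pick one of the 256 lines
tag_bits = 32 - index_bits - offset_bits   # 20 bits stored as the cache-tag
print(offset_bits, index_bits, tag_bits)   # 4 8 20
```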
Summary
This setup ensures the processor can access data faster by reducing trips to slower main memory.