Chapter 09 Memory - Coa KU
A) Microcomputer Memory:
The contemporary distinctions are helpful, because they are also fundamental to the
architecture of computers in general. The distinctions also reflect an important and
significant technical difference between memory and mass storage devices, which
has been blurred by the historical usage of the term storage. Nevertheless, this article
uses the traditional nomenclature.
Volatility
Non-volatile memory retains the stored information even when it is not constantly
supplied with electric power. It is suitable for long-term storage of information.
(It is nowadays used for most secondary, tertiary, and off-line storage; in the 1950s
and 1960s it was also used for primary storage, in the form of magnetic core memory.)
Differentiation
Static RAM (SRAM) is a form of volatile memory similar to DRAM, with the exception
that it never needs to be refreshed.
Mutability
Read-only storage retains the information stored at the time of manufacture, and
write-once storage (Write Once Read Many) allows the information to be written
only once at some point after manufacture. These are called immutable storage.
Immutable storage is used for tertiary and off-line storage. Examples include CD-
ROM and CD-R. Slow-write, fast-read storage is read/write storage that allows
information to be overwritten multiple times, but with the write operation being
much slower than the read operation. Examples include CD-RW and flash memory.
Accessibility
Random access
Any location in storage can be accessed at any moment in approximately the same
amount of time. Such a characteristic is well suited for primary and secondary storage.
Sequential access
The accessing of pieces of information will be in a serial order, one after the other;
therefore the time to access a particular piece of information depends upon which
piece of information was last accessed. Such characteristic is typical of off-line
storage.
Addressability
Location-addressable
Each individually accessible unit of information in storage is selected with its
numerical memory address. In modern computers, location-addressable storage is
typically used for primary storage.
File addressable
Information is divided into files of variable length, and a particular file is selected
with human-readable directory and file names. The underlying device is still
location-addressable, but the operating system of a computer provides the file system
abstraction to make the operation more understandable. In modern computers,
secondary, tertiary and off-line storage use file systems.
Content-addressable
Each individually accessible unit of information is selected based on (part of) the
contents stored there. Hardware content-addressable memory is used, for example,
in CPU cache tag lookup.
Capacity
Raw capacity
The total amount of stored information that a storage device or medium can hold. It
is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes).
Performance
Latency
The time it takes to access a particular location in storage. The relevant unit of
measurement is typically nanosecond for primary storage, millisecond for secondary
storage, and second for tertiary storage. It may make sense to separate read latency
and write latency, and in the case of sequential-access storage, minimum, maximum and
average latency.
Throughput
The rate at which information can be read from or written to the storage. In computer
data storage, throughput is usually expressed in terms of megabytes per second or
MB/s, though bit rate may also be used. As with latency, read rate and write rate
may need to be differentiated. Also accessing media sequentially, as opposed to
randomly, typically yields maximum throughput.
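As a rough worked example of how latency and throughput combine (the device numbers below are hypothetical, not from the text), the time to read a block sequentially is approximately the access latency plus the transfer size divided by the throughput:

```python
def transfer_time(latency_s: float, size_bytes: int, throughput_bps: float) -> float:
    """Approximate time to read size_bytes: access latency + transfer time."""
    return latency_s + size_bytes / throughput_bps

# Hypothetical HDD-like device: ~10 ms latency, ~100 MB/s throughput.
# Reading 1 MB: 0.01 s latency + 0.01 s transfer = 0.02 s total.
hdd_time = transfer_time(10e-3, 1_000_000, 100e6)

# Hypothetical RAM-like device: ~100 ns latency, ~10 GB/s throughput.
ram_time = transfer_time(100e-9, 1_000_000, 10e9)
```

Note how latency dominates for small or scattered transfers, which is why accessing media sequentially yields the maximum throughput.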
Magnetic
Magnetic storage is non-volatile; information is accessed using one or more
read/write heads, which means that the head or medium or both must be moved
relative to one another in order to access data. In modern computers, magnetic
storage takes these forms:
Magnetic disk
Magnetic tape data storage, used for tertiary and off-line storage
C) Memory Hierarchy:
The memory hierarchy organizes storage by speed, cost, and capacity: registers and
cache at the top (fast, small, expensive per bit), main memory in the middle, and
auxiliary (secondary) storage at the bottom (slow, large, cheap per bit).
D) Main Memory:
Main memory, often referred to as RAM (Random Access Memory), is a type of
computer memory that is used to store data and machine code currently being used
and processed by a computer. It is a volatile memory, meaning that its contents are
lost when the power is turned off.
Volatility: Main memory is volatile, which means that it loses its contents when the
power is turned off. This is in contrast to non-volatile memory like hard drives or
solid-state drives, which retain data even when the power is off.
Speed: Main memory is much faster than secondary storage devices like hard drives
or SSDs. This speed is crucial for providing quick access to data that the CPU needs
during program execution.
Random Access: The term "Random Access" in RAM implies that any memory cell
can be directly accessed (read or written) in a constant time, regardless of the
memory location's distance from the currently addressed cell. This is in contrast to
sequential access, where the time to access a particular piece of data depends on its
position in the sequence.
Temporary Storage: Main memory is used for temporary storage of data that the
CPU is actively using or processing. This includes the operating system, running
applications, and data needed for the execution of programs.
Types of RAM:
Dynamic RAM (DRAM): Requires refreshing thousands of times per second to
retain data. It is less expensive but slower compared to SRAM.
Static RAM (SRAM): Doesn't need refreshing and is faster than DRAM but is more
expensive and consumes more power.
Read-Only Memory (ROM):
Non-Volatile: ROM is non-volatile, meaning it retains its data even when the power
is turned off. This characteristic makes ROM suitable for storing critical and
permanent data, such as firmware and basic input/output system (BIOS) information.
Read-Only: The term "Read-Only" implies that the data stored in ROM is generally
not modified or written to during normal operation. It is pre-programmed during
manufacturing and is intended to be stable throughout the lifespan of the device.
This is in contrast to RAM, which is read and written to during regular use.
Boot Process: During the boot-up sequence of a computer or electronic device, the
initial instructions required to start the system are often stored in ROM. This includes
the BIOS or UEFI firmware, which performs system checks, initializes hardware
components, and hands over control to the operating system stored on other storage
devices.
Types of ROM:
Mask ROM (MROM): The data is permanently written during the manufacturing
process, and it cannot be modified or reprogrammed.
Programmable ROM (PROM): Users can write data to PROM once using a special
device called a PROM programmer.
E) Auxiliary Memory
Auxiliary memory, also known as secondary storage or external storage, refers to
storage devices that are used to store data and programs for the long term. Unlike
the primary or main memory (RAM) which is volatile and loses its contents when
power is turned off, auxiliary memory retains data even when the power is off. The
primary function of auxiliary memory is to provide non-volatile, large-capacity
storage for the operating system, applications, and user data. Common auxiliary
memory devices include:
• Hard Disk Drive (HDD): Uses magnetic storage to store data and is
commonly used in personal computers and servers.
• Solid-State Drive (SSD): Uses flash memory to store data and offers
faster access times compared to HDDs.
• Optical Discs: Include CDs, DVDs, and Blu-rays. They are used for
storing data, music, movies, and software.
File Systems: Auxiliary memory devices are formatted with file systems that
organize and manage data storage. Common file systems include NTFS,
FAT32, and exFAT for Windows, and HFS+ and APFS for macOS.
F) Associative Memory
Associative memory, also known as content-addressable memory (CAM), is accessed
by content rather than by address: a key is compared in parallel with all stored
words, and any matching location is flagged. It is used where fast parallel search
is required, for example in cache tag comparison and in translation lookaside
buffers.
G) Cache Memory:
The speed of main memory is very low in comparison with the speed of the processor.
– For good performance, the processor cannot spend much of its time waiting to
access instructions and data in main memory.
– It is important to devise a scheme that reduces the time to access this information.
– When the cache is full and a memory word that is not in the cache is referenced, the
cache control hardware must decide which block should be removed to create space
for the new block that contains the referenced word.
Cache memory is constructed with SRAM. It is much faster than DRAM, with
access time on the order of 10 ns. It is also much more expensive than the DRAM
used for physical memory.
Cache memory uses parallel searching of data. It first compares the incoming address
to the addresses present in the cache. If an address matches, a hit is said to have
occurred and the corresponding data is read by the CPU. If no address matches, a
miss is said to have occurred.
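The hit/miss check can be sketched in software as a toy model (addresses and data below are invented; a real cache performs the address comparison in parallel in hardware):

```python
# Toy model: the cache stores data keyed by block address; a lookup
# compares the incoming address with the stored ones and reports a
# hit or a miss, as described in the text.

def cache_lookup(cache: dict, address: int):
    if address in cache:
        return ("hit", cache[address])   # address matched: CPU reads the data
    return ("miss", None)                # no match: fetch from main memory

cache = {0x1A: "data@1A", 0x2B: "data@2B"}
hit = cache_lookup(cache, 0x1A)    # ("hit", "data@1A")
miss = cache_lookup(cache, 0x3C)   # ("miss", None)
```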
The key elements of cache design are:
1) Cache size:
The cache size should be small so that the average cost per bit is close to that
of main memory. The larger the cache, the slower it is.
2) Mapping techniques:
These specify how the cache is organized. Types: direct, associative, and set
associative.
3) Replacement algorithms: when the cache is full and a new block is
brought into the cache, one of the existing blocks must be replaced. A replacement
algorithm is needed for associative and set-associative mapping. To achieve
high speed, such an algorithm must be implemented in hardware.
The 4 most common replacement algorithms are:
a) LRU (least recently used):
The most effective.
Replace the block in the set that has been in the cache longest with no
reference to it.
Easily implemented for two-way set-associative mapping.
b) FIFO (first in, first out):
Replace the block in the set that has been in the cache longest.
FIFO is easily implemented as a round-robin or circular-buffer
technique.
c) LFU (least frequently used):
Replace the block in the set that has experienced the fewest references.
LFU can be implemented by associating a counter with each line.
d) Random replacement:
Pick a line at random from among the candidate lines. Its performance is
only slightly inferior to that of the usage-based algorithms.
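The LRU policy can be sketched with an ordered dictionary (a software model only; real caches implement this in hardware, and the capacity and block numbers here are made up):

```python
from collections import OrderedDict

class LRUCache:
    """Software model of LRU replacement for a small fully associative cache."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()          # least recently used block first

    def access(self, block: int) -> str:
        if block in self.lines:
            self.lines.move_to_end(block)   # mark as most recently used
            return "hit"
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict the least recently used
        self.lines[block] = True
        return "miss"

c = LRUCache(2)
results = [c.access(b) for b in (1, 2, 1, 3, 2)]
# 1: miss, 2: miss, 1: hit, 3: miss (evicts 2), 2: miss (evicts 1)
```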
4) Write policy:
i) Write through: when data is written from the CPU into a location
in the cache, it is also written to the corresponding location in physical memory.
ii) Write back: the value written to the cache is not always written to
physical memory. The value is written to physical memory only
once, when the block is removed from the cache.
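The contrast between the two policies can be sketched as follows (a toy model with invented names; main memory is represented by a dict, and the write-back cache tracks a dirty flag per line):

```python
# Write-through updates main memory on every write; write-back marks the
# line dirty and updates memory only when the block is evicted.

def write_through(cache: dict, memory: dict, addr: int, value):
    cache[addr] = value
    memory[addr] = value                 # every write also reaches memory

class WriteBackCache:
    def __init__(self, memory: dict):
        self.memory = memory
        self.lines = {}                  # addr -> (value, dirty flag)

    def write(self, addr: int, value):
        self.lines[addr] = (value, True) # memory is now stale (dirty)

    def evict(self, addr: int):
        value, dirty = self.lines.pop(addr)
        if dirty:
            self.memory[addr] = value    # written back only on eviction

mem = {}
wb = WriteBackCache(mem)
wb.write(0x10, 99)
stale = 0x10 not in mem                  # True: memory not yet updated
wb.evict(0x10)
final = mem[0x10]                        # 99 after the write-back
```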
5) Line size: for HPC systems, line sizes of 64 and 128 bytes are most
commonly used.
6) Number of caches: cache memory is placed at 2 or 3 levels, called
first level (L1), second level (L2), and third level (L3). Some processors contain
L1 and L2 within the processor. Caches within the processor are internal caches,
and those outside the processor are external caches.
Cache Mapping:
Cache mapping defines how a block from the main memory is
mapped to the cache memory in case of a cache miss. Cache
mapping is a technique by which the contents of main memory
are brought into the cache memory.
1. Direct Mapping
2. Fully Associative Mapping
3. K-way Set Associative Mapping
1. Direct Mapping
In direct mapping, a particular block of main memory can map only to a particular
line of the cache. Consider that the cache memory is divided into ‘n’
lines. Then, block ‘j’ of main memory can map only to line
number (j mod n) of the cache.
In direct mapping, there is no need for any replacement algorithm, because
a main memory block can map only to one particular line of the
cache. Thus, the new incoming block always replaces the
existing block (if any) in that particular line.
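The (j mod n) rule can be checked directly (the line count and block numbers below are illustrative, not from the text):

```python
def direct_map_line(block: int, num_lines: int) -> int:
    """Cache line for main-memory block j in a direct-mapped cache: j mod n."""
    return block % num_lines

n = 8                                 # a cache with 8 lines
line_a = direct_map_line(3, n)        # block 3  -> line 3
line_b = direct_map_line(11, n)       # block 11 -> line 3 as well
# Blocks 3 and 11 compete for the same line: each new arrival simply
# replaces whatever block currently occupies that line.
```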
2. Fully Associative Mapping
In fully associative mapping, a block of main memory can map
to any line of the cache that is freely available at that moment.
This makes fully associative mapping more flexible than direct
mapping.
Here, all the lines of the cache are freely available, so any block
of main memory can map to any line of the cache. If all the
cache lines are occupied, then one of the existing blocks has to
be replaced. Thus, in fully associative mapping, a replacement
algorithm is required; it suggests which block to replace when all
the cache lines are occupied. Replacement algorithms like FCFS
(FIFO), LRU, etc. are employed.
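A fully associative lookup can be sketched as a search over all lines, here paired with FCFS/FIFO replacement as mentioned above (a software toy model with invented sizes; hardware compares all lines in parallel):

```python
def fa_lookup(lines: list, block: int) -> bool:
    return block in lines                # compare the block against every line

def fa_insert(lines: list, block: int, capacity: int):
    if len(lines) >= capacity:
        lines.pop(0)                     # FCFS/FIFO: evict the oldest-loaded block
    lines.append(block)

lines = []
for b in (5, 9, 5, 7):
    if not fa_lookup(lines, b):
        fa_insert(lines, b, capacity=2)
# 5: miss -> [5]; 9: miss -> [5, 9]; 5: hit; 7: miss, evicts 5 -> [9, 7]
```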
3. K-way Set Associative Mapping
In k-way set associative mapping,
Cache lines are grouped into sets, where each set contains k
lines. A particular block of main memory can map to
only one particular set of the cache. However, within that set,
the memory block can map to any cache line that is freely
available.
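The set selection works like the direct-mapping rule applied to sets: block j goes to set (j mod s), where s is the number of sets. A sketch with illustrative sizes:

```python
def set_index(block: int, num_sets: int) -> int:
    """Set for main-memory block j in a set-associative cache: j mod s."""
    return block % num_sets

num_lines, k = 8, 2
num_sets = num_lines // k        # 8 lines grouped into 4 sets of k = 2 lines
set_a = set_index(6, num_sets)   # block 6  -> set 2
set_b = set_index(10, num_sets)  # block 10 -> set 2 as well
# Unlike direct mapping, blocks 6 and 10 can coexist: the set holds k = 2
# lines, and a replacement algorithm is needed only when a third block
# maps to the same full set.
```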
H) Virtual Memory