Coa Notes - Unit 4 Rgpv
1. Main Memory: RAM and ROM
Main Memory refers to the primary storage that is directly accessible by the CPU. It is essential for storing and retrieving data that is currently being processed by the computer.
RAM (Random Access Memory)
● Volatility: Volatile memory (loses data when power is turned off).
● Purpose: Temporarily stores data and instructions that the CPU needs in real-time.
● Types of RAM:
○ Dynamic RAM (DRAM): Requires constant refresh to maintain data, cheaper but
slower.
○ Static RAM (SRAM): Faster and more reliable than DRAM, does not require
refreshing but is more expensive and takes up more space.
● Speed: Fast, with low latency access for active tasks.
● Capacity: Typically in gigabytes (GB).
● Use Case: Used for running programs, operating systems, and holding intermediate data for processing.
ROM (Read-Only Memory)
● Volatility: Non-volatile memory (retains data even when power is turned off).
● Purpose: Stores firmware or software that is permanently programmed into the system, such as boot-up instructions.
● Types of ROM:
○ PROM (Programmable ROM): Can be written once and is read-only afterward.
○ EPROM (Erasable Programmable ROM): Can be erased using ultraviolet light and reprogrammed.
○ EEPROM (Electrically Erasable Programmable ROM): Can be erased and
reprogrammed electrically.
● Speed: Slower compared to RAM, especially when writing data.
● Capacity: Typically in kilobytes or megabytes.
● Use Case: Stores the BIOS/firmware used during the boot process.
2. Secondary Memory: Magnetic Tape, Disk, Optical Storage
Secondary Memory provides long-term storage for data and programs that are not actively in use but need to be stored for retrieval.
Magnetic Tape
● Volatility: Non-volatile.
● Purpose: Used for archival storage and backup.
● Speed: Slow access times compared to other secondary memory types.
● Capacity: Can store large amounts of data, often in terabytes (TB).
● Cost: Relatively inexpensive per unit of storage.
● Use Case: Long-term backups, data archiving, and disaster recovery.
Magnetic Disk
● Volatility: Non-volatile.
● Purpose: A general-purpose storage medium for operating systems, applications,
and data.
● Speed: Moderate speed, faster than magnetic tape but slower than RAM.
● Capacity: Can range from hundreds of gigabytes (GB) to several terabytes (TB).
● Cost: Moderate in cost per unit of storage.
● Use Case: Hard disk drives (HDDs) are commonly used for system storage in
computers.
Optical Storage
● Volatility: Non-volatile.
● Purpose: Used for storing data in a format that can be read using a laser.
● Speed: Slower than magnetic disks, especially for writing data.
● Capacity: Typically ranges from a few hundred megabytes (CDs) to tens of gigabytes (Blu-ray).
● Cost: Cheap per unit of storage, especially for mass distribution.
● Use Case: Software distribution, media storage, and archiving data for long-term
preservation.
Volatile Memory
● Definition: Memory that requires power to maintain the stored information. When the power is lost, all data in volatile memory is erased.
● Examples: RAM, Cache memory.
● Use: Primarily used for temporary storage of data that the CPU actively processes. It is fast but loses its contents when the system is powered off.
● Advantages: Fast access speeds, ideal for holding active data.
● Disadvantages: Data is lost when power is turned off.
Non-Volatile Memory
● Definition: Memory that retains stored data even when the power is turned off.
● Examples: ROM, Flash memory, Hard Drives, SSDs.
● Use: Used for long-term storage of data, applications, and system software.
● Advantages: Data persistence, cheaper per byte compared to volatile memory.
● Disadvantages: Slower access times compared to volatile memory, especially for
write operations.
Cache Memory
● Definition: A small, high-speed memory located close to the CPU. It stores
frequently used instructions and data to reduce the CPU's wait time for data retrieval.
● Volatility: Volatile (data is lost when power is turned off).
● Speed: Extremely fast, much faster than RAM.
● Capacity: Small, typically ranging from a few kilobytes (KB) to a few megabytes
(MB).
● Use: Used to speed up access to frequently used data and instructions. Cache
stores a copy of data from RAM that is repeatedly accessed.
● Types of Cache:
○ L1 Cache: Located on the CPU chip for immediate access.
○ L2 Cache: Slightly slower than L1 but larger in size, located near the CPU.
○ L3 Cache: Even larger and slower than L2, shared among multiple cores in
multi-core processors.
Virtual Memory
● Definition: A memory management technique that creates an illusion of a larger amount of memory than physically available by using a portion of secondary storage (like a hard disk) as virtual RAM.
● Volatility: Volatile (active virtual memory data is lost if the system shuts down).
● Speed: Slower than physical memory (RAM), as it involves data swapping between RAM and secondary storage.
● Capacity: Large, typically ranging from a few gigabytes (GB) to many terabytes (TB), depending on the system configuration and storage space.
● Use: Used to extend the effective amount of memory available to programs,
especially when physical RAM is insufficient.
● Working: When the RAM is full, less-used data is swapped to disk storage (virtual memory) to free up space for active tasks. When the data is needed again, it is swapped back into RAM.
● Main Memory (RAM and ROM): RAM is used for temporary storage of active data, while ROM stores permanent instructions (like the boot-up process).
● Secondary Memory (Magnetic Tape, Disk, Optical Storage): Provides long-term
storage for programs and data that are not actively used.
● Volatile vs. Non-Volatile Memory: Volatile memory requires power to retain data,
while non-volatile memory retains data even when powered off.
● Cache vs. Virtual Memory: Cache improves speed by holding frequently used data, while virtual memory allows a system to run larger applications by using secondary storage as temporary memory.
Differences
Main Memory (Primary Memory) and Secondary Memory are two types of storage used in computer systems to store data and programs. They differ in various aspects such as speed, volatility, and purpose.
● Examples: Main memory includes RAM (Random Access Memory) and cache; secondary memory includes Hard Disk Drives (HDD), Solid-State Drives (SSD), and Optical Discs.
● Access Method: Main memory offers direct access to data; secondary memory is accessed indirectly through I/O operations or file systems.
● Power Dependency: Main memory requires constant power to retain data; secondary memory retains data without power.
Cache Memory and Virtual Memory are both used to enhance the overall performance of a computer system, but they work in very different ways and serve different purposes.
● Purpose: Cache temporarily stores frequently accessed data to speed up access; virtual memory extends the computer's memory by using a portion of secondary memory (like a hard drive) as if it were RAM.
● Speed: Cache is extremely fast, faster than RAM; virtual memory is slower than RAM, but provides an illusion of more memory.
● Size: Cache is smaller in size (typically 256 KB to several MB); virtual memory is larger in size (can be many GB).
● Cost: Cache is more expensive per byte than RAM; virtual memory is cheaper, as it uses the hard drive.
Volatile Memory and Non-Volatile Memory are two categories of memory that differ primarily in their data retention characteristics.
● Data Retention: Volatile memory loses data when power is turned off; non-volatile memory retains data even when power is turned off.
● Power Dependency: Volatile memory requires continuous power to maintain data; non-volatile memory does not require continuous power.
● Energy Consumption: Volatile memory consumes power as long as the computer is on; non-volatile memory requires no power once data is written (unless actively in use).
● Examples in Devices: Volatile: desktop and laptop RAM, CPU registers. Non-volatile: hard drives, flash drives, BIOS, ROM chips.
A Replacement Algorithm is a method used to determine which cache block should be replaced when the cache is full and new data needs to be loaded into the cache. These algorithms are essential for improving the cache hit rate, thus reducing memory access times and improving overall system performance.
When a new block of data is requested, if it is not already present in the cache (a cache miss occurs), the replacement algorithm is triggered to select an existing block in the cache to replace with the new one.
1. Least Recently Used (LRU)
○ Concept: The block that has not been used for the longest period is replaced. The idea is that if a block has not been accessed for a long time, it is less likely to be accessed soon.
○ Steps:
1. Track the access history of all cache blocks.
2. When a cache miss occurs, identify the block that has been unused for the longest time.
3. Replace the least recently used block with the new data.
2. First-In, First-Out (FIFO)
○ Concept: The block that has been in the cache the longest is replaced, regardless of how frequently or recently it has been accessed.
○ Steps:
1. Each cache block is associated with a timestamp indicating when it was last loaded into the cache.
2. When a cache miss occurs, the block with the oldest timestamp is replaced.
3. The new data is then loaded into the cache and assigned a new timestamp.
3. Least Frequently Used (LFU)
○ Concept: The block that has been accessed the least number of times is replaced. This algorithm assumes that blocks used less frequently are less likely to be needed again.
○ Steps:
1. Keep track of how often each block in the cache has been accessed.
2. When a cache miss occurs, identify the block with the lowest access count.
3. Replace the least frequently used block with the new data.
4. Random Replacement
○ Concept: A cache block is randomly selected for replacement when a miss occurs.
○ Steps:
1. When a cache miss occurs, randomly choose one block from the cache to replace.
2. Replace the chosen block with the new data.
5. Optimal Replacement (Belady's Algorithm)
○ Concept: The block that will not be used for the longest time in the future is
replaced. This algorithm is theoretically the best but impractical because it requires
knowledge of future requests.
○ Steps:
1. Look ahead at the future sequence of memory accesses.
2. Identify the block in the cache that will not be accessed for the longest time.
3. Replace that block with the new data.
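To make the LRU policy concrete, the following is a minimal Python sketch of a small fully associative cache with LRU replacement; the block addresses, cache capacity, and access trace are illustrative assumptions, not values from the notes.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal fully associative cache with LRU replacement."""

    def __init__(self, capacity):
        self.capacity = capacity          # number of cache blocks
        self.blocks = OrderedDict()       # block address -> data, oldest first

    def access(self, block_addr):
        """Return 'hit' or 'miss'; on a miss, evict the least recently used block."""
        if block_addr in self.blocks:
            self.blocks.move_to_end(block_addr)   # mark as most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)       # evict the least recently used block
        self.blocks[block_addr] = None            # load the new block (data omitted)
        return "miss"

# Hypothetical trace of block addresses, cache of 3 blocks.
cache = LRUCache(capacity=3)
for addr in [1, 2, 3, 1, 4, 2, 5]:
    print(addr, cache.access(addr))
```

The OrderedDict keeps blocks in recency order, so the front of the dictionary is always the LRU victim.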
Cache performance is critical for overall system speed, and there are several techniques that can be used to improve cache hit rates and reduce the time it takes to retrieve data from main memory.
The same LRU principle is also used by the operating system for page replacement in virtual memory. It works as follows:
1. Page Reference: Each time a page is accessed (either read or written), it is marked as "most recently used."
2. Page Fault: When a new page needs to be loaded into memory and there is no free space, the LRU algorithm looks for the page that was last used the longest ago and replaces it with the new page.
3. Tracking Access: The system keeps track of the order in which pages are
accessed, either by using a counter or a stack.
1. Counter Implementation:
○ Every page entry has a counter associated with it.
○ Whenever a page is referenced, the counter is updated with the current clock time.
○ When a page fault occurs, the page with the smallest counter value (least recently used) is replaced.
2. Stack Implementation:
○ The pages are maintained in a stack where the most recently used page is at the top, and the least recently used is at the bottom.
○ Each time a page is referenced, it is moved to the top of the stack.
○ When a page fault occurs, the page at the bottom of the stack (the least recently
used) is replaced.
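The counter implementation described above can be sketched as follows; the number of frames and the reference string are illustrative assumptions, not values from the notes.

```python
def lru_counter_replacement(reference_string, num_frames):
    """Simulate LRU page replacement using per-page counters (timestamps)."""
    frames = {}        # page -> last-used "clock" value
    clock = 0
    faults = 0
    for page in reference_string:
        clock += 1
        if page in frames:
            frames[page] = clock                  # refresh the counter on a hit
            continue
        faults += 1                               # page fault
        if len(frames) >= num_frames:
            victim = min(frames, key=frames.get)  # smallest counter = least recently used
            del frames[victim]
        frames[page] = clock
    return faults

# Illustrative reference string with 3 frames.
print(lru_counter_replacement([1, 2, 3, 1, 4, 2, 5], num_frames=3))
```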
LRU Example:
Consider the following page reference string and a memory size of 3 pages:
(Table columns: Page Reference, Pages in Memory, Page Fault?)
● The first three page references result in page faults because memory is initially
empty.
● After accessing page 1 again, no page fault occurs because it is already in memory.
● When page 4 is referenced, page 3 (the least recently used) is replaced.
● Similarly, as new pages 2 and 5 are accessed, the least recently used pages are
replaced.
Page replacement algorithms are techniques used by the operating system to manage pages in memory when a page fault occurs. There are several different algorithms for replacing a page when there is no free space in memory.
1. FIFO (First In, First Out)
○ The oldest page in memory (the first loaded) is replaced when a page fault occurs.
○ Pros: Simple and easy to implement.
○ Cons: Can cause poor performance due to "Belady's Anomaly," where increasing the number of frames can sometimes increase the number of page faults.
2. FIFO Example:
○ Given the reference string 1, 2, 3, 4, 1, 2, 5, and a memory size of 3:
○ Page-by-page trace (3 frames, FIFO):
1. Reference 1: page fault, memory [1].
2. Reference 2: page fault, memory [1, 2].
3. Reference 3: page fault, memory [1, 2, 3].
4. Reference 4: page fault, oldest page 1 replaced, memory [2, 3, 4].
5. Reference 1: page fault, oldest page 2 replaced, memory [3, 4, 1].
6. Reference 2: page fault, oldest page 3 replaced, memory [4, 1, 2].
7. Reference 5: page fault, oldest page 4 replaced, memory [1, 2, 5].
○ All 7 references cause page faults with 3 frames under FIFO.
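The trace above can be reproduced with a short FIFO simulator; this is a sketch of the "replace the oldest loaded page" rule using a simple queue.

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()   # front of the queue = oldest loaded page
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                      # hit: FIFO order is not updated on access
        faults += 1
        if len(frames) >= num_frames:
            frames.popleft()              # evict the page loaded earliest
        frames.append(page)
    return faults

# Reference string and frame count from the example above.
print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5], num_frames=3))  # prints 7
```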
The main page replacement algorithms compare as follows:
● LRU: Replaces the least recently used page. Pros: works well with temporal locality. Cons: complex to implement, requires history tracking.
● Optimal (OPT): Replaces the page that will be used furthest in the future. Pros: theoretically optimal. Cons: not practical due to the need for future knowledge.
● LFU: Replaces the least frequently used page. Pros: handles varying access patterns well. Cons: not ideal for bursty access patterns.
Memory Management Hardware
Memory management is an essential function of modern computer systems. It involves the management of a computer's memory resources to ensure that programs run efficiently while sharing memory resources without interfering with each other. Memory management hardware plays a critical role in implementing memory management functions, which include allocation, deallocation, and protection of memory. It involves both hardware and software components working together.
In this context, memory management hardware refers to the physical components that assist in managing memory, as well as the mechanisms used to access and control memory. These hardware elements are essential for managing different types of memory, such as primary memory (RAM), cache memory, virtual memory, and secondary memory.
Key Components of Memory Management Hardware
1. Memory Management Unit (MMU): The Memory Management Unit (MMU) is the primary hardware component responsible for translating logical addresses (generated by the CPU) to physical addresses (actual locations in memory). The MMU is crucial in implementing techniques like paging, segmentation, and virtual memory. The MMU interacts with the operating system to ensure that processes access the correct areas of memory.
○ Address Translation: The MMU translates logical addresses to physical addresses using paging or segmentation.
○ Protection: It ensures that processes cannot access memory allocated to other
processes, preventing unauthorized memory access and ensuring process isolation.
○ Caching: The MMU often uses cache memory to improve address translation
speeds by storing recently accessed address translations.
2. Page Tables: The page table is a data structure that maps virtual addresses to physical addresses. The MMU uses page tables to keep track of this mapping. Each process has its own page table to maintain a separate mapping, which ensures that processes cannot access each other's memory space.
○ Page Table Entries (PTE): Each entry in the page table stores the physical address corresponding to a specific virtual address in the logical memory space of a process.
○ Page Table Registers: These are special registers in the MMU that store the base address of the page table for the current process.
3. Translation Lookaside Buffer (TLB): The Translation Lookaside Buffer (TLB) is a small, fast memory unit that stores recently used virtual-to-physical address translations. The TLB helps speed up memory access by reducing the time it takes to look up addresses in the page table. When the MMU receives a logical address, it first checks the TLB to see if the translation is already cached. If the translation is found, it is referred to as a TLB hit; otherwise, a TLB miss occurs, and the MMU looks up the page table.
4. Segment Registers: In systems that use segmentation for memory management, the segment registers are used to store the base addresses of memory segments. Segmentation divides the memory into logical units, such as code, data, and stack segments, and each segment is treated independently. The segment registers hold the base address of each segment, which helps in translating a logical address to a physical address.
○ Segment Table: Similar to page tables, a segment table stores the base address
and limits of each segment. The segment register helps locate the relevant segment in
memory.
5. Base and Limit Registers: These registers are used in systems that support segmentation or simple memory protection. The base register stores the starting address of a segment, and the limit register stores the size of the segment. The CPU uses these registers to check if a memory access request is within the allowed limits of the segment. If a process tries to access memory outside the segment boundaries, a segmentation fault occurs.
6. Cache Memory Management: Cache memory is a small, fast memory that stores
frequently accessed data from main memory (RAM). Hardware mechanisms such as the
cache controller and cache mapping strategies (direct-mapped, associative,
set-associative) play an essential role in managing cache memory and improving overall
system performance.
○ Cache Controller: This hardware component manages the flow of data between the CPU and cache memory. It controls reading from and writing to the cache and determines which data should be stored.
○ Cache Miss Handling: The hardware is responsible for handling cache misses (when data is not found in the cache), which involves fetching the data from main memory and loading it into the cache.
7. Hardware Support for Virtual Memory: Virtual memory enables a system to use hard disk space as if it were part of RAM, allowing programs to use more memory than physically available. The MMU works in conjunction with the operating system to manage virtual memory. Hardware support includes:
○ Page Fault Handling: When a page is not found in physical memory, the operating system triggers a page fault. The MMU and the CPU coordinate with the operating system to bring the required page from secondary storage (such as a hard disk) into RAM.
○ Swap Space: Swap space in secondary memory is used to store pages that are not currently in use, allowing the system to free up physical memory.
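To connect the MMU, page table, and TLB, here is a simplified sketch of address translation; the 4 KB page size, the toy page-table contents, and the TLB capacity are assumptions made only for illustration.

```python
PAGE_SIZE = 4096          # assumed 4 KB pages
TLB_CAPACITY = 4          # assumed tiny TLB, for illustration only

page_table = {0: 5, 1: 9, 2: 3}   # virtual page number -> physical frame number (toy data)
tlb = {}                           # cached subset of the page table

def translate(logical_addr):
    """Translate a logical address to a physical address via the TLB, then the page table."""
    vpn, offset = divmod(logical_addr, PAGE_SIZE)
    if vpn in tlb:                       # TLB hit: translation already cached
        frame = tlb[vpn]
    elif vpn in page_table:              # TLB miss: walk the page table
        frame = page_table[vpn]
        if len(tlb) >= TLB_CAPACITY:
            tlb.pop(next(iter(tlb)))     # simple eviction to make room
        tlb[vpn] = frame                 # cache the translation for next time
    else:
        raise MemoryError("page fault: page %d not in physical memory" % vpn)
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))   # virtual page 1, offset 0xABC -> frame 9 -> 0x9abc
```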
1. Paging: Paging is a memory management technique in which the logical memory is divided into fixed-sized pages, and the physical memory is divided into fixed-sized frames. The MMU uses the page table to map pages to frames. Paging allows processes to access non-contiguous memory, improving memory utilization.
2. Segmentation: Segmentation divides memory into segments of varying sizes, each representing a logical unit, such as a program's code, data, or stack. Segmentation enables more flexibility in memory allocation. The MMU and segment registers manage the translation of logical addresses to physical addresses for each segment.
3. Virtual Memory: Virtual memory uses the MMU to translate virtual addresses to physical addresses. It enables the system to use secondary storage as an extension of RAM, which increases the apparent amount of memory available to programs. Virtual memory allows multitasking and the execution of large programs that exceed physical memory.
4. Memory Protection: Memory protection prevents one process from accessing another process's memory. This is essential for system stability and security. The MMU, in combination with the operating system, ensures that each process can only access its own memory space by checking the validity of memory addresses.
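The base/limit check described in item 5 above can be sketched as follows; the segment base, limit, and offsets are made-up values used only to illustrate the protection check.

```python
class Segment:
    """A segment described by its base address and limit (size), as held in base/limit registers."""

    def __init__(self, base, limit):
        self.base = base
        self.limit = limit

    def physical_address(self, offset):
        """Translate an offset within the segment, enforcing the limit check."""
        if offset < 0 or offset >= self.limit:
            raise MemoryError("segmentation fault: offset %d outside segment" % offset)
        return self.base + offset

# Hypothetical data segment starting at 0x4000, 8 KB long.
data_segment = Segment(base=0x4000, limit=0x2000)
print(hex(data_segment.physical_address(0x100)))   # valid access -> 0x4100
# data_segment.physical_address(0x3000) would raise a segmentation fault.
```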
● Efficiency: By optimizing memory usage and reducing memory wastage,
hardware-assisted memory management increases the efficiency of the entire system.
● Isolation and Security: Memory management hardware ensures that processes do not interfere with each other, which is crucial for process isolation, system security, and stability.
● Performance: Using hardware components like the TLB and cache, the system can achieve faster memory access, minimizing the impact of page faults and improving the overall performance of the system.
● Support for Multitasking: By enabling efficient handling of virtual memory and
ensuring that each process has access to its own address space, memory management
hardware makes multitasking possible.
Semiconductor Memories
Semiconductor memories are the most common type of memory used in modern computer systems. These memories use semiconductor materials like silicon to store data, and they are electrically programmable and erasable, meaning they can be easily modified and read. Semiconductor memories are primarily classified into two categories: volatile and non-volatile memories.
Semiconductor memories can be divided into two broad categories based on their functionality and characteristics:
1. Volatile Memories: Volatile memory requires a constant power supply to maintain the stored data. If the power is lost or turned off, the data is erased. Random Access Memory (RAM) is the most common example of volatile memory.
○ Dynamic RAM (DRAM): DRAM is the most widely used form of memory in computers and other devices. It stores each bit of data in a separate capacitor within an integrated circuit. DRAM requires constant refreshing to maintain data integrity because capacitors leak charge over time. Although slower than SRAM, DRAM is cheaper and more densely packed, which makes it ideal for main memory in computers.
■ Advantages:
■ High memory density (more bits in a given area).
■ Cost-effective and commonly used in main memory.
■ Disadvantages:
■ Slower than SRAM.
■ Requires constant refreshing to maintain data.
○ Static RAM (SRAM): SRAM stores data in flip-flops, which do not require constant refreshing. SRAM is faster than DRAM and is used for cache memory in processors, where speed is critical. However, it is more expensive and has lower memory density compared to DRAM.
■ Advantages:
■ Faster access times compared to DRAM.
■ No need for refreshing.
■ Disadvantages:
■ Higher cost per bit compared to DRAM.
■ Lower memory density.
2. Non-Volatile Memories: Non-volatile memory retains stored data even when the power is turned off. These are used for long-term data storage and are typically slower than volatile memories.
○ Read-Only Memory (ROM): ROM is a type of non-volatile memory used to store
firmware, which is permanent software embedded into a hardware device. ROM retains
data even when power is lost. It is typically used in devices like BIOS chips, embedded
systems, and other hardware-level storage needs.
■ Types of ROM:
■ Mask ROM: Pre-programmed at the factory with data that cannot be changed.
■ Programmable ROM (PROM): Can be programmed by the user once after
manufacture.
■ Erasable Programmable ROM (EPROM): Can be erased by ultraviolet light and
reprogrammed.
■ Electrically Erasable Programmable ROM (EEPROM): Can be erased and
reprogrammed electrically, without the need for ultraviolet light.
■ Advantages:
■ Data retention without power.
■ Reliable for storing firmware.
■ Disadvantages:
■ Slower access speeds compared to RAM.
■ Limited write cycles in some types of ROM.
○ Flash Memory: Flash memory is a type of non-volatile memory used in USB drives, SSDs (Solid State Drives), and memory cards. It is similar to EEPROM but is faster and more reliable, allowing multiple blocks of data to be erased and written simultaneously. Flash memory is used extensively in portable devices for its ability to store large amounts of data without requiring constant power.
■ Advantages:
■ Non-volatile.
■ High speed and durability.
■ Can be rewritten multiple times.
■ Disadvantages:
■ More expensive than traditional hard disk storage.
■ Limited number of write/erase cycles.
○ Ferroelectric RAM (FRAM): FRAM is a non-volatile memory that uses a ferroelectric layer to store data. It is faster than EEPROM and consumes less power. It is mainly used in low-power applications, such as embedded systems and smart cards.
■ Advantages:
■ Fast read/write speeds.
■ Low power consumption.
■ Unlimited read/write cycles.
■ Disadvantages:
■ Higher cost compared to other non-volatile memories.
■ Limited availability and integration.
1. Access Time: Access time refers to the time taken by the memory to retrieve a specific piece of data. Volatile memories like SRAM have faster access times compared to DRAM, which requires refreshing.
2. Capacity: The capacity of semiconductor memories is measured in bytes (B), kilobytes (KB), megabytes (MB), gigabytes (GB), and so on. DRAM typically offers higher capacity compared to SRAM.
3. Speed: Speed refers to how quickly data can be read from or written to the memory. SRAM is faster than DRAM due to the absence of the need for refreshing, while flash memory is slower than RAM but faster than magnetic storage devices.
4. Power Consumption: Volatile memories like DRAM require continuous power to retain data, leading to higher power consumption compared to non-volatile memories like ROM and flash memory, which consume power only during read/write operations.
5. Durability: Non-volatile memories, such as flash memory, offer higher durability since they retain data even after power loss. Volatile memories, like DRAM, lose all data when power is lost, so they are not suitable for long-term storage.
● RAM (Random Access Memory): Used as the main memory in computers to store actively used programs and data.
● Cache Memory: A small, fast memory used to store frequently accessed data for
quicker retrieval.
● ROM: Used to store firmware in devices like embedded systems, BIOS chips in
computers, and gaming consoles.
● Flash Memory: Used in USB drives, SSDs, and memory cards for portable storage.
● EPROM/EEPROM: Used in devices that require occasional rewrites, such as
firmware updates in consumer electronics.
Memory hierarchy in computer system, Memory organization and different types in computer system
The memory hierarchy in a computer system refers to the organization and arrangement of different types of memory devices, from the fastest and smallest to the slowest and largest. The idea behind this hierarchy is to provide a balance between speed, size, and cost, ensuring that the system operates efficiently while keeping memory access as fast as possible.
The hierarchy is structured so that the most frequently accessed data is stored in the fastest (and usually more expensive) memory, while less frequently accessed data is stored in slower, larger, and cheaper memory types. This organization helps optimize the overall performance of the system.
1. Registers
○ Location: Inside the CPU.
○ Speed: Fastest.
○ Size: Smallest (few bytes to a few kilobytes).
○ Cost: Highest.
○ Description: Registers are the smallest and fastest form of memory in a computer.
They are located within the processor itself and hold data that is immediately required
during program execution, such as operands for calculations, intermediate results, or
control information.
2. Cache Memory
○ Location: Near or within the CPU (L1, L2, and sometimes L3 cache).
○ Speed: Very fast, but slower than registers.
○ Size: Relatively small (ranging from a few kilobytes to several megabytes).
○ Cost: Expensive.
○ Description: Cache memory stores frequently accessed data or instructions. There are typically multiple levels (L1, L2, L3) of cache, with L1 being the smallest and fastest. L1 cache is integrated within the processor, while L2 and L3 caches are located on the CPU chip or close to it. Caches improve performance by reducing the time it takes to access data from main memory.
3. Main Memory (RAM)
○ Location: External to the CPU on the motherboard.
○ Speed: Slower than cache but faster than secondary memory.
○ Size: Larger (ranging from a few gigabytes to several terabytes).
○ Cost: Moderate.
○ Description: RAM (Random Access Memory) is the primary working memory of the computer where programs and data currently in use are stored. It is volatile, meaning it loses its contents when power is turned off. It is typically split into DRAM (Dynamic RAM) and SRAM (Static RAM), with DRAM being slower and cheaper, and SRAM being faster but more expensive.
4. Secondary Memory
○ Location: External to the CPU, such as hard drives, SSDs, and optical disks.
○ Speed: Slowest.
○ Size: Largest (several gigabytes to multiple terabytes).
○ Cost: Cheapest.
○ Description: Secondary memory is used for long-term storage of data and
programs. It is non-volatile, retaining data even when power is turned off. Types of
secondary memory include:
■ Hard Disk Drives (HDDs): Magnetic storage devices with relatively slow read/write speeds.
■ Solid-State Drives (SSDs): Faster than HDDs, SSDs use flash memory to store
data.
■ Optical Discs: DVDs and Blu-ray discs used for storage and media distribution.
■ Magnetic Tape: Used for backup storage and archiving.
5. Tertiary and Off-line Storage
○ Location: External, typically used for archiving.
○ Speed: Slowest.
○ Size: Very large.
○ Cost: Least expensive.
○ Description: This level includes archival storage systems, such as magnetic tapes and cloud storage, where data is stored infrequently and retrieved only when necessary. It is mainly used for long-term storage and backup.
Memory organization refers to the way different memory units are structured, managed, and accessed in a computer system. It involves determining how data is stored, retrieved, and transferred between different memory types in the memory hierarchy.
1. Von Neumann Architecture (Single Memory Space)
○ In Von Neumann architecture, there is a single memory space for both instructions and data. The CPU fetches instructions and data sequentially from the same memory, which can lead to a bottleneck known as the Von Neumann Bottleneck. This architecture is simpler and more cost-effective but can be slower in processing complex tasks.
2. Harvard Architecture (Separate Memory for Data and Instructions)
○ In Harvard architecture, there are separate memory units for instructions and data, which allows for simultaneous access to both. This design leads to faster data processing and is typically used in systems requiring high performance, such as embedded systems and DSP (digital signal processing) devices.
3. Cache Memory Organization
○ Cache memory is organized to store copies of frequently accessed data from main memory. It is typically organized in a hierarchical manner, with different levels (L1, L2, and sometimes L3) of cache designed to minimize the time it takes to retrieve data. Cache memory is often organized in blocks or lines, and data is stored in a manner that optimizes access speed.
4. Virtual Memory Organization
○ Virtual memory is a memory management technique that uses a portion of secondary memory (typically a hard disk) to extend the apparent size of main memory. The operating system uses paging or segmentation to divide memory into small, manageable chunks, and data that is not actively being used can be swapped in and out of physical memory, creating the illusion of a larger memory space than what is physically available.
5. Paged and Segmented Memory
○ Paged Memory: Memory is divided into fixed-size blocks called pages. The operating system maintains a page table that maps virtual pages to physical memory addresses. This approach is used to implement virtual memory and allows for more efficient memory management.
○ Segmented Memory: Memory is divided into variable-sized segments, typically
representing different program components (such as code, data, stack). The operating
system keeps track of these segments to manage memory usage.
1. Primary Memory (Volatile Memory):
○ Includes RAM (Random Access Memory) and Cache Memory.
○ Directly accessible by the CPU for executing programs and tasks.
○ Volatile: Loses data when the power is off.
2. Secondary Memory (Non-Volatile Memory):
○ Includes Hard Disk Drives (HDD), Solid-State Drives (SSD), Optical Disks, and
Magnetic Tapes.
○ Provides long-term storage for programs and data.
○ Non-volatile: Retains data even when the power is turned off.
3. Tertiary and Off-line Storage:
○ Includes devices like Magnetic Tapes and Cloud Storage.
○ Used for archival purposes and long-term backups.
Cache structure and design, Mapping, types and techniques in cache memory, Cache hit and cache miss, cache coherency
Cache memory is a small, high-speed memory located between the CPU and main memory (RAM). It stores frequently accessed data or instructions to reduce the latency of memory accesses and improve overall system performance. The design of cache memory plays a crucial role in determining how effectively it enhances the speed of data retrieval.
The cache is designed to operate at speeds much faster than main memory, and it can either be a part of the CPU (as in L1 cache) or a separate chip near the CPU (as in L2 and L3 caches). The basic structure of cache memory consists of multiple cache lines or blocks, each capable of holding a small amount of data (typically a few tens of bytes, commonly 64 bytes per line).
1. Direct-Mapped Cache:
○ In this mapping technique, each block of main memory is mapped to exactly one line in the cache. This is the simplest and most commonly used method, where the memory address is divided into a tag, a cache line index, and a block offset.
○ Advantage: Simple to implement and fast.
○ Disadvantage: High chance of cache misses due to conflicts when multiple memory blocks map to the same cache line.
2. Associative Cache (Fully Associative Mapping):
○ In this mapping, any block of memory can be placed in any cache line. There is no
fixed location for any memory block in the cache. This type of cache is highly flexible.
○ Advantage: It reduces the chance of cache misses compared to direct-mapped
cache.
○ Disadvantage: More complex and slower due to the need to search the entire cache for a match.
3. Set-Associative Cache:
○ This is a hybrid of direct-mapped and fully associative mapping. The cache is divided into sets, and each memory block maps to one set. Within each set, multiple cache lines exist, and any memory block can be placed in any line of its designated set. Typically, caches are 2-way, 4-way, or N-way set-associative.
○ Advantage: It strikes a balance between the simplicity of direct-mapped cache and the flexibility of fully associative cache.
○ Disadvantage: More complex than direct-mapped, but generally faster and more
efficient.
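A small sketch of how a memory address is split for the mapping techniques above; the 64-byte line size and 64-line direct-mapped cache are assumed parameters, not figures from the notes.

```python
LINE_SIZE = 64      # assumed cache line size in bytes
NUM_LINES = 64      # assumed number of lines in a direct-mapped cache

def split_address(addr):
    """Split a byte address into (tag, index, offset) for a direct-mapped cache."""
    offset = addr % LINE_SIZE                 # byte within the cache line
    block_number = addr // LINE_SIZE          # main-memory block number
    index = block_number % NUM_LINES          # cache line the block must map to
    tag = block_number // NUM_LINES           # identifies which block occupies that line
    return tag, index, offset

# Two addresses whose blocks collide on the same cache line (a source of conflict misses).
print(split_address(0x0040))   # tag 0, index 1, offset 0
print(split_address(0x1040))   # tag 1, index 1, offset 0
```

In an N-way set-associative cache, the same index would select a set of N lines instead of a single line, reducing such conflicts.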
Cache memory comes in different types based on its position and purpose within the system architecture:
1. Level 1 Cache (L1 Cache):
○ L1 cache is the smallest and fastest type of cache, integrated directly into the CPU. It is used to store frequently accessed data and instructions. L1 cache is typically split into two parts: one for data (L1D) and one for instructions (L1I).
○ Size: 16 KB to 128 KB.
○ Speed: Very high, closest to the processor.
2. Level 2 Cache (L2 Cache):
○ L2 cache is larger and slower than L1 cache, but still faster than main memory. It can either be integrated into the CPU or placed externally, on the same chip or near the CPU.
○ Size: 128 KB to 8 MB.
○ Speed: Slower than L1, but much faster than main memory.
3. Level 3 Cache (L3 Cache):
○ L3 cache is shared among multiple cores in multi-core processors. It is the largest
cache and operates slower than L1 and L2 caches but still significantly faster than main
memory.
○ Size: 2 MB to 16 MB.
○ Speed: Slower than L2 but still faster than RAM.
1. Cache Hit:
○ A cache hit occurs when the data requested by the CPU is found in the cache. This results in a fast access to the data since it doesn't need to be fetched from the slower main memory.
○ Impact on performance: Cache hits are desirable and significantly improve system performance because they reduce the time it takes to retrieve data.
2. Cache Miss:
○ A cache miss occurs when the data requested by the CPU is not found in the cache, requiring the system to fetch the data from main memory. This results in longer delays and reduced performance.
○ Types of Cache Misses:
■ Compulsory Miss: Occurs the first time data is accessed.
■ Capacity Miss: Occurs when the cache is full and data needs to be replaced.
■ Conflict Miss: Occurs when multiple data elements map to the same cache line
(typically in direct-mapped or lower-way set-associative caches).
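The combined effect of hits and misses is often summarized as the average memory access time (AMAT = hit time + miss rate x miss penalty); the sketch below uses assumed latencies (1 ns cache hit, 100 ns miss penalty) just to show the calculation.

```python
def average_memory_access_time(hit_time_ns, miss_penalty_ns, hit_rate):
    """AMAT = hit time + miss rate * miss penalty."""
    miss_rate = 1.0 - hit_rate
    return hit_time_ns + miss_rate * miss_penalty_ns

# Assumed numbers: 1 ns cache hit, 100 ns penalty to fetch from main memory.
for hit_rate in (0.80, 0.95, 0.99):
    amat = average_memory_access_time(1.0, 100.0, hit_rate)
    print("hit rate %.0f%% -> AMAT %.1f ns" % (hit_rate * 100, amat))
```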
Cache Coherency
Cache coherency is the property that, in a system where multiple processors or cores each have their own cache, all cached copies of the same memory location stay consistent with each other and with main memory.
● Problem: Without proper cache coherency protocols, different processors may have different versions of the same data, leading to inconsistent results and errors.
● Cache Coherency Protocols: These protocols are used to maintain the consistency of data across caches. The most common protocols are:
○ MESI Protocol (Modified, Exclusive, Shared, Invalid): This protocol ensures that all caches in the system maintain a coherent view of memory by managing the states of cache lines. When data in one cache is modified, the other caches are notified and either invalidate or update their copies of the data.
○ MOESI Protocol (Modified, Owner, Exclusive, Shared, Invalid): A refinement of the MESI protocol, which adds an "Owner" state for the cache that holds the modified copy of a line and is responsible for supplying it to other caches and for writing it back to memory.
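A highly simplified sketch of MESI state changes for one cache line shared by two cores, showing how one core's write invalidates the other copy; real implementations rely on bus snooping, write-backs of Modified data, and many more cases than this toy model covers.

```python
# MESI states: 'M' (Modified), 'E' (Exclusive), 'S' (Shared), 'I' (Invalid).

class CacheLine:
    def __init__(self):
        self.state = "I"   # every copy starts Invalid

def read(core, caches):
    """A core reads the line: other valid copies drop to Shared."""
    others_valid = any(c.state in ("M", "E", "S")
                       for i, c in enumerate(caches) if i != core)
    for i, c in enumerate(caches):
        if i != core and c.state in ("M", "E"):
            c.state = "S"                     # Modified/Exclusive copies are downgraded
    caches[core].state = "S" if others_valid else "E"

def write(core, caches):
    """A core writes the line: all other copies are invalidated."""
    for i, c in enumerate(caches):
        if i != core:
            c.state = "I"
    caches[core].state = "M"

caches = [CacheLine() for _ in range(2)]      # two cores, one copy of the line each
read(0, caches); read(1, caches); write(1, caches)
print([c.state for c in caches])              # ['I', 'M']
```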