Module 5 - 5 Marks

1. A memory of 2048 bytes is to be built from 128 × 8 RAM chips. (a) How many chips are needed? (b) How many address lines are needed in total, and how many are common to all chips? (c) How many lines must be decoded for chip select, and what size decoder is needed? [5]

a. Each 128 × 8 RAM chip stores 128 bytes of memory (each of its 128 locations holds 8 bits = 1 byte).
Required memory = 2048 bytes
⇒ Number of chips = 2048 ÷ 128 = 16 chips

b. To address 2048 bytes:
⇒ log₂(2048) = 11 address lines needed in total.
Each chip can address 128 bytes:
⇒ log₂(128) = 7 lines are common to all chips (for internal chip addressing).

c. Number of chips = 16
⇒ log₂(16) = 4 lines must be decoded for chip select.
Decoder size: 4-to-16 decoder (to select 1 out of 16 chips)
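
The arithmetic above can be checked with a minimal Python sketch (the helper name ram_array_design is assumed for illustration, not part of the question):

```python
import math

def ram_array_design(total_bytes: int, chip_words: int, chip_bits: int) -> dict:
    """Chip count and address-line split for a byte-wide memory built
    from chip_words x chip_bits RAM chips."""
    chip_bytes = chip_words * chip_bits // 8      # bytes held by one chip
    return {
        "chips": total_bytes // chip_bytes,                  # chips needed
        "total_address_lines": int(math.log2(total_bytes)),  # for the whole memory
        "common_lines": int(math.log2(chip_words)),          # wired to every chip
        "decoded_lines": int(math.log2(total_bytes // chip_bytes)),  # chip select
    }

# Question 1: 2048 bytes from 128 x 8 chips
print(ram_array_design(2048, 128, 8))
# -> 16 chips, 11 total lines, 7 common lines, 4 decoded lines (4-to-16 decoder)
```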

2. What is Write-through? Which memory hierarchy is most suited to Write-through? Write 3 advantages of Write-through. [5]
Write-through is a cache writing policy where every write operation updates both the cache and the main memory simultaneously.
Write-through is most suited to the cache – main memory hierarchy, especially when data consistency between cache and memory is critical.
Three advantages of Write-through:
a. Data consistency: cache and main memory always hold the same data.
b. Simpler recovery: in case of power failure, no dirty data is lost because memory is already up to date.
c. Simpler cache design: no need for dirty bits or complex tracking logic.
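
As a rough illustration only (names and structure assumed, not from the original), the policy can be modelled in a few lines: every store updates the cache and main memory in the same step, so memory never holds stale data.

```python
# Minimal write-through cache model (illustrative sketch).
class WriteThroughCache:
    def __init__(self, memory: dict):
        self.memory = memory   # backing main memory: address -> value
        self.cache = {}        # cache contents: address -> value

    def write(self, addr, value):
        self.cache[addr] = value    # update the cache ...
        self.memory[addr] = value   # ... and main memory in the same operation

    def read(self, addr):
        if addr not in self.cache:               # miss: fill from main memory
            self.cache[addr] = self.memory[addr]
        return self.cache[addr]

mem = {0x10: 0}
c = WriteThroughCache(mem)
c.write(0x10, 42)
print(mem[0x10])   # 42 -- main memory is already consistent with the cache
```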
3. A computer uses RAM chips of 1024 × 1 bit capacity. [5]

a) How many such chips are required, and how should their address lines be connected, to create a memory of 1024 bytes?

Each chip = 1024 × 1 bit = 1024 bits = 128 bytes of capacity (since 1 byte = 8 bits).
To get 1024 bytes ⇒ 1024 ÷ 128 = 8 chips
These 8 chips work in parallel, each providing 1 bit of the complete 8-bit data word.

• All chips receive the same 10 address lines (since log₂(1024) = 10) to access 1024 locations.
• Each chip stores 1 bit per location, so 8 chips give 1 byte per location.

b) How many chips are required to build a memory of 16 KB? Explain how these chips are connected to the address bus.

Each chip = 1024 × 1 bit = 128 bytes
To get 16,384 bytes:
⇒ 16,384 ÷ 128 = 128 chips

To organize:
• Group the 128 chips into 16 groups, each group having 8 chips (to form 1 byte per address).
• These 16 groups are selected one at a time using 4 address lines (since log₂(16) = 4) through a 4-to-16 decoder.

Connection summary (see the sketch below):
• The lower 10 address lines go to all chips (to access 1024 rows).
• The next 4 address lines go to the decoder for chip-group selection.
• Within each group, 8 chips provide 8 bits = 1 byte of data per address.
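
A small Python sketch of this address split, under the same assumptions as the answer (constant and function names are illustrative only):

```python
import math

# 16 KB built from 1024 x 1-bit chips: 8 chips per byte-wide group,
# 16 groups selected through a 4-to-16 decoder.
CHIP_LOCATIONS = 1024                        # rows inside each chip
GROUPS = (16 * 1024) // CHIP_LOCATIONS       # 16 groups of 8 chips
ROW_BITS = int(math.log2(CHIP_LOCATIONS))    # 10 low address lines to every chip
GROUP_BITS = int(math.log2(GROUPS))          # 4 lines into the decoder

def decode(addr: int):
    """Split a 14-bit byte address into (group select, row within the chips)."""
    row = addr & (CHIP_LOCATIONS - 1)            # lower 10 bits
    group = (addr >> ROW_BITS) & (GROUPS - 1)    # next 4 bits drive the decoder
    return group, row

print(decode(0x2ABC))   # prints which group is enabled and which row is read
```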
4. A computer system has a logical address space composed of 128
segments. Each segment can contain up to 32 pages, and each page
holds 4K (4096) words. The physical memory has 4K blocks, each block
also holding 4K words. Formulate the logical and physical address
formats. [5]

i. Logical Address Format

We are given:
✓ Segments = 128 ⇒ Need log₂(128) = 7 bits for segment number
✓ Pages per segment = 32 ⇒ Need log₂(32) = 5 bits for page
number
✓ Words per page = 4K = 2¹² ⇒ Need 12 bits for word offset

Total logical address format:

✓ Segment: 7 bits
✓ Page: 5 bits
✓ Offset (within page): 12 bits

Logical Address = [Segment (7) | Page (5) | Word Offset (12)]

Total = 7 + 5 + 12 = 24 bits

ii. Physical Address Format

We are given :
✓ Physical memory has 4K blocks ⇒ log₂(4K) = 12 bits for block
number
✓ Each block has 4K words ⇒ log₂(4K) = 12 bits for word offset

Physical Address = [Block Number (12) | Word Offset (12)]

Total = 12 + 12 = 24 bits
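
For illustration (field layout exactly as derived above; the helper names are assumed), the 24-bit logical address can be packed and unpacked like this:

```python
# Logical address layout from the answer: [segment:7 | page:5 | offset:12] = 24 bits
SEG_BITS, PAGE_BITS, OFFSET_BITS = 7, 5, 12

def pack_logical(segment: int, page: int, offset: int) -> int:
    return (segment << (PAGE_BITS + OFFSET_BITS)) | (page << OFFSET_BITS) | offset

def unpack_logical(addr: int):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    page = (addr >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)
    segment = addr >> (PAGE_BITS + OFFSET_BITS)
    return segment, page, offset

addr = pack_logical(segment=100, page=17, offset=2048)
print(f"{addr:06X}", unpack_logical(addr))   # 24-bit address and its three fields
```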
5. If given a choice between increasing the number of memory banks or implementing memory interleaving, which approach would be more effective in improving memory access speed? [5]

Memory interleaving would be more effective than simply increasing the number of memory banks.
• Interleaving allows parallel access to multiple memory blocks, reducing effective access time.
• It improves throughput by starting the next memory access while the previous one is still completing.
• Just increasing banks without interleaving can cause bottlenecks if access patterns are not optimized.
• Interleaving makes better use of the available memory banks, maximizing performance.
• It is a scalable solution for faster memory operations in high-speed systems (see the sketch below).
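
As a brief illustration of low-order interleaving (the specific mapping is assumed for this example, not stated above): consecutive addresses are spread across banks, so a sequential access stream keeps every bank busy.

```python
# Low-order interleaving: bank = address mod number_of_banks, so consecutive
# words fall in different banks and their accesses can overlap in time.
NUM_BANKS = 4

def bank_of(addr: int) -> int:
    return addr % NUM_BANKS

for addr in range(8):
    print(addr, "-> bank", bank_of(addr))
# Addresses 0..3 map to banks 0..3; addresses 4..7 wrap around to banks 0..3 again.
```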

6. A system designer must choose between a high-speed cache and a larger RAM upgrade to improve performance. Which option should they prioritize and why? [5]
The system designer should prioritize a high-speed cache to improve performance.
• Cache reduces memory access time much more than adding RAM.
• It boosts CPU utilization by minimizing idle time.
• Programs benefit from locality of reference (recently used and nearby data stays in the cache).
• Larger RAM mainly helps when running many large programs; it does not add speed when RAM is already sufficient.
• A cache upgrade usually offers a better speed improvement for most applications at a lower cost.
7. In modern computers, why is DRAM still used for main memory instead of faster SRAM, despite SRAM's performance advantages? [5]
DRAM is still used for main memory instead of SRAM mainly because of cost and density.
• DRAM is cheaper and offers higher storage density, allowing more memory in a smaller area.
• SRAM is faster but much more expensive and consumes more chip area (6 transistors per bit vs 1 transistor + 1 capacitor per bit in DRAM).
• Using SRAM for main memory would make computers much costlier and limit memory size.
• DRAM requires periodic refreshing but still balances speed, capacity, and cost well for main memory needs.
• SRAM is better suited for small, high-speed caches, not large main memory.

8. Consider a direct-mapped cache of size 32 KB with block size 32 bytes. The CPU generates 32-bit addresses. Find the number of bits required for cache indexing and the tag bits. [5]
Cache size = 32 KB = 32 × 2¹⁰ = 32,768 bytes
Number of blocks = 32,768 ÷ 32 = 1024 blocks
Index bits = log₂(1024) = 10 bits
Block size = 32 bytes
Block offset bits = log₂(32) = 5 bits
Total address bits = 32 bits
Tag bits = 32 − (Index bits + Block offset bits)
Tag bits = 32 − (10 + 5) = 17 bits
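
This breakdown can be checked with a short Python helper (the function name cache_fields is assumed for illustration); changing the ways argument also covers the set-associative cases in later questions.

```python
import math

def cache_fields(cache_bytes: int, block_bytes: int, addr_bits: int, ways: int = 1):
    """Return (tag, index, offset) bit counts for a cache.
    ways=1 is direct-mapped; ways equal to the block count is fully associative."""
    blocks = cache_bytes // block_bytes
    sets = blocks // ways
    offset = int(math.log2(block_bytes))
    index = int(math.log2(sets)) if sets > 1 else 0
    tag = addr_bits - index - offset
    return tag, index, offset

# Question 8: 32 KB direct-mapped cache, 32-byte blocks, 32-bit address
print(cache_fields(32 * 1024, 32, 32))   # (17, 10, 5)
```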
9. A computer uses RAM chips of 256 × 8 and ROM chips of 1024 × 8. The system needs 2K bytes of RAM, 4K bytes of ROM, and has four interface units, each with four registers. Memory-mapped I/O is used. The two highest-order bits of the address bus are assigned: 00 for RAM, 01 for ROM, 10 for interface registers.
A. How many RAM and ROM chips are needed?
B. Draw a memory-address map for the system.
C. Give the address range (in hexadecimal) for RAM, ROM, and interface.

A. RAM chip = 256 bytes; ROM chip = 1024 bytes

System requirements:
RAM needed = 2K bytes = 2048 bytes
⇒ 2048 ÷ 256 = 8 RAM chips
ROM needed = 4K bytes = 4096 bytes
⇒ 4096 ÷ 1024 = 4 ROM chips

B. Since the 2 highest-order bits select the section (assuming a 14-bit address bus: 2 select bits plus the 12 address lines needed by the 4K ROM):

Address range (binary)    Section
00 XXXX XXXX XXXX         RAM (2K bytes)
01 XXXX XXXX XXXX         ROM (4K bytes)
10 XXXX XXXX XXXX         Interface (registers)

C. RAM (00):
0000H to 07FFH
(2K bytes = 2048 locations = 0x0000 to 0x07FF; only 11 of the 12 low address bits are used)
ROM (01):
1000H to 1FFFH
(4K bytes = 4096 locations = 0x1000 to 0x1FFF)
Interface (10):
2000H to 200FH
(4 units × 4 registers = 16 registers, starting at 0x2000)
10. A virtual memory system has :
Address space = 8K words,
Physical memory space = 4K words,
Page size = 1K words.
The following page reference changes occur during a time interval:
4, 2, 0, 1, 2, 6, 1, 4, 0, 1, 0, 2, 3, 5, 7
(Each change is listed once, even if the same page is referenced again.)
Determine the four pages that are resident in main memory after each
page reference change using :
A. FIFO page replacement
B. LRU page replacement.

A. FIFO (First-In First-Out)

• Start with empty memory (4 page frames, since 4K words ÷ 1K words per page = 4 frames).
• Replace the page that has been resident longest when a new page must be loaded.

Step  Reference  Memory status
 1    4          4 - - -
 2    2          4 2 - -
 3    0          4 2 0 -
 4    1          4 2 0 1
 5    2          4 2 0 1 (hit, no change)
 6    6          6 2 0 1 (replace 4, oldest)
 7    1          6 2 0 1 (hit, no change)
 8    4          6 4 0 1 (replace 2, oldest)
 9    0          6 4 0 1 (hit, no change)
10    1          6 4 0 1 (hit, no change)
11    0          6 4 0 1 (hit, no change)
12    2          6 4 2 1 (replace 0, oldest remaining)
13    3          6 4 2 3 (replace 1)
14    5          5 4 2 3 (replace 6)
15    7          5 7 2 3 (replace 4)

Pages resident after the last reference: 2, 3, 5, 7.
B. LRU (Least Recently Used)
• Replace the page that has not been used for the longest time.

Step  Reference  Memory status
 1    4          4 - - -
 2    2          4 2 - -
 3    0          4 2 0 -
 4    1          4 2 0 1
 5    2          4 2 0 1 (hit, update usage)
 6    6          6 2 0 1 (replace 4, least recently used)
 7    1          6 2 0 1 (hit, update usage)
 8    4          6 2 4 1 (replace 0, last used at step 3)
 9    0          6 0 4 1 (replace 2, last used at step 5)
10    1          6 0 4 1 (hit, update usage)
11    0          6 0 4 1 (hit, update usage)
12    2          2 0 4 1 (replace 6, last used at step 6)
13    3          2 0 3 1 (replace 4, last used at step 8)
14    5          2 0 3 5 (replace 1, last used at step 10)
15    7          2 7 3 5 (replace 0, last used at step 11)

Pages resident after the last reference: 2, 3, 5, 7 (the same four pages as FIFO here, reached through different replacements).
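
Both traces can be double-checked with a short simulator; the sketch below is illustrative only (all names assumed) and prints the pages left resident by each policy.

```python
def simulate(refs, frames, policy):
    """Return the resident pages after each reference.
    policy: 'FIFO' evicts the oldest-loaded page, 'LRU' the least recently used."""
    resident, order, history = [], [], []   # order = eviction order for the policy
    for r in refs:
        if r in resident:
            if policy == "LRU":             # a hit refreshes recency; FIFO ignores hits
                order.remove(r)
                order.append(r)
        else:
            if len(resident) == frames:     # memory full: evict order[0]
                victim = order.pop(0)
                resident[resident.index(victim)] = r
            else:
                resident.append(r)
            order.append(r)
        history.append(list(resident))
    return history

refs = [4, 2, 0, 1, 2, 6, 1, 4, 0, 1, 0, 2, 3, 5, 7]
for policy in ("FIFO", "LRU"):
    print(policy, simulate(refs, 4, policy)[-1])
# Both policies end with pages 2, 3, 5, 7 resident (in different frame positions).
```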


11. Assume there are three small caches, each consisting of four one-word blocks. One cache is fully associative, a second is two-way set-associative, and the third is direct-mapped. Find the number of misses for each cache organization given the following sequence of block addresses: 0, 8, 0, 6, and 8. [5]

A. Fully Associative Cache:

• Any block can go anywhere (LRU replacement assumed).
• The cache is initially empty.

Access  Result  Cache content
0       Miss    0 - - -
8       Miss    0 8 - -
0       Hit     0 8 - -
6       Miss    0 8 6 -
8       Hit     0 8 6 -

Misses = 3

B. Two-Way Set Associative Cache:

• 2 sets, each with 2 blocks (4 blocks total, 2 blocks per set); LRU within each set.
• Set number = block address mod 2 (Set 0 or Set 1).

Block  Set (block mod 2)  Result  Set content
0      0                  Miss    Set 0: 0 -
8      0                  Miss    Set 0: 0 8
0      0                  Hit     Set 0: 0 8
6      0                  Miss    Set 0: 0 6 (replace 8, LRU)
8      0                  Miss    Set 0: 8 6 (replace 0, LRU)

Misses = 4
C. Direct-Mapped Cache:
• 4 blocks → 4 cache lines.
• Cache line = block address mod 4.

Block  Line (block mod 4)  Result  Cache content
0      0                   Miss    Line 0: 0
8      0                   Miss    Line 0: 8 (replaces 0)
0      0                   Miss    Line 0: 0 (replaces 8)
6      2                   Miss    Line 2: 6
8      0                   Miss    Line 0: 8 (replaces 0)

Misses = 5
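
All three counts can be reproduced with one small simulator sketch (illustrative; LRU replacement within each set is assumed, as in the tables above):

```python
def count_misses(block_addrs, num_blocks=4, ways=1):
    """Misses for a cache of num_blocks one-word blocks.
    ways=1 -> direct-mapped, ways=num_blocks -> fully associative."""
    num_sets = num_blocks // ways
    sets = [[] for _ in range(num_sets)]   # each set lists its blocks in LRU order
    misses = 0
    for b in block_addrs:
        s = sets[b % num_sets]
        if b in s:
            s.remove(b)        # hit: move block to most-recently-used position
        else:
            misses += 1
            if len(s) == ways:
                s.pop(0)       # evict the least recently used block in this set
        s.append(b)            # the accessed block is now the most recently used
    return misses

seq = [0, 8, 0, 6, 8]
print("direct-mapped:", count_misses(seq, ways=1))       # 5
print("two-way:", count_misses(seq, ways=2))             # 4
print("fully associative:", count_misses(seq, ways=4))   # 3
```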

12. A direct-mapped cache has 1024 blocks and uses a 32-bit address. Determine the number of bits required for the index and the tag. [5]
Number of blocks = 1024
Index bits = log₂(1024) = 10 bits
(Block size is not given, so block offset = 0 bits, assuming 1 word per block.)
Total address bits = 32
Tag bits = 32 − Index bits − Block offset bits
Tag bits = 32 − 10 − 0 = 22 bits
13. Describe the differences between SRAM and DRAM in terms of
structure, speed, power consumption, and applications. Why is SRAM
preferred for cache, while DRAM is used for main memory? [5]

Aspect               SRAM                                 DRAM
Structure            6 transistors per bit (flip-flop)    1 transistor + 1 capacitor per bit
Speed                Very fast                            Slower than SRAM
Power consumption    Higher                               Lower
Applications         Cache memory                         Main memory (RAM)

SRAM is used for cache and DRAM is used for main memory because :
• SRAM is preferred for cache because it is faster, ideal for high - speed
small storage.
• DRAM is used for main memory because it is denser and cheaper, making
it suitable for large capacity storage.

14. A fully associative cache has 16 KB of storage with a block size of 32 bytes. Determine the number of tag bits required for a 32-bit address. [5]
Cache size = 16 KB = 16 × 2¹⁰ = 16,384 bytes
Block size = 32 bytes
Block offset bits = log₂(32) = 5 bits
Number of blocks = 16,384 ÷ 32 = 512 blocks
Fully associative → No index bits (because any block can go anywhere)
Tag bits = Total address bits − Block offset bits
Tag bits = 32 − 5 = 27 bits
15. Explain address translation for a virtual memory. [5]

I. How address translation works in virtual memory :


• The CPU generates a virtual address during program execution.
• The virtual address is divided into :
✓ Page number (identifies which page)
✓ Page offset (identifies exact location inside the page)

II. Translation process :


• Page number is sent to the Page Table.
• Page Table maps the virtual page number to a physical frame
number in main memory.
• Frame number is combined with the offset to form the physical
address.
• If the page is not in memory (page fault), it is loaded from
secondary storage.

III. Tools involved :


• Page Table: Maintains the mapping between virtual pages and
physical frames.
• TLB (Translation Lookaside Buffer): A cache that stores recent
address translations to speed up access.
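
A compact illustration of this translation path (the page size, mappings, and names below are assumed for the example, not taken from the question):

```python
# Toy virtual-to-physical translation: 1K-word pages, a page table, and a tiny TLB.
PAGE_SIZE = 1024                      # words per page (assumed)
page_table = {0: 7, 1: 3, 2: 9}       # virtual page -> physical frame (assumed mapping)
tlb = {}                              # recently used translations

def translate(virtual_addr: int) -> int:
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page in tlb:                   # TLB hit: skip the page-table lookup
        frame = tlb[page]
    elif page in page_table:          # TLB miss: consult the page table
        frame = page_table[page]
        tlb[page] = frame             # cache the translation for next time
    else:                             # page fault: page would be loaded from disk
        raise RuntimeError(f"page fault on virtual page {page}")
    return frame * PAGE_SIZE + offset

print(translate(2 * PAGE_SIZE + 100))   # virtual page 2, offset 100 -> frame 9
```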

16. A write-back cache updates the main memory only on eviction. If 10,000 memory accesses occur and 3,000 result in write-backs, what is the write-back percentage? [5]
Using the formula,
Write-back percentage = (Number of write-backs ÷ Total memory accesses) × 100
Substituting values,
Write-back percentage = (3,000 ÷ 10,000) × 100 = 30%
17. A CPU accesses memory 1,000,000 times. It experiences 950,000 cache hits and 50,000 cache misses. Calculate the cache hit rate and miss rate.
Cache Hit Rate:
Hit rate = Hits ÷ Total accesses = 950,000 ÷ 1,000,000 = 0.95 = 95%
Cache Miss Rate:
Miss rate = Misses ÷ Total accesses = 50,000 ÷ 1,000,000 = 0.05 = 5%

18. A block-set-associative cache consists of:
• A total of 64 blocks divided into 4-block sets.
• Main memory contains 4096 blocks, each with 128 words.

A. How many bits are there in a main memory address?


Each memory block = 128 words
Total memory = 4096 blocks × 128 words = 524,288 words
Number of words = 524,288 = 2¹⁹
Thus, main memory address = 19 bits.

B. How many bits are there in each of the TAG, SET, and WORD fields?

Step 1 : WORD field


Each block = 128 words
Word offset bits = log₂(128) = 7 bits

Step 2 : SET field


Cache = 64 blocks, 4 blocks per set → 16 sets
Set index bits = log₂(16) = 4 bits

Step 3 : TAG field


Total address bits = 19
TAG bits = 19 − (Set bits + Word bits)
TAG bits = 19 − (4 + 7) = 8 bits
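
A quick illustrative check of these field widths (helper name assumed; the memory is word-addressed, as in the question):

```python
import math

def set_assoc_fields(mem_blocks: int, words_per_block: int, cache_blocks: int, ways: int):
    """Return (tag, set, word) bit widths for a word-addressed set-associative cache."""
    addr_bits = int(math.log2(mem_blocks * words_per_block))   # total address bits
    word_bits = int(math.log2(words_per_block))                # offset within a block
    set_bits = int(math.log2(cache_blocks // ways))            # which set is indexed
    return addr_bits - set_bits - word_bits, set_bits, word_bits

# Question 18: 64-block cache in 4-block sets, 4096 memory blocks of 128 words
print(set_assoc_fields(4096, 128, 64, 4))      # (8, 4, 7)
# Question 19: 128-block cache in 4-block sets, 16,384 blocks of 256 words
print(set_assoc_fields(16384, 256, 128, 4))    # (9, 5, 8)
```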
19. A block-set-associative cache memory consists of 128 blocks divided into four-block sets. The main memory consists of 16,384 blocks and each block contains 256 eight-bit words.

i. How many bits are required for addressing the main memory?
Each block = 256 words
Total words = 16,384 blocks × 256 words = 4,194,304 words
Since 4,194,304 = 2²²,
main memory address = 22 bits.

ii. How many bits are needed to represent the TAG, SET and WORD
fields?

Step 1 : WORD field


Block size = 256 words
Word offset bits = log₂(256) = 8 bits

Step 2 : SET field


128 blocks divided into 4 blocks per set → 32 sets
Set index bits = log₂(32) = 5 bits

Step 3 : TAG field


Total address bits = 22
TAG bits = 22 − (Set bits + Word bits)
TAG bits = 22 − (5 + 8) = 9 bits
