DLCOA Question Bank

The document discusses various aspects of computer architecture, including hardwired and microprogrammed control units, memory characteristics, and the memory hierarchy. It explains concepts such as locality of reference, cache memory necessity, and different cache mapping techniques like set-associative and fully associative. Additionally, it outlines micro-operations for specific instructions and provides examples for calculating cache parameters.


Q1. Explain with a neat block diagram the Hardwired control unit. List different hardwired techniques.
Ans:

Q2. Explain with a neat block diagram the Microprogrammed control unit.
Ans:

Q3. Write a note on microinstruction sequencing.
Ans:

Q4. List and explain the key characteristics of memory.
Ans:
1. Volatile Memory: Volatile memory loses its data when the power supply is turned off, e.g., RAM.
2. Non-Volatile Memory: Non-volatile memory retains its data even when the power is turned off. Examples include hard disk drives (HDDs), solid-state drives (SSDs), and flash memory.
3. Latency: Latency is the delay between requesting data from memory and the moment the data becomes available. Low-latency memory is important for tasks that require rapid data access, such as gaming or real-time data processing.
4. Access Time: Access time is the time it takes to retrieve data from memory. Lower access times allow faster data retrieval, which is crucial for system performance.
5. Speed: Memory is categorised by its speed of data access and retrieval. Faster memory types, such as RAM (Random Access Memory), are used for temporary data storage because they offer quick access to the data the CPU needs for processing.

Q5. Distinguish between DRAM and SRAM.
Ans:

Q6. Describe the memory hierarchy in the computer system.
Ans:
The memory hierarchy in a computer system is a structured arrangement of memory types with varying characteristics and capacities, designed to optimise data access and storage. It typically includes:

1. Registers: The fastest, smallest memory, located directly within the CPU and used for immediate data access.

2. Cache Memory: High-speed memory between the CPU and RAM, storing frequently accessed data to reduce access times.

3. Random Access Memory (RAM): The main working memory for active data and instructions during program execution.

4. Virtual Memory: A storage-management technique that uses part of a storage device (e.g., SSD or HDD) as an extension of RAM.

5. Solid-State Drives (SSDs): Non-volatile storage devices with faster access times than HDDs, used for data storage.

6. Hard Disk Drives (HDDs): Non-volatile, high-capacity storage devices, slower than SSDs but cost-effective for mass storage.

This hierarchy optimises the trade-offs between speed, capacity, and cost, allowing the computer to efficiently manage data at various stages of processing.

Q7. Explain the principle of locality of reference.
Ans:
The principle of locality of reference is a fundamental concept in computer science and computer architecture that describes the behaviour of memory access patterns in computer programs. It states that when a program accesses a particular memory location, there is a high probability that it will access nearby memory locations in the near future. This principle is crucial for optimising computer system performance and memory hierarchies. There are two main types of locality of reference:

1. Temporal Locality:
- Temporal locality refers to the tendency of a program to access the same memory locations repeatedly over a short period of time.
- If a program accesses a specific memory address, it is likely to access that same address again in the near future.
- Caching mechanisms, such as CPU caches, exploit temporal locality by storing recently accessed data in faster, smaller memory levels, so that subsequent accesses to the same data can be satisfied more quickly.

2. Spatial Locality:
- Spatial locality refers to the tendency of a program to access memory locations that are physically close to each other within a relatively short time frame.
- When a program accesses a particular memory location, it is likely to access nearby memory locations (e.g., addresses adjacent to the current one) in the near future.
- Memory systems, including caches and virtual memory systems, take advantage of spatial locality by prefetching and storing contiguous blocks of data to minimise the time required to fetch related data.
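The temporal-locality effect described above can be sketched with a toy LRU cache simulation (the trace values and cache capacity here are illustrative, not taken from any real system):

```python
from collections import OrderedDict

def cache_hits(trace, capacity=4):
    """Simulate a tiny fully associative LRU cache and count hits."""
    cache = OrderedDict()
    hits = 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[addr] = True
    return hits

# Temporal locality: the same few addresses are reused repeatedly.
local_trace = [0, 1, 2, 0, 1, 2, 0, 1, 2]
# No locality: every access touches a fresh address.
scattered_trace = [0, 10, 20, 30, 40, 50, 60, 70, 80]

print(cache_hits(local_trace))      # every access after the first pass hits
print(cache_hits(scattered_trace))  # no hits at all
```

The localized trace hits the cache on every access after the first pass, while the scattered trace never hits, which is exactly why caches pay off for real programs.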

Q8. What is the necessity of cache memory? Explain the set-associative cache mapping technique with an example.
Ans:
Cache memory is essential in computer systems due to the significant speed difference between the CPU and main memory. It is necessary for the following reasons:

1. Speed Mismatch: CPUs operate much faster than main memory, and fetching data directly from RAM would lead to performance bottlenecks.

2. Temporal Locality: Programs tend to access the same memory locations repeatedly in a short time frame. Cache memory stores recently accessed data to reduce the need for repeated fetches from slower main memory.

3. Spatial Locality: Programs often access data located near the current memory location. Caches take advantage of this behaviour by storing adjacent data blocks, improving data access efficiency.

4. Reducing Memory Latency: Accessing cache memory is faster than accessing main memory, reducing memory access latency and allowing the CPU to continue processing instructions without waiting.

5. Energy Efficiency: Cache memory consumes less power than main memory or storage devices, contributing to the energy efficiency of computer systems.

Set-associative cache mapping is a memory management technique that balances simplicity and efficiency. In this approach, the cache is divided into sets, each containing multiple cache lines. Here's a brief explanation with an example:

Example: Consider a 4-way set-associative cache with 16 cache lines divided into 4 sets, each containing 4 lines. This cache organisation allows for efficient memory access:

- Cache Organisation: 4 sets, each with 4 lines (lines 0-3, 4-7, 8-11, and 12-15).
- Address Format: Memory addresses are divided into three parts: Tag (the remaining high-order bits), Set Index (2 bits), and Block Offset (2 bits).
- Cache Operation: When the CPU wants to read data, it uses the Set Index to select one of the four sets. Within the selected set, the cache controller checks the Tags for a match. If found, it is a cache hit, and the data is retrieved quickly. If no match is found, it is a cache miss, and the data is fetched from main memory.

Set-associative cache mapping balances the trade-offs between the simplicity of direct-mapped caches and the flexibility of fully associative caches. It reduces the likelihood of cache conflicts while maintaining efficient memory access, making it a common choice for modern computer architectures.
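The address split used in the 4-way example can be sketched as follows (the 2-bit block offset implies a 4-byte block here, an assumption made purely for illustration):

```python
SET_BITS = 2     # 4 sets -> 2 set-index bits
OFFSET_BITS = 2  # 4-byte block assumed -> 2 block-offset bits

def split_address(addr):
    """Split an address into (tag, set index, block offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    set_index = (addr >> OFFSET_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (OFFSET_BITS + SET_BITS)
    return tag, set_index, offset

# Address 219 = 0b1101_10_11: tag 0b1101, set 0b10, offset 0b11
print(split_address(219))  # (13, 2, 3)
```

The cache controller would then search only the 4 lines of set 2 for tag 13, rather than all 16 lines.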

Q9. Explain the fully associative cache mapping technique with an example.
Ans:
Fully associative cache mapping is a cache management technique where any memory block can be placed in any cache line, without the use of set indices. Here's a brief explanation with an example:

Cache Organisation: In fully associative caching, there are no set indices, meaning each cache line can store any block from main memory.

Address Format: Memory addresses consist of a Tag (identifying which memory block is cached) and a Block Offset (identifying the data's position within the block).

Cache Operation: When the CPU requests data, the cache controller compares the Tag portion of the address with the Tags stored in all cache lines. A match results in a cache hit, while no match is a cache miss, requiring data retrieval from main memory.

Example: Imagine a fully associative cache with four cache lines (Line 0, Line 1, Line 2, and Line 3) and a main memory. The CPU sends a memory address, and the cache controller searches all cache lines for a matching Tag. If found, it is a cache hit; otherwise, it is a cache miss, and the data is fetched from main memory.

Fully associative caches are flexible but complex, allowing any memory block to reside in any cache line. This flexibility minimises cache conflicts but requires extensive tag comparisons, making fully associative caches suitable for specialised caches where flexibility is critical.
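The four-line lookup described in the example can be sketched as a search over all stored tags (the tag values below are arbitrary placeholders; real hardware does all four comparisons in parallel, whereas this sketch loops):

```python
def fa_lookup(cache_lines, tag):
    """Compare a tag against every line of a fully associative cache."""
    for i, line_tag in enumerate(cache_lines):
        if line_tag == tag:
            return ("hit", i)      # matching line found
    return ("miss", None)          # must fetch from main memory

lines = [0x1A, 0x2B, 0x3C, 0x4D]   # tags currently held in 4 lines
print(fa_lookup(lines, 0x3C))      # ('hit', 2)
print(fa_lookup(lines, 0x99))      # ('miss', None)
```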

Q10. Consider a fully associative mapped cache of size 8 KB with block size 32 bytes. The size of the main memory is 4 GB. Find:
1. Number of bits in the tag
2. Word size
Ans:
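A sketch of the arithmetic, treating the "word size" asked for as the block-offset field (the usual convention in these problems):

```python
from math import log2

MAIN_MEMORY = 4 * 2**30   # 4 GB
BLOCK_SIZE = 32           # bytes

address_bits = int(log2(MAIN_MEMORY))   # 32-bit physical address
offset_bits = int(log2(BLOCK_SIZE))     # 5 bits select a byte within a block
tag_bits = address_bits - offset_bits   # fully associative: no set-index field

print(tag_bits)     # 27 bits of tag
print(offset_bits)  # 5-bit word (block offset)
```

So the tag is 27 bits and the word field is 5 bits.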
Q11. Consider a fully associative mapped cache of size 512 KB with block size 1 KB. There are 17 bits in the tag. Find:
1. Size of main memory
2. Size of word
Ans:
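A sketch of the arithmetic: in a fully associative cache the address is just tag + block offset, so the given tag width fixes the main memory size.

```python
from math import log2

BLOCK_SIZE = 1024   # 1 KB
TAG_BITS = 17

offset_bits = int(log2(BLOCK_SIZE))    # 10-bit word (block offset)
address_bits = TAG_BITS + offset_bits  # tag + offset = full address
main_memory = 2 ** address_bits        # bytes

print(address_bits)          # 27-bit address
print(main_memory // 2**20)  # 128, i.e. main memory = 128 MB
```

So main memory is 2^27 bytes = 128 MB, and the word field is 10 bits.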

Q12. Consider a direct mapped cache of size 16 KB with block size 256 bytes. The size of the main memory is 128 KB. Find:
1. Number of bits in the tag
2. Number of bits in the line number
3. Length of the word
Ans:
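A sketch of the arithmetic for the direct-mapped case, where the address splits into tag, line number, and block offset:

```python
from math import log2

CACHE_SIZE = 16 * 1024    # 16 KB
BLOCK_SIZE = 256          # bytes
MAIN_MEMORY = 128 * 1024  # 128 KB

address_bits = int(log2(MAIN_MEMORY))  # 17-bit address
offset_bits = int(log2(BLOCK_SIZE))    # 8-bit word (block offset)
num_lines = CACHE_SIZE // BLOCK_SIZE   # 64 cache lines
line_bits = int(log2(num_lines))       # 6-bit line number
tag_bits = address_bits - line_bits - offset_bits  # remaining bits

print(tag_bits, line_bits, offset_bits)  # 3 6 8
```

So the tag is 3 bits, the line number is 6 bits, and the word field is 8 bits.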

Q13. Consider a 2-way set associative mapped cache of size 8 KB with block size 32 bytes. The size of the main memory is 4 GB. Find:
1. Number of bits in the tag
2. Number of sets in the cache
3. Number of bits in the set field
4. Word size
Ans:
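A sketch of the arithmetic: with 2-way associativity, each set holds 2 lines, so the number of sets is (cache size) / (block size x ways).

```python
from math import log2

CACHE_SIZE = 8 * 1024     # 8 KB
BLOCK_SIZE = 32           # bytes
WAYS = 2                  # 2-way set associative
MAIN_MEMORY = 4 * 2**30   # 4 GB

address_bits = int(log2(MAIN_MEMORY))        # 32-bit address
offset_bits = int(log2(BLOCK_SIZE))          # 5-bit word (block offset)
num_sets = CACHE_SIZE // (BLOCK_SIZE * WAYS) # 128 sets in the cache
set_bits = int(log2(num_sets))               # 7-bit set field
tag_bits = address_bits - set_bits - offset_bits  # remaining bits

print(tag_bits, num_sets, set_bits, offset_bits)  # 20 128 7 5
```

So the tag is 20 bits, there are 128 sets, the set field is 7 bits, and the word field is 5 bits.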
Q14. Draw and explain the delay element method with an example.
Ans:

Q15. Write a note on microinstruction format.
Ans:
Q16. Write the micro-operations for Add R1, R3.
Ans:
Micro-operations are the basic operations performed by the control unit of a CPU to execute machine instructions. To perform an addition instruction like "Add R1, R3", several micro-operations are involved. Below are the micro-operations typically performed in a simple microarchitecture for this instruction:

1. Fetch Operand 1 (FO1): Retrieve the contents of register R1 from the register file.

2. Fetch Operand 2 (FO2): Retrieve the contents of register R3 from the register file.

3. Perform Addition (Add): Add the values obtained from FO1 and FO2.

4. Store Result (SR): Store the result of the addition back into register R1 in the register file.
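The four steps above can be sketched as register transfers (the register file and initial values are illustrative; the FO1/FO2/Add/SR labels follow the list, not any specific ISA):

```python
# Toy register file; the initial values 7 and 5 are arbitrary.
regs = {"R1": 7, "R3": 5}

t1 = regs["R1"]       # FO1: fetch operand 1 from R1
t2 = regs["R3"]       # FO2: fetch operand 2 from R3
result = t1 + t2      # Add: perform the addition
regs["R1"] = result   # SR: store the result back into R1

print(regs["R1"])     # 12
```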

Q17. Write the micro-operations for MOV A, B.
Ans:
In the context of microprogramming, the "MOV A, B" instruction transfers the value from one register (B) to another register (A). Below are the micro-operations typically involved in executing this instruction:

1. Fetch Operand B (FOB): Retrieve the contents of register B from the register file.

2. Transfer (TR): Transfer the value obtained from FOB to register A in the register file.

These micro-operations collectively execute the "MOV A, B" instruction by fetching the value from register B and transferring it to register A, effectively copying the content of B into A.
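The two steps can be sketched the same way (initial register values are arbitrary; note that B keeps its value, since MOV copies rather than moves):

```python
regs = {"A": 0, "B": 42}

tmp = regs["B"]    # FOB: fetch operand B from the register file
regs["A"] = tmp    # TR: transfer the fetched value into A

print(regs)        # A now holds 42; B is unchanged
```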
