
Solved Question Paper July 2023 Computer Engineering 4th Sem.

Computer Organization & Architecture


Section A

Note: Multiple choice questions. All questions are compulsory.

Q.1 Which of the following is an input device?

a) Plotter b) Printer

c) LED Monitor d) Keyboard Ans. D

Q.2 Both CISC and RISC architecture have been developed to reduce the_________

a) Time delay b) Semantic gap

c) Cost d) All of the above Ans. B

Q.3 Which of the following allows simultaneous read and write operations?

a) ROM b) EPROM

c) RAM d) None of the above Ans. C

Q.4 Using a_______buffer the instruction fetch segment is implemented.

a) FIFO b) LIFO

c) MIFO d) SIFO Ans. A

Q.5 Cache memory is located between_____and____

a) CPU and main memory

b) HDD and RAM

c) RAM and ROM

d) There is no such memory Ans. A

Q.6 MIPS stands for

a) Mandatory instructions / sec

b) Millions of instructions /sec

c) Most of instructions / sec

d) Many instructions /sec Ans. B

Q.7 Each stage in pipelining should be completed within______cycle

a) 1 b) 2

c) 3 d) 4 Ans. A

Q.8 The address generated by CPU is generally referred to as_____


a) Physical address b) Associative address

c) Referral address d) Logical address Ans. D

Q.9 The DMA transfers are performed by a control

circuit called as_____

a) Device interface b) DMA Controller

c) Data controller d) Overlooker Ans. B

Q.10 Virtual memory consists of

a) Dynamic RAM b) Static RAM

c) Magnetic memory d) none of these Ans. C

Section B

Note: Objective type questions. All questions are compulsory.

Q.11 The addressing mode, where you directly specify the operand value is______

Ans. Immediate

Q.12 Cache memory is on-board storage. (True/False)

Ans. True

Q.13 Storage which retains data after power off is called _____ (volatile/non-volatile memory)

Ans. Non-volatile memory

Q.14 Parallel computers are either_____or MIMD.

Ans. SIMD

Q.15 Each stage in pipelining should be completed within______cycle.

Ans. One

Q.16 After the completion of the DMA transfer, the processor is notified by____(interrupt signal/HDD)

Ans. Interrupt signal

Q.17 Three basic parts of the CPU are________,_______ and _________

Ans. Control Unit, Arithmetic Logic Unit (ALU), and Registers

Q.18 The program counter points at the_______of the next instruction in the program

Ans. Address
Q.19 Static RAM is made up of_______

Ans. Flip-flops

Q.20 Access time=_______+________

Ans. Seek Time + Latency (or Rotational Delay)
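The access-time formula above can be checked with a small sketch. The drive figures below (9 ms seek, 7200 RPM) are invented example values, not from the paper; average rotational latency is taken as half a full rotation.

```python
# Illustration of: access time = seek time + rotational latency.
# Example drive parameters are hypothetical.

def disk_access_time_ms(seek_ms, rpm):
    """Average rotational latency is half a rotation:
    (60_000 ms per minute / rpm) / 2."""
    latency_ms = (60_000 / rpm) / 2
    return seek_ms + latency_ms

# 7200 RPM, 9 ms seek: latency ~4.17 ms, so access ~13.17 ms
print(round(disk_access_time_ms(9, 7200), 2))
```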

Section C

Q.21 What do you mean by CPU? Explain the general register organization.

Answer: The Central Processing Unit (CPU) is the primary component of a computer that performs most
of the processing inside a computer. To execute instructions, the CPU uses various registers for
temporary data storage and management. The general register organization includes several types of
registers:

1. Accumulator (AC): Used for arithmetic and logic operations.
2. Program Counter (PC): Holds the address of the next instruction.
3. Instruction Register (IR): Stores the instruction currently being executed.
4. Memory Address Register (MAR): Holds the address of the memory location to be accessed.
5. Memory Data Register (MDR): Contains the data to be written or read from the memory.

These registers facilitate quick data access and processing within the CPU, improving overall system
performance by reducing the need to access slower memory.
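A minimal sketch (not from the paper) of how the registers listed above cooperate during a fetch-execute cycle. The two-instruction "ISA" and memory contents here are hypothetical.

```python
# Toy fetch-execute loop using PC, MAR, MDR, IR, and AC.
# Instructions and data share one word-addressed memory.

memory = {0: ("LOAD", 10), 1: ("ADD", 11), 10: 5, 11: 7}

PC, AC = 0, 0
for _ in range(2):
    MAR = PC              # PC -> MAR: address of next instruction
    MDR = memory[MAR]     # memory read lands in MDR
    IR = MDR              # MDR -> IR: instruction to be decoded
    PC += 1               # PC now points at the following instruction
    op, addr = IR
    if op == "LOAD":
        AC = memory[addr]
    elif op == "ADD":
        AC += memory[addr]

print(AC)  # 5 + 7 = 12
```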

Q.22 Explain one address instruction.

Answer: A one-address instruction is an instruction format in which the instruction specifies only one address. This address typically refers to the operand's location in memory or a register; the accumulator serves as an implicit second operand. For example, the instruction "ADD X" adds the value at memory location X to the accumulator. This format simplifies instruction decoding and reduces instruction length, making it efficient for certain operations. However, it requires careful management of the accumulator and may need more instructions for complex tasks.
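The accumulator-as-implicit-operand idea can be sketched as a toy one-address machine. The mnemonics and memory layout below are illustrative, matching the "ADD X" example above.

```python
# Toy one-address (accumulator) machine: every instruction names
# one address; the accumulator AC is the implicit second operand.

memory = {"X": 3, "Y": 4}
AC = 0

def execute(instruction):
    global AC
    op, addr = instruction.split()
    if op == "LOAD":
        AC = memory[addr]       # AC <- M[addr]
    elif op == "ADD":
        AC += memory[addr]      # AC <- AC + M[addr]
    elif op == "STORE":
        memory[addr] = AC       # M[addr] <- AC

for ins in ["LOAD X", "ADD Y", "STORE X"]:
    execute(ins)

print(memory["X"])  # 3 + 4 = 7
```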

Q.23 Give the differences between microprogrammed and hard wired control.

Answer: Microprogrammed control and hardwired control are two approaches to designing the control
unit of a CPU:

1. Microprogrammed Control:
o Uses a microprogram stored in memory to generate control signals.
o Easier to design and modify by changing the microprogram.
o More flexible and can support complex instruction sets.
o Slower due to memory access for microinstructions.
2. Hardwired Control:
o Uses fixed logic circuits to generate control signals.
o Faster as it uses combinational logic.
o Less flexible; difficult to modify or extend instruction sets.
o More complex and expensive to design for complex instruction sets.

In summary, microprogrammed control offers flexibility and ease of modification, while hardwired
control provides faster execution at the cost of complexity and flexibility.

Q.24 Explain the differences between RISC and CISC.

Answer: RISC (Reduced Instruction Set Computer) and CISC (Complex Instruction Set Computer) are two
types of CPU architectures:

1. RISC:
o Simplified and smaller instruction set.
o Each instruction typically executes in one clock cycle.
o Emphasizes software over hardware complexity.
o Uses more registers to reduce memory access.
2. CISC:
o Larger and more complex instruction set.
o Instructions may take multiple cycles to execute.
o Emphasizes hardware over software complexity.
o Uses fewer registers, relying more on memory operations.

RISC architectures aim for efficiency and speed by simplifying instructions and optimizing execution,
while CISC architectures focus on reducing the number of instructions per program by using more
complex instructions.

Q.25 Give the differences between direct mapping and associative mapping.

Answer: Direct mapping and associative mapping are techniques used in cache memory organization:

1. Direct Mapping:
o Each block of main memory maps to exactly one cache line.
o Simple to implement and requires less hardware.
o Can lead to a high rate of cache misses if multiple frequently accessed blocks map to the
same cache line.
2. Associative Mapping:
o Any block of main memory can be loaded into any line of the cache.
o Requires more complex hardware for searching the cache.
o Reduces the chance of cache misses as blocks are not restricted to specific lines.

In summary, direct mapping is simpler but may have higher miss rates, while associative mapping is more flexible and efficient but requires more hardware complexity.
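The direct-mapping rule can be sketched in a few lines: with N cache lines, block b of main memory always maps to line b mod N, and the tag b div N identifies which block currently occupies that line. The cache size below is illustrative.

```python
# Direct-mapped cache sketch: two blocks that share a line
# keep evicting each other, illustrating the conflict-miss point above.

NUM_LINES = 8
cache = {}  # line number -> (tag, block number)

def access(block):
    """Return 'hit' or 'miss' for one block access."""
    line, tag = block % NUM_LINES, block // NUM_LINES
    if cache.get(line, (None,))[0] == tag:
        return "hit"
    cache[line] = (tag, block)   # replace whatever held the line
    return "miss"

# Blocks 3 and 11 both map to line 3 (11 % 8 == 3):
print(access(3), access(11), access(3))  # miss miss miss
```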
Q.26 Explain Virtual memory.

Answer: Virtual memory is a memory management technique that creates an illusion of a large
contiguous memory space, regardless of the actual physical memory available. It uses both hardware
and software to allow a computer to compensate for physical memory shortages, by temporarily
transferring data from random access memory (RAM) to disk storage. This process involves the
following:

• Paging: Dividing memory into fixed-size pages that can be mapped to physical memory frames.
• Swapping: Moving pages between physical memory and disk storage as needed.
• Address Translation: Using a memory management unit (MMU) to translate virtual addresses to
physical addresses.

Virtual memory enables larger programs to run on systems with limited physical memory and provides
isolation and protection between processes.
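The paging and address-translation steps above can be sketched as a toy page-table lookup. The page size and page-to-frame mapping below are invented for illustration.

```python
# Virtual-to-physical translation sketch: split the virtual address
# into (page number, offset), look the page up, rebuild the address.

PAGE_SIZE = 4096                   # 4 KiB pages (illustrative)
page_table = {0: 7, 1: 2, 2: 9}    # virtual page -> physical frame

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in page_table:
        # In a real system the OS would swap the page in from disk.
        raise LookupError("page fault")
    return page_table[page] * PAGE_SIZE + offset

# Virtual address 4100 = page 1, offset 4 -> frame 2 -> 8196
print(translate(4100))
```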

Q.27 What is address mapping? Explain.

Answer: Address mapping is the process of translating logical or virtual addresses into physical
addresses in the memory. This translation is essential for accessing data stored in memory. There are
several methods of address mapping:

1. Direct Mapping: Each block of main memory maps to only one possible cache line.
2. Associative Mapping: A block of main memory can be loaded into any line of the cache.
3. Set-Associative Mapping: Combines direct and associative mapping, where each block maps to
a set of lines in the cache.

Address mapping improves memory access efficiency and system performance by ensuring that data is
quickly retrievable from the cache or main memory.

Q.28 Explain hit rate in context of cache memory.

Answer: The hit rate in the context of cache memory refers to the percentage of memory accesses that
hit the cache level. This means that the requested data is found in the cache, reducing the time taken to
access the data from the main memory. The hit rate is calculated by dividing the number of cache hits by
the total number of memory accesses. A higher hit rate indicates that the cache is effective in storing
frequently used data, resulting in faster access times and improved system performance.
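The calculation described above is a single division. The figures in the example are made up.

```python
# Hit rate = cache hits / total memory accesses.

def hit_rate(hits, total_accesses):
    return hits / total_accesses

# 950 hits out of 1000 accesses -> 95% hit rate
print(hit_rate(950, 1000))  # 0.95
```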

Q.29 What are the major functions of BIOS?

Answer: The Basic Input/Output System (BIOS) is firmware used to perform hardware initialization and
provide runtime services for operating systems and programs. Major functions of BIOS include:

1. Power-On Self-Test (POST): Diagnoses hardware issues during system boot.
2. Bootstrap Loader: Loads the operating system from bootable devices.
3. BIOS Setup: Provides a user interface for configuring hardware settings.
4. Hardware Abstraction: Facilitates communication between the operating system and hardware
devices.
5. Interrupt Handling: Manages hardware interrupts to ensure smooth operation of peripheral
devices.

BIOS is crucial for the initial hardware setup and system startup, ensuring the computer's components
are functioning correctly before handing control to the operating system.

Q.30 What is the role of DMA in data transfer?

Answer: Direct Memory Access (DMA) is a feature that allows peripheral devices to access main
memory directly, bypassing the CPU. This improves data transfer efficiency and CPU performance. The
role of DMA in data transfer includes:

1. Reducing CPU Overhead: The CPU initiates the transfer and can perform other tasks while the
DMA controller handles the data transfer.
2. Increasing Transfer Speed: DMA transfers data directly between memory and peripherals,
which is faster than CPU-mediated transfers.
3. Supporting Burst Transfers: Allows large blocks of data to be transferred in bursts, improving
throughput.

DMA is essential for high-speed data transfer applications, such as disk drives, sound cards, and network
interfaces, enabling efficient multitasking and better overall system performance.

Q.31 What is BIOS POST test?

Answer: The Power-On Self-Test (POST) is a diagnostic testing sequence run by the BIOS upon system
startup. The POST process checks the basic functionality of the computer's hardware components to
ensure they are working correctly before booting the operating system. Key steps in the POST process
include:

1. Checking the CPU: Ensuring the CPU is functioning and capable of executing instructions.
2. Memory Test: Verifying the integrity and capacity of RAM.
3. Peripheral Initialization: Detecting and initializing essential hardware components like the
keyboard, mouse, and display.
4. Error Reporting: Displaying error messages or beep codes if any issues are detected.

Successful completion of the POST test indicates that the hardware is in good condition and the system
can proceed to boot the operating system.

Q.32 Explain the interrupt initiated mode of data transfer.

Answer: Interrupt-initiated mode of data transfer is a method where the peripheral device informs the
CPU that it needs attention by sending an interrupt signal. This process includes the following steps:

1. Interrupt Request (IRQ): The device sends an interrupt signal to the CPU.
2. Interrupt Acknowledgment: The CPU acknowledges the interrupt and saves its current state.
3. Interrupt Service Routine (ISR): The CPU executes a predefined routine to handle the device's
request.
4. Completion and Return: After servicing the interrupt, the CPU restores its previous state and
resumes normal operations.

This mode improves efficiency by allowing the CPU to perform other tasks until the device is ready,
rather than continuously polling the device status. It is widely used in systems requiring timely
responses to hardware events, such as I/O operations.

Q.33 Explain the types of parallel processing.

Answer: Parallel processing involves executing multiple processes simultaneously to improve performance. There are several types of parallel processing:

1. Bit-Level Parallelism: Increases the processor's word size to process more bits per instruction.
2. Instruction-Level Parallelism: Executes multiple instructions at the same time using techniques
like pipelining and superscalar architectures.
3. Data Parallelism: Distributes data across multiple processors to perform the same operation on
different data simultaneously.
4. Task Parallelism: Distributes different tasks or processes across multiple processors, each
performing a unique operation.

These types improve computational speed and efficiency, making parallel processing essential for high-performance computing applications, such as scientific simulations, data analysis, and real-time systems.

Q.34 Explain multistage switching network.

Answer: A multistage switching network is a network architecture used in parallel computing and
telecommunications to route data between multiple input and output lines through a series of
interconnected switches. Key features include:

1. Stages of Switches: Multiple stages of smaller switches instead of a single large switch, reducing
hardware complexity.
2. Path Diversity: Multiple paths exist between any input-output pair, improving fault tolerance
and reliability.
3. Scalability: Can be expanded easily by adding more stages or switches, accommodating more
devices or connections.
4. Blocking and Non-Blocking: Blocking networks can experience contention, while non-blocking
networks ensure any input can be connected to any output without interference.

Multistage networks are used in high-speed routers, data centers, and parallel processing systems to
efficiently manage data traffic and enhance performance.

Q.35 Write a note on multiprocessor organization.


Answer: Multiprocessor organization refers to systems that use two or more processors to perform
concurrent processing. This setup can be categorized into:

1. Shared Memory Multiprocessors: Processors share a common memory space, enabling fast
communication and data sharing. Examples include Symmetric Multiprocessing (SMP) and Non-
Uniform Memory Access (NUMA).
2. Distributed Memory Multiprocessors: Each processor has its own local memory, and processors
communicate via a high-speed interconnect. This architecture is used in clusters and massively
parallel processors (MPPs).

Key benefits include increased performance through parallelism, improved fault tolerance, and
scalability. Multiprocessor systems are widely used in scientific computing, data analysis, and real-time
processing applications.

Section D

Q.36 Explain DMA and DMA controller. (CO3)

Answer: Direct Memory Access (DMA) is a technique that allows peripherals to transfer data directly to
or from memory without CPU intervention. The DMA controller is a dedicated hardware component
that manages these transfers. The DMA process involves the following steps:

1. Initiation: The CPU initializes the DMA controller by setting the source and destination
addresses, transfer length, and transfer mode.
2. Transfer: The DMA controller takes over the bus to perform the data transfer directly between
the peripheral and memory.
3. Completion: Once the transfer is complete, the DMA controller sends an interrupt to the CPU to
signal the end of the transfer.

DMA significantly improves system efficiency by freeing the CPU to perform other tasks while the data
transfer occurs. It is especially beneficial in applications requiring high-speed data transfers, such as disk
operations, audio/video streaming, and network communications.

DMA Controller - The DMA controller is a hardware device that allows I/O devices to access memory directly, with minimal participation by the processor. It uses the usual interface circuitry to communicate with the CPU and the I/O devices.

DMA Controller Diagram - The DMA controller is a control unit that serves as an interface between the data bus and the I/O devices. As mentioned, it transfers data without intervention by the processor, although the processor can still control the transfer. The DMA controller also contains an address unit, which generates addresses and selects an I/O device for the transfer of data. The block diagram of the DMA controller is shown here.

Q.37 Write a short note on: a. Memory hierarchy, b. Memory connections to CPU, c. Time shared
common bus.

a. Memory hierarchy: Memory hierarchy is a structured arrangement of storage systems based on speed, cost, and capacity. It includes:

1. Registers: Smallest, fastest, and most expensive memory directly accessible by the CPU.
2. Cache Memory: Provides faster access than main memory by storing frequently used data.
3. Main Memory (RAM): Primary storage for active processes and data.
4. Secondary Storage (HDD/SSD): Non-volatile storage for long-term data retention.
5. Tertiary Storage (Tape Drives): Used for archival and backup purposes.

This hierarchy balances cost and performance by ensuring frequently accessed data is available in the fastest storage.

b. Memory connections to CPU: Memory connections to the CPU are vital for efficient data access and
processing. These connections include:

1. Address Bus: Carries the addresses of memory locations that the CPU needs to access.
2. Data Bus: Transfers actual data between the CPU and memory.
3. Control Bus: Carries control signals to manage memory operations, such as read/write
commands and timing signals.

Efficient memory connections ensure quick data transfer, minimizing latency and improving overall system performance.
c. Time-shared common bus: A time-shared common bus is a communication system where multiple
devices share the same bus, and access is regulated based on time slots. This method includes:

1. Bus Arbitration: Determines which device has control of the bus at any given time.
2. Time Division Multiplexing (TDM): Allocates fixed time slots to each device for data transfer.
3. Efficiency: Maximizes bus utilization and reduces contention by allowing devices to share the
bus in an orderly manner.

Time-shared common buses are used in systems where multiple peripherals need to communicate with
the CPU, ensuring efficient and fair access to the shared bus.
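The time-division multiplexing idea above can be sketched as a round-robin schedule. The device names and slot ordering below are invented for illustration.

```python
# Toy TDM schedule for a time-shared common bus: each device owns
# the bus for one fixed slot in rotation, so no two contend at once.

devices = ["CPU", "Disk", "NIC"]

def bus_owner(time_slot):
    """Return which device owns the shared bus in a given slot."""
    return devices[time_slot % len(devices)]

print([bus_owner(t) for t in range(5)])
# ['CPU', 'Disk', 'NIC', 'CPU', 'Disk']
```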

Q.38 What is addressing modes? Explain the types of addressing modes?

Answer: Addressing Modes – The term addressing mode refers to the way in which the operand of an instruction is specified. The addressing mode specifies a rule for interpreting or modifying the address field of the instruction before the operand is actually referenced.

Implied Mode

In the implied mode, the operands are implicitly specified in the definition of instruction. For instance,
the “complement accumulator” instruction refers to an implied-mode instruction. It is because, in the
definition of the instruction, the operand is implied in the accumulator register. All the register
reference instructions are implied-mode instructions that use an accumulator.

Immediate Mode

In the immediate mode, we specify the operand in the instruction itself. Or, in simpler words, instead of
an address field, the immediate-mode instruction consists of an operand field. An operand field contains
the actual operand that is to be used in conjunction with an operation that is determined in the given
instruction. The immediate-mode instructions help initialize registers to a certain constant value.

Register Mode

In the register mode, the operands are held in registers that reside within the CPU. A particular register is selected from a register field in the instruction; a k-bit field can specify any one of 2^k registers.

Register Indirect Mode

In the register indirect mode, the instruction specifies a register in the CPU whose contents give the address of the operand in memory. In other words, the selected register contains the address of the operand rather than the operand itself.
A reference to a register is equivalent to specifying a memory address. The advantage of this mode is that the address field of the instruction needs fewer bits to select a register than would be required to specify a memory address directly.

Autoincrement or Autodecrement Mode

The autoincrement or autodecrement mode is similar to the register indirect mode, except that the register is incremented or decremented after (or before) its value is used to access memory. When the address stored in the register refers to a table of data in memory, the register must be incremented or decremented after every access to the table. While this could be done with a separate increment or decrement instruction, these modes perform it automatically.

Direct Address Mode

In the direct address mode, the address part of the instruction is equal to the effective address. The
operand would reside in memory, and the address here is given directly by the instruction’s address
field. The address field would specify the actual branch address in a branch-type instruction.

Indirect Address Mode

In the indirect address mode, the address field of the instruction gives the address where the effective address is stored in memory. Control fetches the instruction from memory and then uses its address part to access memory again, this time to read the effective address.

Indexed Addressing Mode

In the indexed addressing mode, the content of a given index register gets added to an instruction’s
address part so as to obtain the effective address. Here, the index register refers to a special CPU
register that consists of an index value. An instruction’s address field defines the beginning address of
any data array present in memory.
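The effective-address rules for three of the modes above can be sketched side by side. The toy memory contents and index-register value are invented for illustration.

```python
# Effective address (EA) under direct, indirect, and indexed modes.
# memory maps address -> stored word; values are made up.

memory = {100: 500, 500: 42, 120: 77}
index_register = 20

def effective_address(mode, address_field):
    if mode == "direct":
        return address_field                    # EA = address field
    if mode == "indirect":
        return memory[address_field]            # EA is stored at the address
    if mode == "indexed":
        return address_field + index_register   # EA = field + index register
    raise ValueError(mode)

print(effective_address("direct", 100))    # 100
print(effective_address("indirect", 100))  # memory[100] = 500
print(effective_address("indexed", 100))   # 100 + 20 = 120
```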
