
SRM Institute of Science and Technology

College of Engineering and Technology


SCHOOL OF COMPUTING
Department of Computing Technologies
SRM Nagar, Kattankulathur – 603203, Chengalpattu District, Tamil Nadu
Academic Year: 2024-25 (ODD)

Test: CLAT-2 Date:


Course Code & Title: 21CSS201T – COMPUTER ORGANIZATION AND ARCHITECTURE
Duration: 100 minutes
Year & Sem: II/III        SET A        Max. Marks: 50

Course Articulation Matrix:

Course Learning Outcomes (CLO) - At the end of this course, learners will be able to:
CO-1: Identify the computer hardware and how software interacts with computer hardware. (PO1: H, PO2: M; all other POs: -)
CO-2: Apply Boolean algebra as related to designing computer logic, through simple combinational and sequential logic circuits. (PO1: H, PO2: H; all other POs: -)

Part – A ( 10 x 1 = 10 Marks) Instructions: Answer all

Q. No Question Marks BL CO PO PI Code

1 An advantage of a multiple bus organization over a single bus organization in a computer system is ________.
A) Simplified control complexity
B) Reduced cost of implementation
C) Reduced number of simultaneous data transfers
D) Higher speed due to parallelism in data transfers
(Marks: 1, BL: 1, CO: 1, PO: 1, PI Code: 2.1.2)
2 In a load/store architecture, the control signal used to instruct the CPU to transfer data from a register to memory is ________.
A) Read
B) Write
C) Fetch
D) Execute
(Marks: 1, BL: 1, CO: 1, PO: 1, PI Code: 2.1.2)
3 In a hardwired control unit, the control signals are generated using:
A) Microprogramming
B) A set of predefined conditions and combinational logic
C) Memory
D) Software instructions
(Marks: 1, BL: 1, CO: 1, PO: 1, PI Code: 2.1.2)

4 What is the main purpose of instruction pipelining in a CPU?
A) To increase the clock speed of the CPU
B) To increase the number of instructions a CPU can execute at once
C) To reduce the size of the instruction set
D) To improve the throughput by allowing multiple instructions to be processed simultaneously at different stages.
(Marks: 1, BL: 1, CO: 1, PO: 1, PI Code: 2.1.2)
5 In a pipelined processor, the primary reason for stalling or pipeline bubbles is:
A) A branch instruction
B) An arithmetic instruction
C) A load instruction
D) Instruction fetch stage
(Marks: 1, BL: 1, CO: 1, PO: 1, PI Code: 2.1.2)
6 The hazard that occurs when an instruction depends on the result of a previous instruction that has not yet completed its execution in a pipeline is known as ________.
A) Structural hazard
B) Data hazard
C) Control hazard
D) Resource hazard
(Marks: 1, BL: 1, CO: 1, PO: 2, PI Code: 2.1.2)

7 Which of the following is a primary function of the control unit in a CPU?
A) Perform arithmetic and logic operations
B) Store data and instructions
C) Fetch and decode instructions
D) Manage I/O operations
(Marks: 1, BL: 1, CO: 2, PO: 1, PI Code: 2.1.2)

8 The architecture best suited for parallel processing is ________.
A) Von Neumann architecture
B) Harvard architecture
C) SIMD architecture
D) SISD architecture
(Marks: 1, BL: 1, CO: 1, PO: 2, PI Code: 2.1.2)

9 In the ARM architecture, what is the purpose of the Program Counter (PC)?
A) To store the result of arithmetic operations
B) To keep track of the current instruction being executed
C) To hold the base address of data segments
D) To manage the stack operations
(Marks: 1, BL: 1, CO: 2, PO: 1, PI Code: 2.1.2)

10 Which type of parallelism is achieved by executing multiple instructions simultaneously using different processors in a multiprocessor system?
A) Instruction-level parallelism
B) Data-level parallelism
C) Thread-level parallelism
D) Task-level parallelism
(Marks: 1, BL: 1, CO: 1, PO: 2, PI Code: 2.1.2)

Part – B ( 4 x 4 = 16 Marks) Instructions: Answer any 4

11 Compare the different mapping schemes utilized in cache design. (Marks: 4, BL: 2, CO: 1, PO: 2, PI Code: 2.4.1)

1. Direct Mapping
● Description: In direct mapping, each main memory block is mapped to a specific line in the cache. The cache line number is calculated using the address of the main memory block.
● Advantages: Simple to implement; requires only a small amount of hardware.
● Disadvantages: High conflict misses, because multiple data blocks map to the same cache line. If two frequently used blocks map to the same line, they will continually replace each other, reducing cache efficiency.
● Performance: Faster access due to its simplicity, but may experience more misses.

2. Fully Associative Mapping
● Description: Any block of main memory can be loaded into any line of the cache. The cache controller checks all cache lines simultaneously to see if the required data is present.
● Advantages: Eliminates conflict misses, as any block can be loaded into any line, maximizing cache flexibility.
● Disadvantages: Requires more complex hardware, as all cache lines need to be searched, leading to higher costs and slower performance with larger caches.
● Performance: High hit rate; especially useful for small caches where space is limited.

3. Set-Associative Mapping
● Description: This is a hybrid of direct and fully associative mapping. The cache is divided into sets, and each set contains multiple lines. A memory block maps to a specific set, but within that set it can be placed in any line.
● Advantages: Balances the benefits of direct and fully associative mapping; reduces conflict misses while keeping the hardware requirements manageable.
● Disadvantages: Slightly more complex than direct mapping, but still more feasible than fully associative mapping.
● Performance: Provides a good trade-off, with a lower miss rate than direct mapping and fewer hardware requirements than fully associative mapping.
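As a rough illustration of how these three schemes split a memory address, the C sketch below computes the line or set index and the tag for one example address; the block size, line count, and associativity are assumed values, not taken from the question paper.

#include <stdint.h>
#include <stdio.h>

/* Illustrative parameters (assumed, not taken from the question paper):
   32-byte blocks, 256 cache lines, 4-way set associativity.              */
#define BLOCK_SIZE    32u
#define NUM_LINES     256u
#define ASSOCIATIVITY 4u
#define NUM_SETS      (NUM_LINES / ASSOCIATIVITY)

int main(void) {
    uint32_t addr  = 0x0001A2C4u;            /* example physical address    */
    uint32_t block = addr / BLOCK_SIZE;      /* block number in main memory */

    /* Direct mapping: line index = block number mod number of lines.       */
    uint32_t dm_line = block % NUM_LINES;
    uint32_t dm_tag  = block / NUM_LINES;

    /* Set-associative mapping: set index = block number mod number of sets;
       within the set the block may be placed in any of the 4 ways.         */
    uint32_t sa_set = block % NUM_SETS;
    uint32_t sa_tag = block / NUM_SETS;

    /* Fully associative mapping: no index field, the whole block number is
       the tag and every line must be searched for it.                      */
    uint32_t fa_tag = block;

    printf("direct-mapped    : line=%u tag=%u\n", (unsigned)dm_line, (unsigned)dm_tag);
    printf("4-way set-assoc  : set=%u  tag=%u\n", (unsigned)sa_set, (unsigned)sa_tag);
    printf("fully associative: tag=%u\n", (unsigned)fa_tag);
    return 0;
}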

12 Outline the architecture of Single Bus Organization processors. (Marks: 4, BL: 2, CO: 1, PO: 2, PI Code: 2.4.1)

ALU
➢ Registers for temporary storage.
➢ Various digital circuits for executing different micro-operations (gates, MUX, decoders, counters).

Program Counter (PC)
➢ Keeps track of the execution of a program.
➢ Contains the memory address of the next instruction to be fetched and executed.

Memory Address Register (MAR)
➢ Holds the address of the location to be accessed.
➢ The input of the MAR is connected to the internal bus and its output to the external bus.

Memory Data Register (MDR)
➢ Contains the data to be written into or read out of the addressed location.
➢ Data can be loaded into the MDR either from the memory bus or from the internal processor bus.

Registers
➢ The processor registers R0 to R(n-1) vary considerably from one processor to another.
➢ Registers are provided for general-purpose use by the programmer.

Multiplexer
➢ Selects either the output of register Y or a constant value 4 to be provided as input A of the ALU.
➢ The constant 4 is used by the processor to increment the contents of the PC.
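A minimal C sketch of how these components cooperate during instruction fetch over the single internal bus is given below; the register names follow the description above, while the memory size, word width, and the grouping of the steps are assumptions made only for illustration.

#include <stdint.h>
#include <stdio.h>

/* Toy model of the single-bus fetch sequence (illustrative only).          */
static uint32_t mem[1024];                   /* word-organized toy memory   */
static uint32_t PC, MAR, MDR, IR;

static void fetch_next_instruction(void) {
    uint32_t bus;                            /* the single internal bus     */

    bus = PC;  MAR = bus;                    /* PCout, MARin, Read          */
    MDR = mem[MAR / 4];                      /* memory responds (WMFC)      */
    PC  = PC + 4;                            /* Select4, Add: the multiplexer
                                                feeds the constant 4 to the
                                                ALU to increment the PC     */
    bus = MDR; IR  = bus;                    /* MDRout, IRin                */
}

int main(void) {
    mem[0] = 0xE0865004u;                    /* arbitrary example word      */
    PC = 0;
    fetch_next_instruction();
    printf("PC=%u IR=0x%08X\n", (unsigned)PC, (unsigned)IR);
    return 0;
}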

13 What are the key features and components of the Multibus Architecture processor? (Marks: 4, BL: 2, CO: 1, PO: 2, PI Code: 2.4.1)

Most commercial processors provide multiple internal paths to enable several transfers to take place in parallel. Data transfer requires fewer control sequences, and multiple data transfers can be done in a single clock cycle.

Three-bus organization to connect the registers and the ALU of a processor:
• All general-purpose registers are combined into a single block called the register file.
• The register file has three ports.
• Two output ports are connected to buses A and B, allowing the contents of two different registers to be accessed simultaneously and placed on buses A and B.
• A third input port allows the data on bus C to be loaded into a third register during the same clock cycle.
• Inputs to the ALU and outputs from the ALU: buses A and B are used to transfer the source operands to the A and B inputs of the ALU, and the result is transferred to the destination over bus C.
• The ALU can also pass one of its two input operands unmodified if needed; the control signals for such an operation are R=A or R=B.
• The three-bus arrangement obviates the need for registers Y and Z of the single bus organization.
• Incrementer unit: used to increment the PC by 4.
• The source for the constant 4 at the ALU multiplexer can be used to increment other addresses, such as the memory addresses in multiple load/store instructions.
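A small C sketch of the idea behind the three-port register file is given below (names and sizes are assumptions, not from the paper): two read ports drive buses A and B, the ALU drives bus C, and the write port stores the result in the same pass, so a register-to-register operation needs a single control step.

#include <stdint.h>
#include <stdio.h>

/* Toy model of the three-bus datapath: R[dst] <- R[srcA] + R[srcB].        */
enum { NUM_REGS = 16 };
static uint32_t regfile[NUM_REGS];

static void add_step(unsigned dst, unsigned srcA, unsigned srcB) {
    uint32_t busA = regfile[srcA];           /* read port 1 -> bus A         */
    uint32_t busB = regfile[srcB];           /* read port 2 -> bus B         */
    uint32_t busC = busA + busB;             /* ALU result  -> bus C         */
    regfile[dst] = busC;                     /* write port, same clock cycle */
}

int main(void) {
    regfile[4] = 10; regfile[5] = 32;
    add_step(6, 4, 5);                       /* models R4outA, R5outB, Add, R6in */
    printf("R6 = %u\n", (unsigned)regfile[6]);
    return 0;
}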

14 List the applications and advantages of parallel processing. (Marks: 4, BL: 2, CO: 2, PO: 2, PI Code: 2.4.1)
Parallel processing is a computing technique that uses
multiple processors to divide a task into smaller parts,
which are then completed simultaneously. This
technique is used in many fields, including:

• Web services and social media: Parallel processing is used to efficiently analyze large datasets.
• Medical imaging: Parallel processing is used
to analyze large datasets.
• Bioinformatics: Parallel processing is used to
analyze large datasets.
• Weather forecasting: Parallel processing helps weather models run faster, allowing for more accurate forecasts.
• Smartphones: Parallel processing helps
smartphones accomplish tasks faster and more
efficiently.
• Blockchain technology: Parallel processing
connects multiple computers to validate transactions
and inputs.
• Servers: Symmetric multiprocessing (SMP) is
a type of parallel processing architecture that's
commonly used in servers.
• Personal computers: Parallel processing is
used to support everyday functions like running search
engines or hosting video conferencing software.
Parallel processing has many advantages in computer
architecture, including:
• Computational efficiency: Parallel processing
can execute code more efficiently, which can save
time and money.
• High performance: Parallel processing can
process data and perform calculations at high speeds.
• Scalability: Parallel processing can scale by
creating subgroups of processes that can limit the
scope of collective communication operations.
• Flexibility: Parallel processors allow for more
flexibility in how system resources are used.
• Machine learning: Parallel processing can
significantly reduce the time it takes to train and test
machine learning models.
• Shared memory: Shared memory systems
offer advantages such as low latency, high bandwidth,
and ease of programming.
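As a generic illustration of dividing one task across the available processor cores (not part of the original answer), the C/OpenMP sketch below splits a summation among the cores; it assumes a compiler with OpenMP support, for example gcc -fopenmp.

#include <omp.h>
#include <stdio.h>

/* Each core handles a slice of the loop; OpenMP combines the partial sums. */
#define N 1000000

int main(void) {
    static double data[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        data[i] = i * 0.001;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += data[i];

    printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}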

15 State the purpose and functionality of the Thumb instruction set in ARM processors. (Marks: 4, BL: 2, CO: 2, PO: 2, PI Code: 2.4.1)

The Thumb instruction set is a subset of the most commonly used 32-bit ARM instructions. Thumb
instructions are each 16 bits long, and have a
corresponding 32-bit ARM instruction that has the
same effect on the processor model. Thumb
instructions operate with the standard ARM register
configuration, allowing excellent interoperability
between ARM and Thumb states.

On execution, 16-bit Thumb instructions are transparently decompressed to full 32-bit ARM
instructions in real time, without performance loss.

Thumb has all the advantages of a 32-bit core:
● 32-bit address space
● 32-bit registers
● 32-bit shifter and Arithmetic Logic Unit (ALU)
● 32-bit memory transfer.

Thumb therefore offers a long branch range, powerful arithmetic operations, and a large address space.

Thumb code is typically 65% of the size of ARM code, and provides 160% of the performance of ARM
code when running from a 16-bit memory system.
Thumb, therefore, makes the ARM7TDMI core
ideally suited to embedded applications with restricted
memory bandwidth, where code density and footprint are important.

The availability of both 16-bit Thumb and 32-bit ARM instruction sets gives designers the flexibility to
emphasize performance or code size on a subroutine
level, according to the requirements of their
applications. For example, critical loops for
applications such as fast interrupts and DSP
algorithms can be coded using the full ARM
instruction set and then linked with Thumb code.
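A hedged sketch of how this subroutine-level mix might be arranged with the GNU toolchain is shown below; the flags -marm and -mthumb are GCC options for ARM targets, while the file names are invented for illustration.

/* Build the performance-critical routine as ARM code and the rest as Thumb:

     arm-none-eabi-gcc -c -O2 -marm   dsp_loop.c   # 32-bit ARM encoding
     arm-none-eabi-gcc -c -O2 -mthumb main.c       # 16-bit Thumb encoding
     arm-none-eabi-gcc dsp_loop.o main.o -o app.elf

   The toolchain takes care of the ARM/Thumb state changes needed at the
   call boundaries between the two object files.                            */

/* dsp_loop.c: the performance-critical routine, kept in ARM state */
long dot_product(const short *a, const short *b, int n) {
    long acc = 0;
    for (int i = 0; i < n; i++)
        acc += (long)a[i] * (long)b[i];
    return acc;
}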

Part – C ( 2 x 12 = 24 Marks)

16 Describe the microprogrammed control unit with a diagram and list out the control signals. (Marks: 12, BL: 3, CO: 1, PO: 2, PI Code: 2.4.1)

Microprogrammed Control
The control signals are generated through a program similar to a machine-language program: a sequence of bits indicates which signals are to be set for a particular action. For example, if PCout is used in a particular step, then the bit allocated for PCout will be set to 1.
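A compact way to picture such a control word is the C sketch below; the signal names mirror those used in this paper, but the bit positions and the microroutine contents are illustrative assumptions, not a prescribed layout.

#include <stdint.h>
#include <stdio.h>

/* Each microinstruction is a pattern of bits: a set bit asserts the
   corresponding control signal during that step.                           */
#define PC_OUT   (1u << 0)
#define PC_IN    (1u << 1)
#define MAR_IN   (1u << 2)
#define MDR_OUT  (1u << 3)
#define IR_IN    (1u << 4)
#define READ     (1u << 5)
#define SELECT4  (1u << 6)
#define ADD      (1u << 7)
#define Z_IN     (1u << 8)
#define Z_OUT    (1u << 9)
#define END      (1u << 10)

/* A fragment of the control store: the instruction-fetch microroutine.     */
static const uint32_t control_store[] = {
    PC_OUT | MAR_IN | READ | SELECT4 | ADD | Z_IN,   /* step 1              */
    Z_OUT  | PC_IN,                                  /* step 2 (after WMFC) */
    MDR_OUT | IR_IN,                                 /* step 3              */
};

int main(void) {
    printf("microinstructions in fetch routine: %zu\n",
           sizeof control_store / sizeof control_store[0]);
    return 0;
}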
OR

17 Demonstrate the control sequence involved in the complete execution of “ADD R4,R5,R6” in a multi-bus environment, with a diagram. (Marks: 12, BL: 3, CO: 1, PO: 2, PI Code: 2.4.1)

Control sequence (three-bus organization):
1. PCout, R=B, MARin, Read, IncPC
2. WMFC
3. MDRoutB, R=B, IRin
4. R4outA, R5outB, SelectA, Add, R6in, End

18 Analyze the classification of parallel architecture that is determined by the multiplicity of instruction streams and data streams in a computer system. (Marks: 12, BL: 3, CO: 2, PO: 2, PI Code: 2.4.1)
OR

19 Explain the operations of the Memory Load and Store instructions in ARM processors and their impact on data handling. (Marks: 12, BL: 3, CO: 2, PO: 2, PI Code: 2.4.1)

ARM processors perform Arithmetic Logic Unit (ALU) operations only on registers. The only supported memory operations are loads (which read data from memory into registers) and stores (which write data from registers to memory). LDR and STR can be conditionally executed, in the same fashion as other instructions.

You can specify the size of the Load or Store transfer by appending a B for Byte, H for Halfword, or D for
doubleword (64 bits) to the instruction, for example,
LDRB. For loads only, an extra S can be used to indicate
a signed byte or halfword (SB for Signed Byte or SH for
Signed Halfword).

This approach can be useful, because if you load an 8-bit or 16-bit quantity into a 32-bit register you must decide
what to do with the most significant bits of the register.
For an unsigned number, you zero-extend, that is, you
write the most significant 16 or 24 bits of the register to
zero. But for a signed number, it is necessary to copy the
sign bit (bit [7] for a byte, or bit [15] for a halfword) into
the top 16 (or 24) bits of the register.
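The difference can be mirrored in C as a quick check; the casts below merely reproduce the effect that LDRB (zero-extend) and LDRSB (sign-extend) have on the upper bits of a 32-bit register.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t byte_in_memory = 0xF0;           /* -16 when read as signed      */

    uint32_t zero_extended = (uint32_t)byte_in_memory;         /* like LDRB  */
    int32_t  sign_extended = (int32_t)(int8_t)byte_in_memory;  /* like LDRSB */

    printf("zero-extended: 0x%08X (%u)\n",
           (unsigned)zero_extended, (unsigned)zero_extended);
    printf("sign-extended: 0x%08X (%d)\n",
           (unsigned)sign_extended, (int)sign_extended);
    return 0;
}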

Addressing modes

There are multiple addressing modes that can be used for loads and stores:

● Register addressing - the address is in a register.
● Pre-indexed addressing - an offset to the base
register is added before the memory access. The
base form of this is LDR Rd, [Rn, Op2]. The
offset can be positive or negative and can be an
immediate value or another register with an
optional shift applied.
● Pre-indexed with write-back - this is indicated
with an exclamation mark (!) added after the
instruction. After the memory access has
occurred, this updates the base register by
adding the offset value.
● Post-index with write-back - here, the offset
value is written after the square bracket. The
address from the base register only is used for
the memory access, with the offset value added
to the base register after the memory access.

(1) LDR R0, [R1]             @ address pointed to by R1
(2) LDR R0, [R1, R2]         @ address pointed to by R1 + R2
(3) LDR R0, [R1, R2, LSL #2] @ address is R1 + (R2*4)
(4) LDR R0, [R1, #32]!       @ address pointed to by R1 + 32, then R1 := R1 + 32
(5) LDR R0, [R1], #32        @ read R0 from address pointed to by R1, then R1 := R1 + 32

Multiple Transfers

Load and Store Multiple instructions enable successive words to be read from or written to memory. These are
extremely useful for stack operations and for memory
copying. Only word values can be transferred in this
way and a word aligned address must be used.

The operands are a base register with a list of registers between braces. The optional ! denotes write-back of the
base register. The register list is comma separated, with
hyphens used to indicate ranges. The order in which the
registers are loaded or stored has nothing to do with the
order specified in this list. Instead, the operation
proceeds in a fixed fashion, in increasing register order,
with the lowest numbered register always mapped to the
lowest address.

For example: LDMIA R10!, { R0-R3, R12 }

This instruction reads five registers from successive addresses starting at the address in R10 and, because write-back is specified, increments R10 by 20 (5 × 4 bytes) at the end.

The instruction must also specify how to proceed from the base register Rd. The four possibilities are: IA/IB
(Increment After/Before) and DA/DB (Decrement
After/Before). These can also be specified using aliases
(FD, FA, ED and EA) that work from a stack point of
view and specify whether the stack pointer points to a
full or empty top of the stack, and whether the stack
ascends or descends in memory.

By convention, only the Full Descending (FD) option is used for stacks in ARM processor based systems. This
means that the stack pointer points to the last filled
location in stack memory and will decrement with each
new item of data pushed to the stack.

For example:

STMFD sp!, {r0-r5} ; Push onto a Full Descending Stack

LDMFD sp!, {r0-r5} ; Pop from a Full Descending Stack

The stack push operation shown below illustrates a push of two registers to the stack. Before the STMFD (PUSH) instruction is executed, the stack pointer points to the last occupied word of the stack. After the instruction is completed, the stack pointer has been decremented by 8 (two words) and the contents of the two registers have been written to memory, with the lowest-numbered register being written to the lowest memory address.

Stack push operation
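A small C model of this full-descending push/pop behaviour is sketched below; the memory array, its size, and the function names are assumptions used only to illustrate the register ordering and the stack-pointer updates described above.

#include <stdint.h>
#include <stdio.h>

static uint32_t memory[256];                 /* toy word-organized memory          */
static uint32_t sp = sizeof memory;          /* byte address just above the stack  */

/* Push in increasing register order: the lowest-numbered register ends up
   at the lowest address, and SP decreases by 4 bytes per register.         */
static void stmfd(const uint32_t *regs, int count) {
    sp -= 4u * (uint32_t)count;
    for (int i = 0; i < count; i++)
        memory[sp / 4 + (uint32_t)i] = regs[i];
}

static void ldmfd(uint32_t *regs, int count) {
    for (int i = 0; i < count; i++)
        regs[i] = memory[sp / 4 + (uint32_t)i];
    sp += 4u * (uint32_t)count;
}

int main(void) {
    uint32_t in[2]  = { 0x11111111u, 0x22222222u };   /* models {r0, r1}            */
    uint32_t out[2] = { 0, 0 };

    stmfd(in, 2);       /* SP drops by 8; r0's value sits at the lower address       */
    ldmfd(out, 2);      /* restores both values and raises SP by 8                   */

    printf("popped: 0x%08X 0x%08X\n", (unsigned)out[0], (unsigned)out[1]);
    return 0;
}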


*Performance Indicators are available separately for Computer Science and Engineering in AICTE
examination reforms policy.

Course Outcome (CO) and Bloom’s level (BL) Coverage in Questions

Approved by the Audit Professor/Course Coordinator

You might also like