
Computer System Architecture

Unit 2 - Central Processing Unit

Register organization
Arithmetic and logical micro-operations
Stack organization
Microprogrammed control
Instruction formats
Pipelining and parallel processing
#Register organization-
Register organization in computer architecture refers to the
structure and management of registers within a processor:
how registers are arranged and used by the CPU to execute
instructions, manage data, control program flow, and
optimize performance. Registers themselves are small,
high-speed storage locations within the CPU that hold data
temporarily during processing, and they play a crucial role
in executing instructions efficiently.

Here's a precise explanation of register organization:

1. Types of Registers:
- General-Purpose Registers: Used for storing data
temporarily during program execution. They hold operands,
intermediate results, and addresses.
- Special-Purpose Registers: Serve specific functions such as
instruction pointer (IP), stack pointer (SP), program counter
(PC), status flags, and others.

2. Register Sizes:
- Registers can vary in size depending on the architecture.
Common sizes include 8-bit, 16-bit, 32-bit, or 64-bit registers.
The size determines the maximum data a register can hold.
3. Register File:
- Registers are typically organized into a register file or
register bank. This file consists of a collection of registers
accessible by the CPU for data manipulation.

4. Data Transfer:
- Data is transferred between registers and memory or
between registers within the CPU through data movement
instructions.
- Load and store instructions move data between memory
and registers.
- Move instructions transfer data between different
registers within the CPU.

5. Instruction Execution:
- During instruction execution, operands are fetched from
memory into registers, where the CPU performs arithmetic,
logical, or control operations on them.
- Intermediate results are stored back in registers before
being transferred to memory or used in subsequent
operations.

6. Control Registers:
- Control registers manage the operation of the CPU,
including program flow control, interrupt handling, and
processor status.

7. Pipeline Registers:
- In pipelined architectures, pipeline registers are used to
store intermediate results between stages of instruction
execution, improving performance by allowing multiple
instructions to be processed simultaneously.

8. Cache Management:
- Registers also interact with the memory hierarchy: control
registers can configure cache behaviour, and keeping frequently
accessed data in registers reduces the number of cache and
memory accesses the CPU must make.
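
As a rough illustration of the points above, the following C sketch models a tiny register set: a bank of general-purpose registers, a program counter, and load, store, and move micro-operations acting on a small word-addressed memory. All names and sizes (reg, mem, NUM_REGS, and so on) are invented for this example and do not correspond to any real processor.

#include <stdint.h>
#include <stdio.h>

#define NUM_REGS 8      /* small general-purpose register file */
#define MEM_WORDS 256   /* tiny word-addressed data memory */

static uint32_t reg[NUM_REGS];   /* general-purpose registers R0..R7 */
static uint32_t pc;              /* special-purpose register: program counter */
static uint32_t mem[MEM_WORDS];  /* main memory */

static void load_reg(int rd, uint32_t addr)  { reg[rd] = mem[addr]; }  /* memory -> register */
static void store_reg(int rs, uint32_t addr) { mem[addr] = reg[rs]; }  /* register -> memory */
static void move_reg(int rd, int rs)         { reg[rd] = reg[rs]; }    /* register -> register */

int main(void) {
    mem[10] = 42;          /* some data already sitting in memory */
    load_reg(1, 10);       /* R1 <- M[10] */
    move_reg(2, 1);        /* R2 <- R1 */
    reg[2] += 8;           /* the ALU operates on register operands */
    store_reg(2, 11);      /* M[11] <- R2 */
    pc += 1;               /* the program counter advances to the next instruction */
    printf("R1=%u R2=%u M[11]=%u PC=%u\n",
           (unsigned)reg[1], (unsigned)reg[2], (unsigned)mem[11], (unsigned)pc);
    return 0;
}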

#Arithmetic and Logical micro-operations-

-Arithmetic micro-operations-
Arithmetic micro-operations in computer architecture are
fundamental operations performed at the digital logic level,
inside the CPU's arithmetic logic unit (ALU), to execute
arithmetic computations. These micro-operations manipulate
binary data stored in registers or memory according to
arithmetic operations such as addition, subtraction,
multiplication, and division, allowing the CPU to carry out
arithmetic efficiently in hardware.

Here's a precise explanation:


1. Addition:
- Addition micro-operations combine two binary numbers,
typically stored in registers, by adding corresponding bits and
propagating any carry generated from lower bits to higher
bits.

2. Subtraction:
- Subtraction micro-operations subtract one binary number
from another. In two's-complement hardware, this is done by
complementing the bits of the subtrahend and adding the result
to the minuend with the adder's carry-in set to 1, which is
equivalent to adding the two's complement of the subtrahend.

3. Multiplication:
- Multiplication micro-operations compute the product of
two binary numbers. Techniques like shift-and-add or Booth's
algorithm are commonly used to perform binary
multiplication.
4. Division:
- Division micro-operations compute the quotient and
remainder when one binary number (the dividend) is divided
by another (the divisor). Algorithms like restoring division or
non-restoring division are used for binary division.

5. Increment and Decrement:
- Increment micro-operations add one to the value stored in
a register, while decrement micro-operations subtract one.
These operations are useful for loop control, array indexing,
and other arithmetic sequences.

6. Logical Operations:
- Logical micro-operations perform bitwise logical
operations such as AND, OR, XOR, and NOT. While primarily
used for logical operations, they are also essential in
arithmetic operations, especially for bit manipulation.

7. Shift Operations:
- Shift micro-operations shift the bits of a binary number
left or right by a specified number of positions. These
operations are useful for multiplication and division by
powers of two, as well as for manipulating data formats like
fixed-point or floating-point numbers.

8. Overflow Detection:
- Arithmetic micro-operations may include overflow
detection mechanisms to indicate when the result of an
operation exceeds the representable range of the data type.
Overflow flags are typically set based on the carry-out or
borrow-out from the most significant bit.

9. Conditional Arithmetic:
- Conditional arithmetic micro-operations perform
arithmetic operations based on certain conditions, such as
conditional jumps or conditional moves, which are crucial for
implementing control flow in programs.
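
To make a few of these micro-operations concrete, the C sketch below imitates them on 8-bit operands: addition with a carry-out, subtraction implemented as "complement the subtrahend and add with carry-in = 1", a signed-overflow check based on the sign bits, an increment that wraps around, and a left shift. This is only a software illustration of the hardware behaviour, not a model of any particular ALU.

#include <stdint.h>
#include <stdio.h>

/* 8-bit addition that also reports the carry-out of the most significant bit. */
static uint8_t add8(uint8_t a, uint8_t b, int *carry_out) {
    uint16_t wide = (uint16_t)a + (uint16_t)b;
    *carry_out = (wide >> 8) & 1;
    return (uint8_t)wide;
}

/* Two's-complement subtraction: a - b = a + ~b + 1 (carry-in forced to 1). */
static uint8_t sub8(uint8_t a, uint8_t b, int *carry_out) {
    uint16_t wide = (uint16_t)a + (uint16_t)(uint8_t)~b + 1;
    *carry_out = (wide >> 8) & 1;        /* carry-out of 1 means "no borrow" */
    return (uint8_t)wide;
}

/* Signed overflow: both operands have the same sign, but the sum's sign differs. */
static int add_overflows(uint8_t a, uint8_t b, uint8_t sum) {
    return ((a ^ sum) & (b ^ sum) & 0x80) != 0;
}

int main(void) {
    int c;
    uint8_t s = add8(100, 30, &c);
    printf("100 + 30 = %u, carry = %d, signed overflow = %d\n", s, c, add_overflows(100, 30, s));

    uint8_t d = sub8(30, 100, &c);
    printf("30 - 100 = %u (as signed: %d), carry = %d\n", d, (int8_t)d, c);

    uint8_t inc = add8(0xFF, 1, &c);     /* increment of 0xFF wraps to 0 with carry-out 1 */
    printf("0xFF + 1 = %u, carry = %d\n", inc, c);

    printf("5 << 1 = %u (a left shift doubles the value)\n", (unsigned)(5u << 1));
    return 0;
}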

-Logical micro-operations-
Logical micro-operations in computer architecture are
fundamental operations performed at the level of individual
bits within a CPU's registers or data paths. These operations
manipulate binary data according to logical rules, such as
AND, OR, NOT, and XOR.

1. AND Operation: Computes the logical AND of corresponding
bits from two operands. The result is 1 only if both input
bits are 1; otherwise, it's 0.

2. OR Operation: Computes the logical OR of corresponding
bits from two operands. The result is 1 if at least one of the
input bits is 1.

3. NOT Operation: Also known as the complement operation, it
flips the value of each bit. If the input bit is 1, the output
is 0, and vice versa.

4. XOR Operation: Computes the exclusive OR of corresponding
bits from two operands. The result is 1 if the two input bits
are different; otherwise, it's 0.

These logical operations are typically executed within the
CPU's arithmetic logic unit (ALU) or other specialized units,
and they form the basis for many higher-level computations
and data manipulations within a computer system. They are
essential for tasks like bitwise operations, Boolean logic,
and data manipulation at the lowest level of abstraction.
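
A minimal C illustration of these four operations, used the way they typically appear at the register level: AND to mask (select) bits, OR to set bits, XOR to toggle bits, and NOT to complement a word. The 8-bit values are arbitrary.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t status = 0xB5;                   /* 1011 0101: an arbitrary register value */

    uint8_t low_nibble = status & 0x0F;      /* AND: keep only the low 4 bits (masking) */
    uint8_t with_flag  = status | 0x40;      /* OR : force bit 6 to 1 (selective set)   */
    uint8_t toggled    = status ^ 0x01;      /* XOR: flip bit 0 only (selective toggle) */
    uint8_t inverted   = (uint8_t)~status;   /* NOT: complement every bit               */

    printf("status   = 0x%02X\n", status);
    printf("AND 0x0F = 0x%02X\n", low_nibble);
    printf("OR  0x40 = 0x%02X\n", with_flag);
    printf("XOR 0x01 = 0x%02X\n", toggled);
    printf("NOT      = 0x%02X\n", inverted);
    return 0;
}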

#Stack organization-
Stack organization in computer architecture refers to a
method of managing memory that follows the Last In, First
Out (LIFO) principle. It is typically implemented using a region
of memory called the stack, which grows and shrinks
dynamically as data is pushed onto or popped off of it.
Here's a precise explanation of stack organization:

1. Memory Structure: The stack is a region of memory reserved
for storing data temporarily during program execution. It is
organized as a contiguous block of memory with a fixed starting
address and a maximum size determined by the system or
programming language.

2. LIFO Principle: Data is added to and removed from the stack
according to the Last In, First Out (LIFO) principle. This
means that the last item pushed onto the stack is the first
item to be removed (popped) from the stack.

3. Stack Pointer: A special register called the stack pointer
(SP) is used to keep track of the current top of the stack. It
points to the memory location where the next item will be
pushed onto the stack or where the next item will be popped
from.

4. Operations: Two primary operations are performed on the
stack:
- Push: This operation adds data onto the top of the stack.
For a stack that grows downward in memory, it involves
decrementing the stack pointer to reserve space for the new
data and then storing the data at the memory location pointed
to by the stack pointer.
- Pop: This operation removes data from the top of the stack.
It involves retrieving the data from the memory location
pointed to by the stack pointer and then incrementing the
stack pointer to free up the space.

5. Usage: The stack is commonly used for storing local
variables, function parameters, return addresses, and other
temporary data within a program. It is also used for managing
function calls and returning from function calls in many
programming languages and execution environments.

Overall, stack organization provides an efficient and flexible
method for managing memory in a computer system, particularly
for handling temporary data and supporting subroutine calls
and returns.
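
A minimal C sketch of the push and pop operations described above, assuming a stack that grows downward in memory and a stack pointer that holds the index of the current top element. The array size and names are illustrative only.

#include <stdint.h>
#include <stdio.h>

#define STACK_WORDS 64

static uint32_t memory[STACK_WORDS];  /* region of memory reserved for the stack */
static int sp = STACK_WORDS;          /* stack pointer; STACK_WORDS means "empty" */

/* Push: decrement SP to reserve space, then store the value at the new top. */
static void push(uint32_t value) {
    if (sp == 0) { fprintf(stderr, "stack overflow\n"); return; }
    sp = sp - 1;
    memory[sp] = value;
}

/* Pop: read the value at the top, then increment SP to release the space. */
static uint32_t pop(void) {
    if (sp == STACK_WORDS) { fprintf(stderr, "stack underflow\n"); return 0; }
    uint32_t value = memory[sp];
    sp = sp + 1;
    return value;
}

int main(void) {
    push(10);
    push(20);
    push(30);
    printf("%u\n", (unsigned)pop());   /* 30: the last item pushed is the first popped (LIFO) */
    printf("%u\n", (unsigned)pop());   /* 20 */
    printf("%u\n", (unsigned)pop());   /* 10 */
    return 0;
}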

#Microprogrammed control-

Microprogrammed control in computer architecture is a method
of implementing the control logic of a CPU using
microinstructions stored in a control memory. Because the
CPU's control behaviour is defined by these stored
microinstructions rather than by fixed wiring, this technique
provides flexibility and modifiability in CPU design.

Here's a precise explanation:


1. Control Logic Implementation: In microprogrammed
control, the control logic of a CPU is implemented using a
sequence of microinstructions. These microinstructions are
stored in a control memory, typically implemented using fast,
random-access memory (RAM) or ROM (Read-Only Memory).

2. Microinstruction Format: Each microinstruction consists of
fields that encode control signals for various components of
the CPU, such as the arithmetic logic unit (ALU), registers,
memory, and input/output (I/O) devices. These control signals
determine the operations to be performed by the CPU during
each clock cycle.

3. Control Memory: The control memory holds the
microinstructions and is accessed using a control address
generated by the CPU. The control address specifies the
location of the next microinstruction to be fetched and
executed.

4. Control Unit: The control unit of the CPU is responsible
for generating the control address and decoding the
microinstructions fetched from the control memory. It extracts
the control signals from the microinstructions and activates
the appropriate hardware components accordingly.

5. Flexibility and Modifiability: Microprogrammed control
offers flexibility and modifiability in CPU design. Changes to
the CPU's behaviour or instruction set can be implemented
by modifying the microinstructions stored in the control
memory, without requiring changes to the hardware circuitry.

6. Complex Instruction Set Computers (CISC): Microprogrammed
control is commonly used in Complex Instruction Set Computer
(CISC) architectures, where a single machine instruction may
require multiple microinstructions to execute due to the
complexity of the instruction set.

7. Performance Considerations: While microprogrammed control
provides flexibility, it can introduce overhead due to the
additional memory accesses required to fetch microinstructions.
However, advancements in memory technology and CPU design have
mitigated this overhead in many modern microarchitectures.
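
A very small C sketch of the idea: an array plays the role of the control memory, each entry is a microinstruction whose bit fields are control signals plus a next-address field, and a loop fetches each microinstruction, decodes the fields, and "activates" the corresponding actions (here they are only printed). The field layout and signal names are invented for this example; real control memories are much wider and more complex.

#include <stdint.h>
#include <stdio.h>

/* Invented 8-bit microinstruction layout:
   bit 0: load the ALU input from the register file
   bit 1: perform an ALU add
   bit 2: write the ALU result back to the register file
   bit 3: read main memory
   bits 4..7: address of the next microinstruction (a simple sequencing field) */
#define SIG_LOAD_REG   0x01
#define SIG_ALU_ADD    0x02
#define SIG_WRITE_BACK 0x04
#define SIG_MEM_READ   0x08

static const uint8_t control_memory[] = {
    /* 0 */ SIG_MEM_READ   | (1 << 4),   /* fetch the operand, then go to microaddress 1 */
    /* 1 */ SIG_LOAD_REG   | (2 << 4),   /* move the operand to the ALU input, go to 2   */
    /* 2 */ SIG_ALU_ADD    | (3 << 4),   /* perform the addition, go to 3                */
    /* 3 */ SIG_WRITE_BACK | (0 << 4),   /* write the result back; next address 0 = done */
};

int main(void) {
    uint8_t uaddr = 0;                             /* control address register */
    for (;;) {
        uint8_t uinstr = control_memory[uaddr];    /* fetch the microinstruction */
        if (uinstr & SIG_MEM_READ)   printf("uaddr %u: memory read\n", uaddr);
        if (uinstr & SIG_LOAD_REG)   printf("uaddr %u: load ALU input\n", uaddr);
        if (uinstr & SIG_ALU_ADD)    printf("uaddr %u: ALU add\n", uaddr);
        if (uinstr & SIG_WRITE_BACK) printf("uaddr %u: write back\n", uaddr);
        uaddr = uinstr >> 4;                       /* next-address field */
        if (uaddr == 0) break;                     /* end of this microroutine */
    }
    return 0;
}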

#Instruction formats-
Instruction formats in computer architecture define the
structure and organization of machine instructions that a CPU
can execute. These formats dictate how various components
of an instruction, such as the opcode (operation code),
operands, addressing modes, and other control bits, are
arranged within the binary representation of the instruction.
Here's a precise explanation:
1. Opcode: The opcode field specifies the operation to be
performed by the CPU, such as arithmetic, logical, or control
operations. It is typically a fixed-size field within the
instruction format.

2. Operands: Operands represent the data on which the
operation specified by the opcode is to be performed.
Depending on the instruction format, operands can be specified
explicitly within the instruction or indirectly through memory
addresses or registers.

3. Addressing modes: Addressing modes determine how operands
are accessed or specified within the instruction. Common
addressing modes include immediate (operand value is included
within the instruction itself), register (operand is stored in
a register), direct (operand address is explicitly specified
within the instruction), indirect (operand address is stored
at a specified location), and indexed (operand address is
computed using a base address and an offset).

4. Control Bits: Control bits within the instruction format
provide additional information or modifiers for the operation
specified by the opcode. These bits may indicate special
conditions, flags, or modes of operation.

5. Instruction Length: The length of an instruction, in terms
of the number of bits, is determined by the instruction format.
Some architectures have fixed-length instructions, while others
support variable-length instructions to accommodate different
addressing modes and operand sizes.

6. Encoding: Instructions are encoded in binary format
according to the instruction format specified by the
architecture. Each field within the instruction format is
assigned a specific bit pattern that uniquely identifies the
operation, operands, and addressing modes.

7. Alignment: Instruction formats may dictate alignment
requirements, specifying how instructions should be aligned
within memory or cache lines for efficient access and
execution by the CPU.

8. Variations: Different instruction formats may exist within
a single architecture to support various instruction types,
such as arithmetic, logical, branch, and data transfer
instructions.

In summary, instruction formats in computer architecture
define the structure and organization of machine instructions,
including the opcode, operands, addressing modes, control bits,
instruction length, encoding, alignment, and variations,
enabling the CPU to interpret and execute instructions
correctly.
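
To make the field layout concrete, the following C sketch encodes and then decodes a hypothetical 32-bit instruction format with a 6-bit opcode, two 5-bit register fields, and a 16-bit immediate operand. The format, the opcode value, and its meaning are all assumptions made for this example and do not describe any real instruction set.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout:
   bits 31..26 opcode | 25..21 destination register | 20..16 source register | 15..0 immediate */
static uint32_t encode(uint32_t opcode, uint32_t rd, uint32_t rs, uint32_t imm) {
    return ((opcode & 0x3F) << 26) | ((rd & 0x1F) << 21) | ((rs & 0x1F) << 16) | (imm & 0xFFFF);
}

int main(void) {
    /* Suppose opcode 9 means "add immediate": rd <- rs + imm (an invented encoding). */
    uint32_t instr = encode(9, 3, 1, 100);

    /* Decoding extracts each field with shifts and masks. */
    unsigned opcode = (instr >> 26) & 0x3F;
    unsigned rd     = (instr >> 21) & 0x1F;
    unsigned rs     = (instr >> 16) & 0x1F;
    unsigned imm    =  instr        & 0xFFFF;

    printf("instruction word = 0x%08X\n", (unsigned)instr);
    printf("opcode=%u rd=R%u rs=R%u immediate=%u\n", opcode, rd, rs, imm);
    return 0;
}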

#Pipelining and parallel processing-

-Pipelining-
Pipelining in computer architecture is a technique used to
enhance CPU performance by overlapping the execution of
multiple instructions. The execution of an instruction is
divided into a series of stages, so that different stages of
several instructions can be processed concurrently. Through
pipeline stages, pipeline registers, and instruction-level
parallelism, pipelining increases throughput and reduces the
average execution time per instruction, leading to overall
improvements in system performance.

Here's a precise explanation:


1. Stages: Pipelining breaks down the execution of an
instruction into discrete stages, such as instruction fetch,
instruction decode, operand fetch, execute, and write-back.
Each stage performs a specific task required to execute an
instruction.

2. Concurrent Execution: In a pipelined architecture, multiple
instructions are in various stages of execution simultaneously.
While one instruction is being executed in one stage, the next
instruction is fetched in the subsequent stage, and so on. This
overlapping of instruction execution increases throughput and
improves overall performance.

3. Pipeline Registers: Pipeline registers are used to store
intermediate results and control signals between pipeline
stages. These registers allow data and control signals to be
passed from one stage to the next efficiently, facilitating
the flow of instructions through the pipeline.

4. Parallelism: Pipelining exploits instruction-level
parallelism by allowing different stages of multiple
instructions to proceed concurrently. While one instruction is
completing its execution, subsequent instructions can begin
their execution in the pipeline, reducing the overall time
taken to execute a sequence of instructions.

5. Hazards: Hazards are situations that can arise in pipelined
architectures, such as data hazards, control hazards, and
structural hazards. These hazards can lead to stalls in the
pipeline, reducing the performance benefits of pipelining.
Techniques such as forwarding, branch prediction, and
instruction scheduling are used to mitigate hazards and
improve pipeline efficiency.

6. Performance Impact: Pipelining improves CPU performance by
increasing instruction throughput and reducing the average
execution time of individual instructions. However, the actual
performance gain depends on factors such as pipeline depth,
instruction mix, presence of hazards, and efficiency of
pipeline design.

7. Variants: Various pipeline architectures exist, including
scalar pipelines, superscalar pipelines, and vector pipelines,
each with different approaches to instruction execution and
parallelism exploitation.
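
The sketch below illustrates only the throughput argument: with an ideal k-stage pipeline and no stalls, n instructions take about k + (n - 1) cycles instead of n * k, because after the pipeline fills, one instruction completes every cycle. The stage count and instruction count are arbitrary numbers chosen for the example.

#include <stdio.h>

int main(void) {
    const int k = 5;     /* pipeline stages: fetch, decode, operand fetch, execute, write-back */
    const int n = 100;   /* number of instructions executed */

    int cycles_sequential = n * k;        /* non-pipelined: each instruction takes k cycles */
    int cycles_pipelined  = k + (n - 1);  /* ideal pipeline: fill once, then 1 completion per cycle */

    printf("sequential: %d cycles\n", cycles_sequential);
    printf("pipelined : %d cycles\n", cycles_pipelined);
    printf("speedup   : %.2f (approaches k = %d as n grows)\n",
           (double)cycles_sequential / cycles_pipelined, k);
    return 0;
}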

-Parallel Processing-
Parallel processing in computer architecture involves the
simultaneous execution of multiple tasks, or parts of a single
task, on multiple processing units. By carrying out several
computations at once, it achieves faster computation, improved
performance, and enhanced scalability compared with sequential
execution.
Here's a precise explanation:
1. Task Decomposition: Parallel processing breaks down a
task into smaller sub-tasks or components that can be
executed concurrently. These sub-tasks can be
independent or interdependent, depending on the
nature of the computation.

2. Multiple Processors or Cores: Parallel processing requires
multiple processing units, such as individual CPU cores or
separate processors, capable of executing tasks simultaneously.
These processors can operate independently or in coordination
with each other.

3. Concurrency: Parallel processing enables concurrent
execution of tasks, meaning that multiple tasks or parts of a
task can progress simultaneously. This concurrency leads to
faster computation and improved throughput compared to
sequential processing.

4. Parallelism Models: There are various models of parallelism,
including:
- Task Parallelism: Dividing tasks into smaller independent
sub-tasks that can be executed concurrently.
- Data Parallelism: Distributing data across multiple
processing units and performing the same operation on each
data element concurrently.
- Pipeline Parallelism: Dividing a task into stages and
executing different stages of multiple tasks simultaneously.

5. Communication and Synchronization: Parallel processing
often requires communication and synchronization between
processing units to exchange data, coordinate tasks, and
ensure correct execution. Techniques such as message passing,
shared memory, and synchronization primitives are used to
manage communication and synchronization.

6. Scalability: Parallel processing offers scalability,
allowing systems to handle larger workloads by adding more
processing units. Scalability can be achieved through
symmetric multiprocessing (SMP), cluster computing, or
distributed computing architectures.

7. Parallel Algorithms and Programming Models: Developing
parallel software requires using parallel algorithms and
programming models suited for parallel execution. Parallel
programming frameworks and languages, such as OpenMP, MPI,
CUDA, and OpenCL, facilitate the development of parallel
applications.

8. Performance Considerations: While parallel processing can
significantly improve performance, factors such as load
balancing, overheads associated with communication and
synchronization, and Amdahl's Law, which limits the potential
speedup due to sequential portions of the program, must be
considered when designing parallel systems.
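
As an illustration of the Amdahl's Law point in item 8, the small C sketch below computes the theoretical speedup S = 1 / ((1 - p) + p / N) for a program in which a fraction p of the work can be parallelized across N processing units. The parallel fraction and processor counts are arbitrary example values.

#include <stdio.h>

/* Amdahl's Law: overall speedup on N processors when only a fraction p of the work is parallel. */
static double amdahl_speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    const double p = 0.90;                     /* assume 90% of the program can run in parallel */
    const int counts[] = {2, 4, 8, 16, 1024};

    for (int i = 0; i < 5; i++) {
        printf("N = %4d  ->  speedup = %.2f\n", counts[i], amdahl_speedup(p, counts[i]));
    }
    /* Even with unlimited processors, speedup is bounded by 1 / (1 - p) = 10 for p = 0.90. */
    return 0;
}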
