Basic Organization of the Stored Program Computer

Introduction

The concept of the stored program computer, introduced by John von Neumann in the 1940s,
revolutionized computing by allowing instructions and data to reside in the same memory. This
architecture forms the foundation of modern computing systems, enabling flexibility, efficiency, and
programmability. This document delves into the core principles of stored program computers, their
components, and how they function.

Von Neumann Architecture

The Von Neumann architecture, named after the mathematician and computer scientist John von
Neumann, describes a computer system with the following key features:

1. Memory: A single memory space stores both instructions and data.

2. Control Unit (CU): Directs the operations of the computer by fetching, decoding, and
executing instructions.

3. Arithmetic and Logic Unit (ALU): Performs arithmetic and logical operations.

4. Input/Output (I/O) Devices: Allow interaction with the external environment.

5. System Bus: Facilitates communication between the components.

This architecture contrasts with earlier designs where instructions were hardwired or stored
separately from data.

Components of the Stored Program Computer

1. Memory Unit

The memory unit is the component where programs and data are stored. It is characterized by the
following:

 Types of Memory:

o RAM (Random Access Memory): Volatile memory used for temporary storage.

o ROM (Read-Only Memory): Non-volatile memory used for permanent storage.

o Cache Memory: High-speed memory close to the CPU for frequently accessed data.

 Structure: Memory is organized into cells, each with a unique address.

 Role in Program Execution: Stores instructions, operands, and intermediate results.

2. Control Unit (CU)

The CU is responsible for orchestrating the execution of instructions. Its functions include:

 Fetching instructions from memory.

 Decoding instructions to determine the required operation.

 Controlling data flow between the CPU, memory, and I/O devices.
3. Arithmetic and Logic Unit (ALU)

The ALU performs arithmetic and logical operations:

 Arithmetic operations: Addition, subtraction, multiplication, division.

 Logical operations: AND, OR, NOT, XOR.

 Shift operations: Left and right shifts.
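A simple way to picture the ALU in software is a function that selects one of these operations based on a control code. The sketch below is a minimal, hypothetical model written for illustration (the enum values and the function name alu are invented here, not taken from any real ISA); it is not a circuit-level design.

#include <stdio.h>

/* Hypothetical operation codes, invented for this sketch. */
enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR, ALU_XOR, ALU_SHL, ALU_SHR };

/* A toy ALU: selects one arithmetic, logical, or shift operation. */
static int alu(enum alu_op op, int a, int b) {
    switch (op) {
    case ALU_ADD: return a + b;
    case ALU_SUB: return a - b;
    case ALU_AND: return a & b;
    case ALU_OR:  return a | b;
    case ALU_XOR: return a ^ b;
    case ALU_SHL: return a << b;   /* left shift by b bits */
    case ALU_SHR: return a >> b;   /* right shift by b bits */
    }
    return 0;
}

int main(void) {
    printf("5 + 3 = %d\n", alu(ALU_ADD, 5, 3));
    printf("5 AND 3 = %d\n", alu(ALU_AND, 5, 3));
    printf("5 << 1 = %d\n", alu(ALU_SHL, 5, 1));
    return 0;
}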

4. Input/Output (I/O) Devices

I/O devices facilitate interaction with the computer. Examples include:

 Input: Keyboard, mouse, scanner.

 Output: Monitor, printer, speakers.

5. System Bus

The system bus connects the CPU, memory, and I/O devices. It consists of:

 Data Bus: Transfers data between components.

 Address Bus: Specifies memory locations for data transfers.

 Control Bus: Carries control signals to coordinate operations.
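A rough software analogy for these three buses is a structure carrying an address field, a data field, and a control flag. The names below (bus_t, bus_cycle, MEM_SIZE) are invented for this sketch and do not correspond to any real hardware interface; the point is only to show which bus carries which piece of information during a read or write.

#include <stdio.h>

#define MEM_SIZE 16                        /* toy memory of 16 words (assumed size) */

typedef struct {
    unsigned address;                      /* carried on the address bus */
    int data;                              /* carried on the data bus */
    int read;                              /* control bus: 1 = read, 0 = write */
} bus_t;

static int memory[MEM_SIZE];

/* One bus transaction against the toy memory. */
static void bus_cycle(bus_t *bus) {
    if (bus->read)
        bus->data = memory[bus->address];  /* memory drives the data bus */
    else
        memory[bus->address] = bus->data;  /* CPU drives the data bus */
}

int main(void) {
    bus_t bus = { .address = 3, .data = 42, .read = 0 };
    bus_cycle(&bus);                       /* write 42 to address 3 */
    bus.read = 1;
    bus_cycle(&bus);                       /* read it back */
    printf("memory[3] = %d\n", bus.data);
    return 0;
}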

Operation Sequence for Execution of a Program

The execution of a program involves several steps, commonly referred to as the instruction cycle.
This cycle includes:

1. Fetch

The CPU retrieves the next instruction from memory, using the Program Counter (PC) to determine
the address.

2. Decode

The Control Unit interprets the fetched instruction to identify the operation and operands.

3. Execute

The ALU performs the required operation, such as arithmetic or logical computation, or data is
moved between memory and registers.

4. Store

The result of the operation is written back to memory or a register.

Example of the Instruction Cycle

Consider an instruction to add two numbers stored in memory locations A and B and store the result
in C:

1. Fetch: The instruction "ADD A, B -> C" is fetched from memory.

2. Decode: The CU identifies the operation (ADD) and the operands (A, B).
3. Execute: The ALU adds the values at A and B.

4. Store: The result is stored at location C.
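One compact way to see the four phases working together is to simulate them. The fragment below is a deliberately simplified sketch of the ADD A, B -> C example: the one-instruction program, the memory layout, and the opcode value are all assumptions made for illustration and do not represent any particular CPU.

#include <stdio.h>

#define OP_ADD 1                          /* made-up opcode for this sketch */

/* A toy instruction: an opcode plus three memory addresses (A, B -> C). */
typedef struct { int opcode, src1, src2, dest; } instr_t;

int main(void) {
    int memory[8] = {0};
    memory[0] = 7;                        /* location A */
    memory[1] = 5;                        /* location B */

    instr_t program[] = { { OP_ADD, 0, 1, 2 } };   /* ADD A, B -> C */
    int pc = 0;                           /* Program Counter */

    /* Fetch: read the instruction selected by the PC. */
    instr_t ir = program[pc++];

    /* Decode: inspect the opcode; Execute: perform the operation. */
    int result = 0;
    if (ir.opcode == OP_ADD)
        result = memory[ir.src1] + memory[ir.src2];

    /* Store: write the result back to memory location C. */
    memory[ir.dest] = result;

    printf("C = %d\n", memory[ir.dest]);  /* prints C = 12 */
    return 0;
}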

Role of Key Components in the Instruction Cycle

Registers

Registers are small, fast storage locations within the CPU used to hold:

 Intermediate data.

 Addresses.

 Instructions.

Important registers include:

 Accumulator: Temporarily stores data during computations.

 Program Counter (PC): Holds the address of the next instruction.

 Instruction Register (IR): Stores the current instruction.

 General-Purpose Registers: Hold operands and intermediate results.

System Bus

The instruction cycle relies on the system bus for communication:

 The address bus specifies the memory location of instructions and data.

 The data bus transfers data between memory, CPU, and I/O devices.

 The control bus carries signals such as read/write and interrupt requests.

Clock and Synchronization

The CPU operates based on clock cycles. Each cycle represents a basic unit of time during which a
CPU operation can occur. Faster clock speeds enable more operations per second but require
efficient cooling and power management.

Advantages of the Stored Program Concept

1. Flexibility: The same hardware can execute different programs.

2. Efficiency: Instructions and data are accessible from the same memory.

Formulas and Concepts

 Memory Access Time: Time required to fetch data from memory.

 Instruction Execution Time: Time for fetch + decode + execute cycles.

 Performance Metrics:

o Clock Speed (Hz): Number of cycles per second.

o CPI (Cycles Per Instruction): Average number of cycles needed per instruction.
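Taken together, these metrics give the usual back-of-the-envelope estimate of program run time: execution time ≈ (instruction count × CPI) / clock speed. The numbers in the sketch below are assumed values chosen only to show the arithmetic.

#include <stdio.h>

int main(void) {
    double instructions = 1e9;            /* assumed: 1 billion instructions */
    double cpi = 2.0;                     /* assumed: 2 cycles per instruction on average */
    double clock_hz = 2.0e9;              /* assumed: 2 GHz clock */

    /* CPU time = (instruction count * CPI) / clock frequency */
    double seconds = instructions * cpi / clock_hz;
    printf("Estimated CPU time: %.2f s\n", seconds);   /* 1.00 s */
    return 0;
}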

Conclusion
The basic organization of the stored program computer is a cornerstone of modern computing. By
integrating memory, a CPU, and I/O devices, the architecture enables efficient and flexible execution
of programs. Understanding these concepts provides insight into how computers process data and
execute instructions, forming the basis for further exploration of advanced computing systems.

Role of Operating Systems and Compiler/Assembler

Introduction

Operating systems (OS) and software tools such as compilers and assemblers play a critical role in
enabling computers to function efficiently and remain user-friendly. While the operating system manages
hardware resources and provides services, compilers and assemblers translate human-readable code
into machine-executable instructions. Together, they form the backbone of modern computing.

Role of the Operating System

An operating system is system software that acts as a bridge between hardware and software. Its
primary functions include:

1. Resource Management

 CPU Management: Allocates CPU time to processes using scheduling algorithms (e.g.,
Round-Robin, Priority Scheduling).

 Memory Management: Tracks memory usage, allocates memory dynamically, and implements
virtual memory systems.

 Device Management: Interfaces with hardware through device drivers, ensuring smooth I/O
operations.

2. Process Management

 Handles creation, scheduling, synchronization, and termination of processes.

 Implements multitasking and inter-process communication.

3. File System Management

 Organizes data storage and retrieval on physical storage devices.

 Uses file systems like FAT32, NTFS, or ext4 for structuring data.

4. Security and Access Control

 Implements user authentication, encryption, and permission settings to protect data and
system integrity.
5. User Interface

 Provides interfaces like Command Line Interface (CLI) or Graphical User Interface (GUI) for
user interaction.

Role of Compiler and Assembler

Compiler

A compiler translates high-level programming languages (e.g., C++, Java) into machine code. It
performs the following steps:

1. Lexical Analysis: Breaks the source code into tokens.

2. Syntax Analysis: Checks for grammatical correctness.

3. Semantic Analysis: Ensures logical accuracy of the code.

4. Optimization: Improves code efficiency.

5. Code Generation: Produces machine-level code.

Assembler

An assembler converts assembly language (a low-level language) into machine code. It maps
mnemonics (e.g., MOV, ADD) to binary instructions understandable by the CPU.
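At its core, this mapping is a table lookup: each mnemonic corresponds to a numeric opcode. The sketch below uses a made-up two-entry table; the mnemonics MOV and ADD come from the text above, but the opcode values and the function name lookup_opcode are invented for illustration and are not any real instruction encoding.

#include <stdio.h>
#include <string.h>

/* Hypothetical mnemonic-to-opcode table for this sketch. */
static const struct { const char *mnemonic; int opcode; } table[] = {
    { "MOV", 0x01 },
    { "ADD", 0x02 },
};

static int lookup_opcode(const char *mnemonic) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].mnemonic, mnemonic) == 0)
            return table[i].opcode;
    return -1;                            /* unknown mnemonic */
}

int main(void) {
    printf("ADD -> opcode %d\n", lookup_opcode("ADD"));
    return 0;
}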

Fetch, Decode, and Execute Cycle

The Fetch-Decode-Execute cycle is the fundamental operational process of a computer's CPU. It
describes how a program instruction is executed in the following steps:

1. Fetch

 The CPU retrieves an instruction from memory using the Program Counter (PC).

 The fetched instruction is stored in the Instruction Register (IR).

2. Decode

 The Control Unit interprets the instruction to determine the operation and operands.

3. Execute

 The Arithmetic Logic Unit (ALU) performs the required operation, such as arithmetic
computation or logical comparison.

4. Store

 The result of the operation is written back to memory or a CPU register.

Registers Involved in the Cycle

1. Program Counter (PC): Tracks the address of the next instruction.

2. Instruction Register (IR): Holds the current instruction.

3. Accumulator (AC): Temporarily stores data during execution.

4. General-Purpose Registers: Store intermediate results.

Conclusion

The operating system, compiler, and assembler, along with the fetch-decode-execute cycle, play a
pivotal role in modern computing. The OS ensures efficient resource management and user
interaction, while compilers and assemblers bridge the gap between human-readable code and
machine execution. Together, they make the complex processes of computing seamless and
accessible.

--------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------

Concept of Operator, Operand, Registers, and Storage

Introduction

The concepts of operators, operands, registers, and storage form the foundational components of
computer architecture and programming. These elements work together to process instructions,
perform calculations, and manage data efficiently. This section elaborates on each concept, its role in
computation, and its relevance to modern computing systems.

Operators

Definition

An operator is a symbol or keyword used in programming and computation to perform specific
operations on data values or variables.

Types of Operators

1. Arithmetic Operators

o Perform mathematical calculations.

o Examples: + (addition), - (subtraction), * (multiplication), / (division), % (modulus).

2. Logical Operators

o Used to perform logical operations.

o Examples: && (AND), || (OR), ! (NOT).

3. Relational Operators

o Compare two values and return a Boolean result.

o Examples: > (greater than), < (less than), == (equal to), != (not equal).
4. Bitwise Operators

o Perform operations at the bit level.

o Examples: & (AND), | (OR), ^ (XOR), ~ (NOT).

5. Assignment Operators

o Assign values to variables.

o Examples: = (simple assignment), += (add and assign).

6. Unary Operators

o Operate on a single operand.

o Examples: ++ (increment), -- (decrement).

Usage Example

int a = 5, b = 10;           // Assignment Operator
int sum = a + b;             // Arithmetic Operator
if (a < b) {                 // Relational Operator
    sum++;                   // Unary Operator (increment)
}
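The logical and bitwise operators from the list above can be demonstrated in the same style. The fragment below is a small, self-contained sketch; the variable names are arbitrary.

#include <stdio.h>

int main(void) {
    int a = 5, b = 10;                 /* 5 = 0101, 10 = 1010 in binary */

    int logical  = (a < b) && (b != 0); /* Logical AND: 1 (true) */
    int bits_and = a & b;               /* Bitwise AND: 0000 -> 0 */
    int bits_or  = a | b;               /* Bitwise OR:  1111 -> 15 */
    int bits_xor = a ^ b;               /* Bitwise XOR: 1111 -> 15 */

    printf("%d %d %d %d\n", logical, bits_and, bits_or, bits_xor);
    return 0;
}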

Operands

Definition

An operand is the data or variable on which an operator acts. Operands can be constants, variables,
or expressions.

Examples

1. In the expression c = a + b;:

o a and b are operands.

o + is the operator.

2. In the logical expression if (x && y):

o x and y are operands.

o && is the operator.

Relation Between Operators and Operands

Operators act upon operands to produce a result. Understanding the interplay between these
elements is critical for developing efficient algorithms and programs.
Registers

Definition

Registers are small, high-speed storage locations within the CPU used to store data temporarily
during computation.

Types of Registers

1. General-Purpose Registers

o Used for arithmetic and data manipulation.

o Examples: AX and BX in the x86 architecture, or R1 and R2 in RISC-style assembly notation.

2. Special-Purpose Registers

o Serve specific roles in instruction execution.

o Examples:

 Program Counter (PC): Holds the address of the next instruction.

 Instruction Register (IR): Stores the current instruction.

 Stack Pointer (SP): Points to the top of the stack.

3. Accumulator

o Temporarily holds results of arithmetic and logical operations.

4. Status Register

o Contains flags that represent the outcome of operations (e.g., zero flag, carry flag).
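The effect of a status register can be mimicked by recomputing the flags after each result. The struct and flag handling below are an illustrative model only, with assumed names (flags_t, add8); real processors define many more flags and update them in hardware.

#include <stdio.h>

/* A toy status register holding two of the flags mentioned above. */
typedef struct { int zero; int carry; } flags_t;

/* Add two unsigned bytes and update the flags (illustrative model). */
static unsigned char add8(unsigned char a, unsigned char b, flags_t *f) {
    unsigned sum = (unsigned)a + (unsigned)b;
    f->carry = sum > 0xFF;               /* carry flag: result overflowed 8 bits */
    f->zero  = (sum & 0xFF) == 0;        /* zero flag: low 8 bits are zero */
    return (unsigned char)sum;
}

int main(void) {
    flags_t f;
    unsigned char r = add8(200, 100, &f);
    printf("result=%d zero=%d carry=%d\n", r, f.zero, f.carry);  /* 44, 0, 1 */
    return 0;
}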

Formula Example

For a basic addition operation, data1 (in R1) + data2 (in R2) → result (in R3):

MOV R1, 5        ; load the constant 5 into register R1
MOV R2, 10       ; load the constant 10 into register R2
ADD R3, R1, R2   ; R3 = R1 + R2 = 15

Storage

Definition

Storage refers to the mechanisms that retain data in a computer system. It includes primary memory
(RAM), secondary storage (HDDs, SSDs), and tertiary storage (cloud, tape drives).
Hierarchy

1. Primary Storage

o High-speed, volatile memory.

o Examples: RAM, cache.

2. Secondary Storage

o Non-volatile storage for long-term data retention.

o Examples: Hard disks, SSDs.

3. Tertiary Storage

o Used for backup and archival purposes.

o Examples: Magnetic tapes, cloud storage.

Diagram: Storage Hierarchy

+--------------------+
|     Registers      |
+--------------------+
|    Cache Memory    |
+--------------------+
| Main Memory (RAM)  |
+--------------------+
| Secondary Storage  |
+--------------------+
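The payoff of this hierarchy is that most accesses are served by a small, fast level before falling back to a slower one. The sketch below models that idea with a tiny direct-mapped cache sitting in front of a main-memory array; the sizes and names (CACHE_LINES, cache_read) are assumptions made for illustration, not a real cache design.

#include <stdio.h>

#define MEM_WORDS   64
#define CACHE_LINES 4                        /* tiny direct-mapped cache (assumed size) */

typedef struct { int valid; unsigned addr; int value; } line_t;

static int memory[MEM_WORDS];
static line_t cache[CACHE_LINES];

/* Read a word: try the cache first, fall back to main memory on a miss. */
static int cache_read(unsigned addr, int *hit) {
    unsigned i = addr % CACHE_LINES;         /* which cache line the address maps to */
    if (cache[i].valid && cache[i].addr == addr) {
        *hit = 1;
        return cache[i].value;               /* fast path: data already cached */
    }
    *hit = 0;                                /* slow path: go to main memory */
    cache[i].valid = 1;
    cache[i].addr = addr;
    cache[i].value = memory[addr];
    return cache[i].value;
}

int main(void) {
    memory[10] = 99;
    int hit;
    cache_read(10, &hit);                    /* first access: miss */
    printf("first access hit=%d\n", hit);
    cache_read(10, &hit);                    /* second access: hit */
    printf("second access hit=%d\n", hit);
    return 0;
}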

Instruction Format

Definition

An instruction format specifies the layout of a machine language instruction in terms of its
constituent fields (opcode, operand, etc.).
Common Fields

1. Opcode (Operation Code)

o Specifies the operation to be performed.

o Example: ADD, SUB.

2. Operand

o Specifies data or its address.

3. Addressing Mode

o Determines how the operand is accessed (e.g., direct, indirect).

Example: Instruction Format (x86 Architecture)

+--------+----------+---------+
| Opcode | Operands | Address |
+--------+----------+---------+

Important Points

 Operators and operands are essential for computations.

 Registers provide high-speed data storage for ongoing processes.

 Instruction formats define how commands are structured for execution.

 Efficient use of these concepts enhances computational performance.

Conclusion

The interplay between operators, operands, registers, and storage is fundamental to computer
processing. Registers ensure rapid data access, while storage systems manage data persistence.
Together, these components enable precise and efficient execution of instructions, forming the basis
of computational logic.

Instruction Set Architecture (ISA)
1. Overview of Instruction Set Architecture (ISA)

An Instruction Set Architecture (ISA) defines the interface between hardware and software in a
computing system. It is a set of basic commands (or instructions) that a CPU can understand and
execute. The ISA serves as the critical link between how software (programs) is written and how it is
executed on the hardware (processor).

The instruction set includes:

 Operations (Instructions): The tasks the CPU can perform, such as arithmetic, logic, data
movement, and control operations.

 Instruction Formats: The structure of instructions, specifying how the opcode, operands, and
other components are arranged.

 Data Types and Storage: How data is represented and manipulated in memory, including the
size and types of registers.

 Addressing Modes: How operands (data) are accessed in memory, which will be covered in
detail in the later sections.

An instruction set is the collection of all machine-level instructions that a microprocessor can
understand and execute. Each instruction in this set performs a specific operation on data stored in
memory or registers.

2. Types of Instruction Sets

Instruction sets can be broadly classified into two categories based on their design philosophy:

1. CISC (Complex Instruction Set Computing):

o Characteristics:

 Large and diverse set of instructions.

 Each instruction can perform multiple tasks (e.g., load, add, store) in a single
operation.

 Instructions vary in length and complexity.

o Example: The x86 architecture (used in Intel and AMD processors) is a CISC
architecture.

o Advantage: Reduces the number of instructions needed for performing complex operations.

o Disadvantage: More complex hardware and slower execution time due to decoding
complex instructions.

2. RISC (Reduced Instruction Set Computing):

o Characteristics:
 Small and simple set of instructions, each performing a single operation.

 Instructions are of fixed length and are uniform.

o Example: The ARM architecture, commonly used in mobile devices, is a RISC architecture.

o Advantage: Faster execution due to simpler instructions and better optimization.

o Disadvantage: Requires more instructions for complex operations compared to CISC.

3. Key Components of an Instruction

An instruction typically consists of the following components:

 Opcode (Operation Code): The part of the instruction that specifies the operation to be
performed. This could be arithmetic operations like ADD, logical operations like AND, or data
movement operations like MOV.

 Operands: The data to be operated on. This could be a register value, a memory address, or
an immediate value (constant).

 Addressing Mode: Defines how the operand is located in memory or the register. This can be
immediate, direct, indirect, etc.

Each instruction can vary in length and format depending on the architecture.

4. Instruction Format

The structure of an instruction is known as the instruction format. It defines how the opcode,
operands, and other fields (such as flags) are arranged in the instruction. A typical instruction format
looks like this:

| Opcode | Operand 1 | Operand 2 | Addressing Mode |

|--------|-----------|-----------|-----------------|

For example, in a RISC architecture, an instruction might consist of:

 Opcode (6 bits)

 Operand 1 (Source register) (5 bits)

 Operand 2 (Destination register) (5 bits)

 Addressing Mode (optional field, depending on the operation)

In CISC systems, the opcode might be larger and can be followed by more operands or memory
addresses.
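Packing and unpacking such a format is just bit shifting and masking. The field widths below follow the RISC-style layout sketched above (6-bit opcode, two 5-bit register fields), but the exact bit positions, the opcode value, and the function name encode are assumptions made for this sketch.

#include <stdio.h>

/* Encode a toy instruction: 6-bit opcode, 5-bit source register, 5-bit destination register. */
static unsigned encode(unsigned opcode, unsigned src, unsigned dest) {
    return (opcode & 0x3F) << 10 | (src & 0x1F) << 5 | (dest & 0x1F);
}

int main(void) {
    unsigned word = encode(8, 2, 1);         /* a hypothetical "ADD R1, R2" */

    /* Decode the fields back out by shifting and masking. */
    unsigned opcode = (word >> 10) & 0x3F;
    unsigned src    = (word >> 5)  & 0x1F;
    unsigned dest   =  word        & 0x1F;

    printf("opcode=%u src=R%u dest=R%u\n", opcode, src, dest);
    return 0;
}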

Addressing Modes

1. Introduction to Addressing Modes

Addressing modes define how the CPU accesses the operands of an instruction. They specify where
the data for an instruction resides or how its location is calculated. Addressing modes are crucial in
efficient data handling and memory management.

Different addressing modes offer flexibility in how operands are accessed, enabling more powerful
and optimized execution of instructions.

2. Types of Addressing Modes

Here are the most commonly used addressing modes:

1. Immediate Addressing Mode

o Description: The operand is a constant value embedded directly in the instruction itself.

o Example:

 Instruction: MOV R1, #5

 Action: Move the value 5 directly into register R1.

o Diagram:

Instruction: MOV R1, #5
Operand = 5 (constant directly in the instruction)
[R1] <- 5

2. Register Addressing Mode

o Description: The operand is stored in a register. The register number is specified in the
instruction.

o Example:

 Instruction: ADD R1, R2

 Action: Add the value in register R2 to the value in register R1, and store the
result in R1.

o Diagram:

Instruction: ADD R1, R2
Operand = R2 (register operand)
[R1] = [R1] + [R2]

3. Direct Addressing Mode

o Description: The operand is located at a specific memory address, which is provided
in the instruction.

o Example:

 Instruction: MOV R1, [1000]

 Action: Move the data from memory address 1000 into register R1.

o Diagram:

Instruction: MOV R1, [1000]
Operand = Memory at address 1000
[R1] <- MEM[1000]

4. Indirect Addressing Mode

o Description: The operand is located at a memory address pointed to by a register. The
register contains the address of the operand.

o Example:

 Instruction: MOV R1, [R2]

 Action: Move the data from the memory address stored in register R2 into
register R1.

o Diagram:

Instruction: MOV R1, [R2]
Operand = Memory at address in R2
[R1] <- MEM[R2]

5. Indexed Addressing Mode

o Description: The operand is located at a memory address that is calculated by adding
an index value to a base address.

o Example:

 Instruction: MOV R1, [R2 + 5]

 Action: Add 5 to the contents of register R2, and use that as the address to
fetch the operand into register R1.

o Diagram:

Instruction: MOV R1, [R2 + 5]
Operand = Memory at address (R2 + 5)
[R1] <- MEM[R2 + 5]


6. Base-Register Addressing Mode

o Description: The operand is located at an address calculated by adding the value of a
base register to a constant index.

o Example:

 Instruction: MOV R1, [BASE + 5]

 Action: Add 5 to the contents of the BASE register and fetch the operand.

o Diagram:

Instruction: MOV R1, [BASE + 5]
Operand = Memory at address (BASE + 5)
[R1] <- MEM[BASE + 5]

7. Relative Addressing Mode

o Description: The operand is located at a memory address calculated relative to the
address of the current instruction.

o Example:

 Instruction: BEQ LABEL

 Action: If a condition is true (e.g., branch if equal), jump to the address
specified by the label, which is calculated relative to the current program
counter.

o Diagram:

Instruction: BEQ LABEL
Operand = Memory at address LABEL (relative to current PC)

3. Importance of Addressing Modes

 Flexibility in Accessing Data: Different addressing modes provide flexibility in how data is
accessed, stored, and manipulated.

 Optimized Execution: Some addressing modes reduce the number of instructions required
to achieve a task, leading to more efficient execution.

 Memory Management: Addressing modes are crucial in managing how data is retrieved
from various memory locations, whether it’s a constant, a register, or memory.

4. Summary of Addressing Modes

| Mode          | Description                                                                           | Example            |
|---------------|---------------------------------------------------------------------------------------|--------------------|
| Immediate     | Operand is directly specified in the instruction.                                      | MOV R1, #5         |
| Register      | Operand is located in a register.                                                      | ADD R1, R2         |
| Direct        | Operand is located at a specific memory address.                                       | MOV R1, [1000]     |
| Indirect      | Operand is located at a memory address pointed to by a register.                       | MOV R1, [R2]       |
| Indexed       | Operand is at an address calculated by adding an index to a base address.              | MOV R1, [R2 + 5]   |
| Base-Register | Operand is located at an address calculated by adding the base register to an index.   | MOV R1, [BASE + 5] |
| Relative      | Operand is located relative to the instruction address.                                | BEQ LABEL          |
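These modes can be mirrored almost one-to-one with C variables, arrays, and pointers. The sketch below is only an analogy (local variables standing in for CPU registers, an array standing in for memory); it is not how the hardware resolves addresses.

#include <stdio.h>

int main(void) {
    int memory[2000] = {0};      /* a toy "main memory" */
    memory[1000] = 77;
    memory[10]   = 7;
    memory[15]   = 42;

    int r1, r2 = 10;             /* stand-ins for registers R1 and R2 */

    r1 = 5;                      /* Immediate: MOV R1, #5 */
    r1 = r1 + r2;                /* Register:  ADD R1, R2 */
    r1 = memory[1000];           /* Direct:    MOV R1, [1000]   -> 77 */
    r1 = memory[r2];             /* Indirect:  MOV R1, [R2]     -> memory[10] = 7 */
    r1 = memory[r2 + 5];         /* Indexed:   MOV R1, [R2 + 5] -> memory[15] = 42 */

    printf("R1 = %d\n", r1);     /* prints R1 = 42 */
    return 0;
}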

This completes the detailed explanation of Instruction Set Architecture (ISA) and Addressing Modes.

SUMMARY OF MODULE 1
Summary of Key Concepts in Computer Architecture and Operating Systems

The basic organization of a stored-program computer involves several essential components: the
Central Processing Unit (CPU), memory, and input/output devices. The CPU consists of the Control
Unit (CU), Arithmetic and Logic Unit (ALU), and Registers. The CPU fetches instructions from
memory, decodes them, and executes them, following the fetch-decode-execute cycle. In a stored-
program computer, both data and instructions are stored in memory, making the process of program
execution more efficient and flexible. The memory is typically divided into different levels of
hierarchy, such as registers, cache memory, main memory (RAM), and secondary storage.

The operation sequence for the execution of a program follows a clear pattern. First, the CPU
fetches the instruction from memory, then decodes it to understand what operation is required, and
finally, the CPU executes the instruction. This process is repeated until all instructions in the program
are completed. The role of the Control Unit is to manage this sequence, ensuring that instructions
are executed in the correct order and that the necessary data is available.
The operating system (OS) plays a critical role in managing hardware resources and providing an
interface between the user and the computer hardware. It handles tasks such as process
management, memory management, file systems, and input/output (I/O) operations. The compiler
and assembler are crucial tools that help translate high-level programming languages into machine-
readable code. The assembler converts assembly language into machine code, while the compiler
translates high-level source code into intermediate or machine-level code.

The fetch-decode-execute cycle is the fundamental operation of the CPU. During the fetch phase,
the CPU retrieves an instruction from memory. In the decode phase, the instruction is interpreted to
understand the operation, and in the execute phase, the operation is performed, which may involve
arithmetic calculations, data transfer, or logical comparisons. The cycle is repeated for each
instruction in a program.

Operators and operands are the basic components of an instruction. An operator specifies the
action to be performed (e.g., addition, subtraction), while the operand is the data or address on
which the operator acts (e.g., a number or memory location). Registers are small, fast storage
locations within the CPU that hold data or intermediate results during computation. Storage refers to
both primary memory (RAM) and secondary storage devices (hard drives, SSDs) used to store data
for longer periods.

Instruction format defines the structure of an instruction, specifying the opcode (operator code) and
the operands. Instruction formats can vary in length and complexity, but they typically include fields
for the opcode, operand addresses, and sometimes the mode of addressing (e.g., immediate, direct,
indirect). The instruction set architecture (ISA) defines the set of instructions a processor can
execute, and the addressing modes specify how the operands are located or accessed in memory.

Addressing modes describe the methods by which the CPU can access operands in memory.
Common modes include immediate addressing (where the operand is a constant), direct addressing
(where the operand is at a specific memory location), and indirect addressing (where the operand’s
address is specified by a pointer). Indexed addressing and register addressing are other common
modes that provide different ways to access memory efficiently.

In summary, the stored-program computer relies on the interaction between the CPU, memory, and
I/O devices, with the control unit managing the sequence of operations. The operating system and
compilers play key roles in managing resources and translating code. The fetch-decode-execute cycle
forms the backbone of instruction execution, while operators, operands, registers, and addressing
modes contribute to the flexibility and efficiency of instruction execution and data manipulation.
Understanding these concepts is fundamental to computer architecture and programming.
