
### MCSE-103 Advanced Computer Architecture

#### UNIT 1: Flynn's and Handler's Classification of Parallel Computing Structures, Pipelined and Vector Processors

##### Flynn's Classification

**Definition:**

- Flynn's taxonomy classifies computer architectures based on the multiplicity of instruction and data
streams. It was introduced by Michael J. Flynn in 1966.

**Categories:**

1. **SISD (Single Instruction stream, Single Data stream):**

- **Characteristics:**

- A single processor executes a single instruction stream, operating on a single data stream.

- Traditional uniprocessor systems.

- **Diagram:**

- ![SISD Diagram](https://example.com/sisd-diagram.png)

2. **SIMD (Single Instruction stream, Multiple Data streams):**

- **Characteristics:**

- Multiple processing elements perform the same operation on multiple data points simultaneously.

- Suitable for vector and array processors.

- **Diagram:**

- ![SIMD Diagram](https://example.com/simd-diagram.png)

3. **MISD (Multiple Instruction streams, Single Data stream):**

- **Characteristics:**

- Multiple instructions operate on a single data stream.

- Rarely used in practice; theoretically useful for fault tolerance.

- **Diagram:**

- ![MISD Diagram](https://example.com/misd-diagram.png)

4. **MIMD (Multiple Instruction streams, Multiple Data streams):**

- **Characteristics:**

- Multiple autonomous processors simultaneously execute different instructions on different data sets.

- Common in parallel computing and multi-core processors.

- **Diagram:**

- ![MIMD Diagram](https://example.com/mimd-diagram.png)

**Significance:**

- Helps in understanding and designing parallel computing systems by providing a clear categorization of
computational architectures based on their capabilities and operational paradigms.
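The four categories reduce to a simple lookup on the multiplicity of instruction and data streams. A minimal Python sketch (illustrative only; the function name is our own):

```python
def flynn_class(instruction_streams: int, data_streams: int) -> str:
    """Classify an architecture by Flynn's taxonomy from its stream counts."""
    i = "S" if instruction_streams == 1 else "M"
    d = "S" if data_streams == 1 else "M"
    return f"{i}I{d}D"

print(flynn_class(1, 1))    # SISD: a traditional uniprocessor
print(flynn_class(1, 64))   # SIMD: one instruction stream, 64 data streams
print(flynn_class(8, 8))    # MIMD: e.g. an 8-core multiprocessor
```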

##### Handler's Classification

**Definition:**

- Handler's classification extends Flynn's taxonomy with a more quantitative description of parallelism: it characterizes a system by the number of processor control units, the number of ALUs under each control unit, and the word length of each ALU, along with architectural features such as processor control, processor interconnection, and memory organization.

**Categories:**

1. **ARP (Asynchronous Array of Processors):**

- **Characteristics:**

- Processors operate asynchronously with their own control unit.

- Suitable for tasks that can be decomposed into independently executable units.

- **Diagram:**

- ![ARP Diagram](https://example.com/arp-diagram.png)

2. **SARP (Synchronous Array of Processors):**

- **Characteristics:**

- Processors operate synchronously under a single control unit.

- Suitable for SIMD operations where all processors perform the same task simultaneously.

- **Diagram:**

- ![SARP Diagram](https://example.com/sarp-diagram.png)

3. **MSIMD (Multiple SIMDs):**

- **Characteristics:**

- Combines multiple SIMD structures, allowing for different SIMD groups to perform distinct tasks.

- Provides flexibility and increased parallelism.

- **Diagram:**

- ![MSIMD Diagram](https://example.com/msimd-diagram.png)

**Significance:**

- Provides a more granular classification that addresses practical considerations in parallel system design,
such as synchronization and interconnection strategies.

##### Pipelined Processors

**Definition:**

- Pipelining is a technique where multiple instruction phases are overlapped to improve execution
efficiency.

**Phases:**

1. **Instruction Fetch (IF):**

- Retrieve the instruction from memory.

2. **Instruction Decode (ID):**

- Interpret the instruction and prepare necessary resources.

3. **Execute (EX):**

- Perform the operation defined by the instruction.

4. **Memory Access (MEM):**

- Access memory if required by the instruction.

5. **Write Back (WB):**


- Write the result back to the register file.

**Diagram:**

- ![Pipelined Processor Diagram](https://example.com/pipelined-processor-diagram.png)

**Significance:**

- Enhances throughput by allowing multiple instructions to be processed simultaneously at different


stages of execution.
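The throughput gain can be quantified: with k stages and no stalls, n instructions complete in k + (n - 1) cycles instead of n × k. A small sketch, idealized and ignoring hazards:

```python
def pipelined_cycles(n_instructions: int, n_stages: int = 5) -> int:
    """Cycles for n instructions on an ideal k-stage pipeline (no stalls)."""
    return n_stages + (n_instructions - 1)

def unpipelined_cycles(n_instructions: int, n_stages: int = 5) -> int:
    """Cycles if each instruction finishes all stages before the next starts."""
    return n_instructions * n_stages

# 100 instructions on the 5-stage IF/ID/EX/MEM/WB pipeline:
print(pipelined_cycles(100))    # 104
print(unpipelined_cycles(100))  # 500
```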

##### Vector Processors

**Definition:**

- Vector processors are specialized computing units that operate on entire vectors (arrays) of data with a
single instruction.

**Characteristics:**

1. **Vector Registers:**

- Large registers capable of holding entire vectors.

2. **Vector Instructions:**

- Instructions that operate on vectors rather than scalar values.

3. **Vector Pipelines:**

- Pipelines designed to handle vector operations, improving efficiency.

**Diagram:**

- ![Vector Processor Diagram](https://example.com/vector-processor-diagram.png)

**Significance:**

- Ideal for scientific computations and applications requiring high-speed data processing over large
datasets.
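The benefit over a scalar loop can be sketched with rough timings: a vector pipeline pays its startup latency once and then delivers one result per cycle, while a scalar loop pays the full functional-unit latency per element. The cycle counts below are illustrative assumptions, not figures from any specific machine:

```python
def vector_op_cycles(n: int, startup: int = 10) -> int:
    """One vector instruction: pipeline fill (startup), then one result/cycle."""
    return startup + n

def scalar_loop_cycles(n: int, latency: int = 10) -> int:
    """Scalar loop: each of the n operations pays the full unit latency."""
    return n * latency

print(vector_op_cycles(64))    # 74
print(scalar_loop_cycles(64))  # 640
```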

---

#### UNIT 2: Data and Control Hazards, SIMD Multiprocessor Structures

##### Data Hazards

**Definition:**

- Occur when instructions that exhibit data dependencies modify data in a way that affects subsequent
instructions.

**Types:**

1. **Read After Write (RAW):**

- A subsequent instruction tries to read a source before a previous instruction writes to it.

2. **Write After Read (WAR):**

- A subsequent instruction tries to write to a destination before a previous instruction reads from it.

3. **Write After Write (WAW):**

- A subsequent instruction tries to write to a destination before a previous instruction writes to it.

**Resolution Techniques:**

1. **Forwarding (Bypassing):**

- Directly passes the result from one pipeline stage to another without waiting for it to be written back to the register file.

2. **Stalling:**

- Delays subsequent instructions until the hazard is resolved.

**Diagram:**

- ![Data Hazard Resolution](https://example.com/data-hazard-resolution.png)

**Significance:**

- Critical for maintaining the correctness of instruction execution in pipelined architectures.
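A RAW hazard check and the resulting stalls can be sketched as follows, assuming a classic 5-stage pipeline in which forwarding removes all stalls except the one-cycle load-use bubble. The (op, dest, srcs) tuple encoding is our own simplification:

```python
def raw_hazard(producer, consumer):
    """True if `consumer` reads a register that `producer` writes (RAW)."""
    _op, dest, _srcs = producer
    _op_c, _dest_c, srcs = consumer
    return dest is not None and dest in srcs

def stall_cycles(producer, consumer, forwarding=True):
    """Stalls between adjacent instructions on a classic 5-stage pipeline."""
    if not raw_hazard(producer, consumer):
        return 0
    if forwarding:
        # Only a load followed by an immediate use still costs one bubble.
        return 1 if producer[0] == "load" else 0
    return 2  # illustrative figure: wait for the producer's write-back

add = ("add", "r1", ("r2", "r3"))
use = ("sub", "r4", ("r1", "r5"))
ld  = ("load", "r1", ("r6",))

print(stall_cycles(add, use))                    # 0 (result forwarded)
print(stall_cycles(ld, use))                     # 1 (load-use bubble)
print(stall_cycles(add, use, forwarding=False))  # 2
```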


##### Control Hazards

**Definition:**

- Occur when the pipeline fetches instructions along the wrong path after a branch, because the branch outcome (taken or not taken) is not known until later in the pipeline; mispredicted fetches must be discarded.

**Resolution Techniques:**

1. **Branch Prediction:**

- Techniques like static and dynamic prediction to guess the outcome of branches.

2. **Delayed Branching:**

- The instruction slot(s) immediately after a branch always execute; the compiler reorders useful work into these delay slots to hide the branch penalty.

**Diagram:**

- ![Control Hazard Resolution](https://example.com/control-hazard-resolution.png)

**Significance:**

- Essential for maintaining pipeline efficiency and minimizing performance degradation due to incorrect
branch predictions.
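Dynamic branch prediction is commonly implemented with a 2-bit saturating counter per branch: states 0-1 predict not taken, states 2-3 predict taken, so a branch that is almost always taken is only mispredicted near a loop exit. A sketch:

```python
class TwoBitPredictor:
    """2-bit saturating counter: states 0,1 -> not taken; 2,3 -> taken."""
    def __init__(self, state: int = 1):
        self.state = state  # start weakly not-taken

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool) -> None:
        # Saturate at the ends of the 0..3 range.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True, True, False, True, True]  # a mostly-taken loop branch
correct = 0
for taken in outcomes:
    correct += (p.predict() == taken)
    p.update(taken)
print(correct, "of", len(outcomes), "predicted correctly")  # 3 of 5
```

Starting from state 1 (weakly not taken), the predictor misses the first branch and the single not-taken outcome, and the two-bit hysteresis prevents the one not-taken result from flipping the prediction.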

##### SIMD Multiprocessor Structures

**Definition:**

- SIMD multiprocessors consist of multiple processing elements that execute the same instruction on
different data streams simultaneously.

**Characteristics:**

1. **Processing Elements (PEs):**

- Multiple, identical units performing computations in parallel.

2. **Control Unit:**

- Single control unit synchronizing the operations of all PEs.

3. **Interconnection Network:**

- Network facilitating communication among PEs and between PEs and memory.

**Diagram:**

- ![SIMD Multiprocessor Structure](https://example.com/simd-multiprocessor-structure.png)

**Significance:**

- Enhances performance for data-parallel tasks, making it suitable for applications like image processing,
simulations, and scientific computations.
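The lockstep behaviour can be simulated in a few lines: a single "control unit" broadcasts one operation per step, and every PE applies it to its own data element. This is a pure-Python stand-in for hardware, purely illustrative:

```python
def simd_step(op, pe_data):
    """Broadcast one instruction (op) to all PEs; each applies it to its datum."""
    return [op(x) for x in pe_data]

pe_data = [1, 2, 3, 4]                         # one data element per PE
pe_data = simd_step(lambda x: x * x, pe_data)  # every PE squares in lockstep
pe_data = simd_step(lambda x: x + 1, pe_data)  # every PE increments in lockstep
print(pe_data)  # [2, 5, 10, 17]
```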

---

#### UNIT 3: Interconnection Networks, Parallel Algorithms for Array Processors, Search Algorithms, MIMD Multiprocessor Systems

##### Interconnection Networks

**Definition:**

- Networks that connect processors in parallel computing systems, enabling communication and data
exchange.

**Types:**

1. **Static Networks:**

- Fixed connections between processors (e.g., mesh, hypercube).

- **Diagram:**

- ![Static Network Diagram](https://example.com/static-network-diagram.png)

2. **Dynamic Networks:**

- Reconfigurable connections (e.g., crossbar, multistage interconnection networks).

- **Diagram:**

- ![Dynamic Network Diagram](https://example.com/dynamic-network-diagram.png)

**Characteristics:**

- **Bandwidth:**

- Data transfer capacity of the network.

- **Latency:**

- Time taken for data to travel from source to destination.

**Significance:**

- Crucial for the performance of parallel computing systems, impacting data transfer speeds and overall
system efficiency.
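For static topologies, latency-related properties follow directly from the structure. For a d-dimensional hypercube with N = 2^d nodes, the diameter is d hops and the link count is N·d/2 (standard textbook formulas); a quick sketch:

```python
import math

def hypercube_metrics(n_nodes: int):
    """Diameter and link count of a hypercube with n_nodes = 2**d processors."""
    d = int(math.log2(n_nodes))
    assert 2 ** d == n_nodes, "hypercube needs a power-of-two node count"
    diameter = d                 # worst case: flip all d address bits
    links = n_nodes * d // 2     # each node has d links; each counted twice
    return diameter, links

print(hypercube_metrics(16))  # (4, 32)
print(hypercube_metrics(8))   # (3, 12)
```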

##### Parallel Algorithms for Array Processors

**Definition:**

- Algorithms designed to be executed on array processors, leveraging their parallel processing capabilities.

**Examples:**

1. **Matrix Multiplication:**

- Parallel algorithm for multiplying matrices using array processors.

- **Diagram:**

- ![Parallel Matrix Multiplication](https://example.com/parallel-matrix-multiplication.png)

2. **Sorting Algorithms:**

- Parallel sorting algorithms like bitonic sort and parallel quicksort.

**Significance:**

- Enhances computational efficiency and speed for large-scale data processing tasks.
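Bitonic sort, listed above, maps naturally onto an array processor because every compare-exchange within one pass is independent and could run on a separate PE. A sequential Python rendering of the sorting network, with the available parallelism noted in comments:

```python
def bitonic_sort(data):
    """Bitonic sorting network; input length must be a power of two."""
    a = list(data)
    n = len(a)
    assert n & (n - 1) == 0, "bitonic sort needs a power-of-two length"
    k = 2
    while k <= n:            # stage: merge bitonic runs of length k
        j = k // 2
        while j >= 1:        # pass: all n/2 compare-exchanges are independent,
            for i in range(n):   # so an array processor runs them in parallel
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([7, 3, 5, 8, 6, 2, 1, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```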

##### Search Algorithms

**Definition:**

- Algorithms designed for searching data in parallel computing environments.

**Examples:**

1. **Parallel Binary Search:**

- Distributes search operations across multiple processors.

2. **Associative Search:**

- Uses associative memory to perform searches in parallel.

**Significance:**

- Improves search efficiency and reduces time complexity for large datasets.
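Associative search can be modelled as every memory cell comparing its contents against the key at once and reporting matches. The Python comprehension below stands in for what content-addressable hardware does in a single cycle; it is a sketch, not a hardware model:

```python
def associative_search(memory, key):
    """Return indices of all cells matching the key.
    In real associative memory every cell compares simultaneously,
    so this whole search costs one hardware cycle."""
    return [i for i, word in enumerate(memory) if word == key]

memory = [42, 7, 42, 13, 42]
print(associative_search(memory, 42))  # [0, 2, 4]
print(associative_search(memory, 99))  # []
```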

##### MIMD Multiprocessor Systems

**Definition:**

- Systems where multiple processors operate asynchronously, each executing different instructions on
different data.

**Characteristics:**

1. **Autonomous Processors:**

- Each processor has its own control unit and executes its own program.

2. **Shared Memory:**

- Processors may share a common memory space for data exchange, or communicate by message passing in distributed-memory designs.

3. **Interconnection Network:**

- Network facilitating communication among processors.

**Diagram:**

- ![MIMD Multiprocessor System](https://example.com/mimd-multiprocessor-system.png)

**Significance:**

- Provides high flexibility and performance for complex and heterogeneous computing tasks.

---

#### UNIT 4: Scheduling and Load Balancing in Multiprocessor Systems, Multiprocessing Control and Algorithms

##### Scheduling and Load Balancing

**Definition:**

- Techniques for distributing computational tasks across multiple processors to optimize performance
and resource utilization.

**Scheduling Techniques:**

1. **Static Scheduling:**

- Pre-determined task allocation before runtime.

2. **Dynamic Scheduling:**

- Tasks are allocated to processors dynamically during runtime.

**Load Balancing Strategies:**

1. **Centralized Load Balancing:**

- A central scheduler assigns tasks to processors.

2. **Distributed Load Balancing:**

- Processors make independent decisions about task allocation.


**Diagram:**

- ![Load Balancing Diagram](https://example.com/load-balancing-diagram.png)

**Significance:**

- Ensures efficient utilization of resources, minimizes idle times, and improves overall system
performance.
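One concrete static-scheduling heuristic (not named in the text above, but standard) is longest-processing-time-first: sort tasks by decreasing cost and greedily assign each to the currently least-loaded processor:

```python
def lpt_schedule(task_costs, n_procs):
    """Greedy LPT: assign each task (largest first) to the least-loaded processor."""
    loads = [0] * n_procs
    assignment = [[] for _ in range(n_procs)]
    for cost in sorted(task_costs, reverse=True):
        p = loads.index(min(loads))   # pick the least-loaded processor
        loads[p] += cost
        assignment[p].append(cost)
    return loads, assignment

loads, _ = lpt_schedule([7, 5, 4, 3, 3, 2], 3)
print(loads)       # [9, 8, 7]
print(max(loads))  # 9, the makespan (ideal would be 24 / 3 = 8)
```

Dynamic schemes apply the same least-loaded rule at runtime, as tasks arrive, rather than from a pre-sorted list.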

##### Multiprocessing Control and Algorithms

**Definition:**

- Control mechanisms and algorithms designed to manage and optimize the operation of multiprocessor
systems.

**Control Mechanisms:**

1. **Synchronization:**

- Techniques to coordinate processor operations and access to shared resources.

2. **Communication:**

- Mechanisms for data exchange between processors.

**Algorithms:**

1. **Barrier Synchronization:**

- Ensures all processors reach a certain point before proceeding.

2. **Mutual Exclusion:**

- Ensures that only one processor accesses a critical section at a time.

**Diagram:**

- ![Multiprocessing Control Diagram](https://example.com/multiprocessing-control-diagram.png)


**Significance:**

- Critical for maintaining system integrity, preventing race conditions, and ensuring coordinated
operation of multiple processors.
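Both algorithms have direct analogues in Python's threading module, used here only as a hedged illustration of the concepts: a Lock provides mutual exclusion around a shared counter, and a Barrier ensures every thread finishes phase 1 before any enters phase 2:

```python
import threading

N_THREADS = 4
counter = 0
lock = threading.Lock()
barrier = threading.Barrier(N_THREADS)
phase2_views = []

def worker():
    global counter
    # Phase 1: mutual exclusion -- only one thread in the critical section.
    for _ in range(1000):
        with lock:
            counter += 1
    # Barrier synchronization: no thread proceeds until all reach this point.
    barrier.wait()
    # Phase 2: every thread now sees the fully updated counter.
    phase2_views.append(counter)

threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)       # 4000: no increments lost to race conditions
print(phase2_views)  # [4000, 4000, 4000, 4000]
```

Without the lock, concurrent `counter += 1` updates could interleave and lose increments; without the barrier, a fast thread could read the counter before slower threads finished phase 1.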

---

The above information is organized based on the units and topics specified for MCSE-103 Advanced
Computer Architecture. Each topic is explained in detailed bullet points with relevant diagrams, using the
reference books mentioned.
