
Module 2

Uploaded by

Mustafiz Ali

MODULE 2

Thread and Process Level Parallel Architectures


MIMD (Multiple Instruction, Multiple Data)
Definition:
• MIMD is a parallel computing architecture where multiple processors
execute different instructions on different data. It allows for high
flexibility and can handle a variety of computational tasks.
Characteristics:
• Multiple Instructions: Different processors can execute different
instructions simultaneously.
• Multiple Data: Each processor operates on different pieces of data,
allowing for parallel data processing.
• Communication: Processors may need to communicate and
synchronize with each other to coordinate tasks and share data.
Types:
• Shared Memory MIMD: Processors share a common memory
space. Examples include symmetric multiprocessors (SMPs).
Synchronization mechanisms such as locks and barriers are used to
manage access to shared resources.
• Distributed Memory MIMD: Each processor has its own local
memory. Communication between processors is achieved through
message passing. Examples include clusters and some
supercomputers.
Applications:
• Suitable for a wide range of applications including scientific
computing, database management, and complex simulations where
tasks can vary and data needs to be processed independently.
Advantages:
• Flexibility: Can handle a diverse set of problems with varying
computational needs.
• Scalability: Can scale to a large number of processors, making it
suitable for large-scale parallel tasks.
Challenges:
• Complexity: Requires efficient communication and synchronization
mechanisms, which can add overhead.
• Load Balancing: Distributing work evenly among processors can be
challenging, especially in distributed memory systems.

Multi-Threaded Architectures
Definition:
• Multi-threaded architectures involve multiple threads of execution
within a single process, allowing for concurrent execution of code.
Threads share the same memory space but execute different parts
of the program simultaneously.
Characteristics:
• Shared Memory: Threads within the same process share the same
address space, which allows for efficient communication and data
sharing between threads.
• Context Switching: Switching between threads within a process is
generally faster than switching between processes due to shared
resources.
Types:
• Fine-Grained Multi-Threading: The processor switches between
threads very frequently, often on every cycle, so that short stalls in
one thread are hidden by progress in another, maximizing CPU
utilization.
• Coarse-Grained Multi-Threading: The processor switches threads
only on costly events, such as a cache miss or other long-latency
stall, so each thread runs for longer stretches between switches.
Applications:
• Parallel Processing: Improves performance of applications by
parallelizing tasks such as web servers handling multiple requests or
applications performing simultaneous computations.
• Responsiveness: Enhances the responsiveness of applications by
allowing background tasks to run concurrently with the main
application logic.
Advantages:
• Improved Utilization: Utilizes CPU resources more effectively by
running multiple threads in parallel.
• Efficiency: Faster context switching compared to process-level
parallelism, leading to improved performance in multi-threaded
applications.
Challenges:
• Concurrency Issues: Requires careful management of shared
resources to avoid issues such as race conditions and deadlocks.
• Complex Debugging: Debugging multi-threaded applications can
be complex due to potential concurrency issues and synchronization
challenges.
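The race-condition hazard mentioned above can be demonstrated with a short Python `threading` sketch (the counter and thread counts are arbitrary). Because `counter += 1` is a read-modify-write, concurrent threads can lose updates unless the operation is protected by a lock:

```python
# Minimal sketch of avoiding a race condition with a lock.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:            # without this lock, the read-modify-write
            counter += 1      # could interleave and lose increments

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; often less without it
```

All four threads share the same `counter` variable, illustrating the shared-address-space model described above.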
Comparison with MIMD:
• Instruction and Data: MIMD involves multiple processors executing
different instructions on different data, while multi-threading involves
multiple threads within a single process sharing the same memory
space.
• Memory Model: MIMD can be either shared or distributed memory,
whereas multi-threading typically uses shared memory within a
process.
• Scalability: MIMD can scale across a large number of processors,
whereas multi-threading is limited by the number of cores available
in a single system.

Distributed and Shared Memory MIMD Architectures


MIMD (Multiple Instruction, Multiple Data) Architectures:
1. Distributed Memory MIMD:
Definition:
• In distributed memory MIMD architectures, each processor has its
own local memory. Processors communicate with each other
through explicit message passing.
Characteristics:
• Local Memory: Each processor operates with its own local memory
space, isolated from other processors.
• Communication: Processors must exchange data using message
passing mechanisms (e.g., MPI - Message Passing Interface).
• Scalability: Can easily scale to a large number of processors
because each processor has its own memory, reducing contention
for shared resources.
Applications:
• Suitable for large-scale parallel applications such as high-
performance computing simulations, distributed databases, and
large-scale data processing tasks.
Advantages:
• Scalability: High scalability as processors are connected through
networks and do not share memory.
• Fault Tolerance: Faults in one processor do not directly affect
others since each processor has its own memory.
Challenges:
• Communication Overhead: Message passing can introduce
significant overhead, especially if frequent communication is
required.
• Complexity: Programming models can be more complex due to the
need for explicit data distribution and communication management.
Examples:
• Clusters: Collections of interconnected computers with distributed
memory.
• Supercomputers: Systems with thousands of nodes, each having
its own memory.

2. Shared Memory MIMD:


Definition:
• In shared memory MIMD architectures, multiple processors share a
common global memory. Processors communicate by reading and
writing to this shared memory space.
Characteristics:
• Global Memory: All processors access a single, shared memory
space. This facilitates easier communication and data sharing.
• Synchronization: Requires synchronization mechanisms (e.g.,
locks, semaphores) to manage concurrent access and avoid issues
such as race conditions.
• Cache Coherence: Techniques are used to ensure that all
processors have a consistent view of the shared memory (e.g.,
cache coherence protocols).
Applications:
• Ideal for applications requiring frequent access to shared data, such
as multi-threaded applications, database management systems, and
real-time processing tasks.
Advantages:
• Ease of Communication: Direct access to shared memory
simplifies communication and data sharing between processors.
• Programming Simplicity: Easier to program compared to
distributed memory systems due to the shared memory model.
Challenges:
• Scalability Limits: Scalability is often limited by contention for the
shared memory and bus bandwidth.
• Synchronization Overhead: Managing access to shared memory
and avoiding contention can introduce performance bottlenecks.
Examples:
• Symmetric Multiprocessors (SMPs): Systems where multiple
processors share a common memory and can access it
simultaneously.
• Multicore Processors: Modern CPUs with multiple cores sharing a
common memory space.

Comparison:
• Memory Access:
• Distributed Memory: Each processor has its own local
memory; communication requires explicit message passing.
• Shared Memory: All processors share a common memory
space; communication is implicit through memory reads and
writes.
• Scalability:
• Distributed Memory: Generally more scalable due to
independent memory spaces.
• Shared Memory: Scalability can be limited by memory access
contention and synchronization overhead.
• Complexity:
• Distributed Memory: More complex programming model
requiring explicit communication.
• Shared Memory: Simplified programming model but requires
careful management of synchronization and cache coherence.
