BCSE412L - Parallel Computing 03
•Efficiency: Instead of relying on a single powerful core, multi-core processors can distribute
the workload, leading to better power efficiency and thermal management.
Challenges:
•Software Optimization: Not all software is designed to take full advantage of multi-core
architectures. Efficiently utilizing multiple cores requires well-optimized software that can
divide tasks into parallel threads.
•Amdahl's Law: Some tasks cannot be parallelized, and Amdahl's Law dictates that the
overall speedup is limited by the sequential portion of the code. Thus, not all applications
experience a linear increase in performance with additional cores.
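The limit that Amdahl's Law places on speedup can be made concrete with a short calculation. The sketch below assumes an illustrative program in which 90% of the work is parallelizable; the function name and the chosen fraction are examples, not part of the law itself.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup when only part of a program can run in parallel.

    The serial fraction (1 - parallel_fraction) always runs at its
    original speed, so it caps the achievable speedup.
    """
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# With 90% of the work parallelizable, extra cores give diminishing returns;
# the speedup can never exceed 1 / 0.1 = 10, no matter how many cores we add.
for cores in (2, 4, 8, 1000):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

Even with 1000 cores, the 10% serial portion keeps the speedup below 10x, which is why core count alone does not guarantee linear scaling.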
Applications:
•Server Environments: Multi-core processors are widely used in servers, where running many tasks simultaneously improves throughput on concurrent user requests.
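The server use case can be sketched with a worker-thread pool: independent requests are dispatched to a fixed pool of threads, which the operating system schedules across the available cores. The `handle_request` function and the request IDs here are hypothetical stand-ins for real server work.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Placeholder for real per-request work (parsing, I/O, database access).
    return f"response-{request_id}"

# A pool sized roughly to the core count lets independent requests
# proceed in parallel instead of queueing behind one another.
with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(handle_request, range(8)))

print(responses)
```

`pool.map` preserves input order, so the eight responses come back in the same order the requests were submitted, even though the work itself ran concurrently.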
Future Trends:
•Increasing Core Count: The trend in processor development is toward increasing the number of
cores on a single chip. However, adding more cores does not always translate directly into better
performance due to challenges such as memory access and communication between cores.
• Shared memory and distributed memory are two different paradigms used in
parallel computing to facilitate communication and coordination among multiple
processing units.
Shared Memory:
•Definition: Shared memory is a programming model where multiple processors share a
common, global memory space.
•Communication: Processors can directly read and write to the same memory locations,
allowing for easy communication and data sharing.
•Synchronization: Since all processors have access to the same memory, synchronization
mechanisms (like locks or semaphores) are required to control access and avoid
conflicts when multiple processors attempt to modify the same data simultaneously.
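A minimal sketch of shared-memory synchronization, assuming a shared counter incremented by several threads: without the lock, two threads can read the same old value and one update is lost; with the lock, each increment is serialized.

```python
import threading

counter = 0                 # shared, globally visible data
lock = threading.Lock()     # synchronization mechanism guarding it

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread may modify counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no increments are lost while the lock is held
```

Removing the `with lock:` line makes the final count nondeterministic, which is exactly the conflict the bullet above describes.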
Distributed Memory:
•Definition: In a distributed-memory model, each processor has its own private, local memory; data is exchanged by passing messages between processors rather than through shared variables.
•Synchronization: Since processors have independent memory spaces, there is typically less need for synchronization mechanisms related to shared data. However, proper coordination is still necessary for effective communication.
•Examples: Clusters, grid computing, and many supercomputers (built from interconnected nodes, each with its own local memory) employ a distributed-memory model.