Parallel Computing System
Parallel computing is a form of computing in which jobs are broken into discrete parts that can be executed concurrently. Each part is further broken down into a series of instructions, and the instructions from different parts execute simultaneously on different CPUs.
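As a concrete illustration, here is a minimal sketch in Python of a job being broken into parts that run concurrently. The particular job (summing a large list), the chunking scheme, and the helper name partial_sum are illustrative assumptions, not anything prescribed by the model itself.

```python
# Minimal sketch: break a job into discrete parts and run them concurrently.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker executes the same series of instructions on its own part.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    # Break the job into discrete parts, one per worker.
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    chunks[-1].extend(data[n_workers * size:])  # remainder to the last part
    with Pool(n_workers) as pool:
        # The parts execute simultaneously on different CPUs.
        partials = pool.map(partial_sum, chunks)
    print(sum(partials))  # same result as summing data sequentially
```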
Based on the number of instruction streams and data streams that can be processed simultaneously, computing systems are classified into four major categories, known as Flynn's classification:
1. Single-instruction, single-data (SISD) systems –
An SISD computing system is a uniprocessor machine capable of executing a single instruction stream operating on a single data stream. In SISD, machine instructions are processed sequentially, and computers adopting this model are popularly called sequential computers. Most conventional computers have the SISD architecture. All the instructions and data to be processed have to be stored in primary memory.
Figure 2: SISD
The speed of the processing element in the SISD model is limited by the rate at which the computer can transfer information internally. Dominant representative SISD systems are the IBM PC and workstations.
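For contrast with the parallel models below, here is a minimal SISD-style sketch: one instruction stream works through one data stream in strict sequential order (the summing task is an illustrative assumption carried over from the sketch above).

```python
# SISD sketch: a single instruction stream over a single data stream.
data = list(range(1_000_000))
total = 0
for x in data:   # instructions processed one after another
    total += x   # one data element consumed per step
print(total)
```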
2. Single-instruction, multiple-data (SIMD) systems –
An SIMD system is a multiprocessor machine capable of executing the same instruction on all its CPUs while operating on different data streams. Machines based on the SIMD model are well suited to scientific computing, since it involves many vector and matrix operations. The data elements of a vector can be organized into multiple sets (N sets for an N-PE system) so that the information can be passed to all the processing elements (PEs) and each PE can process one data set.
Figure 3: SIMD
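A sketch of the SIMD idea using NumPy, which stands in here for SIMD-style hardware: one operation is applied across a whole vector of data, and the vector can be divided into N sets, one per PE. The array sizes and the choice of N are illustrative assumptions.

```python
# SIMD sketch: the same instruction applied to many data elements at once.
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# One logical instruction ("scale and add"), many data elements.
c = 2.0 * a + b
print(c[:3])

# Conceptually, the vector can be divided into N sets (N = 4 here),
# so that each of N processing elements handles one set.
n_pe = 4
sets = np.array_split(a, n_pe)
print(len(sets), sets[0].shape)
```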
3. Multiple-instruction, single-data (MISD) systems –
An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, all of which operate on the same data set. Machines built on the MISD model are not useful in most applications, and few have been built.
4. Multiple-instruction, multiple-data (MIMD) systems –
An MIMD system is a multiprocessor machine capable of executing multiple instructions on multiple data sets. Each PE in the MIMD model has a separate instruction stream and data stream, so machines built using this model are suited to a wide range of applications.
Figure 5: MIMD
MIMD machines are broadly categorized into shared-memory MIMD and distributed-
memory MIMD based on the way PEs are coupled to the main memory.
In the shared-memory MIMD model (tightly coupled multiprocessor systems), all the PEs are connected to a single global memory and they all have access to it. Communication between PEs in this model takes place through the shared memory; a modification of the data stored in the global memory by one PE is visible to all other PEs. Dominant representative shared-memory MIMD systems are Silicon Graphics machines and Sun/IBM's SMP (Symmetric Multi-Processing).
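A minimal shared-memory MIMD sketch, assuming two hypothetical tasks (squarer and reporter): each thread runs its own instruction stream, and the threads communicate purely through a shared dictionary, so a write by one PE is visible to the other. (CPython's GIL limits true parallelism here, but the communication pattern is what the model describes.)

```python
# Shared-memory MIMD sketch: two instruction streams, one global memory.
import threading

shared = {"data": list(range(10)), "result": None}  # the global memory
ready = threading.Event()

def squarer():
    # PE 1: writes its result into the shared global memory.
    shared["result"] = [x * x for x in shared["data"]]
    ready.set()

def reporter():
    # PE 2: a different instruction stream; reads what squarer wrote.
    ready.wait()
    print("sum of squares:", sum(shared["result"]))

t1 = threading.Thread(target=squarer)
t2 = threading.Thread(target=reporter)
t2.start()
t1.start()
t1.join()
t2.join()
```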
In distributed-memory MIMD machines (loosely coupled multiprocessor systems), all PEs have their own local memory. Communication between PEs in this model takes place through the interconnection network (the inter-process communication channel, or IPC). The network connecting the PEs can be configured as a tree, a mesh, or any other topology the requirement calls for.
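A matching distributed-memory MIMD sketch, assuming a multiprocessing.Queue as a stand-in for the interconnection network: each process has its own private memory, so the PEs must exchange data explicitly over the IPC channel. The worker function and the two-PE setup are illustrative assumptions.

```python
# Distributed-memory MIMD sketch: private memories, explicit IPC.
from multiprocessing import Process, Queue

def worker(pe_id, inbox, outbox):
    data = inbox.get()               # receive work over the interconnect
    outbox.put((pe_id, sum(data)))   # send the local result back

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    pes = [Process(target=worker, args=(i, inbox, outbox)) for i in range(2)]
    for p in pes:
        p.start()
    inbox.put(list(range(100)))        # one data set per PE
    inbox.put(list(range(100, 200)))
    results = [outbox.get() for _ in pes]
    for p in pes:
        p.join()
    print(results)
```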
The shared-memory MIMD architecture is easier to program but is less tolerant of failures and harder to extend than the distributed-memory MIMD model. A failure in a shared-memory MIMD system affects the entire system, whereas this is not the case in the distributed model, in which each PE can easily be isolated. Moreover, shared-memory MIMD architectures are less likely to scale, because the addition of more PEs leads to memory contention.