Parallel and Distributed Computing Lecture#14

1. Scalability issues in distributed systems include controlling the cost of physical resources as the system grows, controlling performance loss as data size increases, and preventing software resources from running out.
2. In distributed systems, a scheduler is responsible for managing task requests and determining which tasks to run next, on which node, and how many tasks to run in parallel to balance the load and maximize performance.
3. Load balancing is important for distributing tasks evenly across processors to make each one equally busy and finish work at the same time.

Uploaded by Ihsan Ullah

PARALLEL AND DISTRIBUTED COMPUTING
LECTURE#14
Topics:
1. Issues/challenges in Scalability
2. Scheduling
3. Storage devices
Scalability issues/challenges
• The design of scalable distributed systems presents the following challenges:
1. Controlling the cost of physical resources: As the demand for a resource grows, it should be possible to extend the system, at reasonable cost, to meet it.
• It must be possible to add server computers to avoid the performance bottleneck that would arise if a single file server had to handle all file access requests.
• For example, if a single file server can support 20 users, then two such servers should be able to support 40 users. Although that sounds an obvious goal, it is not necessarily easy to achieve in practice.
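The linear-scaling goal in the file-server example can be sketched as a small calculation. This is an illustrative sketch only: the per-server capacity of 20 users comes from the example above, and real systems rarely scale perfectly linearly.

```python
# Sketch of the linear-scaling goal: if one file server supports a fixed
# number of users, n identical servers should support n times as many.
USERS_PER_SERVER = 20  # assumed capacity of one file server (from the example)

def servers_needed(total_users: int) -> int:
    """Smallest number of identical servers for the given user count,
    assuming perfectly linear scaling (rarely true in practice)."""
    return -(-total_users // USERS_PER_SERVER)  # ceiling division

print(servers_needed(40))  # 2 servers for 40 users
print(servers_needed(45))  # 3 servers for 45 users
```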
2. Controlling the performance loss: Consider the management of a set of data whose size is proportional to the number of users or resources in the system.
• Every algorithm has advantages and disadvantages, and as the data grows, a poorly chosen algorithm can dominate the overall cost.
• So we should use algorithms that support scaling up/scaling out, to prevent performance loss.
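As a toy illustration of how the choice of algorithm governs performance loss as data grows (the node names and record format here are made up for the example, not from the lecture), compare a linear scan with a hash-table lookup over the same name-to-address data:

```python
# Illustrative data: a name-to-address mapping whose size grows with the system.
records = [(f"node{i}", f"addr{i}") for i in range(100_000)]
table = dict(records)  # hash-based index over the same data

def linear_lookup(name):
    # O(n): cost grows with the number of records -> performance loss at scale
    for key, addr in records:
        if key == name:
            return addr
    return None

# Hash lookup is O(1) on average: cost stays roughly flat as the data grows.
assert linear_lookup("node99999") == table["node99999"]
```

Both return the same answer; only the cost per lookup differs as the data set scales.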
3. Preventing software resources running out: In the late 1970s it was decided to use 32 bits for Internet (IP) addresses, and that supply of available Internet addresses is now running out.
• For this reason, a new version of the protocol with 128-bit Internet addresses (IPv6) is being adopted, and this will require modifications to many software components.
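The scale of the two address spaces can be checked directly with a quick arithmetic sketch (not part of the original slides):

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv4_space)                 # 4294967296 (about 4.3 billion addresses)
print(ipv6_space // ipv4_space)   # IPv6 offers 2**96 times more addresses
```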
4. Avoiding performance bottlenecks: The term "bottleneck" refers both to an overloaded network and to the state of a computing device in which one component is unable to keep pace with the rest of the system, thus slowing overall performance.
• So keeping the load balanced across all computing components is necessary.
Distributed System Scheduling
• Introduction: In distributed computing, a scheduler is responsible for managing incoming task requests and determining which tasks to run next, on which node to run them, and how many tasks to run in parallel on that node.
• It is the resource-management component of a system that moves jobs around the processors to balance load and maximize overall performance.
• It makes most sense in LAN-level distributed systems.
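A minimal sketch of the scheduling decision described above. The least-loaded dispatch policy and the node names are illustrative assumptions, not an algorithm given in the lecture:

```python
import heapq

class Scheduler:
    """Toy scheduler: dispatch each incoming task to the least-loaded node."""

    def __init__(self, nodes):
        # Heap of (current load, node name); the least-loaded node is on top.
        self.heap = [(0, n) for n in nodes]
        heapq.heapify(self.heap)

    def dispatch(self, task_cost):
        load, node = heapq.heappop(self.heap)  # pick the least-loaded node
        heapq.heappush(self.heap, (load + task_cost, node))
        return node

sched = Scheduler(["nodeA", "nodeB"])
print([sched.dispatch(1) for _ in range(4)])  # equal-cost tasks alternate nodes
```

With equal task costs this degenerates to round-robin; with varying costs the heap keeps the loads balanced.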
Why we need distributed system scheduling
• Because of the uneven distribution of tasks across individual processors.
• It makes most sense for homogeneous systems with even loads.
Load Balancing
• It is the way of distributing load units (jobs or tasks) across a set of processors connected to a network, which may be distributed across the globe.
• With a load-balancing strategy it is possible to make every processor equally busy and to finish the work at approximately the same time.
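One simple way to pursue the goal above, making every processor roughly equally busy, is a greedy assignment that gives each job to the currently least-loaded processor. The job costs below are assumed for illustration; this is a sketch, not a strategy named in the lecture:

```python
def balance(jobs, num_procs):
    """Greedily assign each job to the currently least-loaded processor."""
    loads = [0] * num_procs
    assignment = [[] for _ in range(num_procs)]
    # Placing larger jobs first tends to even out the finishing times.
    for job in sorted(jobs, reverse=True):
        p = loads.index(min(loads))  # least-loaded processor so far
        loads[p] += job
        assignment[p].append(job)
    return loads, assignment

loads, assignment = balance([7, 5, 4, 3, 1], num_procs=2)
print(loads)  # [10, 10] -- both processors finish at the same time
```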
THANKS
