
ITECH5403 Comparative Programming Languages

School of Science, Engineering and Information Technology

Lab 9 – Concurrency
Introduction

As this is an 'odd' week, we're theory-based and will be doing some review questions and problems on the
topic of concurrency. In today's multi-CPU world concurrency is incredibly important – and to be able to write
blazingly fast concurrent code, you need to know the underlying mechanisms and theory of how it works.

Once you understand the concepts, the practical implementation is far easier to digest and work with.
Even if you're working with concurrency libraries like OpenCL / CUDA / Threading Building Blocks etc. that do
a lot of the heavy lifting for you, you'll still benefit significantly from having a sound theoretical knowledge of the
topic, and it'll make your life easier when you need to use concurrency in the real world =D

Remember – in these labs you aren't expected to know every single answer from memory – sometimes it's
best to do a little research / reading / re-reading of materials and come up with a great, and correct, answer
rather than just 'taking a stab' at the question and hoping you're right!

Review Questions

1. What are the four possible levels of concurrency in programs? [Q1-Mod]

The four levels of concurrency in software execution are:

Instruction Level - Executing two or more machine instructions simultaneously

Statement Level - Executing two or more high-level language statements simultaneously

Unit Level - Executing two or more subprograms simultaneously, and

Program Level - Executing two or more programs simultaneously

2. Describe and explain the difference between physical concurrency and logical concurrency. [Q7]

When more than one processor is available, and several program units from the same program literally execute
simultaneously, then this is called physical concurrency.

When there is just a single processor, but multiple applications run "at once" in an interleaved fashion (i.e. each process
runs for a short period of time, called a time slice, and then hands control back so the next process can run), then this is
called logical concurrency.

3. Explain the difference between a task / process and a subprogram. [Q13-Mod]

A task or process is a unit of a program, similar to a subprogram, that can be in concurrent execution with other units of
the same program. Each task in a program can support one thread of control.

There are three characteristics of tasks that distinguish them from subprograms:

CRICOS Provider No. 00103D


- First, tasks may be implicitly started, whereas a subprogram must be explicitly called in order for it to execute.

- Second, when a program unit invokes a task, in some cases it need not wait for the task to complete its execution
before continuing - that is, tasks are generally asynchronous.

- Third, when the execution of a task is completed, control may or may not return to the unit that started that task!
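As a quick sketch of the second characteristic, here is a minimal Java example (class and variable names are our own) showing that a started task executes asynchronously while the caller carries on, unlike a subprogram call:

```java
public class TaskDemo {
    static volatile boolean taskRan = false;

    public static void main(String[] args) throws InterruptedException {
        // Starting the thread is an explicit request, but execution is
        // asynchronous: main continues immediately rather than waiting
        // as it would for an ordinary subprogram call.
        Thread task = new Thread(() -> taskRan = true);
        task.start();
        System.out.println("main continues while the task runs");
        task.join(); // only here do we explicitly wait for the task to finish
        System.out.println("task finished: " + taskRan);
    }
}
```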

4. What is a heavyweight task, and what is a lightweight task? How do they differ? [Q12-Mod]

A heavyweight task executes in its own address space (essentially a process of its own), while lightweight tasks all run in
the same address space (like threads within a single process).

It's easier to implement lightweight tasks than heavyweight tasks, and lightweight tasks can be more efficient than
heavyweight tasks, as less effort is required to manage their execution.

5. Define competition synchronisation and cooperation synchronisation. [Q13-Mod2]

Synchronisation is a mechanism that controls the order in which tasks execute - and two kinds of synchronisation are
required when tasks share data: Cooperation, and Competition.

Cooperation synchronisation is required between task A and task B when task A must wait for task B to complete some
specific activity before task A can begin or continue its execution.

Competition synchronisation is required between two tasks when both require the use of some resource that cannot be
simultaneously used. Specifically, if task A needs to access a shared data location x while task B is accessing x, then task
A must wait for B to complete its processing of x and release the resource first.

6. Concurrent software faces many issues that fall into two broad categories: [Custom]

- Correctness, and
- Liveness.

To address these issues, programming languages use locks (whether they're semaphores or
monitors). Give an example of how a locking mechanism may affect each of the above categories of
issues.

A software correctness issue is one where multiple threads may work on the same data at the same time, resulting in the
data becoming incorrect. For example, two threads each fetch an int, increment it, and write the new value back – if the
second thread fetches the value after the first has fetched it, but before the first thread has written back the updated value,
then the second thread is working with stale data and will overwrite the value written back by the first thread with its own
value! This can lead to issues such as two increments by different threads only increasing the value by 1.

To prevent this, a locking mechanism is commonly used whereby a thread acquires a lock on a resource such as a file or
memory location, then ONLY the thread holding the lock can use the resource, until it later releases the resource.

If a second thread wishes to access a resource when the lock is held by another thread it will back off for a random
amount of time and then try to acquire the lock again. Depending on the mechanisms in place, it may back-off and try
again indefinitely or give up after a certain amount of time or a certain number of failed attempts to gain the lock on the
resource.
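A minimal Java sketch of this locking idea (names are our own, and a `synchronized` block stands in for whichever lock mechanism the language provides): two threads each perform 10,000 fetch-increment-write operations, and the lock makes each three-step operation atomic so no updates are lost:

```java
public class CounterDemo {
    static int counter = 0;
    static final Object lock = new Object();

    public static int run() throws InterruptedException {
        counter = 0;
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) {
                // Only the thread holding the lock may fetch, increment
                // and write back, so the second thread never sees stale data.
                synchronized (lock) { counter++; }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // always 20000 with the lock; often less without it
    }
}
```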



There are two conditions where a lock may become stuck so that two threads can never complete their work – these
conditions are called deadlock and livelock.

In a deadlock situation, two threads may require two locks (let's say on resources A and B) – if both threads go to acquire
locks on both resources, it may be that the first thread gets the lock on resource A and the second thread gets the lock on
resource B. HOWEVER – in our scenario each thread needs to hold BOTH LOCKS to work, but it only holds one lock –
so it waits a short while and then tries to reacquire the second lock (while the other thread is doing the same thing). If
neither thread gives up the lock it's holding (instead keeping on waiting to acquire the other lock) then we have a
deadlock situation. Ideally the 'back-off' period should be random, and after a set number of failures to obtain the second
lock each thread will release all locks it holds, allowing one thread to acquire both locks.
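The release-and-retry strategy described above can be sketched with try-locks; this is a hedged Java illustration (class, method and resource names are our own), not the only way to avoid deadlock (acquiring locks in a consistent order is another):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.locks.ReentrantLock;

public class BackoffDemo {
    static final ReentrantLock lockA = new ReentrantLock(); // resource A
    static final ReentrantLock lockB = new ReentrantLock(); // resource B
    static int shared = 0;

    // Take both locks or neither: on failure, release whatever we hold
    // and back off for a random interval before retrying.
    static void withBothLocks(Runnable work) throws InterruptedException {
        while (true) {
            if (lockA.tryLock()) {
                try {
                    if (lockB.tryLock()) {
                        try { work.run(); return; } finally { lockB.unlock(); }
                    }
                } finally { lockA.unlock(); }
            }
            Thread.sleep(ThreadLocalRandom.current().nextInt(1, 5)); // random back-off
        }
    }

    public static int run() throws InterruptedException {
        shared = 0;
        Runnable job = () -> {
            for (int i = 0; i < 1_000; i++) {
                try { withBothLocks(() -> shared++); }
                catch (InterruptedException ignored) { return; }
            }
        };
        Thread t1 = new Thread(job), t2 = new Thread(job);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return shared;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 2000: both threads complete all their work
    }
}
```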

A livelock situation starts the same way as a deadlock, but rather than waiting to acquire the 'missing' lock, each thread
releases the lock it HAS and then attempts to re-acquire both locks again. When the two threads are in sync, but each
holds a lock on 'the other resource', they can swap locks back and forth – forever! So thread 1 holds the lock on resource
A, and thread 2 holds the lock on resource B – neither can acquire the other lock – so they both release their resource and
try the other resource first… Now thread 1 holds the lock for resource B and thread 2 holds the lock for resource A! So
they've swapped – but neither thread can get the other resource, so they both release their locks and try the other
resource first… and so on, swapping locks forever!

7. Describe the five basic states that a task can be in. [Q15]

New – the task is in the process of being created.

Ready – the task has successfully been created and is scheduled to be executed.

Running – the task is currently being executed. When a task's time slice expires it reverts to the ready state.

Blocked – the task is waiting for a resource or for I/O to complete.

Dead – the task has completed.

8. With regard to threading, explain the purpose of each of the following common threading methods:
[Q34/35/36]
- sleep(),
- join() and
- yield().

The sleep method causes a thread to pause for a specified period of time. This may be used to allow other processes to
execute, or to "wait" for a resource to become available. In computer games, it may also be used to make a process wait
until a minimum given duration has elapsed so that the game doesn't run faster than a maximum speed (for example, to
run at 60fps a game has 16.67ms to execute per frame – if it finishes all its work in 10ms then we may opt to sleep for
6.67ms before commencing processing on the next frame so that the game doesn't run too quickly). Calling sleep will
ALWAYS cause the thread to wait for the given amount of time.
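A small Java sketch of the frame-rate-capping use of sleep (class and method names are our own):

```java
public class FrameCap {
    // How long to sleep so that work + sleep fills the whole frame budget.
    static long sleepMillisFor(long frameBudgetMillis, long workMillis) {
        return Math.max(0, frameBudgetMillis - workMillis);
    }

    public static void main(String[] args) throws InterruptedException {
        long budget = 1000 / 60;                       // ~16 ms per frame at 60 fps
        long start = System.currentTimeMillis();
        Thread.sleep(10);                              // stands in for the frame's real work
        long work = System.currentTimeMillis() - start;
        Thread.sleep(sleepMillisFor(budget, work));    // pad the frame out to the budget
        System.out.println("frame complete");
    }
}
```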

The join method is used to force a thread to delay its execution until the run method of another thread has completed its
execution. Join is used when a thread's processing cannot continue until the work of another thread is complete. The
join method puts the thread that calls it in the blocked state, which can be ended only by the completion of the thread on
which join was called. If that thread happens to be blocked, there is the possibility of deadlock! To prevent this, join can
be called with a parameter specifying a time limit in milliseconds on how long the calling thread will wait for the other
thread to complete.



For example, we might wait for two seconds like this: myThread.join(2000);
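A slightly fuller sketch of a timed join in Java (names are our own): the caller waits at most 100 ms, then checks whether the other thread actually finished in time:

```java
public class JoinTimeoutDemo {
    // Returns true if the worker was still alive when the timed join expired.
    static boolean timedJoinExpired() throws InterruptedException {
        Thread slow = new Thread(() -> {
            try { Thread.sleep(10_000); }            // simulates a long-running task
            catch (InterruptedException ignored) {}  // interrupt ends the sleep early
        });
        slow.start();
        slow.join(100);            // wait at most 100 ms instead of blocking indefinitely
        boolean alive = slow.isAlive();
        slow.interrupt();          // stop the worker so the program can exit cleanly
        return alive;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(timedJoinExpired() ? "gave up waiting" : "worker finished in time");
    }
}
```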

The yield method is similar to the sleep method in that it causes the currently executing thread to temporarily pause and
allow other threads to execute. In essence, calling yield causes a thread to give up the remainder of its time slice – it is a
hint to the scheduler that the current thread is willing to step aside so that other runnable (possibly higher-priority)
threads can execute. The difference between yield and sleep is that while sleep always causes the thread to wait for the
given amount of time, if we call yield and there is no other thread ready to execute then the yielding thread simply
resumes execution right away.

Problem Set

1. What is the best action a system can take when deadlock is detected? [PS2]

When deadlock is detected, the system has to break the cycle: the usual action is to pick one of the deadlocked tasks as a
'victim' and force it to release the locks it holds (by aborting or rolling it back), so that the other task can acquire both
locks and make progress. Recall the scenario from question 6: two threads each need locks on resources A and B, the
first thread holds the lock on A and the second holds the lock on B, and neither will give up the lock it holds while
waiting for the other. Having the victim release all of its locks – ideally combined with a random back-off period before
it retries – resolves the deadlock and makes it unlikely that the same collision immediately recurs.

2. Busy waiting is a method whereby a task waits for a given event by continuously checking for that
event to occur. What is the main problem with this approach? [PS3]

Busy-waiting is equivalent to constantly asking "Are we there yet?". To perform the query, the processor must
perform work, and it is very wasteful of processor time as we may be constantly asking if a resource is ready / if we
can proceed – perhaps thousands of times per second!

A far better approach is to use an event-driven model whereby a process can be notified when something occurs,
rather than having to constantly query whether something has occurred in order to proceed.
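A Java sketch of the event-driven alternative using a monitor's wait/notify (names are our own) – the waiting thread blocks until it is notified instead of spinning:

```java
public class EventDemo {
    static final Object monitor = new Object();
    static boolean ready = false;

    public static boolean run() throws InterruptedException {
        ready = false;
        Thread waiter = new Thread(() -> {
            synchronized (monitor) {
                while (!ready) {                 // guarded wait: no busy spinning
                    try { monitor.wait(); }      // blocks until another thread notifies
                    catch (InterruptedException e) { return; }
                }
            }
        });
        waiter.start();
        Thread.sleep(50);                        // some other work happens first...
        synchronized (monitor) {                 // ...then the event is signalled
            ready = true;
            monitor.notify();
        }
        waiter.join();
        return ready;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("event delivered: " + run());
    }
}
```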

3. Suppose two tasks, A and B, must use the shared variable Buf_Size. Task A adds 2 to Buf_Size, and
task B subtracts 1 from it. Assume that such arithmetic operations are done by the three-step process
of fetching the current value, performing the arithmetic, and putting the new value back. In the absence
of competition synchronization, what sequences of events are possible and what values result from
these operations? Assume that the initial value of Buf_Size is 6.

Task A       Task B       Buf_Size after Task A op    Buf_Size after Task B op
Get B_S      Get B_S      6                           6
Add 2        Sub 1        6, but task copy is 8       6, but task copy is 5
Write B_S    Write B_S    8                           5

Cycle repeats, decreasing Buf_Size by 1 each time.



Task A       Task B       Buf_Size after Task A op    Buf_Size after Task B op
Get B_S      IDLE         6                           -
Add 2        Get B_S      6, but task copy is 8       6
Write B_S    Sub 1        8                           8, but task copy is 5
Get B_S      Write B_S    8                           5
Add 2        Get B_S      5, but task copy is 10      5
Write B_S    Sub 1        10                          10, but task copy is 4
Get B_S      Write B_S    10                          4
Add 2        Get B_S      4, but task copy is 12      4
Write B_S    Sub 1        12                          12, but task copy is 3

Cycle repeats with Buf_Size both GROWING (task A) before being overwritten and SHRINKING (task
B) – the final state of Buf_Size will depend on which task performs the final write operation!

Task A       Task B       Buf_Size after Task A op    Buf_Size after Task B op
Get B_S      IDLE         6                           -
Add 2        IDLE         6, but task copy is 8       -
Write B_S    Get B_S      8                           8
Get B_S      Sub 1        8                           8, but task copy is 7
Add 2        Write B_S    8, but task copy is 10      7
Write B_S    Get B_S      10                          10
Get B_S      Sub 1        10                          10, but task copy is 9
Add 2        Write B_S    10, but task copy is 12     9
Write B_S    Get B_S      12                          12

Cycle repeats with Buf_Size growing by 2 each cycle – task B's writes are overwritten, so only
task A's additions survive.
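The first interleaving above can be reproduced deterministically in straight-line Java by making each task's local copy explicit (class and variable names are our own):

```java
public class LostUpdateTrace {
    // Reproduces the first interleaving: both tasks fetch Buf_Size
    // before either writes back, so task A's update is lost.
    static int firstInterleaving() {
        int bufSize = 6;
        int aCopy = bufSize;   // task A fetches 6
        int bCopy = bufSize;   // task B fetches 6 (stale once A writes)
        aCopy += 2;            // A computes 8
        bCopy -= 1;            // B computes 5
        bufSize = aCopy;       // A writes 8 back
        bufSize = bCopy;       // B writes 5 back – A's update is overwritten
        return bufSize;
    }

    public static void main(String[] args) {
        System.out.println(firstInterleaving()); // prints 5
    }
}
```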

