Unit 3&4
Introduction
Two fundamental abstraction facilities
– Process abstraction
Emphasized from early days
– Data abstraction
Emphasized in the 1980s
Scope Example
MAIN
  - declaration of x -
  SUB1
    - declaration of x -
    ...
    call SUB2
    ...
  SUB2
    ...
    - reference to x -
    ...
  ...
  call SUB1
  ...
Static scoping
– Reference to x is to MAIN's x
Dynamic scoping
– Reference to x is to SUB1's x
Evaluation of Dynamic Scoping:
– Advantage: convenience
– Disadvantage: poor readability
3.3 Local Referencing Environments
Def: The referencing environment of a statement is the collection of all names
that are visible in the statement
In a static-scoped language, it is the local variables plus all of the
visible variables in all of the enclosing scopes
A subprogram is active if its execution has begun but has not yet terminated
In a dynamic-scoped language, the referencing environment is the local
variables plus all visible variables in all active subprograms
Local variables can be stack-dynamic (bound to storage when the subprogram is called)
– Advantages
Support for recursion
Storage for locals is shared among some subprograms
– Disadvantages
Allocation/de-allocation, initialization time
Indirect addressing
Subprograms cannot be history sensitive
Local variables can be static
– More efficient (no indirection)
– No run-time overhead
Co-Routines
A coroutine is a subprogram that has multiple entries and controls them itself
Also called symmetric control: caller and called coroutines are on a more equal
basis
A coroutine call is named a resume
The first resume of a coroutine is to its beginning, but subsequent calls enter at
the point just after the last executed statement in the coroutine
Coroutines repeatedly resume each other, possibly forever
Coroutines provide quasi-concurrent execution of program units (the coroutines);
their execution is interleaved, but not overlapped
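C# has no symmetric coroutines, but its iterator methods (yield return) illustrate the key resume behavior: the first resume starts at the beginning of the body, and each later resume continues just after the last executed yield. This is only a rough sketch; the method name Counter is made up, and control here always returns to the caller rather than to a peer coroutine, so it is asymmetric.

using System;
using System.Collections.Generic;

class CoroutineSketch {
    // Each call to MoveNext (made by foreach) resumes execution of Counter
    // just after the most recently executed yield return.
    static IEnumerable<int> Counter() {
        Console.WriteLine("first resume: start of body");
        yield return 1;
        Console.WriteLine("second resume: after first yield");
        yield return 2;
        Console.WriteLine("third resume: after second yield");
        yield return 3;
    }

    static void Main() {
        foreach (int n in Counter())        // each iteration "resumes" Counter
            Console.WriteLine("caller got " + n);
    }
}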
Figure 5.2 and Figure 5.3: Possible Execution Controls
Figure 5.4: Possible Execution Controls with Loops
Language Examples
Language Examples: Ada
The encapsulation construct is called a package
– Specification package (the interface)
– Body package (implementation of the entities named in the specification)
Information Hiding
– The spec package has two parts, public and private
– The name of the abstract type appears in the public part of the specification
package. This part may also include representations of unhidden types
– The representation of the abstract type appears in a part of the specification
called the private part
More restricted form with limited private types
– Private types have built-in operations for assignment and comparison
– Limited private types have NO built-in operations
Reasons for the public/private spec package:
1. The compiler must be able to see the representation after seeing only the
spec package (clients are compiled against the spec alone, so the representation
cannot be hidden in the body package)
2. Clients must be able to see the type name, but must not depend on its
representation (so the representation goes in the private part rather than the
public part)
Having part of the implementation details (the representation) in the spec
package and part (the method bodies) in the body package is not good
C# Property Example
public class Weather {
  public int DegreeDays {   //** DegreeDays is a property
    get { return degreeDays; }
    set {
      if (value < 0 || value > 30)
        Console.WriteLine("Value is out of range: {0}", value);
      else
        degreeDays = value;
    }
  }
  private int degreeDays;
  ...
}
...
Weather w = new Weather();
int degreeDaysToday, oldDegreeDays;
...
w.DegreeDays = degreeDaysToday;
...
oldDegreeDays = w.DegreeDays;
Concurrency
Concurrency can occur at four levels:
– Machine instruction level
– High-level language statement level
– Unit level
– Program level
Because there are no language issues in instruction- and program-level
concurrency, they are not addressed here
Multiprocessor Architectures
Late 1950s - one general-purpose processor and one or more special purpose
processors for input and output operations
Early 1960s - multiple complete processors, used for program-level concurrency
Mid-1960s - multiple partial processors, used for instruction-level concurrency
Single-Instruction Multiple-Data (SIMD) machines
Multiple-Instruction Multiple-Data (MIMD) machines
– Independent processors that can be synchronized (unit-level concurrency)
Categories of Concurrency
A thread of control in a program is the sequence of program points reached as
control flows through the program
Categories of Concurrency:
– Physical concurrency - Multiple independent processors (multiple threads of
control)
– Logical concurrency - The appearance of physical concurrency is presented by
time-sharing one processor (software can be designed as if there were
multiple threads of control)
Coroutines (quasi-concurrency) have a single thread of control
Motivations for Studying Concurrency
Involves a different way of designing software that can be very useful; many
real-world situations involve concurrency
Multiprocessor computers capable of physical concurrency are now widely used
Subprogram-Level Concurrency
A task or process is a program unit that can be in concurrent execution with
other program units
Tasks differ from ordinary subprograms in that:
– A task may be implicitly started
– When a program unit starts the execution of a task, it is not necessarily
suspended
– When a task's execution is completed, control may not return to the caller
Tasks usually work together
Two General Categories of Tasks
Heavyweight tasks execute in their own address space
Lightweight tasks all run in the same address space
A task is disjoint if it does not communicate with or affect the execution of any
other task in the program in any way
Task Synchronization
A mechanism that controls the order in which tasks execute
Two kinds of synchronization
– Cooperation synchronization
– Competition synchronization
Task communication is necessary for synchronization, provided by:
– Shared nonlocal variables
– Parameters
– Message passing
Kinds of synchronization
Cooperation: Task A must wait for task B to complete some specific activity
before task A can continue its execution, e.g., the producer-consumer problem
Competition: Two or more tasks must use some resource that cannot be
simultaneously used, e.g., a shared counter
– Competition is usually provided by mutually exclusive access (approaches
are discussed later)
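A minimal sketch of competition synchronization for the shared-counter case, using C#'s lock statement to make access mutually exclusive. The class, method names, and iteration count are assumptions for illustration only.

using System;
using System.Threading;

class SharedCounter {
    private static int count = 0;
    private static readonly object guard = new object();

    static void Work() {
        for (int i = 0; i < 100000; i++)
            lock (guard) { count++; }   // mutually exclusive access to the counter
    }

    static void Main() {
        var t1 = new Thread(Work);
        var t2 = new Thread(Work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine(count);       // 200000 with the lock; updates may be lost without it
    }
}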
Semaphores
Dijkstra - 1965
A semaphore is a data structure consisting of a counter and a queue for storing
task descriptors
Semaphores can be used to implement guards on the code that accesses shared
data structures
Semaphores have only two operations, wait and release (originally called P and
V by Dijkstra)
Semaphores can be used to provide both competition and cooperation
synchronization
Cooperation Synchronization with Semaphores
Example: A shared buffer
The buffer is implemented as an ADT with the operations DEPOSIT and FETCH
as the only ways to access the buffer
Use two semaphores for cooperation: emptyspots and fullspots
The semaphore counters are used to store the numbers of empty spots and full
spots in the buffer
DEPOSIT must first check emptyspots to see if there is room in the buffer
– If there is room, the counter of emptyspots is decremented and the value is
inserted
– If there is no room, the caller is stored in the queue of emptyspots
– When DEPOSIT is finished, it must increment the counter of fullspots
FETCH must first check fullspots to see if there is a value
– If there is a full spot, the counter of fullspots is decremented and the value is
removed
– If there are no values in the buffer, the caller must be placed in the queue of
fullspots
– When FETCH is finished, it increments the counter of emptyspots
The operations of FETCH and DEPOSIT on the semaphores are accomplished
through two semaphore operations named wait and release
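A sketch of the DEPOSIT/FETCH scheme using C#'s SemaphoreSlim in the role of emptyspots and fullspots. The class name, the buffer capacity of 100, and the extra lock that provides competition synchronization on the queue itself are assumptions, not part of the text.

using System.Collections.Generic;
using System.Threading;

class SharedBuffer {
    private readonly Queue<int> buffer = new Queue<int>();
    private readonly SemaphoreSlim emptyspots = new SemaphoreSlim(100, 100); // counts empty spots
    private readonly SemaphoreSlim fullspots  = new SemaphoreSlim(0, 100);   // counts full spots
    private readonly object guard = new object();   // mutual exclusion on the queue itself

    public void Deposit(int value) {
        emptyspots.Wait();                 // wait for room; caller blocks (is queued) if none
        lock (guard) buffer.Enqueue(value);
        fullspots.Release();               // one more full spot
    }

    public int Fetch() {
        fullspots.Wait();                  // wait for a value; caller blocks (is queued) if none
        int value;
        lock (guard) value = buffer.Dequeue();
        emptyspots.Release();              // one more empty spot
        return value;
    }
}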
Monitors
Ada, Java, C#
The idea: encapsulate the shared data and its operations to restrict access
A monitor is an abstract data type for shared data
Competition Synchronization
Shared data is resident in the monitor (rather than in the client units)
All access is resident in the monitor
– The monitor implementation guarantees synchronized access by allowing only one
access at a time
– Calls to monitor procedures are implicitly queued if the monitor is busy at
the time of the call
Cooperation Synchronization
Cooperation between processes is still a programming task
– Programmer must guarantee that a shared buffer does not experience
underflow or overflow
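A sketch of the monitor idea in C#: the shared buffer and its operations are encapsulated in one class, lock provides the one-access-at-a-time guarantee, and the programmer supplies the cooperation synchronization that guards against underflow and overflow with Monitor.Wait and Monitor.PulseAll. Class and member names, and the capacity, are assumptions.

using System.Collections.Generic;
using System.Threading;

class BoundedBufferMonitor {
    private readonly Queue<int> buffer = new Queue<int>();
    private readonly object gate = new object();
    private const int Capacity = 100;

    public void Deposit(int value) {
        lock (gate) {                         // only one caller is inside the monitor at a time
            while (buffer.Count == Capacity)
                Monitor.Wait(gate);           // programmer-supplied guard against overflow
            buffer.Enqueue(value);
            Monitor.PulseAll(gate);
        }
    }

    public int Fetch() {
        lock (gate) {
            while (buffer.Count == 0)
                Monitor.Wait(gate);           // programmer-supplied guard against underflow
            int value = buffer.Dequeue();
            Monitor.PulseAll(gate);
            return value;
        }
    }
}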
Message Passing
Message passing is a general model for concurrency
– It can model both semaphores and monitors
– It is not just for competition synchronization
Central idea: task communication is like seeing a doctor--most of the time she
waits for you or you wait for her, but when you are both ready, you get together,
or rendezvous
Message Passing Rendezvous
To support concurrent tasks with message passing, a language needs:
– A mechanism to allow a task to indicate when it is willing to accept messages
– A way to remember who is waiting to have its message accepted and some
“fair” way of choosing the next message
When a sender task's message is accepted by a receiver task, the actual
message transmission is called a rendezvous
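This is not Ada's rendezvous mechanism, but the handshake idea can be sketched in C# with two semaphores: neither task proceeds past the rendezvous point until the other is ready. All names here are hypothetical.

using System;
using System.Threading;

class RendezvousSketch {
    private static int message;
    private static readonly SemaphoreSlim sent     = new SemaphoreSlim(0, 1);
    private static readonly SemaphoreSlim accepted = new SemaphoreSlim(0, 1);

    static void Sender() {
        message = 42;
        sent.Release();       // announce that a message is ready
        accepted.Wait();      // block until the receiver has accepted it: the rendezvous
    }

    static void Receiver() {
        sent.Wait();          // block until a sender is ready
        Console.WriteLine("received " + message);
        accepted.Release();   // let the sender continue
    }

    static void Main() {
        var r = new Thread(Receiver);
        var s = new Thread(Sender);
        r.Start(); s.Start();
        r.Join(); s.Join();
    }
}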
Binary Semaphores
For situations where the data to which access is to be controlled is NOT
encapsulated in a task
task Binary_Semaphore is
  entry Wait;
  entry Release;
end Binary_Semaphore;

task body Binary_Semaphore is
begin
  loop
    accept Wait;
    accept Release;
  end loop;
end Binary_Semaphore;
Concurrency in Ada 95
Ada 95 includes Ada 83 features for concurrency, plus two new features
– Protected objects: A more efficient way of implementing shared data to allow
access to a shared data structure to be done without rendezvous
– Asynchronous communication
Java Threads
The concurrent units in Java are methods named run
– A run method code can be in concurrent execution with other such methods
– The process in which the run methods execute is called a thread
class MyThread extends Thread {
  public void run() { ... }
}
...
Thread myTh = new MyThread();
myTh.start();
Thread Priorities
A thread's default priority is the same as that of the thread that created it
– If main creates a thread, its default priority is NORM_PRIORITY
The Thread class defines two other priority constants, MAX_PRIORITY and
MIN_PRIORITY
The priority of a thread can be changed with the method setPriority
C# Threads
Loosely based on Java but there are significant differences
Basic thread operations
– Any method can run in its own thread
– A thread is created by creating a Thread object
– Creating a thread does not start its concurrent execution; it must be
requested through the Start method
– A thread can be made to wait for another thread to finish with Join
– A thread can be suspended with Sleep
– A thread can be terminated with Abort
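A minimal sketch of these basic operations (class and method names are hypothetical; Abort is not shown):

using System;
using System.Threading;

class ThreadSketch {
    static void Work() {
        Console.WriteLine("worker running");
        Thread.Sleep(500);               // suspend this thread for 500 ms
        Console.WriteLine("worker done");
    }

    static void Main() {
        Thread t = new Thread(Work);     // creating the Thread object does not start it
        t.Start();                       // concurrent execution begins here
        t.Join();                        // wait for the worker thread to finish
    }
}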
Synchronizing Threads
Three ways to synchronize C# threads
– The Interlocked class
Used when the only operations that need to be synchronized are incrementing
or decrementing of an integer
– The lock statement
Used to mark a critical section of code in a thread:
lock (expression) { … }
– The Monitor class
Provides four methods that can be used to provide more sophisticated
synchronization
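A small sketch of the Interlocked case, where the only synchronized operation is incrementing an integer (the counter and method names are hypothetical):

using System.Threading;

class Counters {
    private static int hits = 0;

    public static void RecordHit() {
        // Atomic increment; no lock statement or Monitor methods needed
        Interlocked.Increment(ref hits);
    }
}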
C#’s Concurrency Evaluation
An advance over Java threads, e.g., any method can run in its own thread
Thread termination is cleaner than in Java
Synchronization is more sophisticated
Statement-Level Concurrency
Objective: Provide a mechanism that the programmer can use to inform the
compiler of ways it can map the program onto a multiprocessor architecture
Minimize communication among processors and the memories of the other
processors
High-Performance Fortran
A collection of extensions that allow the programmer to provide information to
the compiler to help it optimize code for multiprocessor computers
Specify the number of processors, the distribution of data over the memories of
those processors, and the alignment of data
Primary HPF Specifications
Number of processors
!HPF$ PROCESSORS procs (n)
Distribution of data
!HPF$ DISTRIBUTE (kind) ONTO procs :: identifier_list
– kind can be BLOCK (distribute data to processors in blocks) or
CYCLIC (distribute data to processors one element at a time)
Relate the distribution of one array with that of another
!HPF$ ALIGN array1_element WITH array2_element
Statement-Level Concurrency Example
REAL list_1(1000), list_2(1000)
INTEGER list_3(500), list_4(501)
!HPF$ PROCESSORS procs (10)
!HPF$ DISTRIBUTE (BLOCK) ONTO procs :: list_1, list_2
!HPF$ ALIGN list_1(index) WITH list_4(index+1)
…
list_1(index) = list_2(index)
list_3(index) = list_4(index+1)
FORALL statement is used to specify a list of statements that may be executed
concurrently
FORALL (index = 1:1000)
list_1(index) = list_2(index)
Specifies that all 1,000 RHSs of the assignments can be evaluated before any
assignment takes place
Exception Handling & Logic Programming Language
Design Issues
How are user-defined exceptions specified?
Should there be default exception handlers for programs that do not provide
their own?
Can built-in exceptions be explicitly raised?
Are hardware-detectable errors treated as exceptions that can be handled?
Are there any built-in exceptions?
How can exceptions be disabled, if at all?
How and where are exception handlers specified, and what is their scope?
How is an exception occurrence bound to an exception handler?
Can information about the exception be passed to the handler?
Where does execution continue, if at all, after an exception handler completes
its execution? (continuation vs. resumption)
Is some form of finalization provided?
Figure 7.1 Exception Handling Control Flow
Throwing Exceptions
Exceptions are all raised explicitly by the statement: throw [expression];
The brackets are metasymbols
A throw without an operand can only appear in a handler; when it appears, it
simply re-raises the exception, which is then handled elsewhere
The type of the expression disambiguates the intended handler
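The behavior described here (raising by throwing an expression, selecting a handler by the type of the thrown value, and re-raising with a bare throw inside a handler) can be illustrated with an analogous C# sketch; the class, method, and exception choices are assumptions for illustration, not the notation the section itself uses.

using System;

class ThrowSketch {
    static void Risky(int n) {
        if (n < 0)
            throw new ArgumentOutOfRangeException(nameof(n));  // raise by throwing an expression
    }

    static void Main() {
        try {
            Risky(-1);
        } catch (ArgumentOutOfRangeException e) {  // handler chosen by the type of the thrown object
            Console.WriteLine("out of range: " + e.Message);
            throw;  // bare throw inside a handler: re-raises the same exception, to be handled elsewhere
        }
    }
}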
Unhandled Exceptions
An unhandled exception is propagated to the caller of the function in which it is
raised
This propagation continues to the main function
Continuation
After a handler completes its execution, control flows to the first statement after
the last handler in the sequence of handlers of which it is an element
Other design choices
– All exceptions are user-defined
– Exceptions are neither specified nor declared
– Functions can list the exceptions they may raise (the throw clause)
– Without such a specification, a function can raise any exception
Evaluation
It is odd that exceptions are not named and that hardware- and system
software-detectable exceptions cannot be handled
Binding exceptions to handlers through the type of the parameter certainly does
not promote readability