Unit 3 & 4

UNIT-3

Subprograms and Blocks

Introduction
 Two fundamental abstraction facilities
– Process abstraction
 Emphasized from early days
– Data abstraction
 Emphasized in the 1980s

3.1 Fundamentals of Subprograms


 Each subprogram has a single entry point
 The calling program is suspended during execution of the called subprogram
 Control always returns to the caller when the called subprogram's execution
terminates
Basic Definitions
 A subprogram definition describes the interface to and the actions of the
subprogram abstraction
 A subprogram call is an explicit request that the subprogram be executed
 A subprogram header is the first part of the definition, including the name, the
kind of subprogram, and the formal parameters
 The parameter profile of a subprogram is the number, order, and types of its
parameters
 The protocol is a subprogram's parameter profile and, if it is a function, its
return type
 Function declarations in C and C++ are often called prototypes
 A subprogram declaration provides the protocol, but not the body, of the
subprogram
 A formal parameter is a dummy variable listed in the subprogram
header and used in the subprogram
 An actual parameter represents a value or address used in the
subprogram call statement
Actual/Formal Parameter Correspondence
 Positional
– The binding of actual parameters to formal parameters is by position: the
first actual parameter is bound to the first formal parameter and so forth
– Safe and effective
 Keyword
– The name of the formal parameter to which an actual parameter is to be
bound is specified with the actual parameter
– Parameters can appear in any order
Formal Parameter Default Values
 In certain languages (e.g., C++, Ada), formal parameters can have default values
(used if no actual parameter is passed)
– In C++, default parameters must appear last because parameters are
positionally associated
 C# methods can accept a variable number of parameters as long as they are of
the same type
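Java's varargs feature is analogous to the C# facility described above; the following is a minimal sketch (class and method names are illustrative, not taken from these notes):

public class VarargsDemo {
    // A variable number of int arguments arrives inside the method as an int[]
    static int sum(int... values) {
        int total = 0;
        for (int v : values)
            total += v;
        return total;
    }
    public static void main(String[] args) {
        System.out.println(sum(1, 2, 3)); // 6
        System.out.println(sum());        // 0 -- an empty argument list is allowed
    }
}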
Procedures and Functions
 There are two categories of subprograms
– Procedures are collections of statements that define parameterized
computations
– Functions structurally resemble procedures but are semantically modeled on
mathematical functions
 They are expected to produce no side effects
 In practice, program functions have side effects

3.2 Design Issues for Subprograms


 What parameter passing methods are provided?
 Are parameter types checked?
 Are local variables static or dynamic?
 Can subprogram definitions appear in other subprogram definitions?
 Can subprograms be overloaded?
 Can subprograms be generic?

Scope and Lifetime


 The scope of a variable is the range of statements over which it is visible
 The nonlocal variables of a program unit are those that are visible but not
declared there
 The scope rules of a language determine how references to names are
associated with variables
 Scope and lifetime are sometimes closely related, but are different concepts
 Consider a static variable in a C or C++ function
Static scope
– Based on program text
– To connect a name reference to a variable, you (or the compiler) must
find the declaration
– Search process: search declarations, first locally, then in increasingly
larger enclosing scopes, until one is found for the given name
– Enclosing static scopes (to a specific scope) are called its static ancestors;
the nearest static ancestor is called its static parent
 Variables can be hidden from a unit by having a "closer" variable with the
same name
 C++ and Ada allow access to these "hidden" variables
– In Ada: unit.name
– In C++: class_name::name
 Blocks
– A method of creating static scopes inside program units--from ALGOL 60
– Examples:
C and C++:
for (...) {
  int index;
  ...
}
Ada:
declare
  LCL : FLOAT;
begin
  ...
end;
 Evaluation of Static Scoping
 Consider the example:
Assume MAIN calls A and B
A calls C and D
B calls A and E
 Suppose the spec is changed so that D must now access some data in B
 Solutions:
– Put D in B (but then C can no longer call it and D cannot access A's
variables)
– Move the data from B that D needs to MAIN (but then all procedures can
access them)
 Same problem for procedure access
 Overall: static scoping often encourages many globals
Dynamic Scope
– Based on calling sequences of program units, not their textual
layout (temporal versus spatial)
– References to variables are connected to declarations by searching
back through the chain of subprogram calls that forced execution to this
point

Scope Example
MAIN
  - declaration of x
  SUB1
    - declaration of x
    ...
    call SUB2
    ...
  SUB2
    ...
    - reference to x
    ...
  ...
  call SUB1

 Static scoping
– Reference to x is to MAIN's x
 Dynamic scoping
– Reference to x is to SUB1's x
 Evaluation of Dynamic Scoping:
– Advantage: convenience
– Disadvantage: poor readability
3.3 Local Referencing Environments
 Def: The referencing environment of a statement is the collection of all names
that are visible in the statement
 In a static-scoped language, it is the local variables plus all of the
visible variables in all of the enclosing scopes
 A subprogram is active if its execution has begun but has not yet terminated
 In a dynamic-scoped language, the referencing environment is the local
variables plus all visible variables in all active subprograms
 Local variables can be stack-dynamic (bound to storage at call time)
– Advantages
 Support for recursion
 Storage for locals is shared among some subprograms
– Disadvantages
 Allocation/de-allocation, initialization time
 Indirect addressing
 Subprograms cannot be history sensitive
 Local variables can be static
– More efficient (no indirection)
– No run-time overhead

3.4 Parameter Passing Methods


 Ways in which parameters are transmitted to and/or from called subprograms
– Pass-by-value
– Pass-by-result
– Pass-by-value-result
– Pass-by-reference
– Pass-by-name

Figure 5.1 Models of Parameter Passing

 Pass-by-value -- The value of the actual parameter is used to initialize the
corresponding formal parameter
– Normally implemented by copying
– Can be implemented by transmitting an access path, but not recommended
(enforcing write protection is not easy)
– When copies are used, additional storage is required
– Storage and copy operations can be costly

Pass-by-Result (Out Mode)


 When a parameter is passed by result, no value is transmitted to the subprogram;
the corresponding formal parameter acts as a local variable, and its value is
transmitted to the caller's actual parameter when control is returned to the caller
– Requires an extra storage location and a copy operation
– Potential problem: sub(p1, p1); whichever formal parameter is copied back last
will represent the current value of p1
Pass-by-Value-Result (Inout Mode)
 A combination of pass-by-value and pass-by-result
 Sometimes called pass-by-copy
 Formal parameters have local storage
 Disadvantages:
– Those of pass-by-result
– Those of pass-by-value
Pass-by-Reference (Inout Mode)
 Pass an access path
 Also called pass-by-sharing
 Passing process is efficient (no copying and no duplicated storage)
 Disadvantages
– Slower accesses (compared to pass-by-value) to formal parameters
– Potentials for un-wanted side effects
– Un-wanted aliases (access broadened)
Pass-by-Name (Inout Mode)
 By textual substitution
 Formals are bound to an access method at the time of the call,
 but actual binding to a value or address takes place at the time of a reference
or assignment
 Allows flexibility in late binding
Implementing Parameter-Passing Methods
 In most language parameter communication takes place thru the run-time
stack
 Pass-by-reference are the simplest to implement; only an address is placed in
the stack
 A subtle but fatal error can occur with pass-by-reference and pass-by-value
result: a formal parameter corresponding to a constant can mistakenly be
changed
Parameter Passing Methods of Major Languages
 Fortran
– Always used the inout semantics model
– Before Fortran 77: pass-by-reference
– Fortran 77 and later: scalar variables are often passed by value-result
 C
– Pass-by-value
– Pass-by-reference is achieved by using pointers as parameters
 C++
– A special pointer type called reference type for pass-by-reference
 Java
– All parameters are passed by value
– Object parameters are effectively passed by reference: the reference itself is
passed by value (see the sketch after this list)
 Ada
– Three semantics modes of parameter transmission: in, out, in out; in is the
default mode
– Formal parameters declared out can be assigned but not referenced; those
declared in can be referenced but not assigned; in out parameters can be
referenced and assigned
 C#
– Default method: pass-by-value
– Pass-by-reference is specified by preceding both a formal parameter and its
actual parameter with ref
 PHP: very similar to C#
 Perl: all actual parameters are implicitly placed in a predefined array named @_
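As a sketch of the Java entry above (names are illustrative): the reference held by a formal parameter is a copy of the caller's reference, so mutating the referenced object is visible to the caller, but reassigning the formal parameter is not.

public class PassByValueDemo {
    static class Box { int value; }
    static void mutate(Box b) {
        b.value = 42;   // visible to the caller: both references denote the same object
        b = new Box();  // not visible: only the local copy of the reference changes
        b.value = 99;
    }
    public static void main(String[] args) {
        Box box = new Box();
        mutate(box);
        System.out.println(box.value); // prints 42, not 99
    }
}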
Type Checking Parameters
 Considered very important for reliability
 FORTRAN 77 and original C: none
 Pascal, FORTRAN 90, Java, and Ada: it is always required
 ANSI C and C++: choice is made by the user
– Prototypes
 Relatively new languages Perl, JavaScript, and PHP do not require type checking
Multidimensional Arrays as Parameters
 If a multidimensional array is passed to a subprogram and the subprogram is
separately compiled, the compiler needs to know the declared size of that array
to build the storage mapping function
Multidimensional Arrays as Parameters: C and C++
 Programmer is required to include the declared sizes of all but the first
subscript in the actual parameter
 Disallows writing flexible subprograms
 Solution: pass a pointer to the array and the sizes of the dimensions as other
parameters; the user must include the storage mapping function in terms of the
size parameters
Multidimensional Arrays as Parameters: Pascal and Ada
 Pascal
– Not a problem; declared size is part of the array's type
 Ada
– Constrained arrays - like Pascal
– Unconstrained arrays - declared size is part of the object declaration
Multidimensional Arrays as Parameters: Fortran
 Formal parameters that are arrays have a declaration after the header
– For single-dimension arrays, the subscript is irrelevant
– For multi-dimensional arrays, the subscripts allow the storage-mapping
function
Multidimensional Arrays as Parameters: Java and C#
 Similar to Ada
 Arrays are objects; they are all single-dimensioned, but the elements can be
arrays
 Each array inherits a named constant (length in Java, Length in C#) that is set
to the length of the array when the array object is created
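A minimal Java sketch of this point (the method and array names are illustrative): because every array object carries its own length, no size parameters or storage-mapping information need to be passed.

public class ArrayParamDemo {
    // Works for any int[][], rectangular or jagged; each row knows its own length
    static int sumAll(int[][] table) {
        int sum = 0;
        for (int[] row : table)
            for (int cell : row)
                sum += cell;
        return sum;
    }
    public static void main(String[] args) {
        int[][] jagged = { {1, 2, 3}, {4}, {5, 6} };
        System.out.println(sumAll(jagged)); // 21
    }
}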

Design Considerations for Parameter Passing


 Two important considerations
– Efficiency
– One-way or two-way data transfer
 But the above considerations are in conflict
– Good programming suggests limited access to variables, which means one-way
transfer whenever possible
– But pass-by-reference is more efficient to pass structures of significant size

Subprograms as Parameters


 It is sometimes convenient to pass subprogram names as parameters
 Issues:
– Are parameter types checked?
– What is the correct referencing environment for a subprogram that was sent
as a parameter?
3.5 Parameters that are Subprogram Names: Parameter Type Checking
 C and C++: functions cannot be passed as parameters but pointers to functions
can be passed; parameters can be type checked
 FORTRAN 95 type checks these parameters
 Later versions of Pascal also type check these parameters
 Ada does not allow subprogram parameters; a similar alternative is provided via
Ada's generic facility
Parameters that are Subprogram Names: Referencing Environment
 Shallow binding: The environment of the call statement that enacts the passed
subprogram
 Deep binding: The environment of the definition of the passed subprogram
 Ad hoc binding: The environment of the call statement that passed the
subprogram
Overloaded Subprograms
 An overloaded subprogram is one that has the same name as another
subprogram in the same referencing environment
– Every version of an overloaded subprogram has a unique protocol
 C++, Java, C#, and Ada include predefined overloaded subprograms
 In Ada, the return type of an overloaded function can be used to disambiguate
calls (thus two overloaded functions can have the same parameters)
 Ada, Java, C++, and C# allow users to write multiple versions of subprograms
with the same name
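A small Java sketch of overloading (illustrative names): each version of max has a unique protocol, and the compiler selects among them by the actual parameter types.

public class OverloadDemo {
    static int max(int a, int b) { return a > b ? a : b; }
    static double max(double a, double b) { return a > b ? a : b; }
    static int max(int a, int b, int c) { return max(max(a, b), c); }
    public static void main(String[] args) {
        System.out.println(max(3, 7));     // calls max(int, int)
        System.out.println(max(2.5, 1.5)); // calls max(double, double)
        System.out.println(max(1, 9, 4));  // calls max(int, int, int)
    }
}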
Generic Subprograms – CO3
 A generic or polymorphic subprogram takes parameters of different types on
different activations
 Overloaded subprograms provide ad hoc polymorphism
 A subprogram that takes a generic parameter that is used in a type expression
that describes the type of the parameters of the subprogram provides
parametric polymorphism
Examples of parametric polymorphism: C++
template <class Type>
Type max(Type first, Type second) {
return first > second ? first : second;
}
 The above template can be instantiated for any type for which operator > is
defined
int max (int first, int second) {
return first > second? first : second;
}
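For comparison, a Java generic method can express the same parametric polymorphism. Since Java has no user-defined operator >, this sketch assumes the comparison is expressed through the Comparable interface (an adaptation, not part of the C++ example above):

public class GenericMax {
    // One definition works for any type that implements Comparable
    static <T extends Comparable<T>> T max(T first, T second) {
        return first.compareTo(second) > 0 ? first : second;
    }
    public static void main(String[] args) {
        System.out.println(max(3, 7));            // Integer arguments are boxed automatically
        System.out.println(max("apple", "pear")); // also works for String
    }
}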
Design Issues for Functions
 Are side effects allowed?
– Parameters should always be in-mode to reduce side effects (as in Ada)
 What types of return values are allowed?
– Most imperative languages restrict the return types
– C allows any type except arrays and functions
– C++ is like C but also allows user-defined types
– Ada allows any type
– Java and C# do not have functions but methods can have any type
User-Defined Overloaded Operators
 Operators can be overloaded in Ada and C++
 An Ada example
function "*"(A, B: in Vec_Type) return Integer is
Sum: Integer := 0;
begin
for Index in A'range loop
Sum := Sum + A(Index) * B(Index);
end loop;
return Sum;
end "*";

c = a * b; -- a, b, and c are of type Vec_Type

Co-Routines
 A coroutine is a subprogram that has multiple entries and controls them itself
 Also called symmetric control: caller and called coroutines are on a more equal
basis
 A coroutine call is named a resume
 The first resume of a coroutine is to its beginning, but subsequent calls enter at
the point just after the last executed statement in the coroutine
 Coroutines repeatedly resume each other, possibly forever
 Coroutines provide quasi-concurrent execution of program units (the coroutines);
their execution is interleaved, but not overlapped

Figure 5.2 Possible Execution Controls
Figure 5.3 Possible Execution Controls
Figure 5.4 Possible Execution Controls with Loops

Abstract Data Types


 Abstract Data type (ADT) is a type (or class) for objects whose
behavior is defined by a set of values and a set of operations.
 The definition of ADT only mentions what operations are to be
performed but not how these operations will be implemented.

The Concept of Abstraction


 An abstraction is a view or representation of an entity that includes only the
most significant attributes
 The concept of abstraction is fundamental in programming (and computer
science)
 Nearly all programming languages support process abstraction with
subprograms
 Nearly all programming languages designed since 1980 support data abstraction

Introduction to Data Abstraction


 An Abstract Data Type is a user-defined data type that satisfies the following
two conditions:
– The representation of, and operations on, objects of the type are defined in a
single syntactic unit
– The representation of objects of the type is hidden from the program units
that use these objects, so the only operations possible are those provided in
the type's definition

Advantages of Data Abstraction


 Advantage of the first condition
– Program organization, modifiability (everything associated with a data
structure is together), and separate compilation
 Advantage of the second condition
– Reliability--by hiding the data representations, user code cannot directly
access objects of the type or depend on the representation, allowing the
representation to be changed without affecting user code
Language Requirements for ADTs:
 A syntactic unit in which to encapsulate the type definition
 A method of making type names and subprogram headers visible to clients,
while hiding actual definitions
 Some primitive operations must be built into the language processor
Design Issues:
 Can abstract types be parameterized?
 What access controls are provided?

Language Examples
Language Examples: Ada
 The encapsulation construct is called a package
– Specification package (the interface)
– Body package (implementation of the entities named in the specification)
 Information Hiding
– The spec package has two parts, public and private
– The name of the abstract type appears in the public part of the specification
package. This part may also include representations of unhidden types
– The representation of the abstract type appears in a part of the specification
called the private part
 More restricted form with limited private types
– Private types have built-in operations for assignment and comparison
– Limited private types have NO built-in operations
 Reasons for the public/private spec package:
1. The compiler must be able to see the representation after seeing only the
spec package (so the representation cannot be placed in the body package)
2. Clients must be able to see the type name, but not its representation (clients
cannot see the private part, so the representation is hidden there)
 Having part of the implementation details (the representation) in the spec
package and part (the method bodies) in the body package is not good

One solution: make all ADTs pointers


Problems with this:
1. Difficulties with pointers
2. Object comparisons
3. Control of object allocation is lost
An Example in Ada
package Stack_Pack is
type stack_type is limited private;
max_size: constant := 100;
function empty(stk: in stack_type) return Boolean;
procedure push(stk: in out stack_type; elem: in Integer);
procedure pop(stk: in out stack_type);
function top(stk: in stack_type) return Integer;
private -- hidden from clients
type list_type is array (1..max_size) of Integer;
type stack_type is record
list: list_type;
topsub: Integer range 0..max_size := 0;
end record;
end Stack_Pack;

Language Examples: C++


 Based on C struct type and Simula 67 classes
 The class is the encapsulation device
 All of the class instances of a class share a single copy of the member functions
 Each instance of a class has its own copy of the class data members
 Instances can be static, stack dynamic, or heap dynamic
 Information Hiding
– Private clause for hidden entities
– Public clause for interface entities
– Protected clause for inheritance (Chapter 12)
 Constructors:
– Functions to initialize the data members of instances (they do not create the
objects)
– May also allocate storage if part of the object is heap-dynamic
– Can include parameters to provide parameterization of the objects
– Implicitly called when an instance is created
– Can be explicitly called
– Name is the same as the class name
 Destructors
– Functions to cleanup after an instance is destroyed; usually just to reclaim
heap storage
– Implicitly called when the object's lifetime ends
– Can be explicitly called
– Name is the class name, preceded by a tilde (~)
An Example in C++
class stack {
private:
  int *stackPtr, maxLen, topPtr;
public:
  stack() { // a constructor
    stackPtr = new int [100];
    maxLen = 99;
    topPtr = -1;
  }
  ~stack() {delete [] stackPtr;}
  void push(int num) {…}
  void pop() {…}
  int top() {…}
  int empty() {…}
};
Evaluation of ADTs in C++ and Ada
 C++ support for ADTs is similar in expressive power to that of Ada
 Both provide effective mechanisms for encapsulation and information hiding
 Ada packages are more general encapsulations; classes are types
 Friend functions or classes - to provide access to private members to some
unrelated units or functions
– Necessary in C++

Language Examples: Java


 Similar to C++, except:
– All user-defined types are classes
– All objects are allocated from the heap and accessed through reference
variables
– Individual entities in classes have access control modifiers (private or public),
rather than clauses
– Java has a second scoping mechanism, package scope, which can be used in
place of friends
 All entities in all classes in a package that do not have access control modifiers
are visible throughout the package
An Example in Java
class StackClass {
  private int [] stackRef;
  private int maxLen, topIndex;
  public StackClass() { // a constructor
    stackRef = new int [100];
    maxLen = 99;
    topIndex = -1;
  }
  public void push(int num) {…}
  public void pop() {…}
  public int top() {…}
  public boolean empty() {…}
}
Language Examples: C#
 Based on C++ and Java
 Adds two access modifiers, internal and protected internal
 All class instances are heap dynamic
 Default constructors are available for all classes
 Garbage collection is used for most heap objects, so destructors are rarely used
 structs are lightweight classes that do not support inheritance
 Common solution to need for access to data members: accessor methods(getter
and setter)
 C# provides properties as a way of implementing getters and setters without
requiring explicit method calls

C# Property Example
public class Weather {
public int DegreeDays { //** DegreeDays is a property
get {return degreeDays;}
set {
if(value < 0 || value > 30)
Console.WriteLine("Value is out of range: {0}", value);
else degreeDays = value;}
}
private int degreeDays;
...
}
...
Weather w = new Weather();
int degreeDaysToday, oldDegreeDays;
...
w.DegreeDays = degreeDaysToday;
...
oldDegreeDays = w.DegreeDays;

Parameterized Abstract Data Types - CO4


 Parameterized ADTs allow designing an ADT that can store any type elements
(among other things)
 Also known as generic classes
 C++, Ada, Java 5.0, and C# 2005 provide support for parameterized ADTs
Parameterized ADTs in Ada
 Ada Generic Packages
– Make the stack type more flexible by making the element type and the size of
the stack generic
generic
Max_Size: Positive;
type Elem_Type is private;
package Generic_Stack is
type Stack_Type is limited private;
function Top(Stk: in out Stack_Type) return Elem_Type;
...
end Generic_Stack;

package Integer_Stack is new Generic_Stack(100, Integer);
package Float_Stack is new Generic_Stack(100, Float);

Parameterized ADTs in C++


 Classes can be somewhat generic by writing parameterized constructor
functions
class stack {
  ...
  stack(int size) {
    stk_ptr = new int [size];
    max_len = size - 1;
    top = -1;
  }
  ...
};
stack stk(100);
 The stack element type can be parameterized by making the class a templated
class
template <class Type>
class stack {
private:
  Type *stackPtr;
  const int maxLen;
  int topPtr;
public:
  stack() : maxLen(99) { // the const member is initialized in the initializer list
    stackPtr = new Type[100];
    topPtr = -1;
  }
  ...
};
Parameterized Classes in Java 5.0
 Generic parameters must be classes
 Most common generic types are the collection types, such as LinkedList and
ArrayList
 Eliminate the need to cast objects that are removed
 Eliminate the problem of having multiple types in a structure
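A minimal sketch of a user-defined parameterized ADT in Java 5.0 style (the class name and the fixed capacity of 100 are illustrative assumptions):

public class GenericStack<T> {
    private Object[] elements = new Object[100]; // arrays of a type parameter cannot be created directly
    private int topIndex = -1;
    public void push(T element) { elements[++topIndex] = element; }
    @SuppressWarnings("unchecked")
    public T pop() { return (T) elements[topIndex--]; }
    public boolean empty() { return topIndex == -1; }
    public static void main(String[] args) {
        GenericStack<String> names = new GenericStack<String>();
        names.push("Ada");
        names.push("Java");
        System.out.println(names.pop()); // prints Java -- no cast needed at the call site
    }
}

No bounds checking is shown; a real implementation would guard against overflow and underflow.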
Parameterized Classes in C# 2005
 Similar to those of Java 5.0
 Elements of parameterized structures can be accessed through indexing
UNIT-4

Concurrency
 Concurrency can occur at four levels:
– Machine instruction level
– High-level language statement level
– Unit level
– Program level
 Because there are no language issues in instruction- and program-level
concurrency, they are not addressed here
Multiprocessor Architectures
 Late 1950s - one general-purpose processor and one or more special purpose
processors for input and output operations
 Early 1960s - multiple complete processors, used for program-level concurrency
 Mid-1960s - multiple partial processors, used for instruction-level concurrency
 Single-Instruction Multiple-Data (SIMD) machines
 Multiple-Instruction Multiple-Data (MIMD) machines
– Independent processors that can be synchronized (unit-level concurrency)
Categories of Concurrency
 A thread of control in a program is the sequence of program points reached as
control flows through the program
 Categories of Concurrency:
– Physical concurrency - Multiple independent processors (multiple threads of
control)
– Logical concurrency - The appearance of physical concurrency is presented by
time-sharing one processor (software can be designed as if there were
multiple threads of control)
 Coroutines (quasi-concurrency) have a single thread of control
Motivations for Studying Concurrency
 Involves a different way of designing software that can be very useful— many
real-world situations involve concurrency
 Multiprocessor computers capable of physical concurrency are now widely used

Subprogram-Level Concurrency
 A task or process is a program unit that can be in concurrent execution with
other program units
 Tasks differ from ordinary subprograms in that:
– A task may be implicitly started
– When a program unit starts the execution of a task, it is not necessarily
suspended
– When a task‘s execution is completed, control may not return to the caller
 Tasks usually work together
Two General Categories of Tasks
 Heavyweight tasks execute in their own address space
 Lightweight tasks all run in the same address space
 A task is disjoint if it does not communicate with or affect the execution of any
other task in the program in any way
Task Synchronization
 A mechanism that controls the order in which tasks execute
 Two kinds of synchronization
– Cooperation synchronization
– Competition synchronization
 Task communication is necessary for synchronization, provided by:
– Shared nonlocal variables
– Parameters
– Message passing
Kinds of synchronization
 Cooperation: Task A must wait for task B to complete some specific activity
before task A can continue its execution, e.g., the producer-consumer problem
 Competition: Two or more tasks must use some resource that cannot be
simultaneously used, e.g., a shared counter
– Competition is usually provided by mutually exclusive access (approaches
are discussed later)

Figure 6.1 Need for Competition Synchronization

Scheduler


 Providing synchronization requires a mechanism for delaying task execution
 Task execution control is maintained by a program called the scheduler, which
maps task execution onto available processors
Task Execution States
 New - created but not yet started
 Ready - ready to run but not currently running (no available processor)
 Running
 Blocked - has been running, but cannot now continue (usually waiting for some
event to occur)
 Dead - no longer active in any sense
Liveness and Deadlock
 Liveness is a characteristic that a program unit may or may not have
– In sequential code, it means the unit will eventually complete its execution
 In a concurrent environment, a task can easily lose its liveness
 If all tasks in a concurrent environment lose their liveness, it is called deadlock
Design Issues for Concurrency
 Competition and cooperation synchronization
 Controlling task scheduling
 How and when tasks start and end execution
 How and when are tasks created
Methods of Providing Synchronization
 Semaphores
 Monitors
 Message Passing

Semaphores
 Dijkstra - 1965
 A semaphore is a data structure consisting of a counter and a queue for storing
task descriptors
 Semaphores can be used to implement guards on the code that accesses shared
data structures
 Semaphores have only two operations, wait and release (originally called P and
V by Dijkstra)
 Semaphores can be used to provide both competition and cooperation
synchronization
Cooperation Synchronization with Semaphores
 Example: A shared buffer
 The buffer is implemented as an ADT with the operations DEPOSIT and FETCH
as the only ways to access the buffer
 Use two semaphores for cooperation: emptyspots and fullspots
 The semaphore counters are used to store the numbers of empty spots and full
spots in the buffer
 DEPOSIT must first check emptyspots to see if there is room in the buffer
 If there is room, the counter of emptyspots is decremented and the value is
inserted
 If there is no room, the caller is stored in the queue of emptyspots
 When DEPOSIT is finished, it must increment the counter of fullspots
 FETCH must first check fullspots to see if there is a value
– If there is a full spot, the counter of fullspots is decremented and the value is
removed
– If there are no values in the buffer, the caller must be placed in the queue of
fullspots
– When FETCH is finished, it increments the counter of emptyspots
 The operations of FETCH and DEPOSIT on the semaphores are accomplished
through two semaphore operations named wait and release

Semaphores: Wait Operation


wait(aSemaphore)
if aSemaphore's counter > 0 then
  decrement aSemaphore's counter
else
  put the caller in aSemaphore's queue
  attempt to transfer control to a ready task
  -- if the task ready queue is empty, deadlock occurs
end

Semaphores: Release Operation


release(aSemaphore)
if aSemaphore's queue is empty then
  increment aSemaphore's counter
else
  put the calling task in the task ready queue
  transfer control to a task from aSemaphore's queue
end
Producer Consumer Code
semaphore fullspots, emptyspots;
fullspots.count = 0;
emptyspots.count = BUFLEN;
task producer;
loop
-- produce VALUE --
wait(emptyspots); {wait for space}
DEPOSIT(VALUE);
release(fullspots); {increase filled}
end loop;
end producer;
Producer Consumer Code
task consumer;
loop
wait(fullspots); {wait till not empty}
FETCH(VALUE);
release(emptyspots); {increase empty}
-- consume VALUE --
end loop;
end consumer;
Competition Synchronization with Semaphores
 A third semaphore, named access, is used to control access (competition
synchronization)
– The counter of access will only have the values 0 and 1
– Such a semaphore is called a binary semaphore
 Note that wait and release must be atomic!
Producer Consumer Code
semaphore access, fullspots, emptyspots;
access.count = 1;
fullspots.count = 0;
emptyspots.count = BUFLEN;
task producer;
loop
-- produce VALUE --
wait(emptyspots); {wait for space}
wait(access); {wait for access}
DEPOSIT(VALUE);
release(access); {relinquish access}
release(fullspots); {increase filled}
end loop;
end producer;

Producer Consumer Code


task consumer;
loop
wait(fullspots); {wait till not empty}
wait(access); {wait for access}
FETCH(VALUE);
release(access); {relinquish access}
release(emptyspots); {increase empty}
-- consume VALUE --
end loop;
end consumer;
Evaluation of Semaphores
 Misuse of semaphores can cause failures in cooperation synchronization,
e.g., the buffer will overflow if the wait of fullspots is left out
 Misuse of semaphores can cause failures in competition synchronization,
e.g., the program will deadlock if the release of access is left out
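For comparison, the same producer-consumer structure can be sketched in Java with java.util.concurrent.Semaphore, whose acquire and release correspond to wait and release above (buffer size and class names are illustrative; this is a sketch, not the pseudocode's exact semantics):

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class SemaphoreBuffer {
    private static final int BUFLEN = 10;
    private final Queue<Integer> buffer = new LinkedList<Integer>();
    private final Semaphore emptySpots = new Semaphore(BUFLEN); // cooperation: room left
    private final Semaphore fullSpots = new Semaphore(0);       // cooperation: items present
    private final Semaphore access = new Semaphore(1);          // competition: binary semaphore

    public void deposit(int value) throws InterruptedException {
        emptySpots.acquire();  // wait for space
        access.acquire();      // wait for exclusive access
        buffer.add(value);
        access.release();      // relinquish access
        fullSpots.release();   // increase filled
    }

    public int fetch() throws InterruptedException {
        fullSpots.acquire();   // wait till not empty
        access.acquire();      // wait for exclusive access
        int value = buffer.remove();
        access.release();      // relinquish access
        emptySpots.release();  // increase empty
        return value;
    }
}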

Monitors
 Ada, Java, C#
 The idea: encapsulate the shared data and its operations to restrict access
 A monitor is an abstract data type for shared data
Competition Synchronization
 Shared data is resident in the monitor (rather than in the client units)
 All access is resident in the monitor
– The monitor implementation guarantees synchronized access by allowing only
one access at a time
– Calls to monitor procedures are implicitly queued if the monitor is busy at
the time of the call
Cooperation Synchronization
 Cooperation between processes is still a programming task
– Programmer must guarantee that a shared buffer does not experience
underflow or overflow

Figure 6.2 Cooperation Synchronization

Evaluation of Monitors


 A better way to provide competition synchronization than are semaphores
 Semaphores can be used to implement monitors
 Monitors can be used to implement semaphores
 Support for cooperation synchronization is very similar to that of semaphores, so
it has the same problems

Message Passing
 Message passing is a general model for concurrency
– It can model both semaphores and monitors
– It is not just for competition synchronization
 Central idea: task communication is like seeing a doctor--most of the time she
waits for you or you wait for her, but when you are both ready, you get together,
or rendezvous
Message Passing Rendezvous
 To support concurrent tasks with message passing, a language needs:
– A mechanism to allow a task to indicate when it is willing to accept messages
– A way to remember who is waiting to have its message accepted and some
“fair” way of choosing the next message
 When a sender task's message is accepted by a receiver task, the actual
message transmission is called a rendezvous

Ada Support for Concurrency


 The Ada 83 Message-Passing Model
– Ada tasks have specification and body parts, like packages; the spec has the
interface, which is the collection of entry points:
task Task_Example is
entry ENTRY_1 (Item : in Integer);
end Task_Example;
Task Body
 The task body describes the action that takes place when a rendezvous occurs
 A task that sends a message is suspended while waiting for the message to be
accepted and during the rendezvous
 Entry points in the spec are described with accept clauses in the body:
accept entry_name (formal parameters) do
...
end entry_name;

Example of a Task Body


task body Task_Example is
begin
loop
accept Entry_1 (Item : in Integer) do
...
end Entry_1;
end loop;
end Task_Example;

Ada Message Passing Semantics


 The task executes to the top of the accept clause and waits for a message
 During execution of the accept clause, the sender is suspended
 accept parameters can transmit information in either or both directions
 Every accept clause has an associated queue to store waiting messages
Figure 6.3 Rendezvous Time Lines

Message Passing: Server/Actor Tasks


 A task that has accept clauses, but no other code is called a server task (the
example above is a server task)
 A task without accept clauses is called an actor task
– An actor task can send messages to other tasks
– Note: A sender must know the entry name of the receiver, but not vice versa
(asymmetric)

Figure 6.4 Graphical Representation of a Rendezvous

Example: Actor Task


task Water_Monitor; -- specification
task body Water_Monitor is -- body
begin
loop
if Water_Level > Max_Level
then Sound_Alarm;
end if;
delay 1.0; -- No further execution
-- for at least 1 second
end loop;
end Water_Monitor;
Multiple Entry Points
 Tasks can have more than one entry point
– The task specification has an entry clause for each
– The task body has an accept clause for each entry clause, placed in a select
clause, which is in a loop
A Task with Multiple Entries
task body Teller is
begin
loop
select
accept Drive_Up(formal params) do
...
end Drive_Up;
...
or
accept Walk_Up(formal params) do
...
end Walk_Up;
...
end select;
end loop;
end Teller;

Semantics of Tasks with Multiple accept Clauses


 If exactly one entry queue is nonempty, choose a message from it
 If more than one entry queue is nonempty, choose one, nondeterministically,
from which to accept a message
 If all are empty, wait
 The construct is often called a selective wait
 Extended accept clause - code following the clause, but before the next clause
– Executed concurrently with the caller
Cooperation Synchronization with Message Passing
 Provided by Guarded accept clauses
when not Full(Buffer) =>
accept Deposit (New_Value) do
 An accept clause with a when clause is either open or closed
– A clause whose guard is true is called open
– A clause whose guard is false is called closed
– A clause without a guard is always open

Semantics of select with Guarded accept Clauses:


 select first checks the guards on all clauses
 If exactly one is open, its queue is checked for messages
 If more than one are open, non-deterministically choose a queue among them to
check for messages
 If all are closed, it is a runtime error
 A select clause can include an else clause to avoid the error
– When the else clause completes, the loop repeats

Example of a Task with Guarded accept Clauses


 Note: The station may be out of gas and there may or may not be a position
available in the garage
task Gas_Station_Attendant is
entry Service_Island (Car : Car_Type);
entry Garage (Car : Car_Type);
end Gas_Station_Attendant;
Example of a Task with Guarded accept Clauses
task body Gas_Station_Attendant is
begin
loop
select
when Gas_Available =>
accept Service_Island (Car : Car_Type) do
Fill_With_Gas (Car);
end Service_Island;
or
when Garage_Available =>
accept Garage (Car : Car_Type) do
Fix (Car);
end Garage;
else
Sleep;
end select;
end loop;
end Gas_Station_Attendant;
Competition Synchronization with Message Passing
 Modeling mutually exclusive access to shared data
 Example--a shared buffer
 Encapsulate the buffer and its operations in a task
 Competition synchronization is implicit in the semantics of accept clauses
– Only one accept clause in a task can be active at any given time
Task Termination
 The execution of a task is completed if control has reached the end of its code
body
 If a task has created no dependent tasks and is completed, it is terminated
 If a task has created dependent tasks and is completed, it is not terminated
until all its dependent tasks are terminated
The terminate Clause
 A terminate clause in a select is just a terminate statement
 A terminate clause is selected when no accept clause is open
 When a terminate is selected in a task, the task is terminated only when its
master and all of the dependents of its master are either completed or are
waiting at a terminate
 A block or subprogram is not left until all of its dependent tasks are terminated
Message Passing Priorities
 The priority of any task can be set with the Priority pragma:
pragma Priority (expression);
 The priority of a task applies to it only when it is in the task ready queue

Binary Semaphores
 For situations where the data to which access is to be controlled is NOT
encapsulated in a task
task Binary_Semaphore is
entry Wait;
entry Release;
end Binary_Semaphore;
task body Binary_Semaphore is
begin
loop
accept Wait;
accept Release;
end loop;
end Binary_Semaphore;

Concurrency in Ada 95
 Ada 95 includes Ada 83 features for concurrency, plus two new features
– Protected objects: A more efficient way of implementing shared data to allow
access to a shared data structure to be done without rendezvous
– Asynchronous communication

Ada 95: Protected Objects


 A protected object is similar to an abstract data type
 Access to a protected object is either through messages passed to entries, as
with a task, or through protected subprograms
 A protected procedure provides mutually exclusive read-write access to
protected objects
 A protected function provides concurrent read-only access to protected objects
Asynchronous Communication
 Provided through asynchronous select structures
 An asynchronous select has two triggering alternatives, an entry clause or a
delay
– The entry clause is triggered when sent a message
– The delay clause is triggered when its time limit is reached
Evaluation of the Ada Model
 Message passing model of concurrency is powerful and general
 Protected objects are a better way to provide synchronized shared data
 In the absence of distributed processors, the choice between monitors and tasks
with message passing is somewhat a matter of taste
 For distributed systems, message passing is a better model for concurrency

Java Threads
 The concurrent units in Java are methods named run
– A run method code can be in concurrent execution with other such methods
– The process in which the run methods execute is called a thread
class MyThread extends Thread {
public void run() {…}
}

Thread myTh = new MyThread ();
myTh.start();

Controlling Thread Execution


 The Thread class has several methods to control the execution of threads
– The yield method is a request from the running thread to voluntarily surrender
the processor
– The sleep method can be used by the caller of the method to block the thread
– The join method is used to force a method to delay its execution until the run
method of another thread has completed its execution
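A small sketch showing these methods in use (the worker's task is illustrative):

public class ThreadControlDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(500); // block this thread for about half a second
                } catch (InterruptedException e) {
                    return;
                }
                System.out.println("worker finished");
            }
        });
        worker.start(); // begin concurrent execution of run()
        worker.join();  // main delays here until the worker's run() completes
        System.out.println("main finished");
    }
}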

Thread Priorities
 A thread's default priority is the same as that of the thread that created it
– If main creates a thread, its default priority is NORM_PRIORITY
 The Thread class defines two other priority constants, MAX_PRIORITY and
MIN_PRIORITY
 The priority of a thread can be changed with the method setPriority

Competition Synchronization with Java Threads


 A method that includes the synchronized modifier disallows any other
synchronized method from running on the same object while it is in execution

public synchronized void deposit( int i) {…}
public synchronized int fetch() {…}

 The above two methods are synchronized which prevents them from interfering
with each other
 If only a part of a method must be run without interference, it can be
synchronized with a synchronized statement
synchronized (expression)
statement
Cooperation Synchronization with Java Threads
 Cooperation synchronization in Java is achieved via wait, notify, and notifyAll
methods
– All methods are defined in Object, which is the root class in Java, so all
objects inherit them
 The wait method must be called in a loop
 The notify method is called to tell one waiting thread that the event it was
waiting for has happened
 The notifyAll method awakens all of the threads on the object's wait list
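A minimal sketch combining both kinds of synchronization in Java: synchronized methods provide competition synchronization, and wait/notifyAll provide cooperation synchronization (buffer size and names are illustrative):

public class SharedBuffer {
    private final int[] buffer = new int[100];
    private int count = 0, in = 0, out = 0;

    // Only one thread at a time may execute a synchronized method on this object
    public synchronized void deposit(int value) throws InterruptedException {
        while (count == buffer.length)
            wait();                   // cooperation: wait until there is room
        buffer[in] = value;
        in = (in + 1) % buffer.length;
        count++;
        notifyAll();                  // wake threads waiting in fetch()
    }

    public synchronized int fetch() throws InterruptedException {
        while (count == 0)
            wait();                   // cooperation: wait until there is a value
        int value = buffer[out];
        out = (out + 1) % buffer.length;
        count--;
        notifyAll();                  // wake threads waiting in deposit()
        return value;
    }
}

Note that wait is called in a loop, as recommended above, because a woken thread must re-check its condition.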

Java’s Thread Evaluation


 Java's support for concurrency is relatively simple but effective
 Not as powerful as Ada's tasks

C# Threads
 Loosely based on Java but there are significant differences
 Basic thread operations
– Any method can run in its own thread
– A thread is created by creating a Thread object
– Creating a thread does not start its concurrent execution; it must be
requested through the Start method
– A thread can be made to wait for another thread to finish with Join
– A thread can be suspended with Sleep
– A thread can be terminated with Abort

Synchronizing Threads
 Three ways to synchronize C# threads
– The Interlocked class
 Used when the only operations that need to be synchronized are incrementing
or decrementing of an integer
– The lock statement
 Used to mark a critical section of code in a thread: lock (expression) {…}
– The Monitor class
 Provides four methods that can be used to provide more sophisticated
synchronization
C#’s Concurrency Evaluation
 An advance over Java threads, e.g., any method can run in its own thread
 Thread termination is cleaner than in Java
 Synchronization is more sophisticated

Statement-Level Concurrency
 Objective: Provide a mechanism that the programmer can use to inform the
compiler of ways it can map the program onto a multiprocessor architecture
 Minimize communication among processors and the memories of the other
processors
High-Performance Fortran
 A collection of extensions that allow the programmer to provide information to
the compiler to help it optimize code for multiprocessor computers
 Specify the number of processors, the distribution of data over the memories of
those processors, and the alignment of data
Primary HPF Specifications
 Number of processors
!HPF$ PROCESSORS procs (n)
 Distribution of data
!HPF$ DISTRIBUTE (kind) ONTO procs :: identifier_list
– kind can be BLOCK (distribute data to processors in blocks) or
CYCLIC (distribute data to processors one element at a time)
 Relate the distribution of one array with that of another
ALIGN array1_element WITH array2_element
Statement-Level Concurrency Example
REAL list_1(1000), list_2(1000)
INTEGER list_3(500), list_4(501)
!HPF$ PROCESSORS procs (10)
!HPF$ DISTRIBUTE (BLOCK) ONTO procs :: list_1, list_2
!HPF$ ALIGN list_1(index) WITH list_4(index+1)
...
list_1(index) = list_2(index)
list_3(index) = list_4(index+1)
 FORALL statement is used to specify a list of statements that may be executed
concurrently
FORALL (index = 1:1000)
list_1(index) = list_2(index)
 Specifies that all 1,000 RHSs of the assignments can be evaluated before any
assignment takes place
Exception Handling & Logic Programming Language

Introduction to Exception Handling


 In a language without exception handling
– When an exception occurs, control goes to the operating system, where a
message is displayed and the program is terminated
 In a language with exception handling
– Programs are allowed to trap some exceptions, thereby providing the
possibility of fixing the problem and continuing
4.7 Basic Concepts – CO3
 Many languages allow programs to trap input/output errors (including EOF)
 An exception is any unusual event, either erroneous or not, detectable by either
hardware or software, that may require special processing
 The special processing that may be required after detection of an exception is
called exception handling
 The exception handling code unit is called an exception handler

Exception Handling Alternatives


 An exception is raised when its associated event occurs
 A language that does not have exception handling capabilities can still define,
detect, raise, and handle exceptions (user defined, software detected)
 Alternatives:
– Send an auxiliary parameter or use the return value to indicate the return
status of a subprogram
– Pass an exception handling subprogram to all subprograms

Advantages of Built-in Exception Handling


 Error detection code is tedious to write and it clutters the program
 Exception handling encourages programmers to consider many different
possible errors
 Exception propagation allows a high level of reuse of exception handling code

Design Issues
 How are user-defined exceptions specified?
 Should there be default exception handlers for programs that do not provide
their own?
 Can built-in exceptions be explicitly raised?
 Are hardware-detectable errors treated as exceptions that can be handled?
 Are there any built-in exceptions?
 How can exceptions be disabled, if at all?
 How and where are exception handlers specified, and what is their scope?
 How is an exception occurrence bound to an exception handler?
 Can information about the exception be passed to the handler?
 Where does execution continue, if at all, after an exception handler completes
its execution? (continuation vs. resumption)
 Is some form of finalization provided?
Figure 7.1 Exception Handling Control Flow

4.8 Exception Handling in Ada – CO3


 The frame of an exception handler in Ada is either a subprogram body, a
package body, a task, or a block
 Because exception handlers are usually local to the code in which the exception
can be raised, they do not have parameters
Ada Exception Handlers
 Handler form:
when exception_choice{|exception_choice} => statement_sequence
...
[when others =>
statement_sequence]
exception_choice form:
exception_name | others
 Handlers are placed at the end of the block or unit in which they occur
Binding Exceptions to Handlers
 If the block or unit in which an exception is raised does not have a handler for
that exception, the exception is propagated elsewhere to be handled
– Procedures - propagate it to the caller
– Blocks - propagate it to the scope in which it appears
– Package body - propagate it to the declaration part of the unit that declared
the package (if it is a library unit, the program is terminated)
– Task - no propagation; if it has a handler, execute it; in either case, mark it
"completed"
Continuation
 The block or unit that raises an exception but does not handle it is always
terminated (also any block or unit to which it is propagated that does not
handle it)
Other Design Choices
 User-defined Exceptions form:
exception_name_list : exception;
 Raising Exceptions form:
raise [exception_name]
– (the exception name is not required if it is in a handler--in this case, it
propagates the same exception)
 Exception conditions can be disabled with:
pragma SUPPRESS(exception_list)
Predefined Exceptions
 CONSTRAINT_ERROR - index constraints, range constraints, etc.
 NUMERIC_ERROR - numeric operation cannot return a correct value (overflow,
division by zero, etc.)
 PROGRAM_ERROR - call to a subprogram whose body has not been elaborated
 STORAGE_ERROR - system runs out of heap
 TASKING_ERROR - an error associated with tasks
Evaluation
 The Ada design for exception handling embodies the state-of-the-art in language
design in 1980
 A significant advance over PL/I
 Ada was the only widely used language with exception handling until it was
added to C++

4.9 Exception Handling in C++ - CO3


 Added to C++ in 1990
 Design is based on that of CLU, Ada, and ML
C++ Exception Handlers
 Exception Handlers Form:
try {
-- code that is expected to raise an exception
}
catch (formal parameter) {
-- handler code
}
...
catch (formal parameter) {
-- handler code
}
The catch Function
 catch is the name of all handlers--it is an overloaded name, so the formal
parameter of each must be unique
 The formal parameter need not have a variable
– It can be simply a type name to distinguish the handler it is in from others
 The formal parameter can be used to transfer information to the handler
 The formal parameter can be an ellipsis, in which case it handles all exceptions
not yet handled

Throwing Exceptions
 Exceptions are all raised explicitly by the statement: throw [expression];
 The brackets are metasymbols
 A throw without an operand can only appear in a handler; when it appears, it
simply re-raises the exception, which is then handled elsewhere
 The type of the expression disambiguates the intended handler

Unhandled Exceptions
 An unhandled exception is propagated to the caller of the function in which it is
raised
 This propagation continues to the main function

Continuation
 After a handler completes its execution, control flows to the first statement after
the last handler in the sequence of handlers of which it is an element
 Other design choices
– All exceptions are user-defined
– Exceptions are neither specified nor declared
– Functions can list the exceptions they may raise
– Without a specification, a function can raise any exception (the throw clause)
Evaluation
 It is odd that exceptions are not named and that hardware- and system
software-detectable exceptions cannot be handled
 Binding exceptions to handlers through the type of the parameter certainly does
not promote readability

4.13 Exception Handling in Java – CO3


 Based on that of C++, but more in line with OOP philosophy
 All exceptions are objects of classes that are descendants of the Throwable class
Classes of Exceptions
 The Java library includes two subclasses of Throwable :
– Error
o Thrown by the Java interpreter for events such as heap overflow
o Never handled by user programs
– Exception
o User-defined exceptions are usually subclasses of this
o Has two predefined subclasses, IOException and RuntimeException
e.g., ArrayIndexOutOfBoundsException and NullPointerException
Java Exception Handlers
 Like those of C++, except every catch requires a named parameter and all
parameters must be descendants of Throwable
 Syntax of try clause is exactly that of C++
 Exceptions are thrown with throw, as in C++, but often the throw includes the
new operator to create the object, as in: throw new MyException();
Binding Exceptions to Handlers
 Binding an exception to a handler is simpler in Java than it is in C++
– An exception is bound to the first handler with a parameter that is of the same
class as the thrown object or an ancestor of it
 An exception can be handled and rethrown by including a throw in the handler
(a handler could also throw a different exception)
Continuation
 If no handler is found in the method, the exception is propagated to the
method's caller
 If no handler is found (all the way to main), the program is terminated
 To ensure that all exceptions are caught, a handler can be included in any try
construct that catches all exceptions
– Simply use an Exception class parameter
– Of course, it must be the last in the try construct
Checked and Unchecked Exceptions
 The Java throws clause is quite different from the throw clause of C++
 Exceptions of class Error and RunTimeException and all of their descendants
are called unchecked exceptions; all other exceptions are called checked
exceptions
 Checked exceptions that may be thrown by a method must be either:
– Listed in the throws clause, or
– Handled in the method
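A small sketch of a checked exception listed in one method's throws clause and handled by its caller (the exception and method names are illustrative):

public class CheckedDemo {
    static class NegativeValueException extends Exception {
        NegativeValueException(String message) { super(message); }
    }

    // The checked exception must be listed here because it is not handled in this method
    static double squareRoot(double x) throws NegativeValueException {
        if (x < 0)
            throw new NegativeValueException("negative argument: " + x);
        return Math.sqrt(x);
    }

    public static void main(String[] args) {
        try {
            System.out.println(squareRoot(-4.0));
        } catch (NegativeValueException e) { // bound by the class of the thrown object
            System.out.println("handled: " + e.getMessage());
        }
    }
}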
Other Design Choices
 A method cannot declare more exceptions in its throws clause than
the method it overrides
 A method that calls a method that lists a particular checked
exception in its throws clause has three alternatives for dealing
with that exception:
– Catch and handle the exception
– Catch the exception and throw an exception that is listed in its
own throws clause
– Declare it in its throws clause and do not handle it
The finally Clause
 Can appear at the end of a try construct
 Form:
finally {..}
 Purpose: To specify code that is to be executed, regardless of what
happens in the try construct
Example
 A try construct with a finally clause can be used outside exception
handling
try {
  for (index = 0; index < 100; index++) {
    …
    if (…) {
      return;
    } //** end of if
  } //** end of for
} //** end of try clause
finally {
  …
} //** end of try construct
Assertions
 Statements in the program declaring a boolean expression
regarding the current state of the computation
 When evaluated to true nothing happens
 When evaluated to false an AssertionError exception is thrown
 Can be disabled during runtime without program modification or
recompilation
 Two forms
– assert condition;
– assert condition: expression;
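A minimal usage sketch (assertions must be enabled at run time, e.g., with java -ea):

public class AssertDemo {
    public static void main(String[] args) {
        int balance = 100;
        balance -= 30;
        assert balance >= 0;                                        // first form
        assert balance >= 0 : "balance went negative: " + balance;  // second form
    }
}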
Evaluation
 The types of exceptions make more sense than in the case of C++
 The throws clause is better than that of C++ (The throw clause in
C++ says little to the programmer)
 The finally clause is often useful
 The Java interpreter throws a variety of exceptions that can be
handled by user programs
