CS8494 Notes
UNIT I
• Systems Engineering
– Software as part of larger system, determine requirements for all system
elements, allocate requirements to software.
• Software Requirements Analysis
– Develop understanding of problem domain, user needs, function, performance,
interfaces, ...
• Software Design
– Multi-step process to determine architecture, interfaces, data structures,
functional detail. Produces (high-level) form that can be checked for quality,
conformance before coding.
• Coding
– Produce machine readable and executable form, match HW, OS and design needs.
• Testing
– Confirm that components, subsystems and complete products meet requirements,
specifications and quality, find and fix defects.
• Maintenance
– Incrementally evolve software to fix defects, add features, and adapt to new
conditions. Often 80% of the effort is spent here!
Waterfall model phases:
• Requirements analysis and definition
• System and software design
• Implementation and unit testing
• Integration and system testing
• Operation and maintenance
• The main drawback of the waterfall model is the difficulty of accommodating change
after the process is underway. One phase has to be complete before moving onto the next
phase.
• Each phase terminates only when the documents are complete and approved by the SQA
group.
• Maintenance begins when the client reports an error after having accepted the product. It
could also begin due to a change in requirements after the client has accepted the product
Waterfall model: Advantages:
• Disciplined approach
• Careful checking by the Software Quality Assurance Group at the end of each phase.
• Testing in each phase.
• Documentation available at the end of each phase.
Waterfall model problems:
• It is difficult to respond to changing customer requirements.
• Therefore, this model is only appropriate when the requirements are well-understood and
changes will be fairly limited during the design process.
• Few business systems have stable requirements.
• The waterfall model is mostly used for large systems engineering projects where a system
is developed at several sites.
• The customer must have patience. A working version of the program will not be
available until late in the project time-span
• Feedback from one phase to another might be too late and hence expensive.
The Prototyping Models:
• Often, a customer defines a set of general objectives for software but does not identify
detailed input, processing, or output requirements.
• In other cases, the developer may be unsure of the efficiency of an algorithm, the
adaptability of an operating system, or the form that human–machine interaction should
take.
• In this case prototyping paradigm may offer the best approach
• Requirements gathering
• Quick design
• Prototype building
• Prototype evaluation by customers
• Prototype may be refined
• Prototype thrown away and software developed using a formal process (the prototype is used
to define the requirements)
Prototyping Strengths:
• Requirements can be set earlier and more reliably
• Customer sees results very quickly.
• Customer is educated in what is possible helping to refine requirements.
• Requirements can be communicated more clearly and completely between developers
and clients
• Requirements and design options can be investigated quickly and cheaply
Weaknesses:
– Requires a rapid prototyping tool and expertise in using it–a cost for the
development organisation
– Smoke and mirrors - looks like a working version, but it is not.
The RAD Model:
• Rapid Application Development is a linear sequential software development process
model that emphasizes an extremely short development cycle
• Rapid development is achieved by using a component-based construction approach
• If requirements are well understood and project scope is constrained, the RAD process
enables a development team to create a "fully functional system"
[Figure: The RAD model — multiple teams work in parallel; each team performs Modeling (business modeling, data modeling, process modeling) and Construction (component reuse, automatic code generation, testing), preceded by Communication and Planning and followed by Deployment (integration, delivery, feedback), within a 60–90 day cycle.]
RAD phases :
• Business modeling
• Data modeling
• Process modeling
• Application generation
• Testing and turnover
Business modeling:
• What information drives the business process?
• What information is generated?
• Who generates it?
Data Modeling:
• The information flow defined as part of the business modeling phase is refined into a set
of data objects that are needed to support the business.
• The characteristics (called attributes) of each object are identified and the relationships
between these objects are defined.
Process modeling:
• The data objects defined in the data modeling phase are transformed to achieve the
information flow necessary to implement a business function.
• Processing descriptions are created for adding, modifying, deleting, or retrieving a data
object, as sketched below.
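As a purely illustrative sketch (the Customer type, the CustomerProcess name and the method signatures are assumptions, not part of RAD itself), such a processing description for one data object might be captured as a simple Java interface:

// Illustrative data object from the data modeling phase
class Customer { String id; String name; }

// Processing description for the Customer object: add, modify, delete, retrieve
interface CustomerProcess {
    void add(Customer c);                   // create a new customer record
    void modify(Customer c);                // update an existing record
    void delete(String customerId);         // remove a record
    Customer retrieve(String customerId);   // fetch a record by key
}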
Application generation:
• RAD assumes the use of fourth generation (4GL) techniques.
• Rather than creating software using conventional third generation programming languages,
the RAD process works to reuse existing program components (when possible) or create
reusable components (when necessary).
Testing and Turnover:
• Since the RAD process emphasizes reuse, many of the program components have already
been tested.
• This reduces over all testing time.
• However, new components must be tested and all interfaces must be fully exercised
Advantages &Disadvantages of RAD:
Advantages
• Extremely short development time.
• Uses component-based construction and emphasises reuse and code generation
Disadvantages
• Large human resource requirements (to create all of the teams).
• Requires strong commitment between developers and customers for “rapid-fire”
activities.
• High performance requirements may not be met (tuning of the reused components may be required).
The Incremental Model
[Figure: The incremental model — increments #1, #2, ... #n each pass through Communication, Planning, Modeling (analysis, design), Construction (code, test) and Deployment (delivery, feedback), producing the delivery of the 1st, 2nd, ... nth increment.]
System Engineering
• Software engineering occurs as a consequence of a process called system engineering.
• Instead of concentrating solely on software, system engineering focuses on a variety of
elements, analyzing, designing, and organizing those elements into a system that can be a
product, a service, or a technology for the transformation of information or control.
• The system engineering process usually begins with a "world view." That is, the entire
business or product domain is examined to ensure that the proper business or technology
context can be established.
• The world view is refined to focus more fully on a specific domain of interest. Within a
specific domain, the need for targeted system elements (e.g., data, software, hardware,
people) is analyzed. Finally, the analysis, design, and construction of a targeted system
element is initiated.
• At the top of the hierarchy, a very broad context is established and, at the bottom, detailed
technical activities, performed by the relevant engineering discipline (e.g., hardware or
software engineering), are conducted.
• Stated in a slightly more formal manner, the world view (WV) is composed of a set of
domains (Di), which can each be a system or system of systems in its own right.
WV = {D1, D2, D3, . . . , Dn}
• Each domain is composed of specific elements (Ej) each of which serves some role in
accomplishing the objective and goals of the domain or component:
Di = {E1, E2, E3, . . . , Em}
• Finally, each element is implemented by specifying the technical components (Ck) that
achieve the necessary function for an element:
Ej = {C1, C2, C3, . . . , Ck}
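As a hypothetical illustration (the domains, elements and components below are invented, not taken from the text), a retail-automation world view might decompose as:
WV = {D1: warehouse management, D2: point of sale}
D1 = {E1: conveyor hardware, E2: inventory software, E3: warehouse operators}
E2 = {C1: stock database, C2: barcode-scanning service, C3: reporting module}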
• Each of these engineering disciplines takes a domain-specific view, but it is important to note
that the engineering disciplines must establish and maintain active communication with one
another. Part of the role of requirements engineering is to establish the interfacing
mechanisms that will enable this to happen.
• The element view for product engineering is the engineering discipline itself applied to the
allocated component. For software engineering, this means analysis and design modeling
activities (covered in detail in later chapters) and construction and integration activities that
encompass code generation, testing, and support steps.
• The analysis step models allocated requirements into representations of data, function, and
behavior. Design maps the analysis model into data, architectural, interface, and software
component-level designs.
UNIT II SOFTWARE REQUIREMENTS
• The process of establishing the services that the customer requires from a system and the
constraints under which it operates and is developed
• Requirements may be functional or non-functional
• Functional requirements describe system services or functions
• Non-functional requirements are constraints on the system or on the development
process
Types of requirements
• User requirements
• Statements in natural language (NL) plus diagrams of the services the system
provides and its operational constraints. Written for customers
• System requirements
• A structured document setting out detailed descriptions of the system services.
Written as a contract between client and contractor
• Software specification
• A detailed software description which can serve as a basis for a design or
implementation. Written for developers
Functional requirements
• Functionality or services that the system is expected to provide.
• Functional requirements may also explicitly state what the system shouldn't do.
• Functional requirements specification should be:
• Complete: All services required by the user should be defined
• Consistent: should not contain contradictory definitions (also avoid ambiguity;
don't leave room for different interpretations)
Non-Functional requirements
• Requirements that are not directly concerned with the specific functions delivered by the
system
• Typically relate to the system as a whole rather than the individual system features
• Often the deciding factor in the survival of the system (e.g. reliability, cost, response
time)
[Figure: Types of non-functional requirements — product requirements (e.g., performance requirements, space requirements), organisational requirements, and external requirements (e.g., privacy requirements, safety requirements).]
Domain requirements
• Domain requirements are derived from the application domain of the system rather than from
the specific needs of the system users.
• May be new functional requirements, constrain existing requirements or set out how
particular computation must take place.
• Example: tolerance level of landing gear on an aircraft (different on dirt, asphalt, water), or
what happens to a fiber optic line in case of severe weather during the Winter Olympics (only
domain-area experts know)
Product requirements
• Specify the desired characteristics that a system or subsystem must possess.
• Most NFRs are concerned with specifying constraints on the behaviour of the executing
system.
Specifying product requirements
• Some product requirements can be formulated precisely, and thus easily quantified
• Performance
• Capacity
• Others are more difficult to quantify and, consequently, are often stated informally
• Usability
Process requirements
• Process requirements are constraints placed upon the development process of the system
• Process requirements include:
• Requirements on development standards and methods which must be followed
• CASE tools which should be used
• The management reports which must be provided
External requirements
• May be placed on both the product and the process
• Derived from the environment in which the system is developed
• External requirements are based on:
• application domain information
• organisational considerations
• the need for the system to work with other systems
• health and safety or data protection regulations
• or even basic natural laws such as the laws of physics
Software Document
• Should provide for communication among team members
• Should act as an information repository to be used by maintenance engineers
• Should provide enough information to management to allow them to perform all program
management related activities
• Should describe to users how to operate and administer the system
• Specify external system behaviour
• Specify implementation constraints
• Easy to change
• Serve as reference tool for maintenance
• Record forethought about the life cycle of the system i.e. predict changes
• Characterise responses to unexpected events
• Managers use the requirements document to plan a bid for the system and to plan the
system development process
• System engineers use the requirements to understand what system is to be developed
Process Documentation
• Used to record and track the development process
• Planning documentation
• Cost, Schedule, Funding tracking
• Schedules
• Standards
• This documentation is created to allow for successful management of a software product
• Has a relatively short lifespan
• Only important to internal development process
• Except in cases where the customer requires a view into this data
• Some items, such as papers that describe design decisions should be extracted and moved
into the product documentation category when they become implemented
• Product Documentation
• Describes the delivered product
• Must evolve with the development of the software product
• Two main categories:
• System Documentation
• User Documentation
Product Documentation
• System Documentation
• Describes how the system works, but not how to operate it
• Examples:
• Requirements Spec
• Architectural Design
• Detailed Design
• Commented Source Code
Including output such as JavaDoc
• Test Plans
Including test cases
• V&V plan and results
• List of Known Bugs
• User Documentation has two main types
• End User
• System Administrator
In some cases these are the same people
• The target audience must be well understood!
• There are five important areas that should be documented for a formal release of a software
application
• These do not necessarily each have to have their own document, but the topics should
be covered thoroughly
• Functional Description of the Software
• Installation Instructions
• Introductory Manual
• Reference Manual
• System Administrator's Guide
Document Quality
• Providing thorough and professional documentation is important for any size of product
development team
• The problem is that many software professionals lack the writing skills to create
professional level documents
Document Structure
• All documents for a given product should have a similar structure
• A good reason for product standards
• The IEEE Standard for User Documentation lists such a structure
• It is a superset of what most documents need
• The author's "best practices" are:
• Put a cover page on all documents
• Divide documents into chapters with sections and subsections
• Add an index if there is lots of reference information
• Add a glossary to define ambiguous terms
Standards
• Standards play an important role in the development, maintenance and usefulness of
documentation
• Standards can act as a basis for quality documentation
• But are not good enough on their own
Usually define high level content and organization
• There are three types of documentation standards
1. Process Standards
• Define the approach that is to be used when creating the documentation
• Don't actually define any of the content of the documents
2. Product Standards
• Goal is to have all documents created for a specific product attain a consistent structure and
appearance
• Can be based on organizational or contractually required standards
• Four main types:
• Documentation Identification Standards
• Document Structure Standards
• Document Presentation Standards
• Document Update Standards
• One caveat:
• Documentation that will be viewed by end users should be created in a way that is
best consumed and is most attractive to them
• Internal development documentation generally does not meet this need
3. Interchange Standards
• Deal with the creation of documents in a format that allows others to use them effectively
• PDF may be good for end users who don't need to edit
• Word may be good for text editing
• Specialized CASE tools need to be considered
• This is usually not a problem within a single organization, but when sharing data between
organizations problems can occur
• This same problem is faced all the time during software integration
Other Standards
• IEEE
• Has a published standard for user documentation
• Provides a structure and superset of content areas
• Many organizations probably won't create documents that completely match the
standard
• Writing Style
• Ten "best practices" when writing are provided
• Author proposes that group edits of important documents should occur in a similar
fashion to software walkthroughs
Feasibility Studies
• A feasibility study decides whether or not the proposed system is worthwhile
• A short focused study that checks
• If the system contributes to organisational objectives
• If the system can be engineered using current technology and within budget
• If the system can be integrated with other systems that are used
• Based on information assessment (what is required), information collection and report
writing
• Questions for people in the organisation
• What if the system wasn't implemented?
• What are current process problems?
• How will the proposed system help?
• What will be the integration problems?
• Is new technology needed? What skills?
• What facilities must be supported by the proposed system?
System models
• Different models may be produced during the requirements analysis activity
• Requirements analysis may involve three structuring activities which result in these different
models
• Partitioning – Identifies the structural (part-of) relationships between entities
• Abstraction – Identifies generalities among entities
• Projection – Identifies different ways of looking at a problem
• System models will be covered on January 30
Scenarios
• Scenarios are descriptions of how a system is used in practice
• They are helpful in requirements elicitation as people can relate to these more readily than
to abstract statements of what they require from a system
• Scenarios are particularly useful for adding detail to an outline requirements description
Ethnography
• A social scientist spends a considerable time observing and analysing how people actually
work
• People do not have to explain or articulate their work
• Social and organisational factors of importance may be observed
• Ethnographic studies have shown that work is usually richer and more complex than
suggested by simple system models
Requirements validation
• Concerned with demonstrating that the requirements define the system that the customer
really wants
• Requirements error costs are high so validation is very important
• Fixing a requirements error after delivery may cost up to 100 times the cost of fixing
an implementation error
• Requirements checking
• Validity
• Consistency
• Completeness
• Realism
• Verifiability
Requirements management
• Requirements management is the process of managing changing requirements during the
requirements engineering process and system development
• Requirements are inevitably incomplete and inconsistent
• New requirements emerge during the process as business needs change and a better
understanding of the system is developed
• Different viewpoints have different requirements and these are often contradictory
Software prototyping
A prototype is an incomplete version of the software program being developed. Prototyping can also
be used by end users to describe and prove requirements that developers have not considered.
Benefits:
The software designer and implementer can obtain feedback from the users early in the
project. The client and the contractor can check whether the software being made matches the
software specification according to which the program is built.
It also allows the software engineer some insight into the accuracy of initial project
estimates and whether the deadlines and milestones proposed can be successfully met.
Process of prototyping
1. Identify basic requirements
Determine basic requirements including the input and output information desired. Details,
such as security, can typically be ignored.
2. Develop Initial Prototype
The initial prototype is developed that includes only user interfaces. (See Horizontal
Prototype, below)
3. Review
The customers, including end-users, examine the prototype and provide feedback on
additions or changes.
4. Revise and Enhance the Prototype
Using the feedback, both the specifications and the prototype can be improved. Negotiation
about what is within the scope of the contract/product may be necessary. If changes are
introduced then a repeat of steps #3 and #4 may be needed.
Dimensions of prototypes
1. Horizontal Prototype
It provides a broad view of an entire system or subsystem, focusing on user interaction more
than low-level system functionality, such as database access. Horizontal prototypes are useful
for:
• Confirmation of user interface requirements and system scope
• Develop preliminary estimates of development time, cost and effort.
2. Vertical Prototypes
A vertical prototype is a more complete elaboration of a single subsystem or function. It is
useful for obtaining detailed requirements for a given function, with the following benefits:
• Refinement of the database design
• Obtain information on data volumes and system interface needs, for network sizing and
performance engineering
Types of prototyping
Software prototyping has many variants. However, all the methods are in some way
based on two major types of prototyping: Throwaway Prototyping and Evolutionary Prototyping.
1. Throwaway prototyping
Also called close-ended prototyping. Throwaway refers to the creation of a model that
will eventually be discarded rather than becoming part of the final delivered software. After
preliminary requirements gathering is accomplished, a simple working model of the system is
constructed to visually show the users what their requirements may look like when they are
implemented into a finished system.
The most obvious reason for using Throwaway Prototyping is that it can be done quickly.
If the users can get quick feedback on their requirements, they may be able to refine them early
in the development of the software. Making changes early in the development lifecycle is
extremely cost effective since there is nothing at that point to redo. If a project is changed after
considerable work has been done, then small changes could require large efforts to implement
since software systems have many dependencies. Speed is crucial in implementing a throwaway
prototype, since with a limited budget of time and money little can be expended on a prototype
that will be discarded.
The strength of Throwaway Prototyping is its ability to construct interfaces that the users can
test. The user interface is what the user sees as the system, and by seeing it in front of them, it is
much easier to grasp how the system will work.
2. Evolutionary prototyping
Evolutionary Prototyping (also known as breadboard prototyping) is quite different from
Throwaway Prototyping. The main goal when using Evolutionary Prototyping is to build a very
robust prototype in a structured manner and constantly refine it. The reason for this is that the
evolutionary prototype, when built, forms the heart of the new system, and the improvements
and further requirements will be built upon it.
Evolutionary Prototypes have an advantage over Throwaway Prototypes in that they are
functional systems. Although they may not have all the features the users have planned, they
may be used on a temporary basis until the final system is delivered.
In Evolutionary Prototyping, developers can focus themselves to develop parts of the
system that they understand instead of working on developing a whole system. To minimize risk,
the developer does not implement poorly understood features. The partial system is sent to
customer sites. As users work with the system, they detect opportunities for new features and
give requests for these features to developers. Developers then take these enhancement requests
along with their own and use sound configuration-management practices to change the software-
requirements specification, update the design, recode and retest.
3. Incremental prototyping
The final product is built as separate prototypes. At the end the separate prototypes are
merged in an overall design.
4. Extreme prototyping
Extreme Prototyping as a development process is used especially for developing web
applications. Basically, it breaks down web development into three phases, each one based on
the preceding one. The first phase is a static prototype that consists mainly of HTML pages. In
the second phase, the screens are programmed and fully functional using a simulated services
layer. In the third phase the services are implemented. The process is called Extreme Prototyping
to draw attention to the second phase of the process, where a fully-functional UI is developed
with very little regard to the services other than their contract.
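A minimal Java sketch of the second-phase idea, where the UI codes against a service contract backed by a simulated implementation; the OrderService name, its method and the hard-coded value are assumptions made only for illustration, not part of any particular framework:

// Service contract the screens are programmed against in phase two.
interface OrderService {
    double totalFor(String orderId);
}

// Simulated services layer: returns canned data so the UI is fully functional.
class SimulatedOrderService implements OrderService {
    public double totalFor(String orderId) {
        return 42.50;   // hard-coded value standing in for the real back end
    }
}

// In phase three a real implementation replaces the simulation behind the same contract.
class RealOrderService implements OrderService {
    public double totalFor(String orderId) {
        throw new UnsupportedOperationException("implemented in phase three");
    }
}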
Advantages of prototyping
1. Reduced time and costs: Prototyping can improve the quality of requirements and
specifications provided to developers. Because changes cost exponentially more to implement as
they are detected later in development, the early determination of what the user really wants can
result in faster and less expensive software.
2. Improved and increased user involvement: Prototyping requires user involvement and
allows them to see and interact with a prototype, allowing them to provide better and more
complete feedback and specifications. The presence of the prototype being examined by the user
prevents many misunderstandings and miscommunications that occur when each side believes the
other understands what they said. Since users know the problem domain better than anyone on
the development team does, increased interaction can result in a final product that has greater
tangible and intangible quality. The final product is more likely to satisfy the users' desire for
look, feel and performance.
Disadvantages of prototyping
1. Insufficient analysis: The focus on a limited prototype can distract developers from properly
analyzing the complete project. This can lead to overlooking better solutions, preparation of
incomplete specifications or the conversion of limited prototypes into poorly engineered final
projects that are hard to maintain. Further, since a prototype is limited in functionality it may not
scale well if the prototype is used as the basis of a final deliverable, which may not be noticed if
developers are too focused on building a prototype as a model.
2. User confusion of prototype and finished system: Users can begin to think that a prototype,
intended to be thrown away, is actually a final system that merely needs to be finished or
polished. (They are, for example, often unaware of the effort needed to add error-checking and
security features which a prototype may not have.) This can lead them to expect the prototype to
accurately model the performance of the final system when this is not the intent of the
developers. Users can also become attached to features that were included in a prototype for
consideration and then removed from the specification for a final system. If users are able to
require all proposed features be included in the final system this can lead to conflict.
3. Developer misunderstanding of user objectives: Developers may assume that users share
their objectives (e.g. to deliver core functionality on time and within budget), without
understanding wider commercial issues. For example, user representatives attending Enterprise
software (e.g. PeopleSoft) events may have seen demonstrations of "transaction auditing" (where
changes are logged and displayed in a difference grid view) without being told that this feature
demands additional coding and often requires more hardware to handle extra database accesses.
Users might believe they can demand auditing on every field, whereas developers might think
this is feature creep because they have made assumptions about the extent of user requirements.
If the developer has committed delivery before the user requirements were reviewed, developers
are between a rock and a hard place, particularly if user management derives some advantage
from their failure to implement requirements.
4. Developer attachment to prototype: Developers can also become attached to prototypes they
have spent a great deal of effort producing; this can lead to problems like attempting to convert a
limited prototype into a final system when it does not have an appropriate underlying
architecture. (This may suggest that throwaway prototyping, rather than evolutionary
prototyping, should be used.)
5. Excessive development time of the prototype: A key property of prototyping is the fact that
it is supposed to be done quickly. If the developers lose sight of this fact, they very well may try
to develop a prototype that is too complex. When the prototype is thrown away the precisely
developed requirements that it provides may not yield a sufficient increase in productivity to
make up for the time spent developing the prototype. Users can become stuck in debates over
details of the prototype, holding up the development team and delaying the final product.
6. Expense of implementing prototyping: The start-up costs for building a development team
focused on prototyping may be high. Many companies have development methodologies in
place, and changing them can mean retraining, retooling, or both. Many companies tend to just
jump into the prototyping without bothering to retrain their workers as much as they should.
A common problem with adopting prototyping technology is high expectations for productivity
with insufficient effort behind the learning curve. In addition to training for the use of a
prototyping technique, there is an often overlooked need for developing corporate and project-
specific underlying structure to support the technology. When this underlying structure is
omitted, lower productivity can often result.
Methods
There are few formal prototyping methodologies even though most Agile Methods rely
heavily upon prototyping techniques.
1. Dynamic systems development method
Dynamic Systems Development Method (DSDM) is a framework for delivering business
solutions that relies heavily upon prototyping as a core technique, and is itself ISO 9001
approved. It expands upon most understood definitions of a prototype. According to DSDM the
prototype may be a diagram, a business process, or even a system placed into production. DSDM
prototypes are intended to be incremental, evolving from simple forms into more comprehensive
ones.
DSDM prototypes may be throwaway or evolutionary. Evolutionary prototypes may be evolved
horizontally (breadth then depth) or vertically (each section is built in detail with additional
iterations detailing subsequent sections). Evolutionary prototypes can eventually evolve into
final systems.
5. Scrum
Scrum is an agile method for project management. The approach was first described by
Takeuchi and Nonaka in "The New New Product Development Game" (Harvard Business
Review, Jan-Feb 1986).
Tools
Efficiently using prototyping requires that an organization have proper tools and a staff
trained to use those tools. Tools used in prototyping can vary from individual tools like 4th
generation programming languages used for rapid prototyping to complex integrated CASE
tools. 4th generation programming languages like Visual Basic and ColdFusion are frequently
used since they are cheap, well known and relatively easy and fast to use. CASE tools are often
developed or selected by the military or large organizations. Users may prototype elements of an
application themselves in a spreadsheet.
3. Sketchflow
SketchFlow, a feature of Microsoft Expression Studio Ultimate, gives the ability to quickly
and effectively map out and iterate the flow of an application UI, the layout of individual screens
and transition from one application state to another.
• Interactive Visual Tool
• Easy to learn
• Dynamic
• Provides an environment to collect feedback
4. Visual Basic
One of the most popular tools for Rapid Prototyping is Visual Basic (VB). Microsoft Access,
which includes a Visual Basic extensibility module, is also a widely accepted prototyping tool
that is used by many non-technical business analysts. Although VB is a programming language it
has many features that facilitate using it to create prototypes, including:
• An interactive/visual user interface design tool.
• Easy connection of user interface components to underlying functional behavior.
• Modifications to the resulting software are easy to perform.
5. Requirements Engineering Environment
It provides an integrated toolset for rapidly representing, building, and executing models
of critical aspects of complex systems.
It is currently used by the Air Force to develop systems. It is an integrated set of tools
that allows systems analysts to rapidly build functional, user interface, and performance
prototype models of system components. These modeling activities are performed to gain a
greater understanding of complex systems and lessen the impact that inaccurate requirement
specifications have on cost and scheduling during the system development process.
REE is composed of three parts. The first, called proto, is a CASE tool specifically
designed to support rapid prototyping. The second part is called the Rapid Interface Prototyping
System or RIP, which is a collection of tools that facilitate the creation of user interfaces. The
third part of REE is a user interface to RIP and proto that is graphical and intended to be easy to
use.
Rome Laboratory, the developer of REE, intended it to support their internal requirements
gathering methodology. Their method has three main parts:
• Elicitation from various sources (users, interfaces to other systems),
specification, and consistency checking
• Analysis that the needs of diverse users taken together do not conflict and are technically
and economically feasible
• Validation that requirements so derived are an accurate reflection of user needs.
6. LYMB
LYMB is an object-oriented development environment aimed at developing applications
that require combining graphics-based user interfaces, visualization, and rapid prototyping.
7. Non-relational environments
Non-relational definition of data (e.g. using Cache or associative models) can help make
end-user prototyping more productive by delaying or avoiding the need to normalize data at
every iteration of a simulation. This may yield earlier/greater clarity of business requirements,
though it does not specifically confirm that requirements are technically and economically
feasible in the target production system.
8. PSDL
PSDL is a prototype description language to describe real-time software.
System prototyping
• Prototyping is the rapid development of a system
• In the past, the developed system was normally thought of as inferior in some way to the
required system so further development was required
• Now, the boundary between prototyping and normal system development is blurred and
many systems are developed using an evolutionary approach
Uses of system prototypes
• The principal use is to help customers and developers understand the requirements for the
system
• Requirements elicitation. Users can experiment with a prototype to see how the
system supports their work
• Requirements validation. The prototype can reveal errors and omissions in the
requirements
• Prototyping can be considered as a risk reduction activity which reduces requirements risks
Prototyping benefits
• Misunderstandings between software users and developers are exposed
• Missing services may be detected and confusing services may be identified
• A working system is available early in the process
• The prototype may serve as a basis for deriving a system specification
• The system can support user training and system testing
Prototyping process
[Figure: The prototyping process — establish prototype objectives → define prototype functionality → develop prototype → evaluate prototype.]
Data Model
• Used to describe the logical structure of data processed by the system
• Entity-relation-attribute model sets out the entities in the system, the relationships between
these entities and the entity attributes
• Widely used in database design. Can readily be implemented using relational databases
• No specific notation provided in the UML but objects and associations can be used
Behavioural Model
• Behavioural models are used to describe the overall behaviour of a system
• Two types of behavioural model are shown here
• Data processing models that show how data is processed as it moves through the system
• State machine models that show the system's response to events
• Both of these models are required for a description of the system's behaviour
1. Data-processing models
• Data flow diagrams are used to model the system's data processing
• These show the processing steps as data flows through a system
• Intrinsic part of many analysis methods
• Simple and intuitive notation that customers can understand
• Show end-to-end processing of data
Structured Analysis
• The data-flow approach is typified by the Structured Analysis method (SA)
• Two major strategies dominate structured analysis
• 'Old' method popularised by DeMarco
• 'Modern' approach by Yourdon
DeMarco
• A top-down approach
• The analyst maps the current physical system onto the current logical data-flow
model
• The approach can be summarised in four steps:
• Analysis of current physical system
• Derivation of logical model
• Derivation of proposed logical model
• Implementation of new physical system
Method weaknesses
• They do not model non-functional system requirements.
• They do not usually include information about whether a method is appropriate for a given
problem.
• They may produce too much documentation.
• The system models are sometimes too detailed and difficult for users to understand.
CASE workbenches
• A coherent set of tools that is designed to support related software process activities such as
analysis, design or testing.
• Analysis and design workbenches support system modelling during both requirements
engineering and system design.
• These workbenches may support a specific design method or may provide support for
creating several different types of system model.
An analysis and design workbench
• Diagram editors
• Model analysis and checking tools
• Repository and associated query language
• Data dictionary
• Report definition and generation tools
• Forms definition tools
• Import/export translators
• Code generation tools
Data Dictionary
• Data dictionaries are lists of all of the names used in the system models. Descriptions of the
entities, relationships and attributes are also included
• Advantages
• Support name management and avoid duplication
• Store of organisational knowledge linking analysis, design and implementation
• Many CASE workbenches support data dictionaries
Design Models – 1:
• Data Design
– created by transforming the data dictionary and ERD into implementation data
structures
– requires as much attention as algorithm design
• Architectural Design
– derived from the analysis model and the subsystem interactions defined in the
DFD
• Interface Design
– derived from DFD and CFD
– describes how software elements communicate with
• other software elements
• other systems
• human users
Design Models – 2 :
• Procedure-level design
– created by transforming the structural elements defined by the software
architecture into procedural descriptions of software components
– Derived from information in the PSPEC, CSPEC, and STD
Design Principles – 1:
• Process should not suffer from tunnel vision – consider alternative approaches
• Design should be traceable to analysis model
• Do not try to reinvent the wheel
- use design patterns, i.e., reusable components
• Design should exhibit both uniformity and integration
• Should be structured to accommodate changes
Design Principles – 2 :
• Design is not coding and coding is not design
• Should be structured to degrade gently, when bad data, events, or operating conditions
are encountered
• Needs to be assessed for quality as it is being created
• Needs to be reviewed to minimize conceptual (semantic) errors
Design Concepts -1 :
• Abstraction
– allows designers to focus on solving a problem without being concerned about
irrelevant lower level details
Procedural abstraction is a named sequence of instructions that has a specific and limited
function,
e.g. open a door — "open" implies a long sequence of procedural steps.
Data abstraction is a named collection of data that describes a data object,
e.g. door type, opening mechanism, weight, dimensions.
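The door example can be made concrete with a small Java sketch (the class and field names are illustrative assumptions): Door is a data abstraction grouping the attributes that describe the object, while open() is a procedural abstraction that names a limited sequence of steps.

// Data abstraction: a named collection of data describing a door.
class Door {
    String type;
    String openingMechanism;
    double weight;
    double width, height;
}

// Procedural abstraction: "open" names a specific, limited sequence of steps.
class DoorController {
    void open(Door d) {
        // walk to door, reach for handle, turn handle, pull door, step away ...
        System.out.println("Opening a " + d.type + " door");
    }
}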
Design Concepts -2 :
• Design Patterns
– description of a design structure that solves a particular design problem within a
specific context and its impact when applied
Design Concepts -3 :
• Software Architecture
– the overall structure of the software components and the ways in which that structure
provides conceptual integrity for a system
Design Concepts -4 :
• Information Hiding
– information (data and procedure) contained within a module is inaccessible to
modules that have no need for such information
• Functional Independence
– achieved by developing modules with single-minded purpose and an aversion to
excessive interaction with other modules
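A minimal Java illustration of information hiding: the internal representation is private and reachable only through a narrow public interface, so it can change without affecting other modules. The TemperatureLog name and its methods are assumptions made for the example.

// Information hiding: callers can record and query readings, but the
// internal array and count are invisible and can change freely.
class TemperatureLog {
    private double[] readings = new double[1000];
    private int count = 0;

    public void record(double value) {
        if (count < readings.length) readings[count++] = value;
    }

    public double average() {
        double sum = 0;
        for (int i = 0; i < count; i++) sum += readings[i];
        return count == 0 ? 0 : sum / count;
    }
}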
Refactoring – Design concepts :
• Fowler [FOW99] defines refactoring in the following manner:
– "Refactoring is the process of changing a software system in such a way that it
does not alter the external behavior of the code [design] yet improves its internal
structure.ǁ
• When software is refactored, the existing design is examined for
– redundancy
– unused design elements
– inefficient or unnecessary algorithms
– poorly constructed or inappropriate data structures
– or any other design failure that can be corrected to yield a better design.
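A tiny before/after sketch of a refactoring in the sense defined above: the external behaviour (the price calculation) is unchanged, while a duplicated, unexplained expression is replaced by a named constant and method. The class names and the 20% rate are invented for illustration.

// Before: duplicated, inscrutable expression.
class InvoiceBefore {
    double total(double price, int qty) {
        return price * qty + price * qty * 0.2;   // what is 0.2?
    }
}

// After: same external behaviour, clearer internal structure.
class InvoiceAfter {
    private static final double VAT_RATE = 0.2;

    double total(double price, int qty) {
        double net = price * qty;
        return net + vat(net);
    }

    private double vat(double net) { return net * VAT_RATE; }
}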
Design Concepts – 4 :
• Objects
– encapsulate both data and data manipulation procedures needed to describe the
content and behavior of a real world entity
• Class
– generalized description (template or pattern) that describes a collection of similar
objects
• Inheritance
– provides a means for allowing subclasses to reuse existing superclass data and
procedures; also provides mechanism for propagating changes
Design Concepts – 5:
• Messages
– the means by which objects exchange information with one another
• Polymorphism
– a mechanism that allows several objects in a class hierarchy to have different
methods with the same name
– instances of each subclass will be free to respond to messages by calling their own
version of the method
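These concepts can be seen together in a short Java sketch (the Shape hierarchy is a standard textbook illustration, not taken from the notes): Shape is a class, Circle and Square reuse it through inheritance, area() is polymorphic, and calling area() on an object is sending it a message.

abstract class Shape {                 // class: template for similar objects
    String name;
    Shape(String name) { this.name = name; }
    abstract double area();            // polymorphic operation
}

class Circle extends Shape {           // inheritance: reuses Shape's data
    double r;
    Circle(double r) { super("circle"); this.r = r; }
    double area() { return Math.PI * r * r; }   // own version of the method
}

class Square extends Shape {
    double side;
    Square(double side) { super("square"); this.side = side; }
    double area() { return side * side; }
}

class ShapeDemo {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);     // object: data plus behaviour
        System.out.println(s.area());  // message: each subclass responds its own way
    }
}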
Layered Architecture:
• A number of different layers is defined, each accomplishing operations that progressively
become closer to the machine instruction set
• At the outer layer, components service user interface operations.
• At the inner layer, components perform operating system interfacing.
• Intermediate layers provide utility services and application software functions
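A schematic Java sketch of the layering idea, with each layer calling only the one beneath it; the class names and methods are assumptions used purely to show the structure.

// Outer layer: services user-interface operations.
class UiLayer {
    private final UtilityLayer utilities = new UtilityLayer();
    void onSaveClicked(String data) { utilities.save(data); }
}

// Intermediate layer: utility and application services.
class UtilityLayer {
    private final OsLayer os = new OsLayer();
    void save(String data) { os.writeFile("out.txt", data); }
}

// Inner layer: operating-system interfacing, closest to the machine.
class OsLayer {
    void writeFile(String path, String data) {
        // would delegate to the operating system (file I/O) in a real system
    }
}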
Architecture Tradeoff Analysis – 1:
1. Collect scenarios
2. Elicit requirements, constraints, and environmental description
3. Describe architectural styles/patterns chosen to address scenarios and requirements
• module view
• process view
• data flow view
Architecture Tradeoff Analysis – 2:
4. Evaluate quality attributes independently (e.g. reliability, performance, security,
maintainability, flexibility, testability, portability, reusability, interoperability)
5. Identify sensitivity points for architecture
• any attributes significantly affected by changes in the architecture
Refining Architectural Design:
• Processing narrative developed for each module
• Interface description provided for each module
• Local and global data structures are defined
• Design restrictions/limitations noted
• Design reviews conducted
• Refinement considered if required and justified
Architectural Design
• An early stage of the system design process.
• Represents the link between specification and design processes.
• Often carried out in parallel with some specification activities.
• It involves identifying major system components and their communications.
Advantages of explicit architecture
• Stakeholder communication
- Architecture may be used as a focus of discussion by system stakeholders.
• System analysis
- Means that analysis of whether the system can meet its non-functional requirements is
possible.
• Large-scale reuse
- The architecture may be reusable across a range of systems.
UI design principles
• User familiarity
• The interface should be based on user-oriented terms and concepts rather than
computer concepts
• E.g., an office system should use concepts such as letters, documents, folders etc.
rather than directories, file identifiers, etc.
• Consistency
• The system should display an appropriate level of consistency
• Commands and menus should have the same format, command punctuation should be
similar, etc.
• Minimal surprise
• If a command operates in a known way, the user should be able to predict the
operation of comparable commands
• Recoverability
• The system should provide some resilience to user errors and allow the user to recover
from errors
• User guidance
• Some user guidance such as help systems, on-line manuals, etc. should be supplied
• User diversity
• Interaction facilities for different types of user should be supported
• E.g., some users have seeing difficulties and so larger text should be available
User-system interaction
• Two problems must be addressed in interactive systems design
• How should information from the user be provided to the computer system?
• How should information from the computer system be presented to the user?
Interaction styles
• Direct manipulation
• Easiest to grasp with immediate feedback
• Difficult to program
• Menu selection
• User effort and errors minimized
• Large numbers and combinations of choices a problem
• Form fill-in
• Ease of use, simple data entry
• Tedious, takes a lot of screen space
• Natural language
• Great for casual users
• Tedious for expert users
Information presentation
• Information presentation is concerned with presenting system information to system users
• The information may be presented directly or may be transformed in some way for
presentation
• The Model-View-Controller approach is a way of supporting multiple presentations of data
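A compact Java sketch of the Model-View-Controller idea mentioned above: one model, two different presentations (views) of the same data, and controller-style input that changes the model. The class names and the temperature example are illustrative assumptions.

import java.util.ArrayList;
import java.util.List;

// Model: holds the data and notifies registered views when it changes.
class TemperatureModel {
    private double celsius;
    private final List<Runnable> views = new ArrayList<>();
    void addView(Runnable v) { views.add(v); }
    void set(double c) { celsius = c; views.forEach(Runnable::run); }
    double get() { return celsius; }
}

class MvcDemo {
    public static void main(String[] args) {
        TemperatureModel model = new TemperatureModel();
        // Two views: the same data presented in different ways.
        model.addView(() -> System.out.println("Text view: " + model.get() + " C"));
        model.addView(() -> System.out.println("Bar view:  " + "#".repeat((int) model.get())));
        // Controller: user input becomes a change to the model.
        model.set(21.0);
    }
}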
Information display
[Figure: Alternative methods of information display — dial with needle, pie chart, thermometer, horizontal bar.]
Textual highlighting
[Figure: Textual highlighting — a dialog box reading "The filename you have chosen has been used. Please choose another name." with OK and Cancel buttons.]
Data visualisation
• Concerned with techniques for displaying large amounts of information
• Visualisation can reveal relationships between entities and trends in the data
• Possible data visualisations are:
• Weather information
• State of a telephone network
• Chemical plant pressures and temperatures
• A model of a molecule
Colour displays
• Colour adds an extra dimension to an interface and can help the user understand complex
information structures
• Can be used to highlight exceptional events
• The use of colour to communicate meaning
Error messages
• Error message design is critically important. Poor error messages can mean that a user
rejects rather than accepts a system
• Messages should be polite, concise, consistent and constructive
• The background and experience of users should be the determining factor in
message design
User interface evaluation
• Some evaluation of a user interface design should be carried out to assess its suitability
• Full scale evaluation is very expensive and impractical for most systems
• Ideally, an interface should be evaluated against a usability specification
• However, it is rare for such specifications to be produced
Real-time systems:
• Systems which monitor and control their environment
• Inevitably associated with hardware devices
– Sensors: Collect data from the system environment
– Actuators: Change (in some way) the
system's environment
• Time is critical. Real-time systems MUST respond within specified times
Definition:
• A real-time system is a software system where the correct functioning of the system
depends on the results produced by the system and the time at which these results
are produced
• A 'soft' real-time system is a system whose operation is degraded if results are
not produced according to the specified timing requirements
• A 'hard' real-time system is a system whose operation is incorrect if results are
not produced according to the timing specification
Stimulus/Response Systems:
• Given a stimulus, the system must produce a response within a specified time
• Periodic stimuli. Stimuli which occur at predictable time intervals
– For example, a temperature sensor may be polled 10 times per second
• Aperiodic stimuli. Stimuli which occur at unpredictable times
– For example, a system power failure may trigger an interrupt which must be
processed by the system
Architectural considerations:
• Because of the need to respond to timing demands made by different stimuli/responses,
the system architecture must allow for fast switching between stimulus handlers
• Timing demands of different stimuli are different so a simple sequential loop is not
usually adequate
• Real-time systems are usually designed as cooperating processes with a real-time
executive controlling these processes
A real-time system model:
[Figure: A general real-time system model — sensors provide input to the real-time control system, which drives actuators.]
System elements:
• Sensor control processes
– Collect information from sensors. May buffer information collected in response to
a sensor stimulus
• Data processor
– Carries out processing of collected information and computes the system response
• Actuator control
– Generates control signals for the actuator
R-T systems design process:
• Identify the stimuli to be processed and the required responses to these stimuli
• For each stimulus and response, identify the timing constraints
• Aggregate the stimulus and response processing into concurrent processes. A process
may be associated with each class of stimulus and response
• Design algorithms to process each class of stimulus and response. These must meet the
given timing requirements
• Design a scheduling system which will ensure that processes are started in time to meet
their deadlines
• Integrate using a real-time executive or operating system
Timing constraints:
• May require extensive simulation and experiment to ensure that these are met by the
system
• May mean that certain design strategies such as object-oriented design cannot be used
because of the additional overhead involved
• May mean that low-level programming language features have to be used for
performance reasons
Real-time programming:
• Hard real-time systems may have to be programmed in assembly language to ensure that
deadlines are met
• Languages such as C allow efficient programs to be written but do not have constructs to
support concurrency or shared resource management
• Ada is a language designed to support real-time systems design, so it includes a general
purpose concurrency mechanism
Non-stop system components:
• Configuration manager
– Responsible for the dynamic reconfiguration of the system
software and hardware. Hardware modules may be replaced and software
upgraded without stopping the system
• Fault manager
– Responsible for detecting software and hardware faults and
taking appropriate actions (e.g. switching to backup disks) to ensure that the
system continues in operation
Burglar alarm system (example)
• A system is required to monitor sensors on doors and windows to detect the presence of
intruders in a building
• When a sensor indicates a break-in, the system switches on lights around the area and
calls police automatically
• The system should include provision for operation without a mains power supply
• Sensors
• Movement detectors, window sensors, door sensors.
• 50 window sensors, 30 door sensors and 200 movement detectors
• Voltage drop sensor
• Actions
• When an intruder is detected, police are called automatically.
• Lights are switched on in rooms with active sensors.
• An audible alarm is switched on.
• The system switches automatically to backup power when a voltage drop
is detected.
The R-T system design process:
• Identify stimuli and associated responses
• Define the timing constraints associated with each stimulus and response
• Allocate system functions to concurrent processes
• Design algorithms for stimulus processing and response generation
• Design a scheduling system which ensures that processes will always be scheduled
to meet their deadlines
Control systems:
• A burglar alarm system is primarily a monitoring system. It collects data from sensors
but there is no real-time actuator control
• Control systems are similar but, in response to sensor values, the system sends control
signals to actuators
• An example of a monitoring and control system is a system which monitors temperature
and switches heaters on and off
Data acquisition systems:
• Collect data from sensors for subsequent processing and analysis.
• Data collection processes and processing processes may have different periods
and deadlines.
• Data collection may be faster than processing e.g. collecting information about an
explosion.
• Circular or ring buffers are a mechanism for smoothing speed differences.
A temperature control system:
[Figure: A temperature control system — a 500 Hz sensor process supplies sensor values to a thermostat process, which issues switch commands (with a room number) to the heater control process, which in turn drives the furnace control process.]
System Design
• Design both the hardware and the software associated with the system. Partition functions to
either hardware or software
• Design decisions should be made on the basis of non-functional system requirements
• Hardware delivers better performance but potentially longer development and less scope for
change
System elements
• Sensor control processes
• Collect information from sensors. May buffer information collected in response to a
sensor stimulus
• Data processor
• Carries out processing of collected information and computes the system response
• Actuator control
• Generates control signals for the actuator
Sensor/actuator processes
[Figure: Sensor and actuator processes — a stimulus from the sensor is handled by a sensor process; an actuator process generates the response that drives the actuator.]
Timing constraints
• For aperiodic stimuli, designers make assumptions about probability of occurrence of stimuli.
• May mean that certain design strategies such as object-oriented design cannot be used
because of the additional overhead involved
[Figure: State machine model of a simple microwave oven — states include Waiting (do: display time), Half power (do: set power = 300), Full power, Set time (do: get number; exit: set time), Enabled (do: display 'Ready'), Disabled (do: display 'Waiting') and Operation (do: operate oven); transitions are triggered by events such as Half power, Full power, Number, Timer, Door open, Door closed, Start and Cancel, with a System fault condition.]
Executive components
• Real-time clock
• Provides information for process scheduling.
• Interrupt handler
• Manages aperiodic requests for service.
• Scheduler
• Chooses the next process to be run.
• Resource manager
• Allocates memory and processor resources.
• Dispatcher
• Starts process execution.
(Diagram of real-time executive components: real-time clock, scheduler, interrupt handler and the executing process.)
Process priority
• The processing of some types of stimuli must sometimes take priority
• Interrupt level priority. Highest priority which is allocated to processes requiring a very
fast response
• Clock level priority. Allocated to periodic processes
• Within these, further levels of priority may be assigned
Interrupt servicing
• Control is transferred automatically to a pre-determined memory location
• This location contains an instruction to jump to an interrupt service routine
• Further interrupts are disabled, the interrupt serviced and control returned to the
interrupted process
• Interrupt service routines MUST be short, simple and fast
Process management
• Concerned with managing the set of concurrent processes
• Periodic processes are executed at pre-specified time intervals
• The executive uses the real-time clock to determine when to execute a process
• Process period - time between executions
• Process deadline - the time by which processing must be complete
Process switching
• The scheduler chooses the next process to be executed by the processor. This depends on a
scheduling strategy which may take the process priority into account
• The resource manager allocates memory and a processor for the process to be executed
• The dispatcher takes the process from the ready list, loads it onto a processor and starts
execution
Scheduling strategies
• Non pre-emptive scheduling
• Once a process has been scheduled for execution, it runs to completion or until it is
blocked for some reason (e.g. waiting for I/O)
• Pre-emptive scheduling
• The execution of an executing process may be stopped if a higher priority process
requires service
• Scheduling algorithms
• Round-robin
• Shortest deadline first
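As a small illustration of the "shortest deadline first" idea, the sketch below keeps the ready list in a priority queue ordered by deadline, so the dispatcher always picks the task whose deadline is closest. The RTTask class, the task names and the millisecond deadlines are made up for illustration and are not part of these notes.

import java.util.Comparator;
import java.util.PriorityQueue;

public class DeadlineScheduler {

    // Hypothetical task record: a name and an absolute deadline in milliseconds.
    static class RTTask {
        final String name;
        final long deadlineMillis;
        RTTask(String name, long deadlineMillis) {
            this.name = name;
            this.deadlineMillis = deadlineMillis;
        }
    }

    public static void main(String[] args) {
        // Ready list ordered so the earliest deadline is at the head (shortest deadline first).
        PriorityQueue<RTTask> readyList =
            new PriorityQueue<>(Comparator.comparingLong((RTTask t) -> t.deadlineMillis));
        readyList.add(new RTTask("thermostat process", 2000));
        readyList.add(new RTTask("sensor polling", 500));
        readyList.add(new RTTask("heater control", 1000));

        // The dispatcher repeatedly takes the most urgent task from the ready list.
        while (!readyList.isEmpty()) {
            System.out.println("dispatch: " + readyList.poll().name);
        }
    }
}

A pre-emptive scheduler would, in addition, interrupt the running task whenever a task with an earlier deadline (or a higher priority) becomes ready.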
A ring buffer
(Diagram: a producer process adds sensor records to a circular buffer; a consumer process removes them.)
class CircularBuffer {
    int bufsize ;                       // capacity of the buffer
    SensorRecord [] store ;             // array holding the buffered sensor records
    CircularBuffer (int n) {
        bufsize = n ;
        store = new SensorRecord [bufsize] ;
    } // CircularBuffer
}
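Fleshing out the constructor above, a minimal sketch of a ring buffer that also enforces the mutual exclusion and full/empty blocking described earlier might look as follows. The class name BoundedBuffer, the SensorRecord fields and the use of wait/notifyAll are assumptions made for the sketch, not taken from these notes.

class SensorRecord {
    final int room, value;
    SensorRecord(int room, int value) { this.room = room; this.value = value; }
}

class BoundedBuffer {
    private final SensorRecord[] store;            // circular store
    private int head = 0, tail = 0, count = 0;

    BoundedBuffer(int n) { store = new SensorRecord[n]; }

    // Producer side: blocks while the buffer is full, then adds a record.
    synchronized void put(SensorRecord r) throws InterruptedException {
        while (count == store.length) wait();
        store[tail] = r;
        tail = (tail + 1) % store.length;
        count++;
        notifyAll();
    }

    // Consumer side: blocks while the buffer is empty, then removes the oldest record.
    synchronized SensorRecord get() throws InterruptedException {
        while (count == 0) wait();
        SensorRecord r = store[head];
        head = (head + 1) % store.length;
        count--;
        notifyAll();
        return r;
    }
}

public class BufferDemo {
    public static void main(String[] args) {
        BoundedBuffer buffer = new BoundedBuffer(8);
        Thread producer = new Thread(() -> {
            try { for (int i = 0; i < 20; i++) buffer.put(new SensorRecord(i % 4, i)); }
            catch (InterruptedException ignored) { }
        });
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < 20; i++) System.out.println("room " + buffer.get().room); }
            catch (InterruptedException ignored) { }
        });
        producer.start();
        consumer.start();
    }
}

Because put and get are synchronized on the buffer object, a producer and a consumer can never access the same element at the same time, which is exactly the mutual exclusion requirement stated above.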
Timing requirements
(Diagram: timing requirements for the burglar alarm system, showing the audible alarm process, the
lighting control process and the voice synthesizer process with their required rates.)
class BuildingMonitor extends Thread
{
    // sensor and actuator process objects (siren, lights, synthesizer, windows,
    // doors, movements, pm) are declared elsewhere in the class

    BuildingMonitor ()
    {
        // initialise all the sensors and start the processes
        siren.start () ; lights.start () ;
        synthesizer.start () ; windows.start () ;
        doors.start () ; movements.start () ; pm.start () ;
    }

    public void run ()
    {
        int room = 0 ;
        while (true)
        {
            // poll the movement sensors at least twice per second (400 Hz)
            move = movements.getVal () ;
            // poll the window sensors at least twice per second (100 Hz)
            win = windows.getVal () ;
            // poll the door sensors at least twice per second (60 Hz)
            door = doors.getVal () ;
            if (move.sensorVal == 1 || door.sensorVal == 1 || win.sensorVal == 1)
            {
                // a sensor has indicated an intruder: record the room involved
                if (move.sensorVal == 1) room = move.room ;
                if (door.sensorVal == 1) room = door.room ;
                if (win.sensorVal == 1) room = win.room ;
            }
        }
    } // run
} // BuildingMonitor
UNIT IV
TESTING
Taxonomy of Software Testing
• Classified by purpose, software testing can be divided into: correctness testing, performance
testing, reliability testing and security testing.
• Classified by life-cycle phase, software testing can be classified into the following
categories: requirements phase testing, design phase testing, program phase testing,
evaluating test results, installation phase testing, acceptance testing and maintenance testing.
• By scope, software testing can be categorized as follows: unit testing, component testing,
integration testing, and system testing.
Correctness testing
Correctness is the minimum requirement of software, the essential purpose of testing. It is
used to tell the right behavior from the wrong one. The tester may or may not know the inside
details of the software module under test, e.g. control flow, data flow, etc. Therefore, either a
white-box point of view or black-box point of view can be taken in testing software. We must
note that the black-box and white-box ideas are not limited to correctness testing only.
• Black-box testing
• White-box testing
Performance testing
Not all software systems have specifications on performance explicitly. But every system
will have implicit performance requirements. The software should not take infinite time or
infinite resource to execute. "Performance bugs" sometimes are used to refer to those design
problems in software that cause the system performance to degrade.
Performance has always been a great concern and a driving force of computer evolution.
Performance evaluation of a software system usually includes: resource usage, throughput,
stimulus-response time and queue lengths detailing the average or maximum number of tasks
waiting to be serviced by selected resources. Typical resources that need to be considered
include network bandwidth requirements, CPU cycles, disk space, disk access operations, and
memory usage. The goal of performance testing can be performance bottleneck identification,
performance comparison and evaluation, etc.
Reliability testing
Software reliability refers to the probability of failure-free operation of a system. It is
related to many aspects of software, including the testing process. Directly estimating software
reliability by quantifying its related factors can be difficult. Testing is an effective sampling
method to measure software reliability. Guided by the operational profile, software testing
(usually black-box testing) can be used to obtain failure data, and an estimation model can be
further used to analyze the data to estimate the present reliability and predict future reliability.
Therefore, based on the estimation, the developers can decide whether to release the software,
and the users can decide whether to adopt and use the software. Risk of using software can also
be assessed based on reliability information.
Security testing
Software quality, reliability and security are tightly coupled. Flaws in software can be
exploited by intruders to open security holes. With the development of the Internet, software
security problems are becoming even more severe.
Many critical software applications and services have integrated security measures against
malicious attacks. The purpose of security testing of these systems includes identifying and
removing software flaws that may potentially lead to security violations, and validating the
effectiveness of security measures. Simulated security attacks can be performed to find
vulnerabilities.
Acceptance testing
Testing to verify a product meets customer specified requirements. A customer usually
does this type of testing on a product that is developed externally.
Compatibility testing
This is used to ensure compatibility of an application or Web site with different browsers,
OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven
by an automated functional or regression test suite.
Conformance testing
This is used to verify implementation conformance to industry standards, producing tests
for the behavior of an implementation to be sure it provides the portability, interoperability,
and/or compatibility a standard defines.
Integration testing
Modules are typically code modules, individual applications, client and server
applications on a network, etc. Integration Testing follows unit testing and precedes system
testing.
Load testing
Load testing is a generic term covering Performance Testing and Stress Testing.
Performance testing
Performance testing can be applied to understand your application or WWW site's
scalability, or to benchmark the performance in an environment of third party products such as
servers and middleware for potential purchase. This sort of testing is particularly useful to
identify performance bottlenecks in high use applications. Performance testing generally
involves an automated test suite as this allows easy simulation of a variety of normal, peak, and
exceptional load conditions.
Regression testing
Similar in scope to a functional test, a regression test allows a consistent, repeatable
validation of each new release of a product or Web site. Such testing ensures reported product
defects have been corrected for each new release and that no new quality problems were
introduced in the maintenance process. Though regression testing can be performed manually, an
automated test suite is often used to reduce the time and resources needed to perform the
required testing.
System testing
Entire system is tested as per the requirements. Black-box type testing that is based on
overall requirements specifications, covers all combined parts of a system.
End-to-end testing
Similar to system testing, involves testing of a complete application environment in a
situation that mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing
Testing to determine whether a new software version is performing well enough to accept it
for a major testing effort. If the application crashes during initial use, the system is not stable
enough for further testing, and the build or application is sent back to be fixed.
Alpha testing
An in-house virtual user environment can be created for this type of testing. Testing is done
at the end of development. Minor design changes may still be made as a result of such testing.
Beta testing
Testing is typically done by end-users or others. This is the final testing before releasing
the application for commercial use.
Types of Errors:
• Algorithmic error.
• Computation & precision error.
• Documentation error.
• Capacity error or boundary error.
• Timing and coordination error.
• Throughput or performance error.
• Recovery error.
• Hardware & system software error.
• Standards & procedure errors.
Software Testability Checklist – 1:
• Operability
– if it works better it can be tested more efficiently
• Observability
– what you see is what you test
• Controllability
– the better the software can be controlled, the more its testing can be automated
and optimized
Software Testability Checklist – 2:
• Decomposability
– controlling the scope of testing allows problems to be isolated quickly and
retested intelligently
• Stability
– the fewer the changes, the fewer the disruptions to testing
• Understandability
– the more information that is known, the smarter the testing can be done
Good Test Attributes:
• A good test has a high probability of finding an error.
• A good test is not redundant.
• A good test should be best of breed.
• A good test should not be too simple or too complex.
Test Strategies:
• Black-box or behavioral testing
– knowing the specified function a product is to perform and demonstrating correct
operation based solely on its specification without regard for its internal logic
• White-box or glass-box testing
– knowing the internal workings of a product, tests are performed to check the
workings of all possible logic paths
White-Box Testing:
Basis Path Testing:
• White-box technique usually based on the program flow graph
• The cyclomatic complexity of the program is computed from its flow graph using the
formula V(G) = E – N + 2, or by counting the conditional statements in the PDL
representation and adding 1
• Determine the basis set of linearly independent paths (the cardinality of this set is the
program cyclomatic complexity)
• Prepare test cases that will force the execution of each path in the basis set.
Cyclomatic Complexity:
A number of industry studies have indicated that the higher V(G), the higher the probability of
errors.
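A small worked example (the classify method is hypothetical, not from these notes): counting the predicate statements and adding 1 gives the same V(G) as the E – N + 2 formula applied to one reasonable flow graph for the method.

public class ComplexityExample {
    // Predicates: the while condition, the if, and the else-if = 3, so V(G) = 3 + 1 = 4.
    // Drawing the flow graph with 8 nodes and 10 edges gives V(G) = E - N + 2 = 10 - 8 + 2 = 4.
    // A basis set therefore contains 4 linearly independent paths to cover with test cases.
    static int classify(int x, int limit) {
        int score = 0;
        while (x > 0) {              // predicate 1
            if (x % 2 == 0) {        // predicate 2
                score += 2;
            } else if (x > limit) {  // predicate 3
                score += 1;
            }
            x--;
        }
        return score;
    }
}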
Control Structure Testing – 1:
• White-box techniques focusing on control structures present in the software
• Condition testing (e.g. branch testing)
– focuses on testing each decision statement in a software module
– it is important to ensure coverage of all logical combinations of data that may be
processed by the module (a truth table may be helpful)
Control Structure Testing – 2:
• Data flow testing
– selects test paths based according to the locations of variable definitions and uses
in the program (e.g. definition use chains)
• Loop testing
– focuses on the validity of the program loop constructs (i.e. while, for, go to)
– involves checking to ensure loops start and stop when they are supposed to
(unstructured loops should be redesigned whenever possible)
Loop Testing: Simple Loops:
Minimum conditions—Simple Loops
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through the loop
where n is the maximum number of allowable passes
Loop Testing: Nested Loops:
Nested Loops
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer
loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue
this step until the outermost loop has been tested.
Concatenated Loops
If the loops are independent of one another
    then treat each as a simple loop
    else treat as nested loops*
end if
* for example, when the final loop counter value of loop 1 is
used to initialize loop 2.
Black-Box Testing:
Graph-Based Testing – 1:
• Black-box methods based on the nature of the relationships (links) among the program
objects (nodes), test cases are designed to traverse the entire graph
• Transaction flow testing
– nodes represent steps in some transaction and links represent logical connections
between steps that need to be validated
• Finite state modeling
– nodes represent user observable states of the software and links represent state
transitions
Graph-Based Testing – 2:
• Data flow modeling
– nodes are data objects and links are transformations of one data object to another
data object
• Timing modeling
– nodes are program objects and links are sequential connections between these
objects
– link weights are required execution times
Equivalence Partitioning:
• Black-box technique that divides the input domain into classes of data from which test
cases can be derived
• An ideal test case uncovers a class of errors that might require many arbitrary test cases
to be executed before a general error is observed
Equivalence Class Guidelines:
• If input condition specifies a range, one valid and two invalid equivalence classes are
defined
• If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined
• If an input condition specifies a member of a set, one valid and one invalid equivalence
class is defined
• If an input condition is Boolean, one valid and one invalid equivalence class is defined
Boundary Value Analysis – 1:
• Black-box technique
– focuses on the boundaries of the input domain rather than its center
• Guidelines:
– If input condition specifies a range bounded by values a and b, test cases should
include a and b, values just above and just below a and b
– If an input condition specifies a number of values, test cases should exercise
the minimum and maximum numbers, as well as values just above and just below
the minimum and maximum values
Boundary Value Analysis – 2
1. Apply guidelines 1 and 2 to output conditions, test cases should be designed to
produce the minimum and maximum output reports
2. If internal program data structures have boundaries (e.g. size limitations), be
certain to test the boundaries
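As a concrete (and entirely hypothetical) illustration of the two techniques, suppose an input must be an age in the range 1..100. Equivalence partitioning and boundary value analysis then suggest roughly the following test values:

public class AgeTestValues {
    public static void main(String[] args) {
        // Equivalence classes for "1 <= age <= 100":
        //   one valid class (inside the range) and two invalid classes (below and above it).
        int[] equivalencePartitioning = { 50, -5, 150 };

        // Boundary value analysis: values at and just around both boundaries.
        int[] boundaryValueAnalysis = { 0, 1, 2, 99, 100, 101 };

        for (int v : equivalencePartitioning) System.out.println("EP test value:  " + v);
        for (int v : boundaryValueAnalysis)   System.out.println("BVA test value: " + v);
    }
}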
Comparison Testing:
• Black-box testing for safety critical systems in which independently developed
implementations of redundant systems are tested for conformance to specifications
• Often equivalence class partitioning is used to develop a common set of test cases for
each implementation
Orthogonal Array Testing – 1:
• Black-box technique that enables the design of a reasonably small set of test cases that
provide maximum test coverage
• Focus is on categories of faulty logic likely to be present in the software component
(without examining the code)
Orthogonal Array Testing – 2:
• Priorities for assessing tests using an orthogonal array
– Detect and isolate all single mode faults
– Detect all double mode faults
– Multimode faults
Software Testing Strategies:
Strategic Approach to Testing – 1:
• Testing begins at the component level and works outward toward the integration of the
entire computer-based system.
• Different testing techniques are appropriate at different points in time.
• The developer of the software conducts testing and may be assisted by independent test
groups for large projects.
• The role of the independent tester is to remove the conflict of interest inherent when the
builder is testing his or her own product.
Strategic Approach to Testing – 2:
• Testing and debugging are different activities.
• Debugging must be accommodated in any testing strategy.
• Need to consider verification issues
– are we building the product right?
• Need to consider validation issues
– are we building the right product?
Verification vs validation:
• Verification:
"Are we building the product right?" The software should conform to its specification.
• Validation:
"Are we building the right product?" The software should do what the user really requires.
The V & V process:
• As a whole life-cycle process - V & V must be applied at each stage in the software
process.
• Has two principal objectives
– The discovery of defects in a system
– The assessment of whether or not the system is usable in an operational situation.
Strategic Testing Issues – 1:
• Specify product requirements in a quantifiable manner before testing starts.
• Specify testing objectives explicitly.
• Identify the user classes of the software and develop a profile for each.
• Develop a test plan that emphasizes rapid cycle testing.
Strategic Testing Issues – 2:
• Build robust software that is designed to test itself (e.g. use anti-bugging).
• Use effective formal reviews as a filter prior to testing.
• Conduct formal technical reviews to assess the test strategy and test cases.
Testing Strategy:
Unit Testing:
• Program reviews.
• Formal verification.
• Testing the program itself.
– black box and white box testing.
Black Box or White Box?:
• Maximum # of logic paths - determine if white box testing is possible.
• Nature of input data.
• Amount of computation involved.
• Complexity of algorithms.
Unit Testing Details:
• Interfaces tested for proper information flow.
• Local data are examined to ensure that integrity is maintained.
• Boundary conditions are tested.
• Basis path testing should be used.
• All error handling paths should be tested.
• Drivers and/or stubs need to be developed to test incomplete software.
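A minimal sketch of the last point: a stub stands in for a module the unit depends on but which is not yet written, and a driver sets the unit up, calls it and checks the result. All of the names (PriceService, DiscountCalculator and so on) are invented for the example.

// Dependency of the unit under test; the real implementation is not available yet.
interface PriceService { double priceOf(String item); }

// The unit under test.
class DiscountCalculator {
    private final PriceService prices;
    DiscountCalculator(PriceService prices) { this.prices = prices; }
    double discountedPrice(String item, double rate) {
        return prices.priceOf(item) * (1.0 - rate);
    }
}

public class DiscountCalculatorDriver {
    public static void main(String[] args) {
        // Stub: returns a canned price instead of calling the real pricing module.
        PriceService stub = item -> 100.0;

        // Driver: creates the unit, exercises it and checks the expected result.
        DiscountCalculator unit = new DiscountCalculator(stub);
        double actual = unit.discountedPrice("widget", 0.2);
        System.out.println(Math.abs(actual - 80.0) < 1e-9 ? "PASS" : "FAIL: got " + actual);
    }
}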
Regression Testing:
• A regression test suite contains 3 different classes of test cases
– A representative sample of existing test cases is used to exercise all software
functions.
– Additional test cases focusing on software functions likely to be affected by the
change.
– Test cases that focus on the changed software components.
Smoke Testing:
• Software components already translated into code are integrated into a build.
• A series of tests designed to expose errors that will keep the build from performing its
functions is created.
• The build is integrated with the other builds and the entire product is smoke tested daily
using either top-down or bottom-up integration.
Validation Testing:
• Ensure that each function or performance characteristic conforms to its specification.
• Deviations (deficiencies) must be negotiated with the customer to establish a means for
resolving the errors.
• Configuration review or audit is used to ensure that all elements of the software
configuration have been properly developed, cataloged, and documented to allow its
support during its maintenance phase.
Acceptance Testing:
• Making sure the software works correctly for intended user in his or her normal work
environment.
• Alpha test
– a version of the complete software is tested by the customer under the supervision of
the developer at the developer's site
• Beta test
– a version of the complete software is tested by the customer at his or her own site
without the developer being present
System Testing:
• Recovery testing
– checks the system's ability to recover from failures
• Security testing
– verifies that the system's protection mechanisms prevent improper penetration or data
alteration
• Stress testing
– program is checked to see how well it deals with abnormal resource demands
• Performance testing
– tests the run-time performance of software
Performance Testing:
• Stress test.
• Volume test.
• Configuration test (hardware & software).
• Compatibility.
• Regression tests.
• Security tests.
• Timing tests.
• Environmental tests.
• Quality tests.
• Recovery tests.
• Maintenance tests.
• Documentation tests.
• Human factors tests.
Testing Life Cycle:
• Establish test objectives.
• Design criteria (review criteria).
– Correct.
– Feasible.
– Coverage.
– Demonstrate functionality.
• Writing test cases.
• Testing test cases.
• Execute test cases.
• Evaluate test results.
Testing Tools:
• Simulators.
• Monitors.
• Analyzers.
• Test data generators.
Document Each Test Case:
• Requirement tested.
• Facet / feature / path tested.
• Person & date.
• Tools & code needed.
• Test data & instructions.
• Expected results.
• Actual test results & analysis
• Correction, schedule, and signoff.
Debugging:
• Debugging (removal of a defect) occurs as a consequence of successful testing.
• Some people better at debugging than others.
• Is the cause of the bug reproduced in another part of the program?
• What "next bug" might be introduced by the fix that is being proposed?
• What could have been done to prevent this bug in the first place?
Procedural programming
Procedural programming can sometimes be used as a synonym for imperative
programming (specifying the steps the program must take to reach the desired state), but can also
refer (as in this article) to a programming paradigm, derived from structured programming, based
upon the concept of the procedure call. Procedures, also known as routines, subroutines,
methods, or functions (not to be confused with mathematical functions, but similar to those used
in functional programming) simply contain a series of computational steps to be carried out. Any
given procedure might be called at any point during a program's execution, including by other
procedures or itself. Some good examples of procedural programs are the Linux Kernel, GIT,
Apache Server, and Quake III Arena.
Object-oriented programming
Object-oriented programming (OOP) is a programming paradigm that uses "objects" –
data structures consisting of data fields and methods together with their interactions – to design
applications and computer programs. Programming techniques may include features such as data
abstraction, encapsulation, modularity, polymorphism, and inheritance. Many modern
programming languages now support OOP.
An object-oriented program may thus be viewed as a collection of interacting objects, as
opposed to the conventional model, in which a program is seen as a list of tasks (subroutines) to
perform. In OOP, each object is capable of receiving messages, processing data, and sending
messages to other objects. Each object can be viewed as an independent 'machine' with a distinct
role or responsibility. The actions (or "methods") on these objects are closely associated with the
object. For example, OOP data structures tend to 'carry their own operators around with them' (or
at least "inherit" them from a similar object or class). In the conventional model, the data and
operations on the data don't have a tight, formal association.
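A small, made-up contrast of the two paradigms in Java: the procedural version keeps the data and the routine that works on it separate, while the object-oriented version packages the data together with its operations in a class.

public class ParadigmContrast {

    // Procedural style: a free-standing routine is applied to plain data.
    static double circleArea(double radius) {
        return Math.PI * radius * radius;
    }

    // Object-oriented style: the object carries its own operations around with it.
    static class Circle {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        double area() { return Math.PI * radius * radius; }
    }

    public static void main(String[] args) {
        System.out.println(circleArea(2.0));           // procedure call on the data
        System.out.println(new Circle(2.0).area());    // message sent to an object
    }
}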
Logic programming is, in its broadest sense, the use of mathematical logic for computer
programming. In this view of logic programming, which can be traced at least as far back as
John McCarthy's [1958] advice-taker proposal, logic is used as a purely declarative
representation language, and a theorem-prover or model-generator is used as the problem-solver.
The problem-solving task is split between the programmer, who is responsible only for ensuring
the truth of programs expressed in logical form, and the theorem-prover or model-generator,
which is responsible for solving problems efficiently.
Rapid Implementations
In the late 1990s as Y2K approached, customers demanded and consulting firms discovered
faster ways to implement packaged software applications. The rapid implementation became
possible for certain types of customers. The events that converged in the late 1990s to provide
faster implementations include the following:
• Many smaller companies couldn't afford the big ERP project. If the software vendors and
consulting firms were going to sell to the "middle market" companies, they had to
develop more efficient methods.
• Many dotcoms needed a financial infrastructure; ERP applications filled the need, and rapid
implementation methods provided the way.
• The functionality of the software improved a lot, many gaps were eliminated, and more
companies could implement with fewer customizations.
• After the big, complex companies implemented their ERP systems, the typical
implementation became less difficult.
• The number of skilled consultants and project managers increased significantly.
• Other software vendors started packaging preprogrammed integration points to the Oracle
ERP modules.
Rapid implementations focus on delivering a predefined set of functionality. A key set of
business processes is installed in a standard way to accelerate the implementation schedule.
These projects benefit from the use of preconfigured modules and predefined business processes.
You get to reuse the analysis and integration testing from other implementations, and you agree
to ignore all gaps by modifying your business to fit the software. Typically, the enterprise will be
allowed some control over key decisions such as the structure of the chart of accounts. Fixed
budgets are set for training, production support, and data conversions (a limited amount of data).
Phased Implementations
Phased implementations seek to break up the work of an ERP implementation project.
This technique can make the system more manageable and reduce risks, and costs in some cases,
to the enterprise. In the mid-1990s, 4 or 5 was about the maximum number of application
modules that could be launched into production at one time. If you bought 12 or 13 applications,
there would be a financial phase that would be followed by phases for the distribution and
manufacturing applications. As implementation techniques improved and Y2K pressures grew in
the late 1990s, more and more companies started launching most of their applications at the same
time. This method became known as the big-bang approach. Now, each company selects a
phased or big-bang approach based on its individual requirements.
Another approach to phasing can be employed by companies with business units at
multiple sites. With this technique, one business unit is used as a template, and all applications
are completely implemented in an initial phase lasting 10–14 months. Then, other sites
implement the applications in cookie-cutter fashion. The cookie-cutter phases are focused on
end-user training and the differences that a site has from the prototype site. The cookie-cutter
phas can be as short as 9–12 weeks, and these phases can be conducted at several sites
e
simultaneously. For your reference, we participated in an efficient project where 13 app lications
were implemented big bang–style in July at the Chicago site after about 8 months work. A site in
Malaysia went live in October. The Ireland site started up in November. After a holiday break,
the Atlanta business unit went live in February, and the final site in China started using the
applications in April. Implementing thirteen application modules at five sites in four countries in
sixteen months was pretty impressive.
Case Studies Illustrating Implementation Techniques
Some practical examples from the real world might help to illustrate some of the principles and
techniques of various software implementation methods. These case studies are composites from
about 60 implementation projects we have observed during the past 9 years.
Big companies often have a horrible time resolving issues and deciding on configuration
parameters because there is so much money involved and each of many sites might want to
control decisions about what it considers its critical success factors. For example, we once saw a
large company argue for over two months about the chart of accounts structure, while eight
consultants from two consulting firms tried to referee among the feuding operating units.
Another large company labored for more than six months to unify a master customer list for a
centralized receivables and decentralized order entry system.
Transition activities at large companies need special attention. Training end users can be
a logistical challenge and can require considerable planning. For example, if you have 800 users
to train and each user needs an average of three classes of two hours each and you have one
month, how many classrooms and instructors do you need? Another example is that loading data
from a legacy system can be a problem. If you have one million customers to load into Oracle
receivables at the rate of 5,000/hour and the database administrator allows you to load 20 hours
per day, you have a 10-day task.
Because they spend huge amounts of money on their ERP systems, many big companies
try to optimize the systems and capture specific returns on the investment. However, sometimes
companies can be incredibly insensitive and uncoordinated as they try to make money from their
ERP software. For example, one business announced at the beginning of a project that the
accounts payable department would be cut from 50 to 17 employees as soon as the system went
live. Another company decided to centralize about 30 accounting sites into one shared service
center and advised about 60 accountants that they would lose their jobs in about a year. Several
of the 60 employees were offered positions on the ERP implementation team.
Small companies have other problems when creating an implementation team. Occasionally, the
small company tries to put clerical employees on the team and they have problems with issue
resolution or some of the ERP concepts. In another case, one small company didn't create the
position of project manager. Each department worked on its own modules and ignored the
integration points, testing, and requirements of other users. When Y2K deadlines forced the
system startup, results were disastrous with a cost impact that doubled the cost of the entire
project.
Project team members at small companies sometimes have a hard time relating to the cost
of the implementation. We once worked with a company where the project manager (who was
also the database administrator) advised me within the first hour of our meeting that he thought
consulting charges of $3/minute were outrageous, and he couldn't rationalize how we could
possibly make such a contribution. We agreed a consultant could not contribute $3 in value each
and every minute to his project. However, when I told him we would be able to save him
$10,000/week and make the difference between success and failure, he realized we should get to
work.
Because the small company might be relatively simple to implement and the technical
staff might be inexperienced with the database and software, it is possible that the technical staff
will be on the critical path of the project. If the database administrator can't learn how to handle
the production database by the time the users are ready to go live, you might need to hire some
temporary help to enable the users to keep to the schedule. In addition, we often see small
companies with just a single database administrator who might be working 60 or more hours per
week. They feel they can afford to have more DBAs as employees, but they don't know how to
establish the right ratio of support staff to user requirements. These companies can burn out a
DBA quickly and then have to deal with the problem of replacing an important skill.
UNIT V
Software metric
• Any type of measurement which relates to a software system, process or
related documentation
• Lines of code in a program, the Fog index, number of person-days required to
develop a component.
• Allow the software and the software process to be quantified.
• May be used to predict product attributes or to control the software process.
• Product metrics can be used for general predictions or to identify anomalous components.
Metrics assumptions
• A software property can be measured.
• The relationship exists between what we can measure and what we want to know. We can
only measure internal attributes but are often more interested in external software
attributes.
• This relationship has been formalised and validated.
• It may be difficult to relate what can be measured to desirable external quality attributes.
Data collection
• A metrics programme should be based on a set of product and process data.
• Data should be collected immediately (not in retrospect) and, if possible, automatically.
• Three types of automatic data collection
• Static product analysis;
• Dynamic product analysis;
• Process data collation.
Data accuracy
• Don't collect unnecessary data
• The questions to be answered should be decided in advance and the required data
identified.
• Tell people why the data is being collected.
• It should not be part of personnel evaluation.
• Don't rely on memory
• Collect data when it is generated not after a project has finished.
Product metrics
• A quality metric should be a predictor of product quality.
• Classes of product metric
• Dynamic metrics which are collected by measurements made of a program in
execution;
• Static metrics which are collected by measurements made of the system
representations;
• Dynamic metrics help assess efficiency and reliability; static metrics help assess
complexity, understandability and maintainability.
Depth of inheritance tree – This represents the number of discrete levels in the inheritance
tree where sub-classes inherit attributes and operations (methods) from super-classes. The deeper
the inheritance tree, the more complex the design. Many different object classes may have to be
understood to understand the object classes at the leaves of the tree.
Method fan-in/fan-out – This is directly related to fan-in and fan-out as described above and
means essentially the same thing. However, it may be appropriate to make a distinction between
calls from other methods within the object and calls from external methods.
Weighted methods per class – This is the number of methods that are included in a class
weighted by the complexity of each method. Therefore, a simple method may have a complexity
of 1 and a large and complex method a much higher value. The larger the value for this metric,
the more complex the object class. Complex objects are more likely to be more difficult to
understand. They may not be logically cohesive so cannot be reused effectively as super-classes
in an inheritance tree.
Number of overriding operations – This is the number of operations in a super-class that are
over-ridden in a sub-class. A high value for this metric indicates that the super-class used may
not be an appropriate parent for the sub-class.
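A tiny, hypothetical hierarchy to make two of these metrics concrete: each additional level of subclassing increases the depth of inheritance tree, and each method a subclass redefines counts towards the number of overriding operations.

// Depth of inheritance tree (DIT), counting Sensor as level 1:
//   Sensor -> TemperatureSensor -> CalibratedTemperatureSensor, so the leaf class has DIT = 3.
class Sensor {
    int read() { return 0; }
}

class TemperatureSensor extends Sensor {
    @Override int read() { return 21; }                   // one overriding operation
}

class CalibratedTemperatureSensor extends TemperatureSensor {
    @Override int read() { return super.read() + 1; }     // overrides again at the next level down
}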
Measurement analysis
• It is not always obvious what data means
• Analysing collected data is very difficult.
• Professional statisticians should be consulted if available.
• Data analysis must take local circumstances into account.
Measurement surprises
• Reducing the number of faults in a program leads to an increased number of help desk calls
• The program is now thought of as more reliable and so has a wider more diverse
market. The percentage of users who call the help desk may have decreased but the
total may increase;
• A more reliable system is used in a different way from a system where users work
around the faults. This leads to more help desk calls.
ZIPF’s Law
• Zipf's Law is "the observation that the frequency of occurrence of some event (P), as a function
of the rank (i), when the rank is determined by the above frequency of occurrence, is a power-
law function Pi ~ 1/i^a with the exponent a close to unity (1)."
• Let P (a random variable) represented the frequency of occurrence of a keyword in a
program listing.
• It applies to computer programs written in any modern computer language.
• Even without empirical proof (it is an obvious finding), any computer program written
in any programming language has a power-law distribution, i.e., some keywords are used
more than others.
• Frequency of occurrence of events is inversely proportional to the rank in this frequency of
occurrence.
• When both are plotted on a log scale, the graph is a straight line.
• we create entities that don't exist except in computer memory at run time; we create logic
nodes that will never be tested because it's impossible to test every logic branch; we create
information flows in quantities that are humanly impossible to analyze with a glance;
• A software application is the combination of keywords within the context of a solution, not
just the quantity of keywords used in a program; capturing that context is not a trivial task
because the context of an application is attached to the problem being solved, and every
problem to be solved is different and needs a specific program to solve it.
• Although a program could be syntactically correct, it doesn't mean that the algorithms
implemented solve the problem at hand. What's more, a correct program can solve the wrong
problem. Let's say we have the simple requirement of printing "Hello, World!" A
syntactically correct solution in Java looks as follows:
• public class SayHello {
      public static void main(String[] args) {
          System.out.println("John Sena!");   // compiles and runs, but does not print "Hello, World!"
      }
  }
• This solution is obviously wrong because it doesn't solve the original requirement. This
means that the context of the solution within the problem being solved needs to be
determined to ensure its quality. In other words, we need to verify that the output matches the
original requirement.
• Even so, Zipf's Law can't say much about larger systems.
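A rough sketch of how the rank–frequency claim could be checked on a program's own source text; the file name and the keyword list are placeholders, and a real check would plot the printed rank/count pairs on log scales to look for the straight line mentioned above.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeywordZipf {
    public static void main(String[] args) throws Exception {
        String source = Files.readString(Paths.get("SomeProgram.java"));   // placeholder file name
        List<String> keywords = Arrays.asList(
            "public", "static", "int", "if", "else", "for", "while", "return", "new", "class");

        // Count how often each keyword occurs in the source text.
        Map<String, Integer> counts = new HashMap<>();
        for (String token : source.split("\\W+")) {
            if (keywords.contains(token)) counts.merge(token, 1, Integer::sum);
        }

        // Rank the keywords by frequency of occurrence (rank 1 = most frequent).
        List<Map.Entry<String, Integer>> ranked = new ArrayList<>(counts.entrySet());
        ranked.sort((a, b) -> b.getValue() - a.getValue());
        for (int i = 0; i < ranked.size(); i++) {
            System.out.println("rank " + (i + 1) + ": " + ranked.get(i).getKey()
                    + " occurs " + ranked.get(i).getValue() + " times");
        }
    }
}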
Software productivity
• A measure of the rate at which individual engineers involved in software development
produce software and associated documentation.
• Not quality-oriented although quality assurance is a factor in productivity assessment.
• Essentially, we want to measure useful functionality produced per time unit.
Productivity measures
• Size related measures based on some output from the software process. This may be lines of
delivered source code, object code instructions, etc.
• Function-related measures based on an estimate of the functionality of the delivered
software. Function-points are the best known of this type of measure.
Measurement problems
• Estimating the size of the measure (e.g. how many function points).
• Estimating the total number of programmer months that have elapsed.
• Estimating contractor productivity (e.g. documentation team) and incorporating
this estimate in overall estimate.
Lines of code
• The measure was first proposed when programs were typed on cards with one line per card;
• How does this correspond to statements in a language such as Java, where a statement can span
several lines or several statements can appear on one line?
Productivity comparisons
• The lower level the language, the more productive the programmer
• The same functionality takes more code to implement in a lower-level language than
in a high-level language.
• The more verbose the programmer, the higher the productivity
• Measures of productivity based on lines of code suggest that programmers who write
verbose code are more productive than programmers who write compact code.
COCOMO model
• An empirical model based on project experience.
• Well-documented, 'independent' model which is not tied to a specific software vendor.
• Long history from initial version published in 1981 (COCOMO-81) through various
instantiations to COCOMO 2.
• COCOMO 2 takes into account different approaches to software development, reuse, etc.
COCOMO 81 and COCOMO 2
• COCOMO 81 was developed with the assumption that a waterfall process would be used and
that all software would be developed from scratch.
• Since its formulation, there have been many changes in software engineering practice and
COCOMO 2 is designed to accommodate different approaches to software development.
COCOMO 2 models
• COCOMO 2 incorporates a range of sub-models that produce increasingly detailed software
estimates.
• The sub-models in COCOMO 2 are:
• Application composition model. Used when software is composed from existing
parts.
• Early design model. Used when requirements are available but design has not yet
started.
• Reuse model. Used to compute the effort of integrating reusable components.
• Post-architecture model. Used once the system architecture has been designed and
more information about the system is available.
Multipliers
• Multipliers reflect the capability of the developers, the non-functional requirements, the
familiarity with the development platform, etc.
• RCPX - product reliability and complexity;
• RUSE - the reuse required;
• PDIF - platform difficulty;
• PREX - personnel experience;
• PERS - personnel capability;
• SCED - required schedule;
• FCIL - the team support facilities.
Post-architecture level
• Uses the same formula as the early design model but with 17 rather than 7 associated
multipliers.
• The code size is estimated as:
• Number of lines of new code to be developed;
• Estimate of equivalent number of lines of new code computed using the reuse model;
• An estimate of the number of lines of code that have to be modified according to
requirements changes.
The exponent term
• This depends on 5 scale factors (see next slide). Their sum/100 is added to 1.01
• A company takes on a project in a new domain. The client has not defined the process to be
used and has not allowed time for risk analysis. The company has a CMM level 2 rating.
• Precedentedness - new project (4)
• Development flexibility - no client involvement - very high (1)
• Architecture/risk resolution - no risk analysis - very low (5)
• Team cohesion - new team - nominal (3)
• Process maturity - some control - nominal (3)
• The scale factors sum to 16, so the exponent is 1.01 + 16/100 = 1.17.
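The exponent calculation in this example can be checked directly. The effort line at the end assumes the usual COCOMO II form PM = A × Size^B × M, with made-up values for A, the size and the combined multiplier M, so it is only meant to show where the 1.17 exponent is used.

public class Cocomo2Exponent {
    public static void main(String[] args) {
        // Scale factor ratings from the example above.
        double precedentedness        = 4;   // new project
        double developmentFlexibility = 1;   // no client involvement - very high
        double architectureRisk       = 5;   // no risk analysis - very low
        double teamCohesion           = 3;   // new team - nominal
        double processMaturity        = 3;   // some process control - nominal

        double sum = precedentedness + developmentFlexibility + architectureRisk
                   + teamCohesion + processMaturity;          // = 16
        double exponent = 1.01 + sum / 100.0;                 // = 1.17
        System.out.println("Exponent B = " + exponent);

        // Illustrative only: A = 2.94, Size = 128 KSLOC and M = 1.0 are assumed values.
        double effort = 2.94 * Math.pow(128, exponent) * 1.0;
        System.out.printf("Estimated effort ~ %.0f person-months%n", effort);
    }
}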
Multipliers
• Product attributes
• Concerned with required characteristics of the software product being developed.
• Computer attributes
• Constraints imposed on the software by the hardware platform.
• Personnel attributes
• Multipliers that take the experience and capabilities of the people working on the
project into account.
• Project attributes
• Concerned with the particular characteristics of the software development project.
Delphi method
The Delphi method is a systematic, interactive forecasting method which relies on a panel
of experts. The experts answer questionnaires in two or more rounds. After each round, a
facilitator provides an anonymous summary of the experts' forecasts from the previous round as
well as the reasons they provided for their judgments. Thus, experts are encouraged to revise
their earlier answers in light of the replies of other members of their panel. It is believed that
during this process the range of the answers will decrease and the group will converge towards
the "correct" answer. Finally, the process is stopped after a pre-defined stop criterion (e.g.
number of rounds, achievement of consensus, stability of results) and the mean or median scores
of the final rounds determine the results.
The Delphi Technique is an essential project management technique that refers to an
information gathering technique in which the opinions of those whose opinions are most
valuable, traditionally industry experts, are solicited, with the ultimate hope and goal of attaining a
consensus. Typically, the polling of these industry experts is done on an anonymous basis, in
hopes of attaining opinions that are unfettered by fears or identifiability. The experts are
presented with a series of questions in regard to the project, which is typically, but not always,
presented to the expert by a third-party facilitator, in hopes of eliciting new ideas regarding
specific project points. The responses from all experts are typically combined in the form of an
overall summary, which is then provided to the experts for a review and for the opportunity to
make further comments. This process typically results in consensus within a number of rounds,
and this technique typically helps minimize bias, and minimizes the possibility that any one
person can have too much influence on the outcomes.
Key characteristics
The following key characteristics of the Delphi method help the participants to focus on
the issues at hand and separate Delphi from other methodologies:
• Structuring of information flow
The initial contributions from the experts are collected in the form of answers to
questionnaires and their comments to these answers. The panel director controls the interactions
among the participants by processing the information and filtering out irrelevant content. This
avoids the negative effects of face-to-face panel discussions and solves the usual problems of
group dynamics.
• Regular feedback
Participants comment on their own forecasts, the responses of others and on the progress
of the panel as a whole. At any moment they can revise their earlier statements. While in regular
group meetings participants tend to stick to previously stated opinions and often conform too
much to the group leader, the Delphi method prevents this.
• Anonymity of the participants
Usually all participants maintain anonymity. Their identity is not revealed even after the
completion of the final report. This stops them from dominating others in the process using their
authority or personality, frees them to some extent from their personal biases, minimizes the
"bandwagon effect" or "halo effect", allows them to freely express their opinions, and
encourages open critique and admitting errors by revising earlier judgments.
The first step is to found a steering committee (if you need one) and a management team
with sufficient capacity for the process. Then expert panels to prepare and formulate the
statements are helpful, unless it is decided to let that be done by the management team. The
whole procedure has to be fixed in advance: Do you need panel meetings, or do the teams work
virtually? Is the questionnaire an electronic or a paper one? This means that logistics (from
Internet programming to typing up the results from the paper versions) have to be organised. Will
there be follow-up workshops, interviews, presentations? If yes, these also have to be organised
and prepared. Printing of brochures, leaflets, questionnaires and reports also has to be considered.
The last organisational point is the interface with the financing organisation if this is different from
the management team.
Scheduling
Scheduling Principles
• compartmentalization—define distinct tasks
• interdependency—indicate task interrelationship
• effort validation—be sure resources are available
• defined responsibilities—people must be assigned
• defined outcomes—each task must have an output
• defined milestones—review for quality
(Figure: effort plotted against delivery time, annotated with the relationship Ea = m (td / ta)^4 and the effort levels Ea and Eo.)
Empirical Relationship: P vs E
Given Putnam's Software Equation, E = L^3 / (P^3 t^4):
• Consider a project estimated at 33 KLOC and 12 person-years of effort, with a productivity
parameter P of 10K; the completion time works out to about 1.3 years.
• If the deadline can be extended to 1.75 years,
E = L^3 / (P^3 t^4) ≈ 3.8 person-years vs 12 person-years.
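The two figures quoted above follow directly from the software equation; a quick check with the same numbers (L = 33,000 lines, P = 10,000, E = 12 person-years):

public class PutnamCheck {
    public static void main(String[] args) {
        double L = 33_000;   // delivered lines of code
        double P = 10_000;   // productivity parameter
        double E = 12;       // person-years of effort

        // E = L^3 / (P^3 * t^4)  =>  t = (L^3 / (P^3 * E))^(1/4)
        double t = Math.pow(Math.pow(L, 3) / (Math.pow(P, 3) * E), 0.25);
        System.out.printf("Completion time ~ %.1f years%n", t);                          // ~1.3 years

        // Extending the deadline to 1.75 years sharply reduces the required effort.
        double tExtended = 1.75;
        double eExtended = Math.pow(L, 3) / (Math.pow(P, 3) * Math.pow(tExtended, 4));
        System.out.printf("Effort at t = 1.75 years ~ %.1f person-years%n", eExtended);  // ~3.8
    }
}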
Timeline Charts
Effort Allocation
• "front end" activities
• customer communication
• analysis
• design
• review and modification
• construction activities
• coding or code generation
• testing and installation
• unit, integration
• white-box, black box
• regression
Problem
• Assume you are a software project manager and that you've been asked to compute earned
value statistics for a small software project. The project has 56 planned work tasks that are
estimated to require 582 person-days to complete. At the time that you've been asked to do
the earned value analysis, 12 tasks have been completed. However, the project schedule
indicates that 15 tasks should have been completed. The following scheduling data (in
person-days) are available:
• Task Planned Effort Actual Effort
• 1 12 12.5
• 2 15 11
• 3 13 17
• 4 8 9.5
• 5 9.5 9.0
• 6 18 19
• 7 10 10
• 8 4 4.5
• 9 12 10
• 10 6 6.5
• 11 5 4
• 12 14 14.5
• 13 16
• 14 6
• 15 8
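A sketch of the earned-value arithmetic for this problem, using the usual definitions (BCWS = planned effort of the work scheduled so far, BCWP = planned effort of the work actually completed, SPI = BCWP/BCWS, CPI = BCWP/ACWP); the definitions themselves are standard and are not spelled out in the notes.

public class EarnedValueCheck {
    public static void main(String[] args) {
        // Planned effort (person-days) for tasks 1..15, taken from the table above.
        double[] planned = { 12, 15, 13, 8, 9.5, 18, 10, 4, 12, 6, 5, 14, 16, 6, 8 };
        // Actual effort for the 12 tasks completed so far.
        double[] actual  = { 12.5, 11, 17, 9.5, 9, 19, 10, 4.5, 10, 6.5, 4, 14.5 };

        double bcws = 0, bcwp = 0, acwp = 0;
        for (int i = 0; i < 15; i++) bcws += planned[i];   // 15 tasks should have been completed
        for (int i = 0; i < 12; i++) bcwp += planned[i];   // 12 tasks actually completed
        for (double a : actual) acwp += a;

        System.out.println("BCWS = " + bcws + ", BCWP = " + bcwp + ", ACWP = " + acwp);
        System.out.printf("SPI = %.2f, CPI = %.2f%n", bcwp / bcws, bcwp / acwp);

        // With 582 person-days budgeted at completion, percent complete = BCWP / BAC.
        System.out.printf("Percent complete ~ %.1f%%%n", 100 * bcwp / 582);
    }
}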
Error Tracking
• Schedule Tracking
• conduct periodic project status meetings in which each team member reports progress
and problems.
• evaluate the results of all reviews conducted throughout the software engineering
process.
• determine whether formal project milestones (diamonds in previous slide) have been
accomplished by the scheduled date.
• compare actual start-date to planned start-date for each project task listed in the
resource table
• meet informally with practitioners to obtain their subjective assessment of progress to
date and problems on the horizon.
• use earned value analysis to assess progress quantitatively.
• Progress on an OO Project-I
• Technical milestone: OO analysis completed
• All classes and the class hierarchy have been defined and reviewed.
• Class attributes and operations associated with a class have been defined and
reviewed.
• Class relationships (Chapter 8) have been established and reviewed.
• A behavioral model (Chapter 8) has been created and reviewed.
• Reusable classes have been noted.
• Technical milestone: OO design completed
• The set of subsystems (Chapter 9) has been defined and reviewed.
• Classes are allocated to subsystems and reviewed.
• Task allocation has been established and reviewed.
• Responsibilities and collaborations (Chapter 9) have been identified.
• Attributes and operations have been designed and reviewed.
• The communication model has been created and reviewed.
• Progress on an OO Project-II
• Technical milestone: OO programming completed
• Each new class has been implemented in code from the design model.
• Extracted classes (from a reuse library) have been implemented.
• Prototype or increment has been built.
• Technical milestone: OO testing
• The correctness and completeness of OO analysis and design models has been
reviewed.
• A class-responsibility-collaboration network (Chapter 8) has been developed and
reviewed.
• Test cases are designed and class-level tests (Chapter 14) have been conducted for
each class.
• Test cases are designed and cluster testing (Chapter 14) is completed and the classes
are integrated.
• System level tests have been completed.
Elements of SCM
• Component element
- Tools coupled with file management
• Process element
-Procedures define change management
• Construction element
-Automate construction of software
• Human elements
-Give guidance for activities and process features
Baselines
• A work product becomes a baseline only after it is reviewed and approved.
• Before baseline – changes informal
• Once a baseline is established each change request must be evaluated and verified before it is
processed.
Software Configuration Items
• SCI
• Document
• Test cases
• Program component
• Editors, compilers, browsers
– Used to produce documentation.
Importance of evolution
• Organizations have huge investments in their software systems - they are critical business
assets.
• To maintain the value of these assets to the business, they must be changed and updated.
• The majority of the software budget in large companies is devoted to evolving existing
software rather than developing new software.
Software change
• Software change is inevitable
• New requirements emerge when the software is used;
• The business environment changes;
• Errors must be repaired;
• New computers and equipment is added to the system;
• The performance or reliability of the system may have to be improved.
• A key problem for organisations is implementing and managing change to their existing
software systems.
Lehman’s laws
Continuing change – A program that is used in a real-world environment necessarily must change
or become progressively less useful in that environment.
Increasing complexity – As an evolving program changes, its structure tends to become more
complex. Extra resources must be devoted to preserving and simplifying the structure.
Large program evolution – Program evolution is a self-regulating process. System attributes such
as size, time between releases and the number of reported errors are approximately invariant for
each system release.
Organisational stability – Over a program's lifetime, its rate of development is approximately
constant and independent of the resources devoted to system development.
Conservation of familiarity – Over the lifetime of a system, the incremental change in each
release is approximately constant.
Continuing growth – The functionality offered by systems has to continually increase to maintain
user satisfaction.
Declining quality – The quality of systems will appear to be declining unless they are adapted to
changes in their operational environment.
Feedback system – Evolution processes incorporate multi-agent, multi-loop feedback systems
and you have to treat them as feedback systems to achieve significant product improvement.
Software maintenance
• Modifying a program after it has been put into use or delivered.
• Maintenance does not normally involve major changes to the system's architecture.
• Changes are implemented by modifying existing components and adding new components to
the system.
• Maintenance is inevitable
• The system requirements are likely to change while the system is being developed because
the environment is changing. Therefore a delivered system won't meet its requirements!
• Systems are tightly coupled with their environment. When a system is installed in an
environment it changes that environment and therefore changes the system
requirements.
• Systems must therefore be maintained if they are to remain useful in an environment.
Types of maintenance
• Maintenance to repair software faults
• Code, design and requirements errors
• Code and design errors are relatively cheap to fix; requirements errors are the most expensive.
• Maintenance to adapt software to a different operating environment
• Changing a system's hardware and other support so that it operates in a
different environment (computer, OS, etc.) from its initial implementation.
• Maintenance to add to or modify the system's functionality
• Modifying the system to satisfy new requirements arising from organisational or business change.
Maintenance costs
• Usually greater than development costs (2× to 100× depending on the application).
• Affected by both technical and non-technical factors.
• Increases as software is maintained. Maintenance corrupts the software structure so
makes further maintenance more difficult.
• Ageing software can have high support costs
(e.g. old languages, compilers etc.).
Development/maintenance costs
Maintenance prediction
• Maintenance prediction is concerned with assessing which parts of the system may
cause problems and have high maintenance costs
• Change acceptance depends on the maintainability of the components affected by
the change;
• Implementing changes degrades the system structure and reduces its
maintainability;
• Maintenance costs depend on the number of changes and costs of change depend
on maintainability.
Change prediction
• Predicting the number of changes requires an understanding of the relationships between a
system and its environment.
• Tightly coupled systems require changes whenever the environment is changed.
• Factors influencing this relationship are
• Number and complexity of system interfaces;
• Number of inherently volatile system requirements;
• The business processes where the system is used.
Complexity metrics
• Predictions of maintainability can be made by assessing the complexity of system
components.
• Studies have shown that most maintenance effort is spent on a relatively small number of complex system components.
• Maintenance costs can be reduced by replacing complex components with simpler alternatives.
• Complexity depends on
• Complexity of control structures;
• Complexity of data structures;
• Object, method (procedure) and module size.
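The notes do not name a particular metric, but McCabe's cyclomatic complexity is one common way to measure the control-structure complexity listed above. The sketch below is illustrative only (the counter and its handling of boolean operators are assumptions, not part of the notes); it uses Python's standard ast module to count decision points in a piece of source code.

import ast

# Node types that add a decision point (an extra path) to the control flow.
_DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, _DECISION_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b or c' adds one decision per extra operand.
            complexity += len(node.values) - 1
    return complexity

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # 3: the base path plus two branches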
Process metrics
• Process measurements may be used to assess maintainability
• Number of requests for corrective maintenance;
• Average time required for impact analysis;
• Average time taken to implement a change request;
• Number of outstanding change requests.
• If any or all of these are increasing, this may indicate a decline in maintainability.
• In the COCOMO 2 model, maintenance effort is estimated as the effort to understand the existing code plus the effort to develop the new code.
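As a rough illustration of how the process measurements listed above might be collected (the record layout and field names here are assumptions, not taken from any particular tool), this minimal sketch derives the number of corrective requests, the average time to implement a change and the count of outstanding requests from a simple change-request log.

from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import Optional

@dataclass
class ChangeRequest:
    raised: date                      # date the request was logged
    completed: Optional[date] = None  # None while the request is outstanding
    corrective: bool = False          # True for fault-repair requests

def maintainability_indicators(requests: list) -> dict:
    """Summarise the process measurements suggested above."""
    done = [r for r in requests if r.completed is not None]
    return {
        "corrective_requests": sum(r.corrective for r in requests),
        "avg_days_to_implement":
            mean((r.completed - r.raised).days for r in done) if done else None,
        "outstanding_requests": sum(r.completed is None for r in requests),
    }

# Example log: three requests, one still open.
log = [
    ChangeRequest(date(2024, 1, 5), date(2024, 1, 12), corrective=True),
    ChangeRequest(date(2024, 1, 20), date(2024, 2, 3)),
    ChangeRequest(date(2024, 2, 10)),
]
print(maintainability_indicators(log))
# {'corrective_requests': 1, 'avg_days_to_implement': 10.5, 'outstanding_requests': 1}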
Project management
Objectives
• To explain the main tasks undertaken by project managers
• To introduce software project management and to describe its distinctive characteristics
• To discuss project planning and the planning process
• To show how graphical schedule representations are used by project management
• To discuss the notion of risks and the risk management process
Software project management
• Concerned with activities involved in ensuring that software is delivered on time and on
schedule and in accordance with the requirements of the organisations developing
and procuring the software.
• Project management is needed because software development is always subject to budget
and schedule constraints that are set by the organisation developing the software.
Project planning
• Probably the most time-consuming project management activity.
• Continuous activity from initial concept through to system delivery. Plans must be
regularly revised as new information becomes available.
• Various different types of plan may be developed to support the main software project
plan that is concerned with schedule and budget.
Types of project plan:
• Quality plan: describes the quality procedures and standards that will be used in a project.
• Validation plan: describes the approach, resources and schedule used for system validation.
• Configuration management plan: describes the configuration management procedures and structures to be used.
• Maintenance plan: predicts the maintenance requirements of the system, maintenance costs and effort required.
• Development plan: describes how the skills and experience of the project team members will be developed.
Project scheduling
• Split project into tasks and estimate time and resources required to complete each task.
• Organize tasks concurrently to make
optimal use of workforce.
• Minimize task dependencies to avoid delays
caused by one task waiting for another to complete.
• Dependent on project managers' intuition and experience.
The project scheduling process
Scheduling problems
• Estimating the difficulty of problems and hence the cost of developing a solution is hard.
• Productivity is not proportional to the number of people working on a task.
• Adding people to a late project makes it later because of communication overheads (see the sketch after this list).
• The unexpected always happens. Always allow contingency in planning.
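One way to see why productivity does not scale with team size: a team of n people has n(n-1)/2 pairwise communication channels, so coordination overhead grows much faster than capacity. The team sizes below are chosen only for illustration.

def communication_channels(n: int) -> int:
    """Pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

for team_size in (3, 5, 10, 20):
    # Overhead grows roughly with the square of the team size.
    print(f"{team_size} people -> {communication_channels(team_size)} channels")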
Activity timeline
[Figure: activity network and bar chart showing tasks T1–T12 with their durations, milestones M1–M8, and the project running from its start date to the finish on 19/9/03.]
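Only fragments of the activity chart above survive, so the tasks, durations and dependencies in the sketch below are hypothetical; it shows the kind of earliest-finish calculation a scheduling tool performs over a task dependency graph.

# Hypothetical tasks: name -> (duration in days, predecessor tasks).
TASKS = {
    "T1": (8, []),
    "T2": (15, []),
    "T3": (15, ["T1"]),
    "T4": (10, []),
    "T8": (25, ["T4"]),
    "T12": (10, ["T3", "T8"]),
}

def earliest_finish(tasks: dict) -> dict:
    """Earliest finish day for each task, assuming the project starts at day 0."""
    finish = {}

    def visit(name: str) -> int:
        if name not in finish:
            duration, preds = tasks[name]
            start = max((visit(p) for p in preds), default=0)
            finish[name] = start + duration
        return finish[name]

    for name in tasks:
        visit(name)
    return finish

schedule = earliest_finish(TASKS)
print(schedule)
print("Minimum project length:", max(schedule.values()), "days")  # 45 days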
Staff allocation
[Figure: staff allocation chart showing which tasks (T1–T12) are assigned to Fred, Jane, Anne, Jim and Mary week by week from 4/7 to 19/9.]
Risk management
• Risk management - identifying risks and drawing up plans to minimise their effect on a
project.
• A risk is a probability that some adverse circumstance will occur
• Project risks: affect the project schedule or resources, e.g. loss of an experienced designer.
• Product risks: affect the quality or performance of the software being developed, e.g. failure of a purchased component to perform as expected.
• Business risks: affect the organisation developing or procuring the software, e.g. a competitor introducing a new product.
Risk identification
• Discovering possible risks to the project; common risk types include:
• Technology risks.
• People risks.
• Organisational risks.
• Tool risks.
• Requirements risks.
• Estimation risks.
Risk analysis
• Make judgement about probability and seriousness of each identified risk.
• Made by experienced project managers
• Probability may be very low (<10%), low (10–25%), moderate (25–50%), high (50–75%) or very high (>75%). Probabilities are estimated as ranges rather than precise values.
• Risk effects might be catastrophic, serious, tolerable or insignificant.
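A minimal sketch of a risk register built on the probability bands and effect levels above; the ranking rule and the particular ratings chosen are assumptions made for illustration, although the example risks echo those mentioned earlier in these notes.

from dataclasses import dataclass

# Effect levels in increasing order of severity, as listed above.
EFFECTS = ["insignificant", "tolerable", "serious", "catastrophic"]
# Probability bands, least to most likely.
PROBABILITIES = ["very low", "low", "moderate", "high", "very high"]

@dataclass
class Risk:
    description: str
    probability: str   # one of PROBABILITIES
    effect: str        # one of EFFECTS

def rank_risks(risks: list) -> list:
    """Order risks so the most severe and most likely come first."""
    return sorted(
        risks,
        key=lambda r: (EFFECTS.index(r.effect), PROBABILITIES.index(r.probability)),
        reverse=True,
    )

register = [
    Risk("Experienced designer leaves the project", "moderate", "serious"),
    Risk("Purchased component fails to perform as expected", "low", "serious"),
    Risk("Competitor introduces a rival product", "moderate", "tolerable"),
]
for risk in rank_risks(register):
    print(f"{risk.effect:>13} / {risk.probability:<9} {risk.description}")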
Risk planning
• Consider each identified risk and develop a strategy to manage that risk.
• Strategies fall into three categories:
• Avoidance strategies
• The probability that the risk will arise is reduced;
• Minimisation strategies
• The impact of the risk on the project will be reduced;
• Contingency plans
• If the risk arises, contingency plans set out how to deal with that risk, e.g. a prepared response to organisational financial problems.
Risk monitoring
• Assess each identified risk regularly to decide whether it is becoming more or less probable.
• Also assess whether the effects of the risk have changed.
• Risks cannot usually be observed directly; factors that affect them give clues about their probability and effects.
• Each key risk should be discussed at management progress meetings and reviews.
Risk indicators