SE CSE S307 Notes
Overview
Software
System software
Application software
Engineering or Scientific Software
Embedded software
Product-line software (includes entertainment software)
Web-Applications
Artificial intelligence software
Open-world computing
o Creating software to allow machines of all sizes to communicate with each other
across vast networks
Netsourcing
o Architecting simple and sophisticated applications that benefit targeted end-user
markets worldwide
Open Source
o Distributing source code for computing applications so customers can make local
modifications easily and reliably
Network intensive
Concurrency
Unpredictable load
Availability (24/7/365)
Data driven
Content sensitive
Continuous evolution
Immediacy (short time to market)
Security
Aesthetics
Software Engineering
Generic Software Process Framework
Software project tracking and control (allows team to assess progress and take corrective
action to maintain schedule)
Risk management (assess risks that may affect project outcomes or quality)
Software quality assurance (activities required to maintain software quality)
Technical reviews (assess engineering work products to uncover and remove errors before
they propagate to next activity)
Measurement (define and collect process, project, and product measures to assist team in
delivering software meeting customer needs)
Software configuration management (manage effects of change)
Reusability management (defines criteria for work product reuse and establish mechanisms
to achieve component reuse)
Work product preparation and production (activities to create models, documents, logs,
forms, lists, etc.)
Essence of Practice
Understand the Problem
Software Creation
Almost every software project is precipitated by a business need (e.g. correct a system defect,
adapt system to changing environment, extend existing system, create new system)
Many times an engineering effort will only succeed if the software created for the project
succeeds
The market will only accept a product if the software embedded within it meets the
customer’s stated or unstated needs
Process Models
Overview
Software Process
Framework for the activities, actions, and tasks required to build high quality software
Defines approach taken as software is engineered
Adapted by creative, knowledgeable software engineers so that it is appropriate for the
products they build and the demands of the marketplace
Communication
Planning
Modeling
Construction
Deployment
Process Flow
Describes how each of the five framework activities, actions, and tasks are organized with
respect to sequence and time
Linear process flow executes each of the framework activities in order beginning with
communication and ending with deployment
Iterative process flow executes the activities in a circular manner creating a more complete
version of the software with each circuit or iteration
Parallel process flow executes one or more activities in parallel with other activities
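A minimal sketch of two of these flows, assuming the five generic framework activities named above; the driver functions are illustrative, not part of any standard.

```python
# Illustrative sketch (not a prescribed API): the five generic framework
# activities and two of the process flows described above.

ACTIVITIES = ["communication", "planning", "modeling", "construction", "deployment"]

def linear_flow(activities):
    """Linear flow: each activity executed once, in order."""
    return list(activities)

def iterative_flow(activities, circuits):
    """Iterative flow: repeat the whole circuit, one more complete
    version of the software per circuit (iteration)."""
    log = []
    for i in range(1, circuits + 1):
        log.extend(f"{a} (iteration {i})" for a in activities)
    return log

print(linear_flow(ACTIVITIES)[0], "...", linear_flow(ACTIVITIES)[-1])
print(len(iterative_flow(ACTIVITIES, 2)), "activity executions over 2 iterations")
```

A parallel flow would simply overlap some of the same activities in time rather than sequencing them.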
Task Sets
Each software engineering action associated with a framework activity can be represented by
different task sets
Small, one-person projects do not require task sets as large and detailed as those of
complex, team-oriented projects
Task sets are adapted to meet the specific needs of a software project and the project team
characteristics
Process Patterns
Templates or methods for describing project solutions within the context of software
processes
Software teams can combine patterns to construct processes that best meet the needs of
specific projects
Process Assessment and Improvement
Standard CMMI Assessment Method for Process Improvement (SCAMPI) provides a
process assessment model that incorporates five phases (initiating, diagnosing,
establishing, acting, learning)
CMM-Based Appraisal for Internal Process Improvement (CBAIPI) provides diagnostic
technique for assessing the relative maturity of a software organization
SPICE (ISO/IEC 15504) standard defines a set of requirements for process assessment
ISO 9001:2000 for Software defines requirements for a quality management system that will
produce higher quality products and improve customer satisfaction
Waterfall Model (classic life cycle - old fashioned but reasonable approach when
requirements are well understood)
Incremental Models (deliver software in small but usable pieces, each piece builds on pieces
already delivered)
Evolutionary Models
o Prototyping Model (good first step when customer has a legitimate need, but is clueless
about the details, developer needs to resist pressure to extend a rough prototype into a
production product)
o Spiral Model (couples iterative nature of prototyping with the controlled and systematic
aspects of the Waterfall Model)
Concurrent Development Model (concurrent engineering - allows software teams to represent
the iterative and concurrent element of any process model)
Attempts to draw on best features of traditional software process models and implements
many features of agile software development
Phases
o Inception phase (customer communication and planning)
o Elaboration phase (communication and modeling)
o Construction phase
o Transition phase (customer delivery and feedback)
o Production phase (software monitoring and support)
Emphasizes personal measurement of both work products and the quality of the work
products
Stresses the importance of identifying errors early and understanding the types of errors
likely to be made
Framework activities
o Planning (size and resource estimates based on requirements)
o High-level design (external specifications developed for components and component
level design is created)
o High-level design review (formal verification methods used to uncover design errors,
metrics maintained for important tasks)
o Development (component level design refined, code is generated, reviewed,
compiled, and tested, metrics maintained for important tasks and work results)
o Postmortem (effectiveness of processes is determined using measures and metrics
collected, results of analysis should provide guidance for modifying the process to
improve its effectiveness)
Objectives
o Build self-directed teams that plan and track their work, establish goals, and own their
processes and plans
o Show managers how to coach and motivate their teams and maintain peak
performance
o Accelerate software process improvement by making CMM Level 5 behavior normal
and expected
o Provide improvement guidance to high-maturity organizations
o Facilitate university teaching of industrial team skills
Scripts for Project Activities
o Project launch
o High Level Design
o Implementation
o Integration and system testing
o Postmortem
Allow organizations to build automated models of common process framework, task sets,
and umbrella activities
These automated models can be used to determine workflow and examine alternative process
structures
Tools can be used to allocate, monitor, and even control all software engineering tasks
defined as part of the process model
Agile Development
Overview
Agility
Agile Processes
Agility Principles
1. Highest priority is to satisfy customer through early and continuous delivery of valuable
software
2. Welcome changing requirements even late in development, accommodating change is
viewed as increasing the customer’s competitive advantage
3. Delivering working software frequently with a preference for shorter delivery schedules (e.g.
every 2 or 3 weeks)
4. Business people and developers must work together daily during the project
5. Build projects around motivated individuals, give them the environment and support they
need, trust them to get the job done
6. Face-to-face communication is the most effective method of conveying information within
the development team
7. Working software is the primary measure of progress
8. Agile processes support sustainable development, developers and customers should be able
to continue development indefinitely
9. Continuous attention to technical excellence and good design enhances agility
10. Simplicity (defined as maximizing the work not done) is essential
11. The best architectures, requirements, and design emerge from self-organizing teams
12. At regular intervals the team reflects on how to become more effective and adjusts its
behavior accordingly
Human Factors
Extreme Programming (XP)
Adaptive Software Development (ASD)
Scrum
Dynamic Systems Development Method (DSDM)
Crystal
Feature Driven Development (FDD)
Lean Software Development (LSD)
Agile Modeling (AM)
Agile Unified Process (AUP)
Extreme Programming
Industrial XP
Readiness assessment
o Does an appropriate development environment exist to support IXP?
o Will the team be populated by stakeholders?
o Does the organization have a distinct quality program that supports continuous process
improvement?
o Will the organizational culture support new values of the agile team?
o Will the broader project community be populated appropriately?
Project community (finding the right people for the project team)
Project chartering (determining whether or not an appropriate business justification exists to
justify the project)
Test-driven management (used to establish measurable destinations and criteria for
determining when each is reached)
Retrospectives (specialized technical review focusing on issues, events, and lessons-learned
across a software increment or entire software release)
Continuous learning (vital part of continuous process improvement)
XP Issues
Requirement volatility (can easily lead to scope creep that causes changes to earlier work
designed for then-current needs)
Conflicting customer needs (many projects with many customers make it hard to assimilate all
customer needs)
Requirements expressed informally (with only user stories and acceptance tests, it’s hard to
avoid omissions and inconsistencies)
Lack of formal design (complex systems may need a formal architectural design to ensure a
product that exhibits quality and maintainability)
Scrum
Scrum principles
o Small working team used to maximize communication, minimize overhead, and
maximize sharing of informal knowledge
o Process must be adaptable to both technical and business challenges to ensure best
product produced
o Process yields frequent increments that can be inspected, adjusted, tested,
documented and built on
o Development work and people performing it are partitioned into clean, low coupling
partitions
o Testing and documentation is performed as the product is built
o Provides the ability to declare the product done whenever required
Process patterns defining development activities
o Backlog (prioritized list of requirements or features that provide business value to the
customer, items can be added at any time)
o Sprints (work units required to achieve one of the backlog items, must fit into a
predefined time-box, affected backlog items frozen)
o Scrum meetings (15 minute daily meetings)
What was done since last meeting?
What obstacles were encountered?
What will be done by the next meeting?
o Demos (deliver software increment to customer for evaluation)
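The Scrum patterns above can be sketched as a small data model, assuming hypothetical class names; a real backlog tool would carry far more state, but the essentials are a prioritized backlog that stays open to change, a time-boxed sprint whose items are frozen, and the three daily-meeting questions.

```python
# Hedged sketch of the Scrum process patterns described above.
from dataclasses import dataclass, field

@dataclass(order=True)
class BacklogItem:
    priority: int                      # lower number = higher business value
    description: str = field(compare=False)

class Backlog:
    """Prioritized list of requirements; items can be added at any time."""
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)
        self.items.sort()
    def top(self, n):
        return self.items[:n]

class Sprint:
    """Time-boxed work unit; the affected backlog items are frozen."""
    def __init__(self, backlog, n_items, days=30):
        self.time_box_days = days
        self.items = tuple(backlog.top(n_items))   # tuple = frozen

DAILY_QUESTIONS = [
    "What was done since the last meeting?",
    "What obstacles were encountered?",
    "What will be done by the next meeting?",
]

backlog = Backlog()
backlog.add(BacklogItem(2, "export report"))
backlog.add(BacklogItem(1, "user login"))
sprint = Sprint(backlog, n_items=1)
print(sprint.items[0].description)   # -> user login
```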
Provides a framework for building and maintaining systems that meet tight time
constraints using incremental prototyping in a controlled environment
Uses the Pareto principle (80% of a project can be delivered in 20% of the time required to
deliver the entire project)
Each increment only delivers enough functionality to move to the next increment
Uses time boxes to fix time and resources to determine how much functionality will be
delivered in each increment
Guiding principles
o Active user involvement
o Teams empowered to make decisions
o Fitness for business purpose is the criterion for deliverable acceptance
o Iterative and incremental development needed to converge on an accurate business solution
o All changes made during development are reversible
o Requirements are baselined at a high level
o Testing is integrated throughout the life cycle
o Collaborative and cooperative approach between stakeholders
Life cycle activities
o Feasibility study (establishes requirements and constraints)
o Business study (establishes functional and information requirements needed to
provide business value)
o Functional model iteration (produces set of incremental prototypes to demonstrate
functionality to customer)
o Design and build iteration (revisits prototypes to ensure they provide business value
for end users, may occur concurrently with functional model iteration)
o Implementation (latest iteration placed in operational environment)
Crystal
o Emphasizes collaboration among team members
o Manages problem and project complexity using feature-based decomposition
followed by integration of software increments
o Technical communication using verbal, graphical, and textual means
o Software quality encouraged by using incremental development, design and code
inspections, SQA audits, metric collection, and use of patterns (analysis, design,
construction)
Framework activities
o Develop overall model (contains set of classes depicting business model of
application to be built)
o Build features list (features extracted from domain model, features are categorized
and prioritized, work is broken up into two week chunks)
o Plan by feature (features assessed based on priority, effort, technical issues, schedule
dependencies)
o Design by feature (classes relevant to feature are chosen, class and method prologs
are written, preliminary design detail developed, owner assigned to each class, owner
responsible for maintaining design document for his or her own work packages)
o Build by feature (class owner translates design into source code and performs unit
testing, integration performed by chief programmer)
Eliminate waste
Build quality in
Create knowledge
Defer commitment
Deliver fast
Respect people
Optimize the whole
Agile Modeling
Agile Unified Process
Overview
This chapter describes professional practice as the concepts, principles, methods, and tools used
by software engineers and managers to plan and develop software. Software engineers must be
concerned both with the technical details of doing things and the things that are needed to build
high-quality computer software. Software process provides the project stakeholders with a
roadmap to build quality products. Professional practice provides software engineers with the
detail needed to travel the road. Software practice encompasses the technical activities needed to
produce the work products defined by the software process model chosen for a project.
1. Be agile
2. Focus on quality at every step
3. Be ready to adapt
4. Build an effective team
5. Establish mechanisms for communications and control
6. Manage change
7. Assess risk
8. Create work products that provide value for others
Principles that Guide Practice
1. Listen
2. Prepare before you communicate
3. Have a facilitator for any communication meeting
4. Face-to-face communication is best
5. Take notes and document decisions
6. Strive for collaboration
7. Stay focused and modularize your discussion
8. Draw a picture if something is unclear
9. Move on once you agree, move on when you can’t agree, move on if something unclear can’t
be clarified at the moment
10. Negotiation is not a contest or game
Planning Principles
Modeling Classes
Requirements Modeling Principles
1. The primary goal of the software team is to build software, not create models
2. Don’t create any more models than you have to
3. Strive to produce the simplest model that will describe the problem or software
4. Build models in a way that makes them amenable to change
5. Be able to state the explicit purpose for each model created
6. Adapt models to the system at hand
7. Try to build useful models, forget about trying to build perfect models
8. Don’t be dogmatic about model syntax as long as the model communicates content
successfully
9. If your instincts tell you there is something wrong with the model then you probably have a
reason to be concerned
10. Get feedback as soon as you can
Construction Activities
Coding includes
o Direct creation of programming language source code
o Automatic generation of source code using a design-like representation of component
to be built
o Automatic generation of executable code using a “fourth generation” programming
language
Testing levels
o Unit testing
o Integration testing
o Validation testing
o Acceptance testing
Coding Principles
Testing Objectives
Testing is the process of executing a program with the intent of finding an error
A good test is one that has a high probability of finding an undiscovered error
A successful test is one that uncovers an undiscovered error
Testing Principles
Deployment Actions
Delivery
Support
Feedback
Deployment Principles
Understanding Requirements
Overview
Requirements engineering helps software engineers better understand the problems they are
trying to solve.
Building an elegant computer solution that ignores the customer’s needs helps no one.
It is very important to understand the customer’s wants and needs before you begin designing
or building a computer-based solution.
The requirements engineering process begins with inception, moves on to elicitation,
negotiation, problem specification, and ends with review or validation of the specification.
The intent of requirements engineering is to produce a written understanding of the
customer’s problem.
Several different work products might be used to communicate this understanding (user
scenarios, function and feature lists, analysis models, or specifications).
Requirements Engineering
Must be adapted to the needs of a specific process, project, product, or people doing the
work.
Begins during the software engineering communication activity and continues into the
modeling activity.
In some cases requirements engineering may be abbreviated, but it is never abandoned.
It is essential that the software engineering team understand the requirements of a problem
before the team tries to solve the problem.
Specification (written work products produced describing the function, performance, and
development constraints for a computer-based system)
Requirements validation (formal technical reviews used to examine the specification work
products to ensure requirement quality and that all work products conform to agreed upon
standards for the process, project, and products)
Requirements management (activities that help the project team to identify, control, and track
requirements and changes as the project proceeds, similar to software configuration
management (SCM) techniques)
Identify stakeholders
Recognize the existence of multiple stakeholder viewpoints
Work toward collaboration among stakeholders
These context-free questions focus on customer, stakeholders, overall goals, and benefits of
the system
o Who is behind the request for work?
o Who will use the solution?
o What will be the economic benefit of a successful solution?
o Is there another source for the solution needed?
The next set of questions enables the developer to better understand the problem and the
customer’s perceptions of the solution
o How would you characterize good output from a successful solution?
o What problem(s) will this solution address?
o Can you describe the business environment in which the solution will be used?
o Will special performance constraints affect the way the solution is approached?
The final set of questions focuses on communication effectiveness
o Are you the best person to give “official” answers to these questions?
o Are my questions relevant to your problem?
o Am I asking too many questions?
o Can anyone else provide additional information?
o Should I be asking you anything else?
Eliciting Requirements
Goal is to identify the problem, propose solution elements, negotiate approaches, and specify a
preliminary set of solution requirements
Collaborative requirements gathering guidelines
o Meetings attended by both developers and customers
o Rules for preparation and participation are established
o Flexible agenda is used
o Facilitator controls the meeting
o Definition mechanism (e.g. stickers, flip sheets, electronic bulletin board) used to
gauge group consensus
Quality management technique that translates customer needs into technical software
requirements expressed as a customer voice table
Identifies three types of requirements (normal, expected, exciting)
In customer meetings function deployment is used to determine value of each function that
is required for the system
Information deployment identifies both data objects and events that the system must
consume or produce (these are linked to functions)
Task deployment examines the system behavior in the context of its environment
Value analysis is conducted to determine relative priority of each requirement generated by
the deployment activities
Elicitation Problems
Developing Use-Cases
Each use-case tells a stylized story about how end-users interact with the system under a
specific set of circumstances
First step is to identify actors (people or devices) that use the system in the context of the
function and behavior of the system to be described
o Who are the primary (interact with each other) or secondary (support system) actors?
o What are the actor’s goals?
o What preconditions must exist before story begins?
o What are the main tasks or functions performed by each actor?
o What exceptions might be considered as the story is described?
o What variations in actor interactions are possible?
o What system information will the actor acquire, produce, or change?
o Will the actor need to inform the system about external environment changes?
o What information does the actor desire from the system?
o Does the actor need to be informed about unexpected changes?
Next step is to elaborate the basic use case to provide a more detailed description needed to
populate a use-case template
Use-case template
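A use-case template can be sketched as a plain data structure. The field names below follow the elicitation questions above and common template conventions, not a single prescribed standard; all values are illustrative.

```python
# Hedged sketch of a use-case template as a plain dictionary.

use_case = {
    "name": "Withdraw cash",                     # all values illustrative
    "primary_actor": "Bank customer",
    "goal": "Obtain cash from an ATM",
    "preconditions": ["Customer has a valid card and PIN"],
    "main_scenario": [
        "Customer inserts card and enters PIN",
        "System validates the account",
        "Customer selects amount; system dispenses cash",
    ],
    "exceptions": ["Invalid PIN entered three times", "Insufficient funds"],
    "variations": ["Customer requests a receipt"],
}

# The narrative form maps to the template field by field:
print(f"Use case '{use_case['name']}' ({use_case['primary_actor']}): "
      f"{len(use_case['main_scenario'])} main steps, "
      f"{len(use_case['exceptions'])} exceptions")
```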
Analysis Model
Analysis Patterns
Suggest solutions (a class, a function, or a behavior) that can be reused when modeling future
applications
Can speed up the development of abstract analysis models by providing reusable analysis
models with their advantages and disadvantages
Facilitate the transformation of the analysis model into a design model by suggesting design
patterns and reliable solutions to common patterns
Negotiating Requirements
Intent is to develop a project plan that meets stakeholder needs and real-world constraints
(time, people, budget) placed on the software team
Negotiation activities
o Identification of system key stakeholders
o Determination of stakeholders’ “win conditions”
o Negotiate to reconcile stakeholders’ win conditions into “win-win” result for all
stakeholders (including developers)
Goal is to produce a win-win result before proceeding to subsequent software engineering
activities
Requirements Modeling
The requirements model is the first technical representation of a system. Requirements modeling
process uses a combination of text and diagrams to represent software requirements (data,
function, and behavior) in an understandable way. Software engineers build requirements models
using requirements elicited from customers. Building analysis models helps to make it easier to
uncover requirement inconsistencies and omissions. This chapter covers three perspectives of
requirements modeling: scenario-based, data (information), and class-based. Requirements
modeling work products must be reviewed for completeness, correctness, and consistency.
Requirements Models
The model should focus on requirements that are visible within the problem or business
domain and be written at a relatively high level of abstraction.
Each element of the analysis model should add to the understanding of the requirements and
provide insight into the information domain, function, and behavior of the system.
Delay consideration of infrastructure and other non-functional models until design.
Minimize coupling throughout the system.
Be certain the analysis model provides value to all stakeholders.
Keep the model as simple as possible.
Domain Analysis
Structured analysis considers data and processes that transform data as separate entities
o Data objects are modeled to define their attributes and relationships
o Processes are modeled to show how they transform data as it flows through the system
Object-oriented analysis focuses on the definition of classes and the manner in which they
collaborate to effect the customer requirements
Scenario-Based Modeling
Makes use of use cases to capture the ways in which end-users will interact with the system
UML requirements modeling begins with the creation of scenarios in the form of use cases,
activity diagrams, and swimlane diagrams
Use cases capture the interactions between actors (i.e. entities that consume or produce
information)
Begin by listing the activities performed by a single actor to accomplish a single function
Continue this process for each actor and each system function
Use-cases are written first in narrative form and then mapped to a template if more formality
is required
Each primary scenario should be reviewed and refined to see if alternative interactions are
possible
o Can the actor take some other action at this point?
o Is it possible that the actor will encounter an error condition at some point? If so,
what?
o Is it possible that the actor will encounter some other behavior at some point? If so,
what?
Exceptions
Describe situations (failures or user choices) that cause the system to exhibit unusual
behavior
Brainstorming should be used to derive a reasonably complete set of exceptions for each use
case
o Are there cases where a validation function occurs for the use case?
o Are there cases where a supporting function (actor) fails to respond appropriately?
o Can poor system performance result in unexpected or improper user actions?
Handling exceptions may require the creation of additional use cases
A variation of the activity diagram used to show the flow of activities in a use case while also
indicating which actor has responsibility for each activity rectangle’s actions
Responsibilities are represented by parallel line segments that divide the diagram vertically,
each headed by the responsible actor
Data Objects
Data object - any person, organization, device, or software product that produces or
consumes information
Attributes - name a data object instance, describe its characteristics, or make reference to
another data object
Relationships - indicate the manner in which data objects are connected to one another
Class-based Modeling
Identifying Analysis Classes
Examine the problem statement and try to find nouns that fit the following categories and
produce or consume information (i.e. grammatical parse)
o External entities (systems, devices, people)
o Things (e.g. reports, displays, letters, signals)
o Events occurring during system operation
o Roles (e.g. manager, engineer, salesperson)
o Organizational units (e.g. division, group, team)
o Places
o Structures (e.g. sensors, vehicles, computers)
Consider whether each potential class satisfies one of these criteria as well
o Contains information that should be retained
o Provides needed services
o Contains multiple attributes
o Has common set of attributes that apply to all class instances
o Has common set of operations that apply to all object instances
o Represents external entity that produces or consumes information
Examine the processing narrative or use-case and select the things that reasonably can belong
to each class
Ask what data items (either composite or elementary) fully define this class in the context of
the problem at hand?
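The grammatical parse above can be caricatured in a few lines of code: scan a processing narrative for recurring nouns as candidate analysis classes. A real parse relies on human judgment; this toy version just counts capitalized words from a hypothetical narrative, and the frequency filter stands in for the retention criteria listed above.

```python
# Illustrative sketch of a "grammatical parse" for candidate classes.
import re
from collections import Counter

narrative = (
    "The Operator arms the SecuritySystem from the ControlPanel. "
    "The SecuritySystem polls each Sensor and the ControlPanel "
    "displays the status of every Sensor to the Operator."
)

# Candidate classes: capitalized nouns occurring in the narrative.
candidates = Counter(re.findall(r"\b[A-Z][A-Za-z]+\b", narrative))

# Crude retention criterion: keep nouns mentioned more than once
# (a stand-in for the "retained information" / "needed services" checks).
retained = sorted(n for n, c in candidates.items() if c > 1 and n != "The")
print(retained)  # ['ControlPanel', 'Operator', 'SecuritySystem', 'Sensor']
```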
Defining Operations
Look at the verbs in the processing narrative (i.e. grammatical parse) and identify operations
reasonably belonging to each class that
o manipulate data
o perform computation
o inquire about the state of an object
o monitor object for occurrence of controlling event
Divide operations into sub-operations as needed
Also consider communications that need to occur between objects and define operations as
needed
Cards are divided into three sections (class name, class responsibilities, class collaborators)
Once a complete CRC card set is developed it is reviewed examining the usage scenarios
Classes
Entity classes extracted directly from the problem statement (things stored in a database that
persist throughout the application)
Boundary classes used to create the interface that user sees and interacts with as software is
used
Controller classes manage unit of work from start to finish
o Create or update entity objects
o Instantiate boundary objects
o Complex communication between sets of objects
o Validation of data communicated between actors
Collaborations
Any time a class cannot fulfill a responsibility on its own it needs to interact with another
class
A client object interacts with a server object to fulfill some responsibility
Each review participant is given a subset of the CRC cards (collaborating cards must be
separated)
All use-case scenarios and use-case diagrams should be organized into categories
Review leader chooses a use-case scenario and begins reading it out loud
Each time a named object is read a token is passed to the reviewer holding the object's card
When the reviewer receives the token, he or she is asked to describe the responsibilities listed
on the card
The group determines whether one of the responsibilities on the card satisfies the use-case
requirement or not
If the responsibilities and collaborations on the index card cannot accommodate the use-case
requirements then modifications need to be made to the card set
Association - present any time two classes are related to one another in some fashion
o association multiplicity or cardinality can be indicated in a UML class diagram (e.g.
0..1, 1..1, 0..*, 1..*)
Dependency – client/server relationship between two classes
o dependency relationships are indicated in class diagrams using stereotype names
surrounded by angle brackets (e.g. <<stereotype>>)
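The two relationship types can be mirrored in code; the classes below are illustrative. A Customer holds any number of Accounts (association with 0..* multiplicity), while ReportPrinter merely uses Account's services (a client/server dependency, the kind a class diagram would mark with a stereotype such as <<use>>).

```python
# Hedged sketch of association vs. dependency between classes.

class Account:
    def __init__(self, balance):
        self.balance = balance

class Customer:
    """Association: Customer 1..1 --- 0..* Account."""
    def __init__(self, name):
        self.name = name
        self.accounts = []          # zero or more associated Accounts

class ReportPrinter:
    """Dependency: client that calls Account's services without owning them."""
    def total(self, accounts):
        return sum(a.balance for a in accounts)

c = Customer("Ada")
c.accounts.extend([Account(100), Account(50)])
print(ReportPrinter().total(c.accounts))  # 150
```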
Analysis Packages
Requirements Modeling
Requirements modeling has many different dimensions. The discussion in this chapter focuses
on flow-oriented models, behavioral models, and patterns. This chapter also discusses WebApp
requirements models. Flow-oriented modeling shows how data objects are transformed by
processing functions. Behavioral modeling depicts the system’s states and the impact of events on
system states. Pattern-based modeling makes use of existing domain knowledge to facilitate
requirements modeling. Software engineers build models using requirements elicited from
stakeholders. Developer insights into software requirements grow in direct proportion to the
number of different representations used in modeling. It is not always possible to develop every
model for every project given the available project resources. Requirements modeling work
products must be reviewed for correctness, completeness, consistency, and relevancy to
stakeholder needs.
Flow-oriented Modeling
Data flow diagrams (DFD) show the relationships of external entities, processes or transforms,
data items, and data stores
DFD’s take an input-process-output view of the system
DFD's cannot show procedural detail (e.g. conditionals or loops) only the flow of data
through the software
In DFD’s data objects are represented by labeled arrows and data transformations are
represented by circles
First DFD (known as the level 0 or context diagram) represents system as a whole
Subsequent data flow diagrams refine the context diagram providing increasing levels of
detail
Refinement from one DFD level to the next should follow approximately a 1:5 ratio (this
ratio will reduce as the refinement proceeds)
Level 0 data flow diagram should depict the system as a single bubble
Primary input and output should be carefully noted
Refinement should begin by consolidating candidate processes, data objects, and data stores
to be represented at the next level
Label all arrows with meaningful names
Information flow continuity must be maintained from one level to the next
Refine one bubble at a time
Write a process specification (PSPEC) for each bubble in the final DFD
PSPEC is a "mini-spec" describing the process algorithm written using text narrative, a
program design language (PDL), equations, tables, or UML activity diagrams
Begin by stripping all the data flow arrows from the DFD
Events (solid arrows) and control items (dashed arrows) are added to the control flow
diagram (CFD)
Create a control specification (CSPEC) for each bubble in the final CFD
CSPEC contains a state diagram that is a sequential specification of the behavior and may
also contain a program activation table that is a combinatorial specification of system
behavior
Behavioral Modeling
A state transition diagram (STD) represents the system states and events that trigger state
transitions
STD's indicate actions (e.g. process activation) taken as a consequence of a particular event
A state is any observable mode of behavior
Evaluate all use-cases to understand the sequence of interaction within the system
Identify events that drive the interaction sequence and how these events relate to specific
objects
Create a sequence or event-trace for each use-case
Build a state transition diagram for the system
Review the behavior model to verify accuracy and consistency
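As a minimal sketch of a behavioral model (the states and events below are invented for illustration, not taken from the notes), the transitions can be captured as a table and replayed against an event trace derived from a use-case:

```python
# Hypothetical state transition table: (current_state, event) -> next_state
transitions = {
    ("idle",         "insert_card"): "awaiting_pin",
    ("awaiting_pin", "pin_ok"):      "ready",
    ("awaiting_pin", "pin_bad"):     "idle",
    ("ready",        "eject_card"):  "idle",
}

def run(events, start="idle"):
    """Replay an event trace against the model, returning the visited states."""
    state, visited = start, [start]
    for event in events:
        state = transitions[(state, event)]  # KeyError signals an illegal transition
        visited.append(state)
    return visited

print(run(["insert_card", "pin_ok", "eject_card"]))
# ['idle', 'awaiting_pin', 'ready', 'idle']
```

Reviewing the model for accuracy then amounts to checking that every event trace from the use-cases replays without an illegal transition.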
Built from use-case descriptions by determining how events cause transitions from one object
to another
Key classes and actors are shown across the top
Object and actor activations are shown as vertical rectangles arranged along vertical dashed
lines called lifelines
Arrows connecting activations are labeled with the name of the event that triggers the
transition from one class or actor to another
Object flow among objects and actors may be represented by labeling a dashed horizontal
line with the name of the object being passed
States may be shown along the lifelines
Analysis Patterns
Content Model
Interaction Model
Functional Model
Configuration Model
Navigation Model
Web engineers consider requirements that dictate how each type of user will navigate from
one content object to another
Navigation mechanics are defined as part of design
Web engineers and stakeholders must determine navigation requirements
Quality Concepts
Overview
This chapter provides an introduction to software quality. Software quality is the concern of
every software process stakeholder. If a software team stresses quality in all software
engineering activities, it reduces the amount of rework that must be done. This results in
lower costs and improved time-to-market. To achieve high quality software, four elements
must be present: proven software engineering process and practice, solid project
management, comprehensive quality control, and the presence of a quality assurance
infrastructure. High quality software meets its customer’s needs, performs accurately and reliably,
and provides value to all who use it. Developers track quality by examining the results of all
quality control activities. Developers measure quality by examining errors before delivery
and defects released to the field.
What is Quality?
The transcendental view - quality is something that you immediately recognize, but cannot
explicitly define.
The user view - quality in terms of meeting an end-user’s specific goals.
The manufacturer’s view - quality in terms of the conformance to the original specification
of the product.
The product view - quality can be tied to inherent characteristics (e.g., functions and features)
of a product.
The value-based view - quality based on how much a customer is willing to pay for a
product.
Quality of design - refers to characteristics designers specify for the end product to be
constructed
Quality of conformance - degree to which design specifications are followed in
manufacturing the product
Software Quality
Software quality defined as an effective software process applied in a manner that creates a
useful product that provides measurable value for those who produce it and those who use it.
An effective software process establishes the infrastructure that supports any effort at
building a high quality software product.
A useful product delivers the content, functions, and features that the end-user desires, and,
equally important, it delivers these assets in a reliable, error-free way.
By adding value for both the producer and user of a software product, high quality software
provides benefits for the software organization and the end-user community.
Performance quality
Feature quality
Reliability
Conformance
Durability
Serviceability
Aesthetics
Perception
Correctness - extent to which a program satisfies its specification and fulfills the
customer's mission objectives
Reliability - extent to which a program can be expected to perform its intended function
with required precision
Efficiency - amount of computing resources and code required by a program to perform
its function
Integrity - extent to which access to software or data by unauthorized persons can be
controlled
Usability - effort required to learn, operate, prepare input for, and interpret output of a
program
Maintainability - effort required to locate and fix an error in a program
Flexibility - effort required to modify an operational program
Testability - effort required to test a program to ensure that it performs its intended
function
Portability - effort required to transfer the program from one hardware and/or software
system environment to another
Reusability - extent to which a program [or parts of a program] can be reused in other
applications
Interoperability. Effort required to couple one system to another
Functionality
Reliability
Usability
Efficiency
Maintainability
Portability
Measuring Quality
General quality dimensions and factors are not adequate for assessing the quality of an
application in concrete terms
Project teams need to develop a set of targeted questions to assess the degree to which each
quality factor has been satisfied in the application
Subjective measures of software quality may be viewed as little more than personal opinion
Software metrics represent indirect measures of some manifestation of quality and are an
attempt to quantify the assessment of software quality
If you produce software with terrible quality you lose because no one will buy it
If you spend infinite time and money to create software you lose because you will go out of
business without bringing the software to market
The trick is to balance the construction costs and the product quality
Producing software using a “good enough” attitude may leave your production team exposed
to serious liability issues resulting from product failures after release
Developers need to realize that taking time to do things right means that they don’t need to
find the resources to do it over again
Cost of Quality
Prevention costs - quality planning, formal technical reviews, test equipment, training
Appraisal costs - in-process and inter-process inspection, equipment calibration and
maintenance, testing
Internal failure costs - rework, repair, failure mode analysis
External failure costs - complaint resolution, product return and replacement, help line
support, warranty work
Low quality software increases risks for both developers and end-users
Risks are areas of uncertainty in the development process that, if they occur, may result
in unwanted consequences or losses
When systems are delivered late, fail to deliver functionality, and do not meet customer
expectations, litigation ensues
Low quality software is easier to hack and can increase the security risks for the application
once deployed
A secure system cannot be built without focusing on quality (security, reliability,
dependability) during the design phase
Low quality software is liable to contain architectural flaws as well as implementation
problems (bugs)
Estimation decisions – irrational delivery date estimates cause teams to take short-cuts that
can lead to reduced product quality
Scheduling decisions – failing to pay attention to task dependencies when creating the project
schedule may force the project team to test modules without their subcomponents and quality
may suffer
Risk-oriented decisions – reacting to each crisis as it arises rather than building in
mechanisms to monitor risks and having established contingency plans may result in
products having reduced quality
Software quality is the result of good project management and solid engineering practice
To build high quality software you must understand the problem to be solved and be capable
of creating a quality design that conforms to the problem requirements
Eliminating architectural flaws during design can improve quality
Project management – project plan includes explicit techniques for quality and change
management
Quality control - series of inspections, reviews, and tests used to ensure conformance of a
work product to its specifications
Quality assurance - consists of the auditing and reporting procedures used to provide
management with data needed to make proactive decisions
Review Techniques
Overview
People discover mistakes as they develop software engineering work products. Technical
reviews are the most effective technique for finding mistakes early in the software process. If
you find an error early in the process, it is less expensive to correct. In addition, errors have a
way of amplifying as the process proceeds. Reviews save time by reducing the amount of rework
that will be required late in the project. In general, six steps are employed: planning, preparation,
structuring the meeting, noting errors, making corrections, and verifying that corrections have
been performed properly. The output of a review is a list of issues and/or errors that have been
uncovered, as well as the technical status of the work product reviewed.
Review Goals
Defects or faults are quality problems discovered after software has been released to end-user
or another software process framework activity
Industry studies suggest that design activities introduce 50-65% of all defects or errors during
the software process
Review techniques have been shown to be up to 75% effective in uncovering design flaws
which ultimately reduces the cost of subsequent activities in the software process
Defect amplification models can be used to show the benefits of detecting and removing
defects during activities that occur early in the software process
Review Metrics
Preparation effort, Ep - the effort (in person-hours) required to review a work product prior
to the actual review meeting
Assessment effort, Ea - the effort (in person-hours) expended during the actual review
Rework effort, Er - the effort (in person-hours) that is dedicated to the correction of those
errors uncovered during the review
Work product size, WPS - a measure of the size of the work product that has been reviewed
(e.g., the number of UML models, or the number of document pages, or the number of lines
of code)
Minor errors found, Errminor - the number of errors found that can be categorized as minor
(requiring less than some pre-specified effort to correct)
Major errors found, Errmajor - the number of errors found that can be categorized as major
(requiring more than some pre-specified effort to correct)
Total review effort, Ereview = Ep + Ea + Er
Total number of errors discovered, Errtot = Errminor + Errmajor
Defect density = Errtot / WPS
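As a rough illustration of the metrics above (the effort figures and error counts below are hypothetical), the totals can be computed directly:

```python
# Hypothetical review data; person-hours and error counts are illustrative only
E_p = 4.0   # preparation effort (before the review meeting)
E_a = 6.0   # assessment effort (during the review meeting)
E_r = 5.0   # rework effort (correcting uncovered errors)

err_minor = 12  # minor errors found
err_major = 3   # major errors found
wps = 30.0      # work product size, e.g. document pages reviewed

E_review = E_p + E_a + E_r       # total review effort
err_tot = err_minor + err_major  # total errors discovered
defect_density = err_tot / wps   # errors per unit of work product size

print(E_review)        # 15.0
print(err_tot)         # 15
print(defect_density)  # 0.5
```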
Software review organizations can only assess the effectiveness and cost benefits after
reviews are completed, review metrics collected, average data computed, and downstream
software quality is measured by testing
Some people have found a 10 to 1 return on inspection costs, accelerated product delivery
times, and productivity increases
Review cost benefits are most pronounced during the latter phases of the software process
leading up to product deployment
Review Formality
Informal Reviews
Uncover errors in function, logic, or implementation for any representation of the software
Verify that the software under review meets its requirements
Ensure that the software has been represented according to predefined standards
Achieve software that is developed in a uniform manner
Make projects more manageable
Serve as a training ground, enabling junior engineers to observe different approaches to
software analysis, design, and implementation
Serve to promote backup and continuity because a number of people become familiar with
parts of the software that they may not have otherwise seen
Formal Technical Reviews
Samples of all software engineering work products are reviewed to determine the most error-
prone
Full FTR resources are focused on the likely error-prone work products based on sampling
results
To be effective, the sample-driven review process must be driven by quantitative measures of
the work products
1. Inspect a fraction (ai) of the content of each software work product (i) and
record the number of faults (fi) found within (ai)
2. Develop a gross estimate of the number of faults within work product i by multiplying fi
by 1/ai
3. Sort work products in descending order according to the gross estimate of the number of
faults in each
4. Focus available review resources on those work products with the highest estimated
number of faults
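The four steps above can be sketched as follows; the work product names and sampling fractions are hypothetical:

```python
# Hypothetical sampling data: for each work product i, the fraction a_i of its
# content that was inspected and the number of faults f_i found in that sample
samples = {
    "requirements_spec": {"a": 0.20, "f": 4},
    "design_model":      {"a": 0.25, "f": 2},
    "test_plan":         {"a": 0.50, "f": 1},
}

# Step 2: gross fault estimate for each work product is f_i * (1 / a_i)
estimates = {name: s["f"] / s["a"] for name, s in samples.items()}

# Step 3: sort in descending order of estimated faults
ranked = sorted(estimates.items(), key=lambda kv: kv[1], reverse=True)

# Step 4: focus full FTR resources on the products at the top of the list
print(ranked)
# [('requirements_spec', 20.0), ('design_model', 8.0), ('test_plan', 2.0)]
```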
Overview
This chapter provides an introduction to software quality assurance. Software quality assurance
(SQA) is the concern of every software engineer to reduce costs and improve product time-to-
market. A Software Quality Assurance Plan is not merely another name for a test plan, though
test plans are included in an SQA plan. SQA activities are performed on every software project.
Use of metrics is an important part of developing a strategy to improve the quality of both
software processes and work products.
SQA Questions
Safety – responsible for assessing impact of software failure and initiating steps to reduce
risk
Risk management – ensures risk management activities are properly conducted and that
contingency plans have been established
SQA Tasks
SQA Goals
Requirements quality
o Ambiguity
o Completeness
o Volatility
o Traceability
o Model clarity
Design quality
o Architectural integrity
o Component completeness
o Interface complexity
o Patterns
Code quality
o Complexity
o Maintainability
o Understandability
o Reusability
o Documentation
Quality control effectiveness
o Resource allocation
o Completion rate
o Review effectiveness
o Testing effectiveness
Formal SQA
Assumes that a rigorous syntax and semantics can be defined for every programming
language
Allows the use of a rigorous approach to the specification of software requirements
Applies mathematical proof of correctness techniques to demonstrate that a program
conforms to its specification
Define customer requirements, deliverables, and project goals via well-defined methods of
customer communication.
Measure each existing process and its output to determine current quality performance (e.g.
compute defect metrics)
Analyze defect metrics and determine vital few causes.
For an existing process that needs improvement
o Improve process by eliminating the root causes for defects
o Control future work to ensure that future work does not reintroduce causes of defects
If new processes are being developed
o Design each new process to avoid root causes of defects and to meet customer
requirements
o Verify that the process model will avoid defects and meet customer requirements
Software Reliability
Software Safety
Defined as a software quality assurance activity that focuses on identifying potential hazards
that may cause a software system to fail.
Early identification of software hazards allows developers to specify design features that can
eliminate or at least control the impact of potential hazards.
Software reliability involves determining the likelihood that a failure will occur, while
software safety examines the ways in which failures may result in conditions that can lead to
a mishap.
SQA Plan
Management section - describes the place of SQA in the structure of the organization
Documentation section - describes each work product produced as part of the software
process
Standards, practices, and conventions section - lists all applicable standards/practices applied
during the software process and any metrics to be collected as part of the software
engineering work
Reviews and audits section - provides an overview of the approach used in the reviews and
audits to be conducted during the project
Test section - references the test plan and procedure document and defines test record
keeping requirements
Problem reporting and corrective action section - defines procedures for reporting, tracking,
and resolving errors or defects, identifies organizational responsibilities for these activities
Other - tools, SQA methods, change control, record keeping, training, and risk management
Software Testing Strategies
Overview
This chapter describes several approaches to testing software. Software testing must be planned
carefully to avoid wasting development time and resources. Testing begins “in the small” and
progresses “to the large”. Initially individual components are tested and debugged. After the
individual components have been tested and added to the system, integration testing takes place.
Once the full software product is completed, system testing is performed. The Test Specification
document should be reviewed like all other software engineering work products.
Many software errors are eliminated before testing begins by conducting effective technical
reviews
Testing begins at the component level and works outward toward the integration of the entire
computer-based system.
Different testing techniques are appropriate at different points in time.
The developer of the software conducts testing and may be assisted by independent test
groups for large projects.
Testing and debugging are different activities.
Debugging must be accommodated in any testing strategy.
Make a distinction between verification (are we building the product right?) and validation
(are we building the right product?)
Software testing is only one element of Software Quality Assurance (SQA)
Quality must be built into the development process; you can’t use testing to add quality after
the fact
The role of the Independent Test Group (ITG) is to remove the conflict of interest inherent
when the builder is testing his or her own product.
Misconceptions regarding the use of independent testing teams
o The developer should do no testing at all
o Software is tossed “over the wall” to people to test it mercilessly
o Testers are not involved with the project until it is time for it to be tested
The developer and ITG must work together throughout the software project to ensure that
thorough tests will be conducted
Unit Testing – makes heavy use of testing techniques that exercise specific control paths to
detect errors in each software component individually
Integration Testing – focuses on issues associated with verification and program construction
as components begin interacting with one another
Validation Testing – provides assurance that the software meets its validation criteria
(established during requirements analysis) and all functional, behavioral, and performance requirements
System Testing – verifies that all system elements mesh properly and that overall system
function and performance has been achieved
Integration Testing
Sandwich testing uses top-down tests for upper levels of program structure coupled with
bottom-up tests for subordinate levels
Testers should strive to identify critical modules having the following requirements
Overall plan for integration of software and the specific tests are documented in a test
specification
Regression testing – used to check for defects propagated to other modules by changes made
to the existing program
1. Representative sample of existing test cases is used to exercise all software functions.
2. Additional test cases focusing on software functions likely to be affected by the change.
3. Test cases that focus on the changed software components.
Smoke testing
1. Software components already translated into code are integrated into a build.
2. A series of tests designed to expose errors that will keep the build from performing its
functions are created.
3. The build is integrated with the other builds and the entire product is smoke tested daily
(either top-down or bottom-up integration may be used).
General Software Test Criteria
Interface integrity – internal and external module interfaces are tested as each module or
cluster is added to the software
Functional validity – test to uncover functional defects in the software
Information content – test for errors in local or global data structures
Performance – verify specified performance bounds are tested
Object-Oriented Test Strategies
Validation Testing
Focuses on visible user actions and user recognizable outputs from the system
Validation tests are based on the use-case scenarios, the behavior model, and the event flow
diagram created in the analysis model
o Must ensure that each function or performance characteristic conforms to its
specification.
o Deviations (deficiencies) must be negotiated with the customer to establish a means for
resolving the errors.
Configuration review or audit is used to ensure that all elements of the software configuration
have been properly developed, cataloged, and documented to allow its support during its
maintenance phase.
Acceptance Testing
Making sure the software works correctly for the intended user in his or her normal work
environment.
Alpha test – version of the complete software is tested by customer under the supervision of
the developer at the developer’s site
Beta test – version of the complete software is tested by customer at his or her own site
without the developer being present
System Testing
Bug Causes
The symptom and the cause may be geographically remote (the symptom may appear in one part
of a program while the cause is actually located elsewhere).
The symptom may disappear (temporarily) when another error is corrected.
The symptom may actually be caused by non-errors (e.g., round-off inaccuracies).
The symptom may be caused by human error that is not easily traced.
The symptom may be a result of timing problems, rather than processing problems.
It may be difficult to accurately reproduce input conditions (e.g., a real-time application in
which input ordering is indeterminate).
The symptom may be intermittent. This is particularly common in embedded systems that
couple hardware and software inextricably.
The symptom may be due to causes that are distributed across a number of tasks running on
different processors.
Debugging Strategies
Overview
The importance of software testing to software quality cannot be overemphasized. Once source
code has been generated, software must be tested to allow errors to be identified and removed
before delivery to the customer. While it is not possible to remove every error in a large software
package, the software engineer’s goal is to remove as many as possible early in the software
development cycle. It is important to remember that testing can only find errors; it cannot prove
that a program is free of bugs. Two basic test techniques exist for testing conventional software:
testing module input/output (black-box) and exercising the internal logic of software components
(white-box). Formal technical reviews by themselves cannot find all software defects, test data
must also be used. For large software projects, separate test teams may be used to develop and
execute the set of test cases used in testing. Testing must be planned and designed.
Testing is the process of executing a program with the intent of finding errors.
A good test case is one with a high probability of finding an as-yet undiscovered error.
A successful test is one that discovers an as-yet-undiscovered error.
Controllability – the better the software can be controlled, the more testing can be automated
and optimized
Decomposability – by controlling the scope of testing, the more quickly problems can be
isolated and retested intelligently
Simplicity – the less there is to test, the more quickly we can test
Stability – the fewer the changes, the fewer the disruptions to testing
Understandability – the more information known, the smarter the testing
Good Test Attributes
Black-box or behavioral testing – knowing the specified function a product is to perform and
demonstrating correct operation based solely on its specification without regard for its
internal logic
White-box or glass-box testing – knowing the internal workings of a product, tests are
performed to check the workings of all possible logic paths
Can you guarantee that all independent paths within a module will be executed at least once?
Can you exercise all logical decisions on their true and false branches?
Will all loops execute at their boundaries and within their operational bounds?
Can you exercise internal data structures to ensure their validity?
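These questions can be made concrete with a small example; the `grade` function below is hypothetical, chosen so that each logical decision has both a true and a false branch to exercise:

```python
# Hypothetical component under test: two logical decisions, so white-box tests
# must exercise each decision on both its true and false branches
def grade(score):
    if score < 0 or score > 100:   # decision 1: input validity
        raise ValueError("score out of range")
    if score >= 60:                # decision 2: pass/fail threshold
        return "pass"
    return "fail"

# Test cases chosen to cover every branch at least once
assert grade(75) == "pass"    # decision 1 false, decision 2 true
assert grade(30) == "fail"    # decision 1 false, decision 2 false
try:
    grade(101)                 # decision 1 true (exception path)
except ValueError:
    pass
print("all branches exercised")
```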
Graph-based testing – black-box method based on the nature of the relationships (links) among
the program objects (nodes); test cases are designed to traverse the entire graph
Transaction flow testing – nodes represent steps in some transaction and links represent
logical connections between steps that need to be validated
Finite state modeling – nodes represent user observable states of the software and links
represent transitions between states
Data flow modeling – nodes are data objects and links are transformations from one data
object to another
Timing modeling – nodes are program objects and links are sequential connections between
these objects, link weights are required execution times
Equivalence Partitioning
Black-box technique that divides the input domain into classes of data from which test cases
can be derived
An ideal test case uncovers a class of errors that might require many arbitrary test cases to be
executed before a general error is observed
Equivalence class guidelines:
1. If input condition specifies a range, one valid and two invalid equivalence classes are
defined
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined
3. If an input condition specifies a member of a set, one valid and one invalid equivalence
class is defined
4. If an input condition is Boolean, one valid and one invalid equivalence class is defined
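As an illustrative sketch of guideline 1 (the quantity range 1..100 is a hypothetical requirement), one representative test case is chosen from each equivalence class:

```python
# Hypothetical requirement: a valid quantity is an integer in the range 1..100.
# Guideline 1 yields one valid and two invalid equivalence classes.
def classify(quantity):
    if quantity < 1:
        return "invalid_below"   # invalid class: below the range
    if quantity > 100:
        return "invalid_above"   # invalid class: above the range
    return "valid"               # valid class: inside the range

# One representative test case per equivalence class
cases = {0: "invalid_below", 50: "valid", 101: "invalid_above"}
for value, expected in cases.items():
    assert classify(value) == expected
print("all equivalence classes covered")
```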
Black-box technique that focuses on the boundaries of the input domain rather than its center
BVA guidelines:
1. If input condition specifies a range bounded by values a and b, test cases should include a
and b, values just above and just below a and b
2. If an input condition specifies a number of values, test cases should exercise the
minimum and maximum numbers, as well as values just above and just below the
minimum and maximum values
3. Apply guidelines 1 and 2 to output conditions, test cases should be designed to produce
the minimum and maximum output reports
4. If internal program data structures have boundaries (e.g. size limitations), be certain to
test the boundaries
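BVA guideline 1 can be sketched as a small helper; the range 1..100 and the step size are hypothetical assumptions:

```python
# Hypothetical range bounded by values a and b; guideline 1 says to test a, b,
# and the values just above and just below each bound
def boundary_values(a, b, step=1):
    return [a - step, a, a + step, b - step, b, b + step]

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```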
Black-box technique that enables the design of a reasonably small set of test cases that
provide maximum test coverage
Focus is on categories of faulty logic likely to be present in the software component (without
examining the code)
Priorities for assessing tests using an orthogonal array
1. Detect and isolate all single mode faults
2. Detect all double mode faults
3. Multimode faults
Model-Based Testing
Black-box testing technique using information contained in the requirements model as a basis
for test case generation
Steps for MBT
1. Analyze an existing behavior model for the software or create one.
2. Traverse behavioral model and specify inputs that force software to make transition from
state to state.
3. Review behavioral model and note expected outputs as software makes transition from
state to state.
4. Execute test cases.
5. Compare actual and expected results (take corrective action as required).
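Steps 1–3 can be sketched against a toy behavioral model (the turnstile states, inputs, and expected outputs below are hypothetical):

```python
# Hypothetical behavioral model: (state, input) -> (next_state, expected_output)
model = {
    ("locked",   "coin"): ("unlocked", "unlock"),
    ("unlocked", "push"): ("locked",   "lock"),
}

def derive_test_cases(model):
    """Steps 2-3: for each transition in the model, record the input that
    forces it and the output we expect to observe."""
    cases = []
    for (state, event), (next_state, output) in model.items():
        cases.append({"from": state, "input": event,
                      "to": next_state, "expect": output})
    return cases

for case in derive_test_cases(model):
    print(case)
```

Steps 4–5 would then execute each derived case against the real software and compare the observed output with `expect`.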
Specialized Testing
Graphical User Interface (GUI) – test cases can be developed from behavioral model of user
interface, use of automated testing tools is strongly recommended.
Client/Server Architectures – operational profiles derived from usage scenarios are tested at
three levels (client application “disconnected mode”, client and server software without
network, complete application)
Application function tests
Server tests
Database tests
Transaction tests
Network communications tests
Documentation and Help
Review and inspection to check for editorial errors
Black-Box for live tests
o Graph-based testing to describe program use
o Equivalence partitioning and boundary value analysis to describe classes of input and
interactions
Real-Time Systems
1. Task testing – test each task independently
2. Behavioral testing – using a technique similar to equivalence partitioning of external event
models created by automated tools
3. Intertask testing – testing for timing errors (e.g. synchronization and communication
errors)
4. System testing – testing full system, especially the handling of Boolean events
(interrupts), test cases based on state model and control specification
Testing Patterns
Overview
It is important to test object-oriented software at several different levels to uncover errors that may occur
as classes collaborate with one another and with other subsystems. The process of testing object-
oriented systems begins with a review of the object-oriented analysis and design models. Once
the code is written object-oriented testing begins by testing "in the small" with class testing
(class operations and collaborations). As classes are integrated to become subsystems class
collaboration problems are investigated using thread-based testing, use-based testing, cluster
testing, and fault-based approaches. Use-cases from the analysis model are used to uncover
software validation errors. The primary work product is a set of documented test cases with
defined expected results and the actual results recorded.
OO Testing
The analysis and design models cannot be tested because they are not executable
The syntactic correctness of the analysis and design models can be checked for proper use of
notation and modeling conventions
The semantic correctness of the analysis and design models are assessed based on their
conformance to the real world problem domain (as determined by domain experts)
OO Model Consistency
Comparison Testing
Black-box testing for safety critical systems in which independently developed
implementations of redundant systems are tested for conformance to specifications
Often equivalence class partitioning is used to develop a common set of test cases for each
implementation
3. List the testing steps for each test including:
a. list of states to test for each object involved in the test
b. list of messages and operations to be exercised as a consequence of the test
c. list of exceptions that may occur as the object is tested
d. list of external conditions needed to be changed for the test
e. supplementary information required to understand or implement the test
White-box testing methods can be applied to testing the code used to implement class
operations, but not much else
Black-box testing methods are appropriate for testing OO systems just as they are for testing
conventional systems
OO Fault-Based Testing
Best reserved for operations and the class level
Uses the inheritance structure
Tester examines the OOA model and hypothesizes a set of plausible defects that may be
encountered in operation calls and message connections and builds appropriate test cases
Misses incorrect specification and errors in subsystem interactions
Finds client errors not server errors
Subclasses may contain operations that are inherited from super classes
Subclasses may contain operations that were redefined rather than inherited
All classes derived from a previously tested base class need to be tested thoroughly
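A small hypothetical Python example of why redefined operations invalidate the base class's tests: the subclass below redefines an already-tested operation, so the base-class test suite would silently miss the new behavior.

```python
class Account:
    """Hypothetical base class with a previously tested operation."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class OverdraftAccount(Account):
    # Redefined rather than inherited: the operation now permits a
    # negative balance, so tests written against Account.withdraw
    # no longer exercise this class's actual behavior.
    def withdraw(self, amount):
        self.balance -= amount

# The base-class test (overdrawing raises an error) passes for Account...
try:
    Account(100).withdraw(200)
    base_ok = False
except ValueError:
    base_ok = True

# ...but the redefined operation behaves differently and must be retested.
o = OverdraftAccount(100)
o.withdraw(200)  # no exception: inherited tests would miss this
print(base_ok, o.balance)  # → True -100
```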
OO Scenario-Based Testing
Using the user tasks described in the use-cases and building the test cases from the tasks and
their variants
Uncovers errors that occur when any actor interacts with the OO software
Concentrates on what the user does, not what the product does
You can get a higher return on your effort by spending more time reviewing the use-cases
as they are created than by spending it on use-case testing
Overview
This chapter describes Web testing as a collection of activities whose purpose is to uncover
errors in WebApp content, function, usability, navigability, performance, capacity, and security.
A testing strategy that involves both reviews and executable testing is applied throughout the
WebE process. The WebApp testing process involves all project stakeholders. Web testing
begins with user-visible aspects of WebApps and proceeds to exercise technology and
infrastructure. Seven testing steps are performed: content testing, interface testing, navigation
testing, component testing, configuration testing, performance testing, and security testing.
Sometimes a test plan is written. A suite of test cases is always developed for every testing step
and an archive of testing results is maintained for future use.
Dimensions of Quality
Characteristics of WebApp Errors
Many types of WebApp tests uncover problems evidenced on the client side using a specific
interface (what is observed may be an error symptom, not the error itself)
It may be difficult to reproduce errors outside of the environment in which the error was
originally encountered
Many errors can be traced to the WebApp configuration, incorrect design, or improper
HTML
It is hard to determine whether errors are caused by problems with the server, the client, or
the network itself
Some errors are attributable to problems in the static operating environment and some are
attributable to the dynamic operating environment
Security testing – tests designed to exploit WebApp or environment vulnerabilities
Performance testing – series of tests designed to assess:
o WebApp response time and reliability under varying system loads
o Which WebApp components are responsible for system degradation
o How performance degradation impacts overall WebApp requirements
The original query must be checked to uncover errors in translating the user’s request to SQL
Problems in communicating between the WebApp server and Database server need to be
tested.
Need to demonstrate the validity of the raw data from the database to the WebApp and the
validity of the transformations applied to the raw data.
Need to test validity of dynamic content object formats transmitted to the user and the
validity of the transformations to make the data visible to the user.
Interface features are tested to ensure that design rules, aesthetics, and related visual content
are available to the user without error.
Individual interface mechanisms are tested using unit testing strategies.
Each interface mechanism is tested in the context of a use-case or navigation semantic unit
(e.g. thread) for a specific user category
Complete interface is tested against selected use-cases and navigation semantic units to
uncover interface semantic errors
Interface is tested in a variety of environments to ensure compatibility
Usability Testing
Define set of usability testing categories and identify goals for each
o Interactivity – interaction mechanisms are easy to understand and use
o Layout – placement of navigation, content, and functions allows the user to find them quickly
o Readability – content understandable
o Aesthetics – graphic design supports ease of use
o Display characteristics – WebApp makes good use of screen size and resolution
o Time sensitivity – content and features can be acquired in timely manner
o Personalization – adaptive interfaces
o Accessibility – special needs users
Design tests that will enable each goal to be evaluated
Select participants to conduct the tests
Instrument participants’ interactions with the WebApp during testing
Develop method for assessing usability of the WebApp
Compatibility Testing
Navigation Testing
Need to ensure that all mechanisms that allow the WebApp user to travel through the
WebApp are functional
Need to validate that each navigation semantic unit (NSU) can be achieved by the
appropriate user category
Navigational Links
Redirects
Bookmarks
Frames and framesets
Site maps
Internal search engines
Navigation semantic units are defined by a set of pathways that connect navigation nodes
Each NSU must allow a user from a defined user category to achieve specific requirements
defined by a use-case
Testing needs to ensure that each path is executed in its entirety without error
Every relevant path must be tested
User must be given guidance to follow or discontinue each path based on current location in
site map
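One simple automatable check, assuming a hypothetical site map expressed as an adjacency list of navigational links, is to verify that every page in the NSU is reachable from the entry page:

```python
from collections import deque

# Hypothetical site map: page -> outgoing navigational links
SITE_MAP = {
    "home": ["catalog", "search"],
    "catalog": ["item", "home"],
    "search": ["item"],
    "item": ["checkout", "home"],
    "checkout": [],
}

def reachable(start, site_map):
    """Breadth-first walk of the site map: returns every page a user
    can reach from `start` by following navigational links."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for nxt in site_map.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Every page in the NSU should be reachable from the entry node
print(sorted(reachable("home", SITE_MAP)))
# → ['catalog', 'checkout', 'home', 'item', 'search']
```

Pages missing from the result are orphaned from that entry point and indicate broken or absent navigational links.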
Configuration Testing
Server-side Issues
Compatibility of WebApp with server OS
Correct file and directory creation by WebApp
System security measures do not degrade user service provided by the WebApp
Testing WebApp with distributed server configuration
WebApp properly integrated with database software
Correct execution of WebApp scripts
Examination of system administration errors for impact on WebApp
On-site testing of proxy servers
Client-side issues
Hardware
Operating systems
Browser software
User interface components
Plug-ins
Connectivity
Firewalls
Authentication
Encryption
Authorization
Performance Testing
Used to uncover performance problems that can result from lack of server-side resources,
inappropriate network bandwidth, inadequate database capabilities, faulty operating system
capabilities, poorly designed WebApp functionality, and hardware/software issues
Intent is to discover how the system responds to loading and to collect metrics that will lead
to improved performance
o Does the server response time degrade to a point where it is noticeable and unacceptable?
o At what point (in terms of users, transactions or data loading) does performance become
unacceptable?
o What system components are responsible for performance degradation?
o What is the average response time for users under a variety of loading conditions?
o Does performance degradation have an impact on system security?
o Is WebApp reliability or accuracy affected as the load on the system grows?
o What happens when loads that are greater than maximum server capacity are applied?
o Does performance degradation have an impact on company revenues?
Forces loading to be increased to the breaking point to determine how much capacity the
WebApp can handle
o Does system degrade gracefully?
o Are users made aware that they cannot reach the server?
o Does server queue resource requests during heavy demand and then process the queue
when demand lessens?
o Are transactions lost as capacity is exceeded?
o Is data integrity affected when capacity is exceeded?
o How long until the system comes back on-line after a failure?
o Are certain WebApp functions discontinued as capacity is reached?
Product Metrics for Software
Overview
This chapter describes the use of product metrics in the software quality assurance process.
Software engineers use product metrics to help them assess the quality of the design and
construction of the software product being built. Product metrics provide software engineers
with a basis to conduct analysis, design, coding, and testing more objectively. Qualitative
criteria for assessing software quality are not always sufficient by themselves. The process of
using product metrics begins by deriving the software measures and metrics that are appropriate
for the software representation under consideration. Then data are collected and metrics are
computed. The computed metrics are compared to pre-established guidelines and historical data,
and the results of these comparisons are used to guide modifications made to work products
arising from analysis, design, coding, or testing.
Definitions
Measure – provides a quantitative indication of the extent, amount, capacity, or size of some
attribute of a product or process
Measurement – act of determining a measure
Metric – statistic that relates individual measures to one another
Indicator – metric or combination of metrics that provide insight into the software process,
software project, or the product itself to make things better
Metrics Characterization and Validation Principles
Function-based metrics
o Function points
Specification quality metrics (Davis)
o Specificity
o Completeness
Morphology (number of nodes and arcs in program graph)
Design structure quality index (DSQI)
OO Design Metrics
Class-Oriented Metrics
Cohesion metrics (data slice, data tokens, glue tokens, superglue tokens, stickiness)
Coupling metrics (data and control flow, global, environmental)
Complexity metrics (e.g. cyclomatic complexity)
Operation-Oriented Metrics
Average operation size (OSavg)
Operation complexity (OC)
Average number of parameters per operation (NPavg)
Layout appropriateness
Layout complexity
Layout region complexity
Recognition complexity
Recognition time
Typing effort
Mouse pick effort
Selection complexity
Content acquisition time
Word count
Body text percentage
Emphasized body text %
Text positioning count
Text cluster count
Link count
Page size
Graphic percentage
Graphics count
Color count
Font count
Content Metrics
Page wait
Page complexity
Graphic complexity
Audio complexity
Video complexity
Animation complexity
Scanned image complexity
Navigation Metrics
Testing Metrics
Metrics that predict the likely number of tests required during various testing phases
o Architectural design metrics
o Cyclomatic complexity can target modules that are candidates for extensive unit testing
o Halstead effort
Metrics that focus on test coverage for a given component
o Cyclomatic complexity lies at the core of basis path testing
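For a connected flow graph with E edges and N nodes, cyclomatic complexity is V(G) = E − N + 2, the number of independent basis paths that must be exercised. A sketch, using a hypothetical flow graph for a function containing one if/else and one loop:

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a single connected flow graph; equals the
    number of independent basis paths to exercise in basis path testing."""
    return len(edges) - len(nodes) + 2

# Hypothetical flow graph: one if/else decision plus one loop
nodes = ["entry", "if", "then", "else", "loop", "exit"]
edges = [("entry", "if"), ("if", "then"), ("if", "else"),
         ("then", "loop"), ("else", "loop"),
         ("loop", "loop"),          # back edge of the loop
         ("loop", "exit")]

print(cyclomatic_complexity(edges, nodes))  # → 3
```

Each binary decision adds one to V(G), so the straight-line baseline of 1 plus the if/else and the loop condition gives 3 basis paths.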
Encapsulation
o Lack of cohesion in methods (LCOM)
o Percent public and protected (PAP)
o Public access to data members (PAD)
Inheritance
o Number of root classes (NOR)
o Fan in (FIN)
o Number of children (NOC)
o Depth of inheritance tree (DIT)
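Two of these inheritance metrics are easy to compute directly from a class hierarchy; the sketch below uses Python introspection and a hypothetical Shape hierarchy:

```python
def depth_of_inheritance(cls):
    """DIT: length of the longest path from the class up to the root (object)."""
    if cls is object:
        return 0
    return 1 + max(depth_of_inheritance(base) for base in cls.__bases__)

def number_of_children(cls):
    """NOC: count of immediate subclasses of the class."""
    return len(cls.__subclasses__())

# Hypothetical hierarchy
class Shape: pass
class Polygon(Shape): pass
class Circle(Shape): pass
class Triangle(Polygon): pass

print(depth_of_inheritance(Triangle))  # → 3 (Triangle -> Polygon -> Shape -> object)
print(number_of_children(Shape))       # → 2 (Polygon, Circle)
```

A deep DIT means more inherited behavior to retest in each subclass; a large NOC means a change to the parent ripples into many children.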
Maintenance Metrics
Overview
Project management involves the planning, monitoring, and control of people, process, and
events that occur during software development.
Everyone manages, but the scope of each person's management activities varies according to
his or her role in the project.
Software needs to be managed because it is a complex undertaking with a long duration time.
Managers must focus on the four P's to be successful (people, product, process, and project).
A project plan is a document that defines the four P's in such a way as to ensure a cost
effective, high quality software product.
The only way to be sure that a project plan worked correctly is by observing that a high
quality product was delivered on time and under budget.
Management Spectrum
People
Closed paradigm (top level problem solving and internal coordination managed by team
leader, good for projects that repeat past efforts)
Random paradigm (team is loosely structured; success depends on initiative of individual team
members, paradigm excels when innovation and technical breakthroughs are required)
Open paradigm (rotating task coordinators and group consensus, good for solving complex
problems – not always efficient as other paradigms)
Synchronous paradigm (relies on natural problem compartmentalization and team organized
to require little active communication with each other)
Toxic Team Environment Characteristics
1. Frenzied work atmosphere where team members waste energy and lose focus on work
objectives
2. High frustration and group friction caused by personal, business, or technological problems
3. Fragmented or poorly coordinated procedures or improperly chosen process model blocks
accomplishments
4. Unclear role definition that results in lack of accountability or finger pointing
5. Repeated exposure to failure that leads to loss of confidence and lower morale
Agile Teams
Teams have significant autonomy to make their own project management and technical
decisions
Planning kept to minimum and is constrained only by business requirements and
organizational standards
Team self-organizes as the project proceeds to maximize the contribution of each individual's talents
May conduct daily (10 – 20 minute) meetings to synchronize and coordinate each day's work
o What has been accomplished since the last meeting?
o What needs to be accomplished by the next meeting?
o How will each team member contribute to accomplishing what needs to be done?
o What roadblocks exist that have to be overcome?
The Product
The Process
Work tasks may vary but the common process framework (CPF) is invariant (project size
does not change the CPF)
The detail of the actual work tasks used to complete each framework activity is dependent
on the size and complexity of the project
The job of the software engineer is to estimate the resources required to move each function
through the framework activities to produce each work product
Project decomposition begins when the project manager tries to determine how to accomplish
each CPF activity
W5HH Principle
Critical Practices
Process and Project Metrics
Overview
Software process and project metrics are quantitative measures that enable software engineers to
gain insight into the efficiency of the software process and the projects conducted using the
process framework. In software project management, we are primarily concerned with
productivity and quality metrics. There are four reasons for measuring software processes,
products, and resources (to characterize, to evaluate, to predict, and to improve).
Metrics should be collected so that process and product indicators can be ascertained
Process metrics are used to provide indicators that lead to long-term process improvement
Project metrics enable project manager to
o Assess status of ongoing project
o Track potential risks
o Uncover problem areas before they become critical
o Adjust work flow or tasks
o Evaluate the project team's ability to control quality of software work products
Process Metrics
Private process metrics (e.g. defect rates by individual or module) are known only to the
individual or team concerned.
Public process metrics enable organizations to make strategic changes to improve the
software process.
Metrics should not be used to evaluate the performance of individuals.
Statistical software process improvement helps an organization discover where it is
strong and where it is weak.
Project Metrics
A software team can use software project metrics to adapt project workflow and technical
activities.
Project metrics are used to avoid development schedule delays, to mitigate potential risks,
and to assess product quality on an on-going basis.
Every project should measure its inputs (resources), outputs (deliverables), and results
(effectiveness of deliverables).
Software Measurement
Size-Oriented Metrics
Derived by normalizing (dividing) any direct measure (e.g. defects or human effort)
associated with the product or project by LOC.
Size-oriented metrics are widely used, but their validity and applicability are widely debated.
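As a sketch, the normalization is a single division; the project figures below are hypothetical:

```python
def per_kloc(direct_measure, loc):
    """Normalize a direct measure (defects, effort, cost, ...) by thousands
    of lines of code (KLOC)."""
    return direct_measure / (loc / 1000.0)

# Hypothetical project: 12,100 LOC with 29 defects found
print(round(per_kloc(29, 12_100), 2))  # → 2.4 defects per KLOC
```

The same helper normalizes effort (person-months per KLOC) or cost (dollars per KLOC), which is what makes cross-project comparison possible once a baseline exists.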
Function-Oriented Metrics
Function points are computed from direct measures of the information domain of a business
software application and assessment of its complexity.
Once computed, function points are used like LOC to normalize measures for software
productivity, quality, and other attributes.
The relationship between lines of code and function points depends upon the programming
language that is used to implement the software and the quality of the design.
Function points and LOC-based metrics have been found to be relatively accurate predictors
of software development effort and cost
To use LOC and FP for estimation, a historical baseline of information must be established.
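A sketch of the standard function-point computation, FP = count-total × (0.65 + 0.01 × ΣFi), using the "average" complexity weights for the five information-domain values; the counts and adjustment-factor ratings below are hypothetical:

```python
# Average complexity weights for the five information-domain values
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def function_points(counts, value_adjustment_factors):
    """FP = count_total * (0.65 + 0.01 * sum(Fi)), where each Fi is a
    complexity adjustment rating from 0 (no influence) to 5 (essential)."""
    count_total = sum(counts[k] * WEIGHTS[k] for k in counts)
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

# Hypothetical information-domain counts for a small business application
counts = {"inputs": 10, "outputs": 8, "inquiries": 6, "files": 4, "interfaces": 2}
fs = [3] * 14   # all fourteen adjustment factors rated "average"

print(round(function_points(counts, fs), 2))  # → 169.06
```

With a historical baseline (e.g. person-months per FP), this total can then be used to normalize productivity and quality measures just like LOC.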
Object-Oriented Metrics
Factors assessing software quality come from three distinct points of view (product
operation, product revision, product modification).
Software quality factors requiring measures include
o correctness (defects per KLOC)
o maintainability (mean time to change)
o integrity (threat and security)
o usability (easy to learn, easy to use, productivity increase, user attitude)
Defect removal efficiency (DRE) is a measure of the filtering ability of the quality assurance
and control activities as they are applied throughout the process framework
DRE = E / (E + D)
E = number of errors found before delivery of work product
D = number of defects found after work product delivery
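The formula above is a one-liner in code; the error and defect counts below are hypothetical:

```python
def dre(errors_before, defects_after):
    """Defect removal efficiency: DRE = E / (E + D), the fraction of all
    problems that were filtered out before delivery of the work product."""
    return errors_before / (errors_before + defects_after)

# Hypothetical work product: 90 errors caught before delivery, 10 defects after
print(dre(90, 10))  # → 0.9
```

A DRE approaching 1.0 indicates the review and testing filters are working; a falling DRE between activities flags the step that is letting errors propagate.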
Arguments for Software Metrics
Baselines
Establishing a metrics baseline can benefit the process, project, and product levels
Baseline data must often be collected by historical investigation of past projects (it is better
to collect the data while projects are on-going)
To be effective the baseline data needs to have the following attributes:
o data must be reasonably accurate, not guesstimates
o data should be collected for as many projects as possible
o measures must be consistent
o applications should be similar to work that is to be estimated
Estimation for Software Projects
Overview
Software planning involves estimating how much time, effort, money, and resources will be
required to build a specific software system. After the project scope is determined and the
problem is decomposed into smaller problems, software managers use historical project data (as
well as personal experience and intuition) to determine estimates for each subproblem. The final estimates
are typically adjusted by taking project complexity and risk into account. The resulting work
product is called a project management plan. Managers will not know that they have done a good
job estimating until the project post mortem. It is essential to track resources and revise estimates
as a project progresses.
Project complexity
Project size
Degree of structural uncertainty (degree to which requirements have solidified, the ease with
which functions can be compartmentalized, and the hierarchical nature of the information
processed)
Availability of historical information
Software Scope
Describes the data to be processed and produced, control parameters, function, performance,
constraints, external interfaces, and reliability.
Often functions described in the software scope statement are refined to allow for better
estimates of cost and schedule.
Determine the customer's overall goals for the proposed system and any expected benefits.
Determine the customer's perceptions concerning the nature of a good solution to the
problem.
Evaluate the effectiveness of the customer meeting.
Feasibility
Estimation of Resources
Human Resources (number of people required and skills needed to complete the development
project)
Reusable Software Resources (off-the-shelf components, full-experience components,
partial-experience components, new components)
Environment Resources (hardware and software required to be accessible by software team
during the development process)
Decomposition Techniques
Process-based estimation (decomposition based on tasks required to complete the software
process framework)
Use-case estimation (promising, but controversial due to lack of standardization of use cases)
Typically derived from regression analysis of historical software project data with estimated
person-months as the dependent variable and KLOC, FP, or object points as independent
variables.
Constructive Cost Model (COCOMO) is an example of a static estimation model.
COCOMO II is a hierarchy of estimation models that take the process phase into account,
making it more of a dynamic estimation model.
The Software Equation is an example of a dynamic estimation model.
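As an illustration of a static, size-driven model, the basic COCOMO effort equation is E = a · (KLOC)^b, with coefficients a and b chosen by development mode; the 32-KLOC project below is hypothetical:

```python
# Basic COCOMO coefficients (Boehm): effort in person-months, size in KLOC
COEFFS = {
    "organic":       (2.4, 1.05),  # small teams, familiar environment
    "semi-detached": (3.0, 1.12),  # mixed experience, moderate constraints
    "embedded":      (3.6, 1.20),  # tight hardware/operational constraints
}

def basic_cocomo_effort(kloc, mode="organic"):
    """E = a * KLOC**b person-months for the chosen development mode."""
    a, b = COEFFS[mode]
    return a * kloc ** b

# Hypothetical organic-mode project of 32 KLOC
print(round(basic_cocomo_effort(32, "organic"), 1))  # → 91.3 person-months
```

The exponent b > 1 is what captures the diseconomy of scale: doubling size more than doubles estimated effort, and the embedded mode grows fastest.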
1. Develop estimates using effort decomposition, FP analysis, and any other method that is
applicable for conventional applications.
2. Using the requirements model (Chapter 6), develop use cases and determine a count.
Recognize that the number of use cases may change as the project progresses.
3. From the requirements model, determine the number of key classes (called analysis classes in
Chapter 6).
4. Categorize the type of interface for the application and develop a multiplier for support
classes
5. Multiply the number of key classes (step 3) by the multiplier to obtain an estimate for the
number of support classes.
6. Multiply the total number of classes (key + support) by the average number of work-units per
class.
7. Cross check the class-based estimate by multiplying the number of use cases by the average
number of work-units per use case.
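Steps 3 through 7 above reduce to straightforward arithmetic; all of the numbers below are hypothetical:

```python
def oo_estimate(key_classes, support_multiplier, work_units_per_class,
                use_cases, work_units_per_use_case):
    """Class-based OO estimate (steps 3-7) plus the use-case cross-check."""
    support_classes = key_classes * support_multiplier               # step 5
    class_based = (key_classes + support_classes) * work_units_per_class  # step 6
    cross_check = use_cases * work_units_per_use_case                # step 7
    return class_based, cross_check

# Hypothetical project: 16 key classes, GUI interface multiplier of 2.0,
# 20 work-units per class, 30 use cases at 18 work-units each
print(oo_estimate(16, 2.0, 20, 30, 18))  # → (960.0, 540)
```

A large gap between the class-based figure and the use-case cross-check (as here) signals that the class count, the multiplier, or the per-unit averages should be re-examined before committing to an estimate.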
5. The effort estimates for all scenarios in the increment are summed to get an increment
estimate
Make-Buy Decision
It may be more cost effective to acquire a piece of software rather than develop it.
Decision tree analysis provides a systematic way to sort through the make-buy decision.
As a rule outsourcing software development requires more skillful management than in-
house development of the same product.
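A decision-tree comparison reduces to computing the probability-weighted expected cost of each option; the branch probabilities and costs below are hypothetical:

```python
def expected_cost(branches):
    """Expected cost of one decision path: sum of probability-weighted outcomes."""
    return sum(p * cost for p, cost in branches)

# Hypothetical outcomes as (probability, cost) pairs for each option
build = [(0.30, 380_000),   # development turns out simple
         (0.70, 450_000)]   # development turns out difficult
buy   = [(0.40, 210_000),   # package needs only minor changes
         (0.60, 310_000)]   # package needs major changes

costs = {"build": expected_cost(build), "buy": expected_cost(buy)}
print(costs["build"], costs["buy"])
print(min(costs, key=costs.get))  # → buy
```

Expected cost is only one branch of the analysis; delivery date, support, and strategic fit weigh on the real decision as well.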
Risk Management
Overview
Risks are potential problems that might affect the successful completion of a software project.
Risks involve uncertainty and potential losses. Risk analysis and management is intended to help
a software team understand and manage uncertainty during the development process. The
important thing is to remember that things can go wrong and to make plans to minimize their
impact when they do. The work product is called a Risk Mitigation, Monitoring, and
Management Plan (RMMM) or a set of Risk Information Sheets (RIS).
Risk Strategies
Reactive strategies - very common, also known as fire fighting, project team sets resources
aside to deal with problems and does nothing until a risk becomes a problem
Proactive strategies - risk management begins long before technical work starts, risks are
identified and prioritized by importance, then team builds a plan to avoid risks if they can or
minimize them if the risks turn into problems
Software Risks
Business risks - threaten the viability of the software to be built (market risks, strategic risks,
sales risks, management risks, budget risks)
Known risks - predictable from careful evaluation of current project plan and those
extrapolated from past project experience
Unknown risks - some problems simply occur without warning
Risk Identification
Product-specific risks - the project plan and software statement of scope are examined to
identify any special characteristics of the product that may threaten the project plan
Generic risks - are potential threats to every software product (product size, business impact,
customer characteristics, process definition, development environment, technology to be
built, staff size and experience)
Product size
Business impact
Stakeholder characteristics
Process definition
Development environment
Technology to be built
Staff size and experience
1. Have top software and customer managers formally committed to support the project?
2. Are end-users enthusiastically committed to the project?
3. Are requirements fully understood by developers and customers?
4. Were customers fully involved in requirements definition?
5. Do end-users have realistic expectations?
6. Is project scope stable?
7. Does software team have the right skill set?
8. Are project requirements (scope) stable?
9. Does the project team have experience with technology to be implemented?
10. Is the number of people on project team adequate to do the job?
11. Do all stakeholders agree on the importance of the project and the requirements for the
system being built?
Risk Impact
Risk Projection (Estimation)
Factors affecting risk consequences - nature (types of problems arising), scope (combines
severity with extent of project affected), timing (when and how long impact is felt)
If costs are associated with each risk table entry, the risk exposure metric can be
computed (RE = Probability * Cost) and added to the risk table.
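The per-entry computation and the project-wide total can be sketched in a few lines; the risk table below is hypothetical:

```python
def risk_exposure(probability, cost):
    """RE = P * C for one risk-table entry."""
    return probability * cost

# Hypothetical risk table: (risk, probability, cost if the risk occurs)
risk_table = [
    ("key staff turnover",        0.70,  60_000),
    ("reusable components fail",  0.30,  25_000),
    ("late hardware delivery",    0.10, 120_000),
]

total_re = sum(risk_exposure(p, c) for _, p, c in risk_table)
print(round(total_re))  # → 61500
```

The total RE gives a rough contingency reserve, while sorting entries by individual RE prioritizes which risks deserve mitigation effort first.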
Risk Assessment
1. Define referent levels for each project risk that can cause project termination (performance
degradation, cost overrun, support difficulty, schedule slippage).
2. Attempt to develop a relationship between each risk triple (risk, probability, impact) and each
of the reference levels.
3. Predict the set of referent points that define a region of termination, bounded by a curve or
areas of uncertainty.
4. Try to predict how combinations of risks will affect a referent level.
Risk Refinement
Process of restating the risks as a set of more detailed risks that will be easier to mitigate,
monitor, and manage.
CTC (condition-transition-consequence) format may be a good representation for the detailed
risks (e.g. given that <condition> then there is a concern that (possibly) <consequence>).
Risk mitigation (proactive planning for risk avoidance)
Risk monitoring
o Assessing whether predicted risks actually occur
o Ensuring risk aversion steps are being properly applied
o Collecting information for future risk analysis, attempting to determine which risks
caused which project problems
Risk management and contingency planning (actions to be taken in the event that mitigation
steps have failed and the risk has become a live problem)
Risks are also associated with software failures that occur in the field after the development
project has ended.
Computers control many mission critical applications in modern times (weapons systems,
flight control, industrial processes, etc.).
Software safety and hazard analysis are quality assurance activities that are of particular
concern for these types of applications and are discussed later in the text.