SQA PDF Notes
software error
software fault
software failure
Causes of Software Error
1. Faulty requirements definition
• Usually considered the root cause of software errors
• Incorrect requirement definitions
• Simply stated, ‘wrong’ definitions (formulas, etc.)
• Incomplete definitions
• Unclear or implied requirements
• Missing requirements
• Just flat-out ‘missing.’ (e.g. Program Element Code)
• Inclusion of unneeded requirements
• (Many projects have gone amok by including far too many requirements that will never be used.)
• Impacts budgets, complexity, development time, …
Causes of Software Error
2. Client-developer communication failures
• Misunderstanding of instructions in requirements
documentation
• Misunderstanding of written changes during development.
• Misunderstanding of oral changes during development.
• Lack of attention by developers
• to client messages dealing with requirement changes, and
• to client responses to developer questions
• Very often, these very talented individuals come from different planets, it seems.
• Clients represent the users; developers sometimes represent an entirely different mindset!
Causes of Software Error
3. Deliberate deviations from software requirements
• Developer reuses previous / similar work to save time.
• Often the reused code needs modification that it may not get, or it contains unneeded / unusable extraneous code.
• Book suggests developer(s) may overtly omit functionality due
to time / budget pressures.
• Another BAD choice; System testing will uncover these
problems to everyone’s dismay!
• I have never seen this done intentionally!
• Developer inserting unapproved ‘enhancements’ (perfective
coding; a slick new sort / search….); may also ignore some
seemingly minor features, which sometimes are quite major.
• Have seen this and it too causes problems and
embarrassment during reviews.
Causes of Software Error
🠶 4. Logical design errors
• Definitions that represent software requirements by means of
erroneous algorithms.
• Yep! Wrong formulas; Wrong Decision Logic Tables;
incorrect text; wrong operators / operands…
• Process definitions: the procedures specified by the systems analyst are not an accurate reflection of the business process.
• Note: all errors are not necessarily software errors.
• This seems like a procedural error, and likely not a part of
the software system…
• Erroneous definition of boundary conditions – a common source of errors
• The "absolutes" like "no more than," "fewer than," "n times or more," "the first time," etc. (see the sketch below)
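A hedged illustration of how easily a boundary-condition error creeps into code (the class and requirement below are hypothetical, not from the course):

class OrderRules {
    // Requirement (hypothetical): "no more than 10 items per order".
    // Defect: uses < instead of <=, so an order of exactly 10 items is wrongly rejected.
    static boolean withinLimit(int items) {
        return items < 10;   // correct boundary handling would be: items <= 10
    }
}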
Causes of Software Error
🠶 4. Logical design errors (continued)
• Omission of required software system states
Example: "If rank is >= O1 and RPI is numeric, then…" It is easy to miss an action based on the software system state.
9. Documentation errors
• Errors in the design documents
• Trouble for subsequent redesign and reuse
• Errors in the documentation within the software for the User
Manuals
• Errors in on-line help, if available.
• Listing of non-existing software functions
• Planned early but dropped; remain in documentation!
• Many error messages are totally meaningless
Causes of Software Error
The nine causes of software errors are:
1. Faulty requirements definition
2. Client-developer communication failures
3. Deliberate deviations from software requirements
4. Logical design errors
5. Coding errors
6. Non-compliance with documentation and coding instructions
7. Shortcomings of the testing process
8. User interface and procedure errors
9. Documentation errors
• Correctness
• Reliability
• Efficiency
• Integrity
• Usability
• Maintainability
• Flexibility
• Testability
3. Testability Requirements –
– Are intermediate results of computations predefined to assist
testing?
– Are log files created? Backup?
– Does the software diagnose itself prior to and perhaps during
operations?
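One way such testability hooks might look in code; a minimal sketch, assuming a payroll-style calculation and java.util.logging (the class, method names, and values are illustrative):

import java.util.logging.Logger;

class PayrollCalculator {
    private static final Logger LOG = Logger.getLogger(PayrollCalculator.class.getName());

    // Intermediate results are logged so testers can check each step, not just the final answer.
    double netPay(double gross, double taxRate) {
        double tax = gross * taxRate;
        LOG.info("intermediate result: tax = " + tax);   // written to the log for test support
        double net = gross - tax;
        LOG.info("final result: net = " + net);
        return net;
    }

    // A simple self-diagnosis check that could run prior to normal operation.
    boolean selfTest() {
        return Math.abs(netPay(100.0, 0.2) - 80.0) < 1e-9;
    }
}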
Product transition factors
• Portability
• Reusability
• Interoperability
[Diagram: Planning → Engineering activities (design, code, test…) → Customer Evaluation (errors, changes, new requirements…)]
Inception Phase
Elaboration Phase
🠶 Requirements Analysis and Capture
🠶 Use Case Analysis
🠶 Use Cases (80% written and reviewed by end of phase)
🠶 Use Case Model (80% done)
🠶 Scenarios
🠶 Sequence and Collaboration Diagrams
🠶 Class, Activity, Component, State Diagrams
🠶 Glossary (so users and developers can speak common vocabulary)
🠶 Domain Model
🠶 To understand the problem: the system’s requirements as they
exist within the context of the problem domain
🠶 Risk Assessment Plan revised
🠶 Architecture Document
Construction Phase
Transition Phase
🠶 The transition phase consists of the transfer of the system to the user
community
🠶 Includes manufacturing, shipping, installation, training, technical
support, and maintenance
🠶 Development team begins to shrink
🠶 Control is moved to maintenance team
🠶 Alpha, Beta, and final releases
🠶 Software updates
🠶 Integration with existing systems (legacy, existing versions…)
Agile
Outline
Quality
Testing
Assurance
Quality Assurance vs Testing
Quality Assurance
Testing
Quality Assurance
🠶 The more bugs you find, the more bugs there are.
Common Error Categories
🠶 Boundary-Related
🠶 Calculation/Algorithmic
🠶 Control flow
🠶 Errors in handling/interpreting data
🠶 User Interface
🠶 Exception handling errors
🠶 Version control errors
Testing Principles
🠶 Design
🠶 Does this satisfy the specification?
🠶 Does it conform to the required criteria?
🠶 Will this facilitate integration with existing systems?
🠶 Implemented Systems
🠶 Does the system do what it is supposed to do?
🠶 Documentation
🠶 Is this documentation accurate?
🠶 Is it up to date?
🠶 Does it convey the information that it is meant to convey?
The Testing Process
[Process diagram: Test Planning → Test Design and Specification → Test Implementation (if automated) → Test Result Analysis and Reporting → Test Control, Management and Review]
Test Planning
🠶 Reporting problems
🠶 Short Description
🠶 Where the problem was found
🠶 How to reproduce it
🠶 Severity
🠶 Priority
🠶 Can this problem lead to new test case ideas?
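For illustration only, the reported fields above could be captured in a structure like the following (a sketch, not any specific tool's schema; requires Java 16+ for records):

// Hypothetical problem-report structure mirroring the fields listed above.
record ProblemReport(
        String shortDescription,
        String whereFound,         // module, screen, or build in which the problem was observed
        String stepsToReproduce,
        String severity,           // impact of the problem on the system
        String priority) {         // urgency of fixing it
}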
Test Control, Management and Review
[Diagram: Components A, B, and C with a Database]
Integration Testing
[Diagram: Components A, B, and C with a Database]
Unit Testing
[Diagram: Components A, B, and C with a Database]
Outline
Defect-removal effectiveness rates
🠶 requirements specs review 50%
🠶 design inspection 60%
🠶 design review 50%
🠶 code inspections 65%
🠶 unit test 50%
🠶 Unit test > code review 30%
🠶 integration test 50%
🠶 system tests / acceptance 50%
🠶 documentation review 50%
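As a rough, hedged illustration of how such rates might be combined (an assumption for the sake of the example: the filters are applied in sequence and act independently), the fraction of defects escaping all of them is the product of the individual escape rates:

class RemovalEffectiveness {
    public static void main(String[] args) {
        // Selected effectiveness rates from the list above (requirements review ... documentation review).
        double[] effectiveness = {0.50, 0.60, 0.50, 0.65, 0.50, 0.50, 0.50, 0.50};
        double escaping = 1.0;
        for (double e : effectiveness) {
            escaping *= (1.0 - e);   // simplifying assumption: independent filters
        }
        System.out.println("Fraction of defects escaping all filters = " + escaping);
    }
}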
Cost Removal
It also should contain the schedules for the following audits proposed
for the project:
🠶 Periodic conformance audits
🠶 Phase-end audits
🠶 Investigative audits (and criteria)
🠶 Delivery audits
Chapter 3: Reviews
Reference
This chapter is based on the book:
Mastering Software Quality Assurance: Best Practices, Tools
and Techniques for Software Developers
Introduction to Software Testing
Outline
🠶 Review Choices:
🠶 Formal Design Reviews (FDR)
🠶 Peer reviews (inspections and walkthroughs)
🠶 Used especially in design and coding phase
Review objectives
Direct objectives – Deal with the current project
b. To record analysis and design errors that will serve as a basis for
future corrective actions. (very important)
Review objectives
🠶 Many different kinds of reviews that apply to different objectives.
🠶 Reviews are not randomly thrown together.
🠶 Well-planned and orchestrated.
🠶 Objectives, roles, actions, participation, …. Very involved tasks.
🠶 Participants are expected to contribute in their area of expertise.
It is important to note that a design review can take place any time an analysis or design document is produced, regardless of whether that document is a requirements specification or an installation document.
Formal Design Reviews
Review Team
🠶 Needs to be drawn from senior members of the team, together with senior members from other departments.
🠶 Why?
🠶 Escalation?
🠶 Team size should be 3-5 members.
🠶 Too large, and we have too much coordination.
🠶 This is a time for serious business – not
lollygagging!
Preparations for the Design Review
🠶 Participants in the review include the
🠶 Review leader,
🠶 Review team, and
🠶 Development team.
🠶 Review Leader:
🠶 appoints team members,
🠶 establishes the schedule,
🠶 distributes design documents, and more
Preparations for the Design Review
🠶 Review Team preparation:
🠶 review document; list comments PRIOR to review session.
🠶 For large design docs, leader may assign parts to individuals;
🠶 Complete a checklist
🠶 Development Team preparations:
🠶 Prepare a short presentation of the document.
🠶 Focus on main issues rather than describing the process.
🠶 This assumes review team has read document and is familiar
with project’s outlines.
The Design Review Session
🠶 Tipoff:
🠶 Very short report – limited to documented approval of
the design and listing few, if any, defects
🠶 Short report approving continuation to next project
phase in full - listing several minor defects but no action
items.
🠶 A report listing several action items of varied severity but
no indication of follow-up (no correction schedule, no
documented activities, …)
Guidelines for Design Review
Post-Review Activities
🠶 Will discuss
🠶 1. inspections and
🠶 2. walkthroughs.
🠶 The difference between formal design reviews and peer reviews lies in both their participants and their authority.
🠶 DRs: most participants hold superior positions to the
project leaders and customer reps;
🠶 In peer reviews, the participants are the project leader's equals:
🠶 members of his/her department and other units.
Peer Reviews
Author is the presenter
Focus on Peer Reviews:
Walk Throughs:
🠶 A standards enforcer – team member specialized in development
standards and procedures;
🠶 locate deviations from these standards and procedures.
🠶 These problems substantially affect the team’s long-term
effectiveness for both development and follow-on
maintenance.
🠶 A maintenance expert – focus on maintainability / testability issues
to detect design defects that may hinder bug correction and
impact performance of future changes.
🠶 A maintenance expert - Focuses also on documentation
(completeness / correctness) vital for maintenance activity.
🠶 A user representative – an internal user (if the customer is in the unit) or an external representative; adds to the review's validity through his/her point of view as user-customer rather than designer-supplier.
Participants of Peer Reviews
Team Assignments
🠶 Presenter:
🠶 For inspections:
🠶 The presenter of the document is chosen by the moderator and should not be the document's author.
🠶 Sometimes the software coder serves as presenter due to his/her familiarity with the design logic and its implications for coding.
🠶 For walk-throughs:
🠶 Author most familiar with the document should be chosen
to present it to the group.
🠶 Some argue that a neutral person should be used.
🠶 Scribe:
🠶 Team leader will often serve as the scribe and record noted
defects to be corrected.
Preparation for a Peer Review Session
Session Documentation
🠶 For inspections – much more comprehensive
🠶 Inspection Session Findings Report – produced by scribe
🠶 Inspection Session Summary Report – compiled by
inspection leader after session or series of sessions dealing
with the same document
🠶 Report summarizes inspection findings and resources invested in the inspections…
🠶 Report serves as inputs for analysis aimed at inspection
process improvement and corrective actions that go
beyond the specific document or project.
🠶 For walkthroughs – copies of the error documentation should
be provided to the development team and the session
participants.
The Post Review Session
🠶 Inspection:
🠶 Does not end with a review session or distribution of
reports
🠶 Post inspection activities are conducted to attest to:
🠶 Prompt, effective correction / reworking of all errors
🠶 Transmission of the inspection reports to controlling
authority for analysis
The Efficiency of Peer Reviews
Software testing objectives
🠶 To prevent defects by evaluating work products such as requirements, user stories, design, and code
🠶 To verify whether all specified requirements have been fulfilled
🠶 To check whether the test object is complete and validate if it works as
the users and other stakeholders expect
🠶 To build confidence in the level of quality of the test object
🠶 To find defects and failures, thus reducing the level of risk of inadequate software quality
🠶 To provide sufficient information to stakeholders to allow them to make
informed decisions, especially regarding the level of quality of the test
object
🠶 To comply with contractual, legal, or regulatory requirements or
standards, and/or to verify the test object’s compliance with such
requirements or standards
Errors, Defects, and Failures
🠶 A person can make an error (mistake), which can lead to the introduction of a
defect (fault or bug) in the software code or in some other related work
product.
🠶 An error that leads to the introduction of a defect in one work product can trigger an
error that leads to the introduction of a defect in a related work product.
🠶 If a defect in the code is executed, this may cause a failure, but not necessarily
in all circumstances.
🠶 failures can also be caused by environmental conditions. For example, radiation,
electromagnetic fields, and pollution can cause defects in firmware or influence the
execution of software by changing hardware conditions.
🠶 Not all unexpected test results are failures. False positives may occur due to errors in
the way tests were executed, or due to defects in the test data, the test
environment, or other testware, or for other reasons. The inverse situation can also
occur, where similar errors or defects lead to false negatives. False negatives are
tests that do not detect defects that they should have detected; false positives are
reported as defects, but aren’t actually defects.
Errors, Defects, and Failures (cont.)
Test Techniques (cont.)
🠶 Some techniques are more applicable to certain situations and test levels;
others are applicable to all test levels.
🠶 When creating test cases, testers generally use a combination of test
techniques to achieve the best results from the test effort.
🠶 The use of test techniques in the test analysis, test design, and test
implementation activities can range from very informal (little to no
documentation) to very formal.
🠶 The appropriate level of formality depends on the context of testing, including
the maturity of test and development processes, time constraints, safety or
regulatory requirements, the knowledge and skills of the people involved, and
the software development lifecycle model being followed.
Blackbox test design technique
🠶 Suppose an input field accepts a single integer value as an input, using a keypad
to limit inputs so that non-integer inputs are impossible. The valid range is from 1 to
5, inclusive. So, there are three equivalence partitions: invalid (too low); valid;
invalid (too high). For the valid equivalence partition, the boundary values are 1
and 5. For the invalid (too high) partition, the boundary value is 6. For the invalid
(too low) partition, there is only one boundary value, 0, because this is a partition
with only one member.
🠶 Some variations of this technique identify three boundary values per boundary:
the values before, at, and just over the boundary. In the previous example, using
three-point boundary values, the lower boundary test values are 0, 1, and 2, and
the upper boundary test values are 4, 5, and 6 (see Jorgensen 2014).
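A minimal JUnit-style sketch of these boundary tests, assuming a hypothetical validator isValid(int) that accepts only integers from 1 to 5 (the class and method names are illustrative):

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class BoundaryValueTest {
    // Hypothetical system under test: valid range is 1..5 inclusive.
    static boolean isValid(int n) { return n >= 1 && n <= 5; }

    @Test
    void twoPointBoundaryValues() {
        assertFalse(isValid(0));   // invalid (too low) partition, boundary value 0
        assertTrue(isValid(1));    // valid partition, lower boundary
        assertTrue(isValid(5));    // valid partition, upper boundary
        assertFalse(isValid(6));   // invalid (too high) partition, boundary value 6
    }
}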
Outline
4.1. Definition and objectives
4.2. Software testing process
4.3. Blackbox testing techniques
4.3.1. Equivalence partitioning technique
4.3.2. Boundary value analysis technique
4.3.3. Decision table technique
4.3.4. State transition testing technique
4.3.5. Pairwise testing technique
Black-box Test Techniques
Decision Table Testing
🠶 Decision tables are a good way to record complex business rules that a system
must implement.
🠶 When creating decision tables, the tester identifies conditions (often inputs) and
the resulting actions (often outputs) of the system.
🠶 These form the rows of the table, usually with the conditions at the top and the
actions at the bottom.
🠶 Each column corresponds to a decision rule that defines a unique combination of
conditions which results in the execution of the actions associated with that rule.
🠶 The values of the conditions and actions are usually shown as Boolean values (true or
false) or discrete values (e.g., red, green, blue), but can also be numbers or ranges of
numbers.
🠶 These different types of conditions and actions might be found together in the same
table.
Black-box Test Techniques
Decision Table Testing (cont.)
🠶 The common notation in decision tables is as
follows:
🠶 For conditions:
🠶 Y means the condition is true (may also be shown as T or 1)
🠶 N means the condition is false (may also be shown as F or 0)
🠶 — means the value of the condition doesn’t matter (may also be
shown as N/A)
🠶 For actions:
🠶 X means the action should occur (may also be shown as Y or T or
1)
🠶 Blank means the action should not occur (may also be shown as
– or N or F or 0)
Black-box Test Techniques
Decision Table Testing (cont.)
🠶 A full decision table has enough columns (test cases) to cover every
combination of conditions.
🠶 The common minimum coverage standard for decision table testing is to
have at least one test case per decision rule in the table. This typically
involves covering all combinations of conditions.
🠶 Coverage is measured as the number of decision rules tested by at least one
test case, divided by the total number of decision rules, expressed as a
percentage.
🠶 The strength of decision table testing :
🠶 helps to identify all the important combinations of conditions, some of which
might otherwise be overlooked.
🠶 helps in finding any gaps in the requirements.
🠶 may be applied to all situations in which the behavior of the software
depends on a combination of conditions, at any test level.
Black-box Test Techniques
Decision Table Testing: Example
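The original example table did not survive extraction; in its place, a small hypothetical login example in the notation described above (Y/N for conditions, X for actions, a dash for "doesn't matter"):

Conditions            R1   R2   R3
Username valid?        Y    Y    N
Password valid?        Y    N    -
Actions
Grant access           X
Show error message          X    X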
Black-box Test Techniques
State Transition Testing: Example (cont.)
• Coverage level 1: create test cases such that each state occurs at least once. For example, 3 paths will reach level-1 coverage.
Black-box Test Techniques
State Transition Testing: Example (cont.)
• Coverage level 2: create test cases such that each event occurs at least once. For example, 3 paths reach level-2 coverage.
Black-box Test Techniques
State Transition Testing: Example (cont.)
• Coverage level 3: create test paths such that all transition paths are tested. A transition path is a defined sequence of state transitions, starting from the initial state and ending at the end state.
• This gives the best coverage because it exhausts all possibilities, but it is not feasible when the transition paths contain loops.
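A minimal sketch of what the three coverage levels could target in code, using a hypothetical two-state turnstile (the states, events, and transitions are illustrative, not from the slides):

class Turnstile {
    enum State { LOCKED, UNLOCKED }
    enum Event { COIN, PUSH }

    // Hypothetical transition function.
    static State next(State s, Event e) {
        switch (s) {
            case LOCKED:   return (e == Event.COIN) ? State.UNLOCKED : State.LOCKED;
            case UNLOCKED: return (e == Event.PUSH) ? State.LOCKED : State.UNLOCKED;
            default:       throw new IllegalStateException();
        }
    }
    // Level 1 (state coverage): a test path visiting LOCKED and UNLOCKED at least once.
    // Level 2 (event coverage): a test path in which COIN and PUSH each occur at least once.
    // Level 3 (transition coverage): tests exercising all four (state, event) pairs above.
}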
🠶 For code:
for (int i = 1; i <= 1000; i++)
    for (int j = 1; j <= 1000; j++)
        for (int k = 1; k <= 1000; k++)
            doSomethingWith(i, j, k);
There is only 1 execution path, but its length is 1000 * 1000 * 1000 = 1 billion calls to doSomethingWith(i, j, k).
🠶 For code:
if (c1) s11 else s12;
if (c2) s21 else s22;
if (c3) s31 else s32;
...
if (c32) s321 else s322;
🠶 With TC1 and TC2 from the previous slide, we obtain 3/4 = 75% branch coverage. Add test case 3:
🠶 TC3. foo(1,2,1,2); we obtain 100% branch coverage.
Coverage level 3
• Each subcondition must be executed at least once with a true outcome and at least once with a false outcome. We call this subcondition coverage.
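For example (a hypothetical condition, not one from the slides): for the condition if (a > 0 && b > 0), subcondition coverage requires each of a > 0 and b > 0 to take the value true at least once and false at least once; the input pairs (1, 1), (-1, 1), and (1, -1) would achieve this.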
🠶 Guideline:
• From the module under test, create the control flow graph G.
• Compute the cyclomatic complexity of G (called C):
o V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
o V(G) = P + 1, where P is the number of decision nodes.
o Note: if V(G) > 10, we should divide the module into submodules to reduce the probability of error.
🠶 We have C = 3 + 1 = 4, giving the basis paths:
🠶 1, 9
🠶 1, 2, 3, 8, 1, 9
🠶 1, 2, 4, 5, 7, 8, 1, 9
🠶 1, 2, 4, 6, 7, 8, 1, 9
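A hypothetical module shape that is consistent with C = 4 and the four basis paths listed above (the node numbers from the slide are shown as comments; the actual module from the slides is not available, so this is only a sketch):

class CyclomaticSketch {
    // Three decision nodes (while, if, if), so V(G) = P + 1 = 4.
    static int classify(int[] a) {
        int count = 0;
        int i = 0;
        while (i < a.length) {            // node 1: loop decision (exit to node 9)
            if (a[i] == 0) {              // node 2: first decision
                count++;                  // node 3
            } else {
                if (a[i] > 0) {           // node 4: second decision
                    count += 2;           // node 5
                } else {
                    count -= 1;           // node 6
                }
                                          // node 7: join
            }
            i++;                          // node 8: back to node 1
        }
        return count;                     // node 9: exit
    }
}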
Loop testing
[Loop types diagram: single loop, nested loops, sequence loops, unstructured loops]
Loop testing (cont.)
• For nested loops:
o Start with the innermost loop. Set the iteration parameters of the outer loops to their minimum values.
o Check min+1, a typical value, max-1, and max for the inner loop while the iteration parameters of the outer loops stay at their minimum.
o Continue outward until all loops are tested.
[Loop types diagram: single loop, nested loops, sequence loops, unstructured loops]
Loop testing (cont.)
• For sequence loops:
o Test them in the same way as nested loops.
[Loop types diagram: single loop, nested loops, sequence loops, unstructured loops]
Loop testing: Example
// LOOP TESTING EXAMPLE PROGRAM
import java.io.*;

class LoopTestExampleApp {
    // ------------------ FIELDS ----------------------
    public static BufferedReader keyboardInput =
        new BufferedReader(new InputStreamReader(System.in));
    private static final int MINIMUM = 1;
    private static final int MAXIMUM = 10;

    // ------------------ METHODS ---------------------
    /* Main method */
    public static void main(String[] args) throws IOException {
        System.out.println("Input an integer value:");
        int input = new Integer(keyboardInput.readLine()).intValue();
        int numberOfIterations = 0;
        for (int index = input; index >= MINIMUM && index <= MAXIMUM; index++) {
            numberOfIterations++;
        }
        // Output and end
        System.out.println("Number of iterations = " + numberOfIterations);
    }
}
Loop testing: Example (cont.)
Input    Result
11       0  (skip loop)
10       1  (loop 1 time)
5        6  (loop a typical k times)
1        10 (loop n times)
0        0  (skip loop)
Outline
• Scenario 1 : ~dduk
• Scenario 3: ~dk
🠶 http://pathcrawler-online.com:8080
🠶 QA management tools assist testers in doing their tasks effectively. Their tasks include:
1. Creating and maintaining release/project cycle/component information.
2. Creating and maintaining the test artifacts specific to each release/cycle for which we
have- requirements, test cases, etc.
3. Establishing traceability and coverage of the test assets.
4. Test execution support – test suite creation, test execution status capture, etc.
5. Metric collection/report-graph generation for analysis.
6. Bug tracking/defect management.
Documentation support software
🠶 Writing reports, user manuals, designing and building test cases, and other
documentation-related tasks frequently employ documentation support
software.
🠶 The most commonly used software are:
∙ Microsoft Office Word.
∙ Microsoft Office Excel.
∙ Microsoft Office Project.
∙ Microsoft Office PowerPoint.
Bug tracking system
🠶 A bug tracking system is required for software projects in order to track and
report defects while the software is being developed.
Bug tracking system (cont.)
🠶 Built for every member of agile teams and beyond to plan, track, and ship world-
class software.
🠶 Use cases
• Agile teams
• Bug tracking
• Project management
• Product management
• Process management
• Task management
• Software development
• Requirements & test case management
Outline
🠶 JUnit is an open source Unit Testing Framework for JAVA. It is useful for Java
Developers to write and run repeatable tests.
Annotation: Description
@Test: Denotes that a method is a test method. Unlike JUnit 4's @Test annotation, this annotation does not declare any attributes, since test extensions in JUnit Jupiter operate based on their own dedicated annotations. Such methods are inherited unless they are overridden.
@BeforeEach: Denotes that the annotated method should be executed before each @Test, @RepeatedTest, @ParameterizedTest, or @TestFactory method in the current class; analogous to JUnit 4's @Before. Such methods are inherited unless they are overridden or superseded (i.e., replaced based on signature only, irrespective of Java's visibility rules).
@AfterEach: Denotes that the annotated method should be executed after each @Test, @RepeatedTest, @ParameterizedTest, or @TestFactory method in the current class; analogous to JUnit 4's @After. Such methods are inherited unless they are overridden or superseded (i.e., replaced based on signature only, irrespective of Java's visibility rules).
A standard test class
import static org.junit.jupiter.api.Assertions.fail;
import static org.junit.jupiter.api.Assumptions.assumeTrue;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

class StandardTests {

    @BeforeAll
    static void initAll() { }    // reconstructed stub; the method body was cut off on the slide

    @BeforeEach
    void init() { }              // reconstructed stub

    @Test
    @Disabled("for demonstration purposes")
    void skippedTest() {
        // not executed
    }

    @Test
    void abortedTest() {
        assumeTrue("abc".contains("Z"));
        fail("test should have been aborted");
    }

    @AfterEach
    void tearDown() { }

    @AfterAll
    static void tearDownAll() { }
}
Class Assertions
Assertions is a collection of utility methods that support asserting conditions in tests.
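A few typical assertions, as a minimal sketch (the asserted values are illustrative only):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class AssertionsDemo {
    @Test
    void basicAssertions() {
        assertEquals(4, 2 + 2);                          // expected value vs. actual value
        assertTrue("JUnit".startsWith("JU"));            // boolean condition
        assertThrows(NumberFormatException.class,        // expected exception type
                () -> Integer.parseInt("not a number"));
    }
}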
🠶 There are many automated functional testing tools, both free and paid, for
example:
🠶 Selenium: a free tool for website testing
🠶 Unified Functional Testing (UFT): a paid tool for testing websites and software running on computers
🠶 Appium: a free tool for mobile app testing
🠶 Among the above tools, Selenium is widely used because it is a powerful free tool with a strong support community.
Selenium introduction
🠶 Selenium (https://www.selenium.dev) is an open-source and portable automated software testing tool for testing web applications.
🠶 Selenium is a set of tools that helps testers to automate web-based applications
more efficiently:
🠶 Selenium IDE: Selenium Integrated Development Environment (IDE), a Firefox plugin that enables testers to record their actions as they go through the process they need to test.
🠶 Selenium RC: Selenium Remote Control (RC) The pioneering testing framework that
allowed for more than just straightforward browser activities and linear execution.
🠶 Selenium WebDriver: Selenium WebDriver is the replacement for Selenium RC; it delivers commands to the browser directly and returns results.
🠶 Selenium Grid: Selenium Grid is a technology that reduces execution time by running parallel tests across multiple computers and browsers at once.
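A minimal WebDriver sketch in Java (assumes the Selenium Java bindings are on the classpath and a matching ChromeDriver is installed; the page and locator used are only illustrative):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class SeleniumSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();            // start a browser session
        try {
            driver.get("https://www.selenium.dev");       // open a page
            System.out.println("Title: " + driver.getTitle());
            String heading = driver.findElement(By.tagName("h1")).getText();  // locate an element
            System.out.println("Heading: " + heading);
        } finally {
            driver.quit();                                // always close the browser
        }
    }
}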
Outline
🠶 Support performance testing for the widest range of protocols and more than
50 technologies and application environments
🠶 Quickly identify the most likely causes of performance issues with a patented
auto-correlation engine.
🠶 Accurately predict application scalability and capacity with accurate
emulation of realistic loads.
🠶 Flexible Test Scenarios: Run high-scale tests with minimal hardware and
leverage the public cloud to scale up or down.
🠶 Scripting: Easily create, record, correlate, replay, and enhance scripts for better
load testing.
🠶 Continuous Testing: Built-in integrations include IDE, CI/CD, open source test
automation, monitoring, and source code management tools.
Performance testing tool: Apache JMeter:
Functional test automation
Mobile test automation
Performance test
Test management
Security test
API test
Chapter 7: Standards in SQA management
Reference
This chapter is based on the book:
Mastering Software Quality Assurance: Best Practices, Tools
and Techniques for Software Developers
Introduction to Software Testing
Outline
[Diagram: Organizational Quality Manual, Organizational SD Process Standards, Organizational Product Standards, IPDS]
🠶 Software is intangible
🠶 therefore difficult to control.
🠶 It is difficult to control anything that we cannot see
and feel.
🠶 In contrast to, for example, a car manufacturing unit, software project management is an altogether different ball game.
🠶 During software development, the only raw material consumed is data.
🠶 For any other product development, a lot of raw materials are consumed.
🠶 ISO 9000 standards have many clauses corresponding to raw material control; these are not relevant to software organizations.
The ISO 9000:2000 Software Quality
Standard
The ISO 9000:2000 Software Quality
Standard
ISO 9001:2000 Requirements
🠶 Part 4. Systemic requirements (partial)
🠶 Document the organizational policies and goals.
🠶 Document all quality processes and their interrelationship.
🠶 Implement a mechanism to approve documents before they are distributed.
🠶 Part 5: Management requirements (partial)
🠶 Generate an awareness for quality to meet a variety of requirements,
such as customer, regulatory, and statutory.
🠶 Focus on customers by identifying and meeting their requirements in
order to satisfy them.
🠶 Develop a quality policy to meet the customer’s needs.
🠶 Clearly define individual responsibilities and authorities concerning the …
Outline
Capability Maturity Model (CMM)
The CMM is organized into five maturity levels
Level 1 : Initial
🠶 The software process is characterized as ad hoc, few processes
are defined and success depends on individual efforts.
🠶 This period is chaotic without any procedure and process
established for software development and testing.
Capability Maturity Model (CMM)
Level 2: Repeatable
🠶 Track cost, schedule, and functionality .
🠶 During this phase, measures and metrics will be reviewed to
include percentage compliance with various processes,
percentage of allocated requirements delivered, number of
changes to requirements, number of changes to project plan,
variance between estimated and actual size of deliverables.
🠶 The following are the key process activities during Level 2:
❑ Software configuration management
❑ Software quality assurance
❑ Software subcontract management
❑ Software project tracking and oversight
❑ Software project planning
❑ Requirements management
Capability Maturity Model (CMM)
Level 3: Defined
🠶 The software process for management and engineering activities
is documented, standardized and integrated into a standard
software process for the organization.
🠶 All projects use an approved version of the organization standard
software process for developing and maintaining software.
🠶 The following are the key process activities during Level 3:
❑ Peer reviews
❑ Intergroup coordination
❑ Software product engineering
❑ Integrated software management
❑ Training program
❑ Organization process definition
Capability Maturity Model (CMM)
Level 5: Optimizing
🠶 Continuous process improvement is enabled by quantitative feedback from the process and from piloting new ideas and technologies.
🠶 Continuous emphasis on process improvement and defect reduction avoids process stagnancy and ensures continual improvement, translating into improved productivity. Tracing requirements across each development phase improves the completeness of the software, reduces rework, and simplifies maintenance. Verification and validation activities are planned and executed to reduce defect leakage. Customers have access to the project plan, receive regular status reports, and their feedback is sought and used for process tuning.
What is CMMI?
So, what is a PROCESS?
What is a process
🠶 A process is a series of actions or steps taken in order to
achieve a particular end in the form of a product or
service
🠶 We may not realize it, but processes are everywhere and
in every aspect of our leisure and work. A few examples of
processes might include:
🠶 Preparing breakfast
🠶 Placing an order
🠶 Developing a budget
🠶 Writing a computer program
🠶 Obtaining application requirements
🠶 And so on
Process Improvement
How CMMI Helps?
Summary of levels
Level 1 – Initial
Level 2 – Repeatable
Key areas
🠶 Requirements management
🠶 Software project planning
🠶 Project tracking and oversight
🠶 Subcontracts management
🠶 Quality assurance
🠶 Configuration management
Level 3 – Defined
Level 4 – Managed
Level 5 – Optimizing