Title: Software risk assessment model (Accepted version)

Author(s): Foo, Say Wei; Arumugam Muruganantham

Citation: Foo, S. W., & Arumugam, M. (2000). Software risk assessment model. IEEE International Conference on Management of Innovation and Technology, 2, 536-544.

Date: 2000

URL: http://hdl.handle.net/10220/4607

Rights: © IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
SOFTWARE RISK ASSESSMENT MODEL

Say-Wei Foo and Arumugam Muruganantham


National University of Singapore
Email: [email protected]

ABSTRACT

The software industry has grown rapidly with the increasing application of information technology. Just like any development project, there is inherent risk in software development projects. Failure to develop software according to specifications or within budget, or to deliver software products on time, can be costly. In this paper, a new model for the assessment of risk in software projects is proposed. The model, named the Software Risk Assessment Model (SRAM), makes use of a comprehensive questionnaire. Test results show that, using the risk indicator obtained from the SRAM, it is possible to predict the outcome of software projects with good accuracy.

Keywords: Software, Risk Assessment, Risk Management, Model

1. INTRODUCTION

The cost of a software product can be very high, and failure to produce the product on time, with the estimated resources and according to specifications, can be costly and damaging. Risk in the context of software engineering is defined as the factors that may cause late delivery, cost overrun or low quality of a software product.

A typical software development process consists of identification of product requirements, analysis of the requirements, design of the product, coding, unit testing and system testing. In the process of development, it may be necessary to backtrack to previous stages should problems be encountered in any stage. Problems encountered in any of the stages could result in delay in delivery and cost overrun. It is important that risks are assessed before development work begins and that careful attention is paid to risk management, as a wrong move at a single stage of the software development process may result in overall failure of the project.

Risks are associated with certain factors or risk elements. Some of the important risk elements are the complexity of the software, the productivity of the staff, the flexibility of the schedule, etc.

Systematic risk analysis and management is an organized way of identifying risk factors, analyzing them, assessing their impacts on the project and mitigating them when they arise. Systematic risk analysis and management may be facilitated through the use of a risk assessment model that provides quantitative assessments of the risks involved. In this paper, a new risk assessment model is proposed. The proposed model is named the Software Risk Assessment Model (SRAM).

In Section 2, two existing risk assessment techniques and their limitations are presented. The proposed SRAM is described in Section 3. Detailed explanations of the calibration and testing of the model are given in Section 4 and Section 5 respectively. The concluding remarks follow in Section 6.
2. EXISTING RISK ASSESSMENT MODELS

There are a few published models that evaluate the risk of a software project. In [1], a method to evaluate risk using risk drivers is described. The Risk Driver method is conceived following the US Air Force's guidelines for software risk identification and abatement. The US Air Force defined the major risk components as performance risk, cost risk, support risk and schedule risk. This model, however, does not have questions that bring out process-related risks, and it is more suited for acquisition than for development of software.

Another model, named the Software Engineering Risk Model (SERIM) [2], focuses on three risk elements: technical risk, cost risk and schedule risk. Similar to the Risk Driver model, SERIM is based on subjective probability. SERIM defines a hierarchical probability tree formulated by risk elements, risk factors and risk metrics of decision alternatives. The model, however, does not take into account software complexity issues, which play an important role in determining the risk of a software project. It also does not account for issues related to requirements. Requirements contribute significant risk to the project.

3. PROPOSED SOFTWARE RISK ASSESSMENT MODEL (SRAM)

3.1 Concept
The proposed risk assessment model considers the following nine critical risk elements: complexity of software, staff involved in the project, targeted reliability, product requirements, method of estimation, method of monitoring, development process adopted, usability of software, and tools used for development. A set of questions is carefully chosen for each of these elements, with three choices of answer each. The answers are arranged in increasing order of risk.

For example, Software Complexity is one of the risk elements of software projects. The higher the complexity of the software, the higher the risk. One of the questions to assess the risk associated with the complexity of software is:

Q1. What is the function of the software to be developed?
Choices:
a. Data processing software
b. Service software (Communication software)
c. System software

The risk assessor picks one of the three choices based on the nature of the project and the actual situation. Choice a. is given a risk rating of one, Choice b. a rating of two and Choice c. a rating of three. In the above example, system software is considered the most complex of the three types of software, as it has to interact with the hardware and facilitates the operation of other types of software. Hence a rating of 3 is given if the type of software to be developed is system software. Data processing software, at the other extreme, only deals with local data and is therefore deemed to be the least complex of the three; hence a rating of 1 is given for this choice.

Owing to the scope of this paper, the questionnaire relating to the SRAM is given in Appendix 1 without detailed explanation and justification of the questions and choices.
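As a minimal illustration of this rating scheme, the following Python sketch encodes one question with its three choices rated 1 to 3 in increasing order of risk. The class and field names are purely illustrative and are not part of the model as published.

from dataclasses import dataclass

@dataclass
class Question:
    # One SRAM question: three choices in increasing order of risk (ratings 1-3).
    text: str
    choices: tuple

    def rating(self, answer: str) -> int:
        # Return the risk rating (1, 2 or 3) for answer "a", "b" or "c".
        return {"a": 1, "b": 2, "c": 3}[answer.lower()]

# Example: the complexity question quoted above.
q1 = Question(
    text="What is the function of the software to be developed?",
    choices=("Data processing software",
             "Service software (Communication software)",
             "System software"),
)
print(q1.rating("c"))  # system software -> rating 3 (highest risk)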
3.2 Assessment of the overall risk level of a project
For all questions with respect to a particular risk element, choices are obtained and their numerical ratings are accumulated. The normalized value for the risk element is obtained by dividing the accumulated value by the number of questions attempted for that risk element. This value is given the name of risk element probability. The risk elements may be ranked according to the risk element probability. The manager can then decide to allocate resources in proportion to the magnitude of risk of the risk elements.

Let the nine risk element probabilities be denoted by r1, r2, .., ri, .., r9. As the nine risk elements have different degrees of impact on different types of software project, different weights may be assigned to these elements when combining the risk element probabilities to arrive at the overall risk value for the project. Let the weights assigned to the elements be denoted by w1, w2, .., wi, .., w9. The risk level R of the project is then computed as

R = w1 r1 + w2 r2 + ... + wi ri + ... + w9 r9.

If the maximum rating for all questions is 3 and the minimum rating is 1, then the maximum value of R, Rmax, is given by Rmax = 3 w1 + 3 w2 + ... + 3 wi + ... + 3 w9 = 3 (w1 + w2 + ... + wi + ... + w9). Similarly, the minimum value of R, Rmin, is given by Rmin = 1 (w1 + w2 + ... + wi + ... + w9). The overall risk level R may then be normalized as follows:

Normalized R = Rn = (R - Rmin) / (Rmax - Rmin)

The normalized value Rn gives the risk level of the assessed project as a fraction between 0 and 1. Rn for a project with the lowest risk (no risk) is 0 and Rn for a project with the highest risk is 1. We shall refer to this normalized value for a project as the project risk.
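A minimal sketch of the computation described above: the risk element probability is the average rating over the questions answered, and the project risk is the weighted sum R normalized by Rmax and Rmin. The function names and the sample numbers are illustrative only, not values taken from the paper.

def risk_element_probability(ratings):
    # Average rating (1-3) over the questions answered for one risk element.
    return sum(ratings) / len(ratings)

def project_risk(probabilities, weights, max_rating=3, min_rating=1):
    # Weighted sum R of the risk element probabilities, normalized to [0, 1].
    r = sum(w * p for w, p in zip(weights, probabilities))
    r_max = max_rating * sum(weights)
    r_min = min_rating * sum(weights)
    return (r - r_min) / (r_max - r_min)

# Illustrative (made-up) data: ratings for one element, then nine element
# probabilities combined with nine equal weights.
complexity_probability = risk_element_probability([1, 3, 2, 2, 3, 1, 2, 2, 3, 2])
probs = [complexity_probability, 1.8, 2.5, 1.4, 2.0, 1.6, 2.2, 1.9, 1.3]
weights = [1] * 9
print(round(project_risk(probs, weights), 3))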
3.3 Assessment of the Quality, Schedule and Cost risks of a project
The level of risk of the project in relation to quality, schedule and cost can also be assessed separately based on the risk element probabilities obtained. This is done by assigning different weights to the probabilities according to the impact of the associated risk elements on quality, schedule and cost respectively.

For example, if the table of impact given in Table 1 is used, the risk elements Complexity, Staff, Reliability and Requirements have the highest impact on the quality of the project; hence the largest value of weight (say 3) is to be assigned to the risk element probabilities of these risk elements. Monitoring and Tools have the least impact on quality, and hence the smallest value of weight (say 1) is to be used. Estimation, Development Process and Usability have medium impact on quality, and hence the value of weight assigned to the associated risk element probabilities should lie between the two extremes (say 2). The normalized value Rn computed using these values of weight and the risk element probabilities then gives an indication of the level of risk of the project in relation to quality.

Table 1: Level of Impact of Risk Element on Quality, Schedule and Cost

Risk Element          Quality   Schedule   Cost
Complexity            HIGH      HIGH       HIGH
Staff                 HIGH      HIGH       HIGH
Reliability           HIGH      MEDIUM     MEDIUM
Requirements          HIGH      MEDIUM     MEDIUM
Estimation            MEDIUM    HIGH       HIGH
Monitoring            LOW       MEDIUM     LOW
Development Process   MEDIUM    MEDIUM     MEDIUM
Usability             MEDIUM    LOW        LOW
Tools                 LOW       LOW        LOW

The level of risk in relation to schedule and cost can be obtained in a similar fashion.
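A short sketch of this per-dimension weighting, again purely illustrative: the impact table below transcribes Table 1, with HIGH, MEDIUM and LOW mapped to the example weights 3, 2 and 1, and the element probabilities are made-up numbers. It reuses the project_risk function from the sketch in Section 3.2.

IMPACT = {  # Table 1: (Quality, Schedule, Cost) impact of each risk element
    "Complexity":          ("HIGH", "HIGH", "HIGH"),
    "Staff":               ("HIGH", "HIGH", "HIGH"),
    "Reliability":         ("HIGH", "MEDIUM", "MEDIUM"),
    "Requirements":        ("HIGH", "MEDIUM", "MEDIUM"),
    "Estimation":          ("MEDIUM", "HIGH", "HIGH"),
    "Monitoring":          ("LOW", "MEDIUM", "LOW"),
    "Development Process": ("MEDIUM", "MEDIUM", "MEDIUM"),
    "Usability":           ("MEDIUM", "LOW", "LOW"),
    "Tools":               ("LOW", "LOW", "LOW"),
}
WEIGHT = {"HIGH": 3, "MEDIUM": 2, "LOW": 1}

def dimension_risk(probabilities, dimension):
    # Normalized risk (0-1) for one dimension: 0 = Quality, 1 = Schedule, 2 = Cost.
    weights = [WEIGHT[IMPACT[element][dimension]] for element in IMPACT]
    return project_risk(probabilities, weights)  # from the earlier sketch

# Probabilities listed in the same order as the elements in IMPACT (made-up values).
probs = [2.1, 1.8, 2.5, 1.4, 2.0, 1.6, 2.2, 1.9, 1.3]
quality, schedule, cost = (dimension_risk(probs, d) for d in range(3))
overall = (quality + schedule + cost) / 3  # overall project risk, as used in Section 4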
4. CALIBRATION OF SRAM

The relative levels of risk of projects may be compared using the project risk values. Alternatively, by calibrating the project risk values against the Customer Feedback Index using past projects, the customer satisfaction or outcome of a project may be predicted based on the calculated project risk value. Note that the Customer Feedback Index is calculated by the customer department of the company based on data such as call throughput, system availability, number of defects and usability of the released product obtained from customers. It is an indication of the customer's satisfaction with the project and the degree of success or failure of the project. A value between 0 and 12 is obtained: a value of 12 indicates the highest level of satisfaction and a value of 0 indicates the lowest level of satisfaction.

Ten projects of a multinational company are used to calibrate the SRAM. For simplicity, these projects are named P1 to P10. The projects are all related to mobile software, and the degrees of impact of the risk elements on the Quality, Schedule and Cost of the projects are similar. A common set of values of weight is used. The values are those given in Table 1 with HIGH = 3, MEDIUM = 2 and LOW = 1.

The overall project risk is obtained by taking the average of the risk values calculated for Quality, Schedule and Cost. These values and the Customer Feedback Index of all 10 projects are tabulated in Table 2.

Table 2: Values of Project Risk and Customer Feedback Index

Project   Quality   Schedule   Cost    Overall   Customer Feedback Index
P1        0.633     0.628      0.629   0.63      3
P2        0.130     0.735      0.727   0.53      5
P3        0.228     0.209      0.217   0.22      10
P4        0.217     0.197      0.205   0.21      11
P5        0.570     0.571      0.569   0.57      4
P6        0.452     0.466      0.470   0.46      6
P7        0.505     0.519      0.514   0.51      7
P8        0.376     0.399      0.396   0.39      8
P9        0.388     0.384      0.399   0.39      7
P10       0.248     0.244      0.248   0.25      10

The Customer Feedback Index is plotted against the corresponding value of project risk for all 10 projects in Figure 1. A regression line is drawn through all ten points. This line then serves as a calibration chart between the overall project risk and the Customer Feedback Index.

[Figure 1: Customer Feedback Index vs. Project Risk Value. Scatter plot of the ten calibration projects with the fitted regression (calibration) line; Customer Feedback Index (0 to 12) on the vertical axis, Project Risk Value (0 to 0.8) on the horizontal axis.]

From data of the past software projects in the company, the Customer Feedback Indices for unaccepted projects are found to be below 4. The Customer Feedback Indices for projects completed with an extended schedule, low quality or cost overrun are between 4 and 8. The Customer Feedback Indices for projects completed on time, within budget and according to specifications are between 8 and 12.

From Figure 1, it can be seen that there is a close correlation between the project risk value and the Customer Feedback Index.
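A minimal illustration of such a calibration, assuming an ordinary least-squares straight line (the paper does not state the regression method or its coefficients, so the fitted values here are only indicative), using the data of Table 2:

# Calibration data from Table 2: (overall project risk, Customer Feedback Index).
points = [(0.63, 3), (0.53, 5), (0.22, 10), (0.21, 11), (0.57, 4),
          (0.46, 6), (0.51, 7), (0.39, 8), (0.39, 7), (0.25, 10)]

n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
         / sum((x - mean_x) ** 2 for x, _ in points))
intercept = mean_y - slope * mean_x

def predicted_cfi(project_risk_value):
    # Read the Customer Feedback Index off the calibration (regression) line.
    return slope * project_risk_value + intercept

print(round(predicted_cfi(0.437), 1))  # e.g. the overall project risk of P11 in Table 3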
If the risk level of a project assessed by SRAM is above 0.6, the corresponding Customer Feedback Index is below 4. This indicates that it is a high-risk project. In this case, it is strongly advised to discontinue or not to undertake the project.

If the risk level of a project assessed by SRAM is between 0.36 and 0.6, the corresponding Customer Feedback Index is between 4 and 8. It is then necessary to reduce the likelihood and impact of the risk elements through immediate risk containment procedures. The project manager shall identify the weak areas that require urgent attention and resources.

If the risk level of a project assessed by SRAM is below 0.36, the corresponding Customer Feedback Index is above 8; the project is a low-risk project and the chances of success are high.
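These decision bands translate directly into a small helper function; the sketch below uses the thresholds given above, with the recommended actions paraphrased.

def classify_project(project_risk_value):
    # Map the SRAM project risk value to the decision bands described above.
    if project_risk_value > 0.6:
        return "high risk: advise discontinuing or not undertaking the project"
    if project_risk_value >= 0.36:
        return "medium risk: apply immediate risk containment to the weak areas"
    return "low risk: chances of success are high"

for risk in (0.63, 0.46, 0.21):  # e.g. P1, P6 and P4 from Table 2
    print(risk, "->", classify_project(risk))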
5. TESTING OF THE MODEL

The accuracy of prediction of the SRAM is evaluated by applying the SRAM to ten other projects of similar nature, P11 to P20. The computed risk values are tabulated in Table 3.

Table 3: Values of Project Risk

Project   Quality   Schedule   Cost    Overall
P11       0.444     0.433      0.435   0.437
P12       0.201     0.216      0.219   0.212
P13       0.485     0.488      0.484   0.486
P14       0.34      0.328      0.322   0.33
P15       0.285     0.282      0.283   0.283
P16       0.210     0.208      0.204   0.207
P17       0.168     0.161      0.159   0.163
P18       0.255     0.266      0.264   0.262
P19       0.364     0.373      0.366   0.368
P20       0.377     0.393      0.380   0.383

The Predicted Customer Feedback Index is then obtained from the calibration curve of Figure 1 for each of the projects using the calculated overall project risk. These are tabulated together with the actual Customer Feedback Indices in Table 4.

Table 4: Comparison of Computed and Actual Customer Feedback Indices

Project   Predicted Customer Feedback Index   Actual Customer Feedback Index   Variation
P11       6.8                                 7                                +0.2/6.8 = +2.94%
P12       10.8                                10                               -0.8/10.8 = -7.41%
P13       5.8                                 6                                +0.2/5.8 = +3.45%
P14       8.5                                 9                                +0.5/8.5 = +5.88%
P15       9.4                                 9                                -0.4/9.4 = -4.26%
P16       10.8                                10                               -0.8/10.8 = -7.41%
P17       11.4                                11                               -0.4/11.4 = -3.51%
P18       9.7                                 10                               +0.3/9.7 = +3.09%
P19       7.9                                 8                                +0.1/7.9 = +1.27%
P20       7.6                                 7                                -0.6/7.6 = -7.9%

The Variation listed in the table is defined as the percentage deviation of the actual Customer Feedback Index from the Predicted Customer Feedback Index.

The mean value of the Variation is 0.9% and the standard deviation is 5.0%. The results show that the project risk value is able to predict the Customer Feedback Index reasonably well.
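A small sketch of the Variation statistic as defined above; the per-project values reproduce the arithmetic shown in the Variation column of Table 4.

def variation(predicted, actual):
    # Percentage deviation of the actual index from the predicted index.
    return (actual - predicted) / predicted * 100

# Predicted and actual Customer Feedback Indices from Table 4.
predicted = [6.8, 10.8, 5.8, 8.5, 9.4, 10.8, 11.4, 9.7, 7.9, 7.6]
actual    = [7,   10,   6,   9,   9,   10,   11,   10,  8,   7]

variations = [variation(p, a) for p, a in zip(predicted, actual)]
print([round(v, 2) for v in variations])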
6. CONCLUSION

A new model for the assessment of the risk of software projects, the Software Risk Assessment Model (SRAM), is described. The model focuses on nine risk elements and makes use of a questionnaire relating to these risk elements.

The risk levels of the individual risk elements may be computed. The model also allows the levels of risk associated with the Quality, Schedule and Cost of a project to be calculated separately.

The model is tested using past projects of known outcome. It is concluded that the model is able to assess the level of risk of a software project and predict the possible outcome of the project with reasonable accuracy.

However, it is to be noted that the success of a software product depends not only on the development risk but also on the marketing risk. The SRAM only addresses the development risk and does not assess the marketing risk.

REFERENCES

[1] Roger S. Pressman, A Manager's Guide to Software Engineering, McGraw-Hill, 1993.
[2] Dale Walter Karolak, Software Engineering Risk Management, IEEE Computer Society Press, 1996.
[3] Alex Down, Michael Coleman and Peter Absolon, Risk Management for Software Projects, McGraw-Hill Book Company, 1994.
[4] Marian Myerson, Risk Management Processes for Software Engineering Models, Artech House, Boston, 1996.
[5] Kim Caputo, CMM Implementation Guide, Addison-Wesley, 1998.
[6] Software Engineering Guide, Prentice Hall, 1996.
[7] Scott E. Donaldson and Stanley G. Siegel, Cultivating Successful Software Development, Prentice Hall, 1997, pp. 55-60.
[8] Dina Berkeley, Robert De Hoog and Patrick Humphreys, Software Development Project Management, Ellis Horwood Limited, pp. 23-34.
[9] E. M. Bennatan, On Time, Within Budget, McGraw-Hill Book Company, 1995, pp. 68-74.
[10] E. M. Bennatan, Software Project Management, McGraw-Hill Book Company, pp. 13-24.
[11] Barry W. Boehm, TRW Defense Systems Group, "A Spiral Model of Software Development and Enhancement", in Software Management, IEEE Computer Society Press, 1997.
[12] Ronald P. Higuera, Software Risk Management, Technical Report, Software Engineering Institute, Carnegie Mellon University, pp. 9-18.
[13] Roger L. Van Scoy, Software Development Risk: Opportunity, Not Problem, Technical Report, Software Engineering Institute, Carnegie Mellon University, pp. 11-15.
[14] L. Hyatt and L. Rosenberg, "A Software Quality Model and Metrics for Risk Assessment", presented at the European Space Agency Software Assurance Symposium, Utah, April 1996.
[15] Post Mortem Reports, Singapore Software Centre, Motorola.
APPENDIX 1: QUESTIONNAIRE FOR SRAM

I. Complexity of Software
Q.1 What is the function of the software to be developed?
a. Data processing software
b. Service software (Communication software)
c. System software
Q.2 What is the memory limitation?
a. Allowed the use of any amount of memory
b. Some additional memory allowed
c. Restricted to a specific amount of memory
Q.3 What is the complexity of I/O devices?
a. Standard I/O devices
b. Medium complexity I/O devices (e.g. scanner)
c. High complexity I/O devices (e.g. microphone)
Q.4 What are the time requirements/limits for the functions?
a. No function requires time limits
b. Certain functions require time limits
c. Many functions require time limits
Q.5 What is the hardware (or platform) dependency?
a. Platform independent (e.g. Java programs)
b. Run on some platforms only (e.g. GNU C programs)
c. Highly platform dependent (e.g. Tandem C programs)
Q.6 What type of control operations is used in the programs?
a. Straight-line code, few nested structured programming operations
b. Simple nesting, some inter-module control, simple message passing
c. Multiple resource scheduling, dynamically changing priorities
Q.7 What type of computational operations is used in the programs?
a. Evaluation of simple expressions
b. Use of standard mathematics and statistical routines, basic matrix/vector operations
c. Difficult and unstructured numerical analysis / stochastic data
Q.8 What type of device-dependent operations is used in the programs?
a. Simple read/write statements with simple formats
b. I/O processing includes device selection / status checking
c. Device time dependent / microprogrammed operations
Q.9 What type of data management operations is used in the programs?
a. Simple array in main memory, simple database queries, updates
b. Multiple input, single output, simple structural changes
c. Highly coupled, dynamic relational and object structures
Q.10 What type of user interface management operations is used in the programs?
a. Simple input forms / report generation
b. Use of widget sets
c. Complex multimedia, virtual reality

II. Staff involved in the project
Q.1 What is the average experience of a member of the staff with the company?
a. More than 5 years
b. About 3 years
c. Less than a year
Q.2 What is the average coding experience of a member of the staff?
a. Exceptional
b. Average, at least one expert
c. Beginners
Q.3 What is the motivation level of staff?
a. High, enjoys working, no complaints
b. Moderate, some complaints
c. Low, a lot of complaints
Q.4 What is the level of knowledge of staff on the application domain?
a. Have worked on the whole process several times
b. Have worked on several portions of the system
c. Have reading knowledge of the application
Q.5 What is the average number of lines of code (C equivalent) produced per person per day?
a. 30 lines
b. 50 lines
c. More than 70 lines
Q.6 What is the variety or mix of software disciplines / experiences?
a. Good mix of all software disciplines
b. Some disciplines are inappropriately represented
c. Some disciplines are not represented
Q.7 What is the experience level of software managers?
a. Have similar management experience
b. Mix of new managers and experienced managers
c. New managers with software engineering knowledge
Q.8 Does each member of the staff have a career plan and goal setting?
a. All members have a goal and career plan reviewed regularly
b. Some members may have a career plan but it is not followed closely
c. No career plan and goal setting
Q.9 What is the level of harmony among the staff?
a. Harmonious and good teamwork
b. Some unhealthy arguments at times
c. No compromises even on the fundamentals
Q.10 Is every employee clear about his/her roles and responsibilities?
a. Clearly understands the roles and responsibilities
b. Fairly understands the roles and responsibilities
c. Confused about the roles and responsibilities
Q.11 Is there a proper rewarding mechanism?
a. A good performance appraisal and rewarding scheme is in place
b. Ad hoc performance appraisal and rewarding system
c. No performance appraisal and rewarding system
III. Targeted Reliability
Q.1 Are there error handling conditions throughout the program?
a. Error handling conditions for every possible instance
b. Error handling conditions for some possible instances
c. No error handling condition within the program
Q.2 How are error conditions handled?
a. Processing continues for any error condition
b. Processing continues for some error conditions
c. Processing discontinues upon any error condition
Q.3 Are error tolerance conditions defined for input and output?
a. All error tolerance conditions are defined
b. Some error tolerance conditions are defined
c. No error tolerance condition is defined
Q.4 Are inputs checked for validity before processing?
a. All inputs are checked
b. Some inputs are checked
c. No inputs are checked
Q.5 Are hardware faults detected and processed in the software?
a. All hardware faults are detected and processed
b. Some hardware faults are detected and processed
c. No hardware faults are detected and processed
Q.6 Is the use of global data types minimized in the software?
a. Few or no global data types are used
b. Some global data types are used
c. Global data types are heavily used
Q.7 Are defect data collected during software integration?
a. All defect data are collected
b. Some defect data are collected
c. No defect data are collected
Q.8 Are defect data logged in and closed out prior to the delivery?
a. All defect data are logged in and closed out
b. Some defect data are logged in and closed out
c. No defect data are logged in and closed out
Q.9 Are all the requirements tested?
a. All the requirements are tested
b. Some of the requirements are tested
c. No requirements are tested
Q.10 Is stress testing performed (for the changes in code)?
a. Stress testing is performed on all software
b. Stress testing is performed for some software
c. No stress testing is performed
Q.11 Who performs the system testing?
a. Independent test team (other than the developers)
b. Independent test team and developers share the testing
c. Developers themselves do the testing
Q.12 Does your company have similar past experience in developing this type of software?
a. Has a track record of producing similar products
b. Some experience in similar products
c. No past experience

IV. Product Requirements
Q.1 Are all the software requirements identified and/or documented?
a. All requirements are identified and documented
b. Some requirements are identified and/or documented
c. No requirements are identified and documented
Q.2 Is the customer involved in the definition of requirements?
a. Customer is heavily involved
b. Customer is partially involved
c. Customer is not involved
Q.3 Does the customer approve all requirements?
a. All requirements are approved by the customer
b. Some of the requirements need customer approval
c. The customer does not approve all requirements
Q.4 Are ambiguous requirements verified through the prototype?
a. Ambiguous requirements are verified through the prototype
b. Some ambiguous requirements are verified
c. Ambiguous requirements are not verified
Q.5 Are requirements categorized as essential, nice-to-have, etc.?
a. All requirements are categorized and prioritized
b. Some requirements are categorized
c. Requirements are not categorized
Q.6 Are there any differences between the customer requirements and the software requirements (used by the developers)?
a. No differences, i.e. customer requirements are intact
b. Some refinements on requirements
c. Many refinements on requirements
Q.7 Are software requirements frozen before the subsequent phase?
a. All requirements are frozen before the next phase
b. Some requirements are expected to be changed
c. Many requirements are expected to be changed
Q.8 Are software requirements traceable to code?
a. All requirements are traceable to code
b. Some requirements are traceable to code
c. No requirements are traceable to code
Q.9 Are software requirements traceable to test procedures?
a. All software requirements are traceable to test procedures
b. Some software requirements are traceable to test procedures
c. No software requirements are traceable to test procedures
Q.10 Are all the open action items closed prior to delivery to the customer?
a. All the open action items are addressed and implemented
b. Some open action items are addressed and implemented
c. No action items are addressed and implemented

V. Method of Estimation
Q.1 What is the estimation method used?
a. Bottom-up
b. Analogy, top-down
c. Other techniques
Q.2 Is any model used to compute the cost of the project?
a. A suitable cost model is used
b. A model is used for partial cost estimation
c. No cost model is used
Q.3 Is estimation based on past software productivity metrics?
a. Based on past software productivity metrics (similar projects)
b. Partially based on past software productivity metrics
c. Not based on past software productivity metrics
Q.4 Are schedule estimates based on past software projects?
a. Based on similar past software project metrics
b. Partially based on similar past software project metrics
c. Not based on similar past software project metrics
Q.5 How often are estimates revised?
a. Estimates are updated monthly or on a more frequent basis
b. Estimates are updated at the end of the phases
c. Estimates are never updated
Q.6 How accurate are the past schedule estimates compared to the actual schedule?
a. Vary within a (+/-)5% range of the actual schedule
b. Vary within a (+/-)50% range of the actual schedule
c. Vary by more than (+/-)100% of the actual schedule
Q.7 What is the level of participation of developers in the estimation?
a. All developers participated
b. Some developers participated
c. Only managers prepare the estimate
Q.8 What are the resources allocated for the estimation?
a. All required resources are allocated
b. Some resources are allocated but not sufficient
c. No resources are allocated
VI. Method of Monitoring
Q.1 Are there distinct milestones for each major software effort?
a. Distinct milestones for each development phase
b. Not enough milestones
c. No milestones
Q.2 Is a detailed Work Breakdown Structure (WBS) used to track and report cost and budget for each phase of the software development?
a. Cost structure or WBS exists and is properly tracked and reported
b. Cost structure or WBS exists, but is not tracked or reported
c. No cost structure or WBS
Q.3 Is there a monitoring system?
a. A monitoring system tracks cost, schedule and earned value
b. A monitoring system exists but does not track schedule and earned value
c. No monitoring system is used
Q.4 How often are the project progress reports created?
a. Weekly
b. Monthly
c. No project report created
Q.5 How often are cost, schedule and earned value reports updated?
a. Monthly or more frequently
b. Updated less regularly
c. Never updated
Q.6 How frequently is the problem/action log updated?
a. Weekly
b. Updated less frequently than weekly
c. Problem log or action log system does not exist
Q.7 How often are the records for technical problems updated?
a. Weekly or more frequently
b. Less frequently than weekly
c. No records are kept
Q.8 How is the schedule plan developed?
a. Bottom up, all members are involved
b. Bottom up, some important members are not involved
c. Top down, developed by one person
Q.9 What is the span of control?
a. Each superior supervises not more than three subordinates
b. Superior supervises between four and six subordinates
c. Superior supervises more than six subordinates

VII. Development process adopted
Q.1 Is a software management planning document used for the project?
a. Used and adhered to closely
b. Used but not followed closely
c. Not used
Q.2 Are software configuration management functions performed?
a. All software configuration management functions are performed
b. Some software configuration management functions are performed
c. No software configuration management functions are performed
Q.3 Does communication exist between the different organizations supporting the development of the software project?
a. Very good communication
b. Reasonable communication
c. Poor communication
Q.4 Are software developers trained in the development methodology?
a. All are trained
b. Some are trained / all trained in some portion of the methodology
c. None is trained in the development methodology
Q.5 How closely is the software development methodology followed?
a. It is closely followed by all
b. It is followed by some
c. It is not followed
Q.6 Are software quality functions performed?
a. All of the software quality functions are performed
b. Some of the software quality functions are performed
c. No software quality functions are performed
Q.7 Is the current development methodology suitable for the project?
a. It is tailored to the process
b. It is a fixed process
c. No process is followed
Q.8 Does the development methodology address requirements, design and code reviews / walk-throughs / inspections?
a. Yes, addresses all
b. Not all of the above
c. Does not address any of the above
Q.9 Does the development methodology require test plans and/or test procedures for all software functions?
a. It requires test plans and/or test procedures for all software functions
b. It requires test plans and/or test procedures for some software functions
c. It does not require test plans and/or test procedures
Q.10 Does the development methodology require documentation?
a. Development methodology requires documentation
b. Development methodology requires partial documentation
c. Development methodology does not require documentation
Q.11 Is regression testing performed?
a. Regression testing is performed for all subsystems
b. Regression testing is performed for some subsystems
c. No regression testing is performed
Q.12 Is there a documented organizational structure in place?
a. Documented organizational structure and operations are in place
b. No documented organizational structure but a clear line of authority is in place
c. There is no documented organizational structure
Q.13 Is the organizational structure stable?
a. No changes in the organizational structure
b. Some organizational changes but not frequent
c. Frequent changes
VIII. Usability of software
Q.1 Will a user manual be written for the software product?
a. User manual will be developed, tested, and delivered with the product
b. User manual will not be verified against the software functions
c. No user manual will be provided for the software product
Q.2 Are there help functions for input or output screens?
a. Help functions are provided for input or output functions
b. Some help functions are provided for input or output functions
c. No help functions are provided
Q.3 Is the user involved in reviewing prototypes or earlier versions of the software?
a. User is involved and feedback is solicited from the user
b. Some feedback is solicited from the user
c. User is not involved
Q.4 Is the user interface designed to industry standards or to standards familiar to the user?
a. Industry standards or standards familiar to the user are followed
b. Some aspects of standards are followed
c. No standards are followed
Q.5 Are user response times identified?
a. All user response times are identified
b. Some user response times are identified
c. No user response times are identified
Q.6 Is the design evaluated to minimize keystrokes and data entry?
a. The entire design is evaluated to minimize keystrokes and data entry
b. Some considerations are made to minimize keystrokes and data entry
c. No design consideration is made to minimize keystrokes and data entry

IX. Tools used for development
Q.1 Are software developers trained to use tools for development?
a. All are trained to use tools
b. Some are trained to use tools
c. No one is trained to use tools or no tool is available
Q.2 Are automated software tools used for testing?
a. Automated tools are used in testing and they are adequate
b. Automated tools are used in testing but they are not adequate
c. No automated tools are used in testing
Q.3 Are automated tools used for code generation (e.g. screen painters)?
a. Adequate automated tools are used for code generation
b. Some automated tools are used but they are not adequate
c. No automated tools are used
Q.4 Are automated tools used for test procedure generation?
a. Adequate automated tools are used
b. Some automated tools are used but they are not adequate
c. No automated tools are used
Q.5 Are tools used for configuration management functions?
a. Adequate configuration management tools are used
b. Some configuration management tools are used
c. No configuration management tools are used
Q.6 Are automated tools used for regression testing?
a. Adequate automated tools are used
b. Some automated tools are used, but they are not adequate
c. No automated tools are used
Q.7 Are automated tools used for re-engineering?
a. Adequate automated tools are used
b. Some automated tools are used, but they are not adequate
c. No automated tools are used
Q.8 How stable is the compiler/linker/debugger?
a. Stable, with few or no problems
b. Somewhat stable, but with some known problems
c. Not stable, with a lot of problems
Q.9 Are required tools readily available to developers when needed?
a. All tools are available
b. Some tools are available
c. No tools are available

End of Questionnaire
