• Metrics
• It is a quantitative measure of the degree to which a system, component, or process
possesses a given attribute.
• It relates individual measures in some way.
• Ex., number of errors found per review.
Terminologies
• Indicators
• A metric or combination of metrics that provides insight into the software process,
project or the product itself.
• It enables the project manager or software engineers to adjust the process, the
project or the product to make things better.
• Ex., Product Size is an indicator of increased coding, integration and testing effort
Terminologies
• Direct Metrics
• Immediately measurable attributes.
• Ex., Line of Code (LOC), Execution Speed, Defects Reported
• Indirect Metrics
• Aspects that are not immediately quantifiable.
• Ex., Functionality, Quality, Reliability
• Faults
• Errors - Faults found by the practitioners during software development.
• Defects - Faults found by the customers after release
Metric Classification Base
• Process
• Specifies activities related to production of software.
• Specifies the abstract set of activities that should be performed to go from user needs
to final product.
• Project
• Software development work in which a software process is used.
• The actual act of executing the activities for some specific user needs.
• Product
• The outcome of a software project.
• All the outputs that are produced while the activities are being executed.
Process Metrics
• Process indicators enable a software engineering organization to gain insight into the
effectiveness of an existing process (i.e., the paradigm, software engineering tasks, work
products, and milestones).
• They enable managers and practitioners to assess what works and what doesn’t.
• Process metrics are collected across all projects and over long periods of time. Their
intent is to provide indicators that lead to long-term software process improvement.
• Process Metrics are an invaluable tool for companies to monitor, evaluate and improve
their operational performance across the enterprise
• They are used for making strategic decisions
Process Metrics
• Ex., the Defect Removal Efficiency (DRE) metric, which relates errors (E) found before
release to defects (D) found after release:
DRE = E / ( E + D )
• The ideal value is DRE = 1, i.e., no defects are found after the software is released.
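As a minimal sketch of this computation (using the project alpha figures that appear later in these notes: 134 errors found before release and 29 defects reported after release):

```python
def defect_removal_efficiency(errors_before_release, defects_after_release):
    """DRE = E / (E + D); values closer to 1 mean more faults were caught before release."""
    return errors_before_release / (errors_before_release + defects_after_release)

# Project alpha: 134 errors found before release, 29 defects reported after release
print(defect_removal_efficiency(134, 29))  # ~0.82, i.e. about 82% of faults removed before release
```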
Process Metrics
• We measure the effectiveness of a process by deriving a set of metrics based on
outcomes of the process such as,
• Errors uncovered before release of the software
• Defects delivered to and reported by the end users
• Work products delivered
• Human effort expended
• Calendar time expended
• Conformance(agreement) to the schedule
• Time and effort to complete each generic activity
Project Metrics
• Project metrics enable a software project manager to,
• Assess the status of an ongoing project
• Track potential risks
• Uncover problem areas before their status becomes critical
• Adjust work flow or tasks
• Evaluate the project team’s ability to control quality of software work products
Project Metrics
• Metrics collected from past projects are used as a basis from which effort and time
estimates are made for current software work.
• As a project proceeds, measures of effort and calendar time expended are compared to
original estimates
• Project metrics are used to
• Minimize the development schedule by making the adjustments necessary to avoid
delays and mitigate (to reduce) potential (probable) problems and risks
• Assess (evaluates) product quality on an ongoing basis and guides to modify the
technical approach to improve quality
Product Metrics
• Product metrics help software engineers to gain insight into the design and
construction of the software they build
• By focusing on specific, measurable attributes of software engineering work products.
• Product metrics provide a basis from which analysis, design, coding and testing can be
conducted more objectively and assessed more quantitatively.
Types of Software Measurement
• Direct measure of the software process include cost and effort applied.
• Direct measures of the product include lines of code (LOC) produced, execution speed,
memory size, and defects reported over some set period of time.
• The cost and effort required to build software, the number of lines of code produced, and
other direct measures are relatively easy to collect and measure.
Types of Software Measurement
• Indirect measures of the product include functionality, complexity, efficiency,
reliability, maintainability, etc.
• The quality and functionality of software, or its efficiency or maintainability, are more
difficult to measure directly and must be assessed indirectly.
Which team is more efficient?
• Team A found 342 errors
• Team B found 184 errors
• It depends on the size or complexity (i.e., functionality) of the projects.
Software Measurement
• Metrics for Software Cost and Effort estimations
• Size-Oriented Metrics
• Function-Oriented Metrics
• Object-Oriented Metrics
• Use-Case–Oriented Metrics
Size-Oriented Metrics
• Derived by standardizing quality and/or productivity measures by considering the size of
the software produced.
• Thousand lines of code (KLOC) are often chosen as the normalization value.
• A set of simple size-oriented metrics can be developed for each project
• Errors per KLOC (thousand lines of code)
• Defects per KLOC
• $ per KLOC
• Pages of documentation per KLOC
Size-Oriented Metrics
Referring to the table entry for project alpha:
• 12,100 LOC were developed with 24 person-months of effort at a cost of $168,000. Effort and cost
recorded in the table represent all software engineering activities (analysis, design, code, and test), not
just coding.
• Project alpha indicates that 365 pages of documentation were developed, 134 errors were recorded
before the software was released, and 29 defects were encountered after release to the customer
within the first year of operation. Three people worked on the development of software for project
alpha.
Size-Oriented Metrics
• In addition, other interesting metrics can be computed (see the worked sketch below), such as
• Errors per person-month
• KLOC per person-month
• $ per page of documentation
• Size-oriented metrics are not universally accepted as the best way to measure the
software process.
• Opponents argue that KLOC measurements
• Are dependent on the programming language and penalize well-designed but short
programs
• Require a level of detail that may be difficult to achieve when used for estimation
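As a worked sketch, the size-oriented metrics above can be computed directly from the project alpha figures (12,100 LOC, 24 person-months, $168,000, 365 pages of documentation, 134 errors, 29 defects); the dictionary keys below are illustrative names, not a standard schema:

```python
# Project alpha data from the table referenced above
alpha = {
    "loc": 12_100, "effort_pm": 24, "cost_usd": 168_000,
    "doc_pages": 365, "errors": 134, "defects": 29, "people": 3,
}

kloc = alpha["loc"] / 1000                              # 12.1 KLOC

print(round(alpha["errors"] / kloc, 1))                 # errors per KLOC             ~ 11.1
print(round(alpha["defects"] / kloc, 1))                # defects per KLOC            ~ 2.4
print(round(alpha["cost_usd"] / kloc))                  # $ per KLOC                  ~ 13884
print(round(alpha["doc_pages"] / kloc, 1))              # documentation pages / KLOC  ~ 30.2

print(round(alpha["errors"] / alpha["effort_pm"], 1))   # errors per person-month     ~ 5.6
print(round(kloc / alpha["effort_pm"], 2))              # KLOC per person-month       ~ 0.5
print(round(alpha["cost_usd"] / alpha["doc_pages"]))    # $ per page of documentation ~ 460
```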
Function Oriented Metrics
• Function-oriented metrics use a measure of the functionality delivered by the
application as a normalization value
• Most widely used metric of this type is the Function Point
• FP metric can be used to
• estimate the cost or effort required to design, code, and test the software
• Predict the number of errors that will be found during testing
• Forecast the number of components and/or the number of projected source lines in
the implemented system.
Computation of Function Points
Function Point Parameters
• Number of user inputs- Each user input that provides distinct application-oriented data
to the software is counted.
• Number of user outputs- Each user output that provides application-oriented
information to the user is counted.
• In this context output refers to reports, screens, error messages, etc.
• Number of user inquiries- An inquiry is defined as an on-line input that results in the
generation of some immediate software response in the form of an on-line output. Each
distinct inquiry is counted. (i.e. search index)
• Number of files- Each logical master file (i.e. large database or separate file) is counted.
Function Point Parameters
• Number of external interfaces- All machine readable interfaces (e.g., data files on
storage media) that are used to transmit information to another system are counted.
Complexity Adjustment Factors
• Complexity adjustment values (Fi) are rated on a scale of 0 (not important) to 5 (very
important).
Example-1
• User Input: 3
• User Output: 2
• User Inquiry: 2
• Internal Logical File: 1
• External Interface File: 4
Example-1
• The adjustment factors used and their assumed values are:
• Distributed processing = 5
• High performance= 4
• Complex internal processing = 3
• Code to be reusable= 2
• Multiple sites= 3
• Sum of the adjustment factors, ∑(Fi) = 5+4+3+2+3 = 17
Adjusted FP
= Count Total * [ 0.65 + 0.01 * ∑(Fi) ]
= Unadjusted FP * [ 0.65 + (0.01 * Adjustment Factor) ]
= 50 * [ 0.65 + (0.01 x 17) ]
= 50 x [ 0.82 ]
= 41
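A minimal sketch of the Example-1 computation follows. The FP weighting table is not reproduced in these notes, so the weights used here are an assumption (the common "simple" complexity weights 3, 4, 3, 7, 5), chosen because they reproduce the count total of 50 used above:

```python
# Information domain counts from Example-1
counts = {
    "user_inputs": 3,
    "user_outputs": 2,
    "user_inquiries": 2,
    "internal_logical_files": 1,
    "external_interface_files": 4,
}

# Assumed "simple" complexity weights; the actual weighting table is not shown in these notes
weights = {
    "user_inputs": 3,
    "user_outputs": 4,
    "user_inquiries": 3,
    "internal_logical_files": 7,
    "external_interface_files": 5,
}

count_total = sum(counts[k] * weights[k] for k in counts)   # 3*3 + 2*4 + 2*3 + 1*7 + 4*5 = 50
sum_fi = 5 + 4 + 3 + 2 + 3                                  # sum of adjustment factors = 17

adjusted_fp = count_total * (0.65 + 0.01 * sum_fi)          # 50 * 0.82 = 41.0
print(count_total, adjusted_fp)                             # 50 41.0
```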
Function Oriented Metrics
• Advantages
• FP is programming language independent
• FP is based on data that are more likely to be known in the early stages of a project,
making it more attractive as an estimation approach
• Disadvantages
• FP requires some “sleight of hand” because the computation is based on subjective data
• Counts of the information domain can be difficult to collect
• FP has no direct physical meaning, it’s just a number
Object-Oriented Metrics
• Conventional software project metrics (LOC or FP) can be used to estimate object-
oriented software projects
• However, these metrics do not provide enough granularity (detailing) for the schedule
and effort adjustments that are required as you iterate through an evolutionary or
incremental process
• Lorenz and Kidd suggest the following set of metrics for OO projects (a small tallying
sketch follows this list):
• Number of scenario scripts (use case)
• Number of key classes (the highly independent components)
• Number of support classes (not immediately related to the problem domain)
• Average number of support classes per key class.
• Number of subsystems.
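A small sketch of how the Lorenz and Kidd counts might be tallied for a project; the class names, use-case names, and their classification below are purely hypothetical:

```python
# Hypothetical project inventory; the key/support split is illustrative only
key_classes = ["Order", "Customer", "Product", "Invoice"]
support_classes = ["OrderRepository", "CustomerValidator", "ProductMapper",
                   "InvoiceFormatter", "AuditLogger", "ConfigLoader"]
scenario_scripts = ["place_order", "cancel_order", "register_customer"]
subsystems = ["Product Catalog", "Order Processing"]

print(len(scenario_scripts))                     # number of scenario scripts (use cases): 3
print(len(key_classes))                          # number of key classes: 4
print(len(support_classes))                      # number of support classes: 6
print(len(support_classes) / len(key_classes))   # average support classes per key class: 1.5
print(len(subsystems))                           # number of subsystems: 2
```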
Object-Oriented Metrics
• "Number of subsystems" is a metric for object-oriented projects that suggests breaking
down the software into smaller functional units.
• A subsystem is a group of classes supporting a user-visible function.
• For example, in an e-commerce project, subsystems could be Product Catalog, Shopping
Cart, User Account, and Order Processing.
• This breakdown helps in planning and scheduling work more efficiently, allowing
different teams to focus on specific subsystems simultaneously. It simplifies
development, testing, and integration, making it easier to manage and track project
progress during iterative processes.
Use Case Oriented Metrics
• Like FP, the use case is defined early in the software process, allowing it to be used for
estimation before significant (substantial) modeling and construction activities are initiated
• Use cases describe (indirectly, at least) user-visible functions and features that are basic
requirements for a system
• The use case is independent of programming language. However, because use cases can be
created at vastly different levels of abstraction, there is no standard “size” for a use case
• Without a standard measure of what a use case is, its application as a normalization
measure is suspect (doubtful).
Metrics for the Software Quality
• Goal of software engineering is to produce a high-quality system, application, or
product within a time frame that satisfies a market need.
• To achieve this goal, you must apply effective methods coupled with modern tools
within the context of a mature software process.
• Although there are many measures of software quality, correctness, maintainability,
integrity, and usability provide useful indicators for the project team:
• Correctness
• Maintainability (measured indirectly via mean time to change, MTTC; a low MTTC is good,
while a high MTTC means changes are time-consuming; a sketch of the MTTC computation
follows this list)
• Integrity & Usability
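As a sketch of the MTTC idea mentioned above: mean time to change is the average time taken to analyze, design, implement, test, and distribute an accepted change; the change times below are hypothetical:

```python
# Hypothetical time (in days) taken to complete each accepted change request
change_times_days = [3.0, 5.5, 2.0, 4.5, 6.0]

mttc = sum(change_times_days) / len(change_times_days)
print(round(mttc, 1))   # MTTC = 4.2 days; a lower MTTC suggests better maintainability
```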
Integrating Metrics Within the Software Engineering Process
• Most software developers don't measure, and many resist starting due to cultural reasons.
Collecting measures can face resistance, with project managers and practitioners
questioning the need and relevance.
• In this section, arguments for software metrics and an approach for instituting a metrics
collection program within a software engineering organization are presented.
• Arguments for Software Metrics
• Establishing a Baseline
• Metrics Collection, Computation, and Evaluation
Integrating Metrics Within the Software Engineering Process
• Establishing a Baseline
• The metrics baseline (reference point) consists of data collected from past software
development projects and can be as simple as the size-oriented metrics table described
earlier (project alpha).
• Baseline data must have the following attributes:
• Accuracy: Avoid relying on guesses; data for past projects should be reasonably accurate.
• Comprehensive Data Collection: Gather data from as many projects as possible.
• Consistent Measures: Ensure consistency in measurement units across all projects (e.g.,
interpreting lines of code consistently).
• Relevance: Baseline applications should be similar to the work being estimated; don't use batch
information systems to estimate real-time, embedded applications.
Requirement Engineering
• Engineering
• Implies that systematic and repeatable techniques should be used.
• Requirement Engineering
• It is a systematic approach to define, manage and test requirements for a software.
Basic concept of Requirement
• Requirement engineering provides the appropriate mechanism for understanding
• What the customer wants
• Analyzing needs
• Assessing feasibility
• Negotiating a reasonable solution
• Specifying the solution unambiguously
• Validating the specification
• Managing requirements
Elements of Requirements Model
• The intent of the model is to provide a description of the required informational, functional,
and behavioral domains for a system; it is a snapshot of the requirements at any given time.
The elements are as follows -
• Scenario-based elements
• Describe the system from the user's point of view using scenarios that are depicted
(stated) in use cases and activity diagrams
• Class-based elements
• Identify the domain classes for the objects manipulated by the actors, the attributes of
these classes, and how they interact with one another; class diagrams are used to
represent these elements.
Elements of Requirements Model
• Behavioural elements
• Use state diagrams to represent the state of the system, the events that cause the
system to change state, and the actions that are taken as a result of a particular
event.(Sequence diagrams, State Diagram)
• Flow-oriented elements
• Use data flow diagrams to show the input data that comes into a system, what
functions are applied to that data to do transformations, and what resulting output
data are produced.(DFD Diagram)
Functional Requirements
• Functional Requirements
• Any requirement which specifies what the system should do.
• They are basically the requirements stated by the user which one can see directly in the
final product, unlike the non-functional requirements.
• How the system should react to particular inputs
Functional Requirements
• Example : Library Management System
• Registration
• Login
• Add/Delete/Update Books
• Search Books
• Issue Books
• Return Books
• Reserve Books
• Pay for Fine (if late returned)
• Add Books in inventory
Non-Functional Requirements
• Any requirement which specifies how the system performs a certain function.
• A non-functional requirement will describe how a system should behave and what limits
there are on its functionality.
• Example
1. Response time
2. Throughput
3. Availability
4. Reliability
5. Security
6. Maintainability
7. Interoperability
8. Scalability
9. Capacity
10. Manageability