
Software Measurements, Metrics, and Modelling

ITE311T

Unit-3
Software Metrics for Quality and Performance Assessment
► Software quality metrics are performance indicators that assess a software product's quality. Agile metrics, such as velocity, and QA metrics, such as test coverage, are typical examples.
► Metrics do not by themselves enhance development, but managers utilize them to understand the production process better.
► In software engineering, software measurement is done on the basis of software metrics, where a software metric is a measure of some characteristic of a software product or process.
► In software engineering, Software Quality Assurance (SQA) assures the quality of the software. SQA is a set of activities applied continuously throughout the software process. Software quality is measured using software quality metrics.
► There are a number of metrics available for measuring software quality, but among them a few are the most useful and essential. They are:

1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
1. Code Quality –
Code quality metrics measure the quality of the code used for software project development. Maintaining code quality by writing bug-free, semantically correct code is very important for good software project development. Code quality covers both quantitative metrics (the number of lines, complexity, number of functions, rate of bug generation, etc.) and qualitative metrics (readability, code clarity, efficiency, maintainability, etc.).

2. Reliability –
Reliability metrics express the reliability of the software under different conditions. They check whether the software can provide the expected service at the right time. Reliability can be assessed using Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR).

3. Performance –
Performance metrics are used to measure the performance of the software. Each software product is developed for some specific purpose. Performance metrics determine whether the software fulfills the user requirements, analyzing how much time and how many resources it uses to provide its service.
4. Usability –
Usability metrics check whether the program is user-friendly. Every software product is used by end-users, so it is important to measure whether end-users are satisfied with the software.

5. Correctness –
Correctness is one of the important software quality metrics, as it checks whether the system or software works correctly, without error, to the user's satisfaction. Correctness measures the degree to which each function provides its service as specified.

6. Maintainability –
Each software product requires maintenance and upgrades. Maintenance is an expensive and time-consuming process, so if the software product is easy to maintain, we can say its quality is up to the mark. Maintainability metrics include the time required to adapt to new features/functionality, Mean Time to Change (MTTC), and performance in changing environments.
7. Integrity –
Software integrity concerns how easily the software integrates with other required software (which increases its functionality) and how well it controls integration by unauthorized software (which increases the risk of cyberattacks).

8. Security –
Security metrics measure how secure the software is. In the age of cyber terrorism, security is the most essential part of every software product. Security assures that there are no unauthorized changes and no exposure to cyberattacks while the software product is in use by the end-user.
Software Quality Attributes and Metrics
Software Quality Attributes in software engineering can be categorized under
specific areas as mentioned below:

1) Design and Software Architecture Quality Attributes

► Conceptual Integrity
► Maintainability
► Reusability
► Correctness
2) Runtime Qualities

► Reliability
► Interoperability
► Scalability (Flexibility)
► Performance
► Security
► Availability
3) System Qualities

► Supportability
► Testability

4) User Qualities

► Usability
5) Non-runtime Qualities

► Portability
► Reusability
1) Reliability
A synonym for reliability is assurance. Reliability can be defined as the
degree to which a software system or its components or a service performs
specific functions under predefined conditions for a set period. It is the
likelihood of fault-free software operation for a specified period of time in a
specified environment.
It is the measure of the ability of a software application or service to
consistently maintain operations in a predefined condition. It is critical to verify
that the software is fully functional, especially when under maximum load.

Sub-attributes for reliability are listed below:

•Maturity: It is the extent to which a software product or service meets reliability standards during normal operation.
•Availability: This refers to the ease with which a software application can be accessed by users whenever required.
•Fault Tolerance: It is the extent to which a software application or service functions normally despite any hardware or software issues.
•Recoverability: It defines the extent to which, in the event of an interruption or failure, a software application can recover the data directly affected by the failure.
2) Maintainability
It is the ease with which a software developer is able to fix flaws in the existing
functionality without impacting other components of the software system. It also
considers the ease with which a developer can add new features, update existing
features or upgrade the system with new technology.
Software maintainability is achieved through adherence to software architectural rules
and consistency across the application.
The purpose of having this attribute is to make the maintenance of software
applications easy and cost-effective.

The software industry typically uses some of the following common maintenance
types:

•Corrective: This is the most essential type, wherein developers analyze a detected bug and implement a fix for it.
•Adaptive: In this case, the application might be migrated to a different database or technology (e.g., MS SQL to PostgreSQL, or C++ to C#).
•Preventive: Preventing bugs or failures can be very difficult, but errors can certainly be reduced by implementing proper coding standards, documenting code, avoiding complex code, and doing enough testing.
•Perfective: This could involve code refactoring. The idea is to improve the existing code and thereby the feature/functionality.
3) Usability
This attribute refers to the quality of the end user’s experience while interacting with
the application or service. Usability is concerned with effectiveness, efficiency, and
overall user satisfaction.
This attribute helps to measure the ease of use of any software application or service (e.g., registering a new account and then signing in to the account).
A few of the following problems can arise if usability is not taken care of correctly:
•Complex sign-up process
•Inconsistency in the UI
•Unclear or overly complicated navigation
•Error messages that are not user-friendly
Sub-attributes for usability are listed below:
•Simple & Intuitive Design: It describes the user’s ability to effortlessly understand the
design of the application and navigate without any conscious reasoning.
•Learnability: It refers to the ease and speed with which a user who has never seen the
user interface before can execute the most basic tasks.
•Usage Efficiency: It means how fast a user can accomplish tasks once they have gained experience with the system.
•Memorability: It describes how well a user can use an application (site) effectively, after
not using it for a while.
•Error Frequency and Severity: It refers to how frequently users make mistakes while
using the system, how serious the errors are, and how users recover from these errors.
•Subjective Fulfillment: It describes how well the system is liked by the user.
4) Portability
It refers to the extent to which a system or its components can be migrated (transported) to other
environments consisting of different hardware and/or different software (operating system). The
importance of software portability should not be underestimated.

Sub-attributes for portability are listed below:

•Adaptability: It refers to how easily a software system can be adapted to new hardware,
software, or another environment.
•Replaceability: It refers to how effectively one software product can replace another software
solution designed for the same purpose. It is the capability of one software component to be
replaced by another one.
•Installability: It specifies the ease with which a software solution can be installed or uninstalled
in a given environment.
•Compatibility (Co-existence): It is the degree to which multiple, unrelated software systems
and/or components co-exist in the same environment.
5) Correctness

This refers to the ability of software products or services to perform tasks (e.g.
calculations, sign-ups, navigations) correctly as specified by the predefined requirements
when used under specified conditions.
It is a measure of whether a software system is correct or not, implying that it is an
all-or-nothing attribute.

6) Efficiency

It is one of the most important software quality attributes. It reveals the performance of
a software product concerning the number of resources utilized under predefined
conditions. It is measured simply in terms of the time required by the software system or
service to complete any given task.
Software application performance management is thus critical to any organization.
A software application can be considered inefficient from the user's perspective when it uses so many resources of the system (machine) that it slows down other applications or the entire machine.

7) Security

It is defined as the capacity of a system to fend off or thwart malicious or unauthorized attempts to intrude and damage it while still allowing access to authorized users.
8) Testability

It defines the ease with which QA and other beta users can test the software
application and detect bugs. It also evaluates the ease with which the testing
personnel can design test criteria for the application and its various components.
Another criterion could be the number of test cases or features that can be
automated so that the testing process itself can be automated.
In simple terms, testability identifies how fast a system can be tested to ensure
quality against predefined specifications.

9) Flexibility (Modifiability)

It is the degree to which software applications can adapt to future changes (e.g.,
upcoming technologies, market trends, business rules, regulations, etc.)

A software application can be called flexible if it can run smoothly on any type of
device, hardware platform, and/or operating system. It should be easy to interface
with any other software products, third-party software, or libraries.
The ability to create new business opportunities is a clear advantage of flexible
software. You can gain a competitive advantage in your functional area and thus drive
more customers by enhancing your existing software with new features and
integrations.
10) Scalability

This attribute relates to the software system's capability to handle an increased load (usage) without degrading its performance. The software industry in particular should always consider the importance of scalability.
There are two distinct ways in which the scalability of a software application can be
measured:
•Vertical Scalability: This can be achieved by adding more resources, such as memory, hard disks, and/or processors, to a single machine.
•Horizontal Scalability: This can be fulfilled by adding extra computer systems (e.g. load
balancers, servers) and distributing the load between these server systems.

11) Compatibility

It refers to the ability of the application to integrate seamlessly with other software
applications under prescribed conditions and perform its actions efficiently while sharing
the same hardware and software (e.g. Operating system). The app should be able to
work as expected on different devices, hardware, and software platforms.
It also considers the ability of software applications (in this case, web applications) to work correctly on different types of browsers and their various versions.
12) Supportability

It is the degree to which a software application can provide practically useful information
helpful for identifying and resolving issues when the application fails to work correctly.
The following factors are to be considered:
•Logging & Monitoring (Diagnosis): It consists of different kinds of logging which provide data
by monitoring and controlling the activity and performance of the application.
•Health Checks: These comprise a wide range of tools to measure & control project/build
compilation time, build size, deployment time, and several other technical aspects.

13) Reusability

It is the degree to which a software application or its components/modules can be reused in other application(s) or in the same application (e.g., common components, libraries, etc. can be reused across the application). When it comes to the project budget, software reuse is extremely important.
Reusable components (and other such resources) are extremely beneficial because they help to reduce overall development costs.
Dividing a software application's code base into common (also referred to as generic) components, wherever possible, facilitates reusability. This kind of development is often referred to as component-based software development.
14) Interoperability

It refers to the application's ability to seamlessly communicate or exchange data across different operating systems, hardware platforms, databases, and protocols (network and/or web protocols).

Some of the commonly observed interoperability problems are listed below:

•Legacy Systems (Old Code Structure and/or Technology)
•Data formats of similar (or other) external systems are quite different.
•API versions used in system interaction are different.
•No proper coding standards were used.
•Poor code quality.
Complexity Metrics

► Productivity, if measured only in terms of lines of code per unit of time, can vary a lot depending on the complexity of the system to be developed. A programmer will produce less code for a highly complex system program than for a simple application program. Similarly, complexity has a great impact on the cost of maintaining a program. To quantify complexity beyond the fuzzy notion of the ease with which a program can be constructed or comprehended, some metrics to measure the complexity of a program are needed.
► One widely used complexity measure is cyclomatic complexity, in which the complexity of a module is the number of linearly independent paths through the module's control-flow graph. A number of metrics have been proposed for quantifying the complexity of a program, and studies have been done to correlate complexity with maintenance effort. In this unit, we will discuss a few complexity measures. Most of these have been proposed in the context of programs, but they can be applied or adapted for detailed design as well.
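► To make the definition concrete, here is a minimal sketch in Python (the helper name and the graph counts are hypothetical, not from the slides) applying McCabe's formula V(G) = E - N + 2P:

# McCabe's cyclomatic complexity from hand-counted control-flow graph sizes.
def cyclomatic_complexity(num_edges: int, num_nodes: int, num_components: int = 1) -> int:
    # V(G) = E - N + 2P, where P is the number of connected components
    # (usually 1 for a single routine).
    return num_edges - num_nodes + 2 * num_components

# A routine with one if/else: a decision node, two branch nodes, and a
# merge node (N = 4) joined by four edges (E = 4), so V(G) = 4 - 4 + 2 = 2,
# i.e., one decision point yields two linearly independent paths.
print(cyclomatic_complexity(num_edges=4, num_nodes=4))  # -> 2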
Size-Based Metrics

► Size-oriented metrics concentrate on the size of the program created. They are used to collect direct measures of software output and quality, typically by normalizing quality and productivity measures by size (for example, per thousand lines of code, or KLOC).
► Point metrics take into account the size of the program that has been generated.
► For software projects, the organization keeps a simple size-measurement record, based on its previous organizational experience, as a gauge of how good a piece of software is.
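► As an illustration, a minimal Python sketch with hypothetical project numbers, normalizing common measures by size in KLOC:

# Size-oriented metrics: direct measures normalized by program size.
loc = 12_100       # delivered lines of code (hypothetical)
effort_pm = 24     # effort in person-months (hypothetical)
cost = 168_000     # total project cost (hypothetical)
errors = 134       # errors found before release (hypothetical)
defects = 29       # defects reported after release (hypothetical)

kloc = loc / 1000
print(f"Errors per KLOC:  {errors / kloc:.2f}")
print(f"Defects per KLOC: {defects / kloc:.2f}")
print(f"Cost per LOC:     {cost / loc:.2f}")
print(f"Productivity:     {loc / effort_pm:.0f} LOC/person-month")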
Coupling and Cohesion

Coupling and Cohesion are two key concepts in software engineering that
are used to measure the quality of a software system’s design.

Coupling refers to the degree of interdependence between software modules. High coupling means that modules are closely connected and changes in one module may affect other modules. Low coupling means that modules are independent and changes in one module have little impact on other modules.

Cohesion refers to the degree to which elements within a module work together to fulfill a single, well-defined purpose. High cohesion means that elements are closely related and focused on a single purpose, while low cohesion means that elements are loosely related and serve multiple purposes.
Coupling: Coupling is the measure of the degree of interdependence between modules. Good software will have low coupling.

Types of coupling (from loosest to tightest): data coupling, stamp coupling, control coupling, external coupling, common coupling, and content coupling.
Cohesion: Cohesion is a measure of the degree to which the elements of a module are functionally related. It is the degree to which all elements directed towards performing a single task are contained in the component. Basically, cohesion is the internal glue that keeps the module together. A good software design will have high cohesion.

Types of cohesion (from strongest to weakest): functional, sequential, communicational, procedural, temporal, logical, and coincidental cohesion.
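The sketch below (hypothetical Python classes, not from the slides) illustrates the two properties side by side:

# High cohesion: every member of Invoice serves one purpose -
# representing an invoice and computing its total.
class Invoice:
    def __init__(self, items):
        self.items = items  # list of (description, price) pairs

    def total(self):
        return sum(price for _, price in self.items)

# Low coupling: InvoicePrinter depends only on the narrow total()
# interface, not on Invoice's internal data layout, so changes inside
# Invoice do not ripple into InvoicePrinter.
class InvoicePrinter:
    def print_total(self, invoice):
        print(f"Total due: {invoice.total():.2f}")

InvoicePrinter().print_total(Invoice([("widget", 9.50), ("gadget", 3.25)]))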
Defect Metrics

► Defect metrics help engineers understand the many aspects of software quality, such as functionality, performance, installation stability, usability, compatibility, and so on.

► Measures such as the defect ratio, the time taken to fix a defect, and the complexity of a defect come under defect metrics.
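► A minimal sketch (hypothetical data) of two such measures, defect density and the average time taken to fix a defect:

# Defect metrics: defect density (defects per KLOC) and mean fix time.
defects = [            # (severity, days_to_fix) - hypothetical records
    ("high", 2.0), ("low", 0.5), ("medium", 1.5), ("high", 3.0),
]
kloc = 8.4             # release size in thousands of lines of code

defect_density = len(defects) / kloc
mean_fix_time = sum(days for _, days in defects) / len(defects)
print(f"Defect density:   {defect_density:.2f} defects/KLOC")
print(f"Mean time to fix: {mean_fix_time:.2f} days")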
Error Metrics

An error metric is a type of metric used to measure the error of a forecasting model. Error metrics give forecasters a way to quantitatively compare the performance of competing models. A common example is the Mean Squared Error (MSE).

It is important to quantify the performance of a model so that it can be used for feedback and comparison. One of the most popular error metrics is the root mean squared error; various other error metrics are also available.
Types of Error Metrics

► Mean Square Error: the average of the squared differences between the predicted values and the true values. Scikit-learn provides it as a function. Its units are the square of the units of the true and predicted values, and it is always non-negative.
► Root Mean Square Error: the square root of the mean square error. It is also always non-negative and has the same units as the data.
► Mean Absolute Error: the average of the absolute differences between the predicted values and the true values. It has the same units as the predicted and true values and is always non-negative.
► Mean Absolute Percentage Error: the average of the absolute differences between the predicted and true values, each divided by the true value, expressed as a percentage.
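► A minimal NumPy sketch of the four metrics above, with hypothetical values (scikit-learn's sklearn.metrics module offers equivalent functions):

import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0])  # observed values (hypothetical)
y_pred = np.array([2.8, 5.4, 2.9, 6.5])  # model predictions (hypothetical)

mse = np.mean((y_true - y_pred) ** 2)                   # Mean Square Error
rmse = np.sqrt(mse)                                     # Root Mean Square Error
mae = np.mean(np.abs(y_true - y_pred))                  # Mean Absolute Error
mape = np.mean(np.abs(y_true - y_pred) / y_true) * 100  # MAPE, in percent

print(f"MSE={mse:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}  MAPE={mape:.1f}%")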
Reliability Metrics
Reliability metrics are used to quantitatively express the reliability of a software product. The choice of which metric to use depends upon the type of system to which it applies and the requirements of the application domain.

Types of Reliability Metrics

1. Mean Time to Failure (MTTF)
2. Mean Time to Repair (MTTR)
3. Mean Time Between Failures (MTBF)
4. Rate of Occurrence of Failure (ROCOF)
5. Probability of Failure on Demand (POFOD)
6. Availability (AVAIL)
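A minimal sketch (hypothetical failure log) of the first three metrics, using the common convention MTBF = MTTF + MTTR and availability = MTTF / (MTTF + MTTR):

uptimes_h = [120.0, 96.0, 150.0]  # hours of operation before each failure
repairs_h = [2.0, 4.0, 3.0]       # hours to repair after each failure

mttf = sum(uptimes_h) / len(uptimes_h)   # Mean Time to Failure
mttr = sum(repairs_h) / len(repairs_h)   # Mean Time to Repair
mtbf = mttf + mttr                       # Mean Time Between Failures
availability = mttf / (mttf + mttr)

print(f"MTTF={mttf:.1f} h  MTTR={mttr:.1f} h  MTBF={mtbf:.1f} h")
print(f"Availability = {availability:.3%}")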
Performance Metrics

► Performance metrics are used to measure the behavior, activities, and performance of a business. They should take the form of data that measures what is required within a defined range, forming a basis that supports the achievement of overall business goals.
► Software development metrics are quantitative measurements of a software
product or project, which can help management understand software
performance, quality, or the productivity and efficiency of software teams.
► There are many different forms of performance metrics, including sales, profit, return on investment, customer happiness, customer reviews, personal reviews, overall quality, and reputation in a marketplace. Performance metrics can vary considerably across different industries.
Benchmarking

► Every developed software application passes through functional and non-functional testing to ensure that it satisfies the business requirements. Not only are business requirements observed, but performance standards such as product behavior, speed, functionality, stability, scalability, reliability, load capacity, and performance under stress are also monitored. Benchmark testing is a part of performance testing. Let us now look a little deeper at benchmark testing.
► Benchmark Testing: Benchmark testing is considered a part of the Software Development Life Cycle (SDLC) that compares performance-test results against performance metrics to determine current performance and any changes needed to improve it. It covers software, hardware, and network performance. It mainly focuses on present and future releases of a software product/service to maintain high quality standards.
► A benchmark must be repeatable as well as quantifiable in terms of determining the performance of the product/service. For example, a product's response time should remain stable under different load conditions (the benchmark is repeatable), and how much time a user spends on the product, or how quickly the user receives the actual service, can be expressed as a number (the benchmark is quantifiable).
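► As an illustration, a minimal, hypothetical micro-benchmark sketch using Python's timeit module, checking both properties: the mean per-call time is the quantifiable result, and a small spread across runs indicates repeatability.

import statistics
import timeit

def operation_under_test():
    # Stand-in for the real operation being benchmarked (hypothetical).
    return sorted(range(10_000), key=lambda x: -x)

# Five samples of 100 calls each, converted to seconds per call.
samples = [t / 100 for t in timeit.repeat(operation_under_test, number=100, repeat=5)]
mean = statistics.mean(samples)
spread = statistics.stdev(samples)
print(f"mean {mean * 1e3:.3f} ms/call, stdev {spread * 1e3:.3f} ms")
# A stdev that is large relative to the mean suggests the benchmark
# is not repeatable under current conditions.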
► Advantages of benchmark testing:
► Performance Improvement
► Changed focus
► No extra cost incurred
► Identification of essential activities

► Disadvantages of benchmark testing:
► Standard stability
► Increased dependency
Thank You
