Supervisor: Prof. Luca Ardito
Company Tutor: Francesco Floris
Abstract
The Information Technology landscape is one of the fastest moving, with new products
released every day to millions of users: it is therefore very important for
technology companies to keep up with this pace if they want to remain competitive. A
determining factor in the success of a software development company is the adopted
methodology: the trend is to switch from Waterfall and other sequential methodologies,
which are slow and expensive, towards Agile and iterative methodologies, which
allow faster software development and reduce products' time to market. Among
Agile methodologies, we will deep dive into DevOps: the aim of this strategy is to
break down the siloed organization of Development and Operations teams,
composing instead cross-functional teams that have end-to-end responsibility for
the product lifecycle. This is achieved through process automation, which is key
to improving the speed of development and release and also reduces the human errors that
are introduced in lengthy and repetitive tasks: tools are used to implement CI/CD
(Continuous Integration/Continuous Delivery) of new software and functionalities,
which are released more frequently and are more maintainable.
Faster development and release of software must not come at the expense of the
quality and security of the product: security has often been seen as an obstacle to the
release of the product to the market and the consequent revenue. While in traditional
sequential models the security team had the opportunity to block the release of
a product as part of the "secure by design" process, in Agile models, and particularly
in DevOps, these security controls are more difficult to put in place due to
CD, where software and its updates are released in an almost automatic way.
While Development and Operations teams break down the walls between them, the
Security team remains isolated in its siloed structure. A DevSecOps strategy is
therefore necessary: the aim is to make security an integral part of DevOps processes,
by considering it in each step of the SDLC. The main target of DevSecOps is
not only to integrate security analysis tools in a CI/CD pipeline, but to achieve a
real cultural shift towards a more collaborative approach to software development,
mainly among Development, Operations and Security teams: the work done by one
team must not be "thrown over the wall" to another team; instead, every individual
involved must take full responsibility for every aspect of their work, meaning that
developers and operations people must be responsible for security aspects, with the
support of the proper professionals.
This thesis work has been carried out in the context of the Cyber Security team
of Vodafone Italia, one of the main players in the telecommunications industry
worldwide, which is now aiming to become a Tech Company through a multi-year
strategic program: part of this strategy includes the adoption of a DevSecOps
methodology across all of the software development teams, with the insourcing of
software development activities to develop products in a fast and secure way. The
aim of this work has been to study the current Company setup, tools and processes
and identify potential areas of improvement to achieve the Company's DevSecOps targets.
The main achievement has been the definition of a DevSecOps Operating Model
made of people, processes and technologies, which identifies roles, responsibilities
and interactions among all of the involved entities with the final goal of improving
product security.
Acknowledgements
I would like to thank Vodafone for giving me the opportunity to work on this
project, in particular my company tutor Francesco Floris, who has followed and
supported me in this journey and believed in me since its beginning. Many thanks
also to the amazing Cyber Security team of Vodafone Italia, who welcomed me
into their big family in the best possible way.
I would like to thank my supervisor, prof. Luca Ardito, for giving me the opportunity
of writing this Master’s Degree thesis.
I would like to thank my parents, Maura and Giuseppe, who have never stopped
believing in me and have always been by my side, despite everything. Thanks also
to my brother and sisters, Matteo, Giorgia and Martina, who supported me even
in those times in which it was me who was supposed to support them.
I would like to thank Niccolò, my other half, the person with whom I spent these
years at Politecnico, the ones before them and, hopefully, the ones after them:
words cannot describe how thankful I am to have you by my side every day of my
life.
I would like to thank all of my friends, in particular Gianluca, Erika, Matteo,
Chiara and Alessandro, the ones that made those days and nights between one
study session and another a bit less boring. Thanks also to anyone I met during
this journey at Politecnico di Torino: every single one of you has made me who I
am today.
Last but not least, I would like to thank myself, for making it to this point, for
never giving up and for always having the strength to face what was put in front
of me and what the future will offer.
Acronyms
ACL: Access Control List
ALM: Application Lifecycle Management
API: Application Programming Interface
AST: Application Security Testing
AWS: Amazon Web Services
BOM: Bill Of Materials
CI/CD: Continuous Integration and Continuous Delivery/Deployment
CLI: Command Line Interface
CM: Continuous Monitoring
CRUD: Create, Read, Update, Delete
CVSS: Common Vulnerability Scoring System
DAST: Dynamic Application Security Testing
DSL: Domain-Specific Language
DXL: Digital eXperience Layer
EUA: Effective Usage Analysis
IaC: Infrastructure as Code
IaaS: Infrastructure-as-a-Service
IAST: Interactive Application Security Testing
IDE: Integrated Development Environment
IDS: Intrusion Detection System
IoT: Internet of Things
IPS: Intrusion Prevention System
IT: Information Technology
KPI: Key Performance Indicator
MAST: Mobile Application Security Testing
NVD: National Vulnerability Database
PaaS: Platform-as-a-Service
POM: Project Object Model
PPT: People, Processes, Technology
PT: Penetration Testing
QA: Quality Assurance
RASP: Runtime Application Self-Protection
SaaS: Software-as-a-Service
SAST: Static Application Security Testing
SBOM: Software Bill Of Materials
SCA: Software Composition Analysis
SCM: Supply Chain Management
SDLC: Software Development Life Cycle
SLA: Service Level Agreement
SPDA: Security and Privacy by Design Assurance
SPOC: Single Point Of Contact
TDD: Test-Driven Development
UAT: User Acceptance Testing
VA: Vulnerability Assessment
VCS: Version Control System
Chapter 1
Introduction to thesis
context and target
In this first chapter we give an overview of some general Agile development
concepts that underlie all of the conducted work, and of the motivations for
which this thesis was developed. An insight into the context of Vodafone,
the company in which the work was carried out, and into its project will follow.
Finally, the target of this work will be presented.
1.1 Motivation
In the world of software engineering, the current trend is to move towards adopting
Agile development methodologies.
What does Agile mean in this context? Agile software development is an umbrella term
covering all the frameworks and practices that follow the values and principles
defined in the Manifesto for Agile Software Development.
The ideas behind the Agile strategy are perfectly expressed in the Agile manifesto:
We are uncovering better ways of developing software by doing it and
helping others do it.
Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items
on the left more.
This need for faster and more reliable development methodologies is driven by the
demands of businesses, whose requirements change very fast nowadays: a company
that needs a software product may have requirements that change from month to month,
so developers need to quickly adapt to any requirement change and deliver fast to
the client. That is in fact one of the principles expressed in the Agile Manifesto.
The DevOps methodology falls under this concept of Agile development: the
term DevOps is formed by the union of the words Development and Operations.
Historically these two worlds have been separated: development teams are driven
by the needs and requirements of the client that requests a given product, while
the operations teams are focused on the management of the production systems
and their reliability, availability and performance. Thus, there was a significant gap
between these two realities.
Also, in order to further break down boundaries among different teams, cross-
functional teams are composed to facilitate communication among their
members and to work better on continuous feature deliveries. Another
important point is the increasing complexity of the required infrastructure: this
is addressed by the Infrastructure as Code (IaC) approach, which allows easy
management of almost any kind of infrastructure.
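To make the idea concrete, the following toy sketch (in Python, purely illustrative and not tied to any real IaC tool) shows the core IaC principle: the desired infrastructure is declared as versionable data, and an idempotent apply step reconciles the actual state with it, which is what real tools in this space do at scale.

    # Toy illustration of the Infrastructure as Code idea: the desired
    # infrastructure is described as declarative data kept under version
    # control, and an idempotent "apply" step reconciles the real state
    # with it. Real tools (e.g. Terraform, Ansible) follow this principle.
    desired_state = {
        "web-server": {"cpu": 2, "memory_gb": 4, "replicas": 3},
        "database": {"cpu": 4, "memory_gb": 16, "replicas": 1},
    }

    current_state: dict = {}  # what is actually provisioned right now

    def apply(desired: dict, current: dict) -> None:
        """Reconcile the current infrastructure with the declared state."""
        for name, spec in desired.items():
            if current.get(name) != spec:
                print(f"provisioning {name} with {spec}")
                current[name] = spec  # a real tool would call a provider API
        for name in list(current):
            if name not in desired:
                print(f"tearing down {name}")
                del current[name]

    apply(desired_state, current_state)  # first run provisions everything
    apply(desired_state, current_state)  # second run is a no-op: idempotency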
When we talk about DevOps we also talk about CI/CD, which stands for
Continuous Integration and Continuous Delivery/Deployment.
Continuous Integration mainly concerns the developers: it employs automation
to compile, test and merge into a shared repository every change made to the code,
so that we no longer have to deal with heavy merging procedures at the end of the
development of some features.
Continuous Delivery and Deployment are two similar concepts, but with
some important differences: delivery still requires some manual intervention by the
operations team, while deployment allows the modifications made by the development
team to reach the production environment in an automated way.
CI and CD put together form what is known as a continuum, which means that
the phases that make up these processes are constantly repeated: when a cycle
ends, another one is immediately started, making this a potentially never-ending
process.
It is in this scenario that we are now trying to integrate security into the de-
velopment process, thus obtaining the DevSecOps methodology: the aim of
DevSecOps is to deliver the most secure product possible by performing different
types of security checks during the existing phases, rather than performing these
checks only at the end of product development.
This is the so-called “shift left” approach: if we imagine the product development
timeline as a straight line going from left to right as time passes, shifting left
means performing security checks in the earliest development stages, as soon
as possible. This brings several advantages, which in the end can be summed up as a
decreased cost of defect fixing, and thus an economic advantage.
So we can imagine how important it is nowadays for companies to implement a
strategy such as DevSecOps, which allows them to develop and deliver secure products
to customers.
In particular, we are dealing with the DXL product: DXL stands for Digital
eXperience Layer, and it is a product designed to provide a standard
layer for accessing frontend and backend services from many different types of
channels.
Having a standard layer allows every entity that needs to develop
a product tailored to the needs of a given market or customer to do
so without having to worry about developing a custom solution to access a set of
services that are common across the whole company; it also allows the customer
to have a smooth experience even when using different channels to access the data
in which he/she is interested.
The DXL product came to life in 2018 to respond to this need: in order to be
easily maintainable and scalable, DXL adopts a microservices architecture, with a
set of exposed APIs that allow access to the backend services, both new and legacy.
The microservices that compose the product are deployed using containers and
proper orchestration tools, so that it is possible to have autoscaling of resources,
efficient management of failures through self-healing, and an optimized usage of resources.
The current software development pipeline is based on some of the most popular
available tools, and each phase has its dedicated product(s): GitHub and Bitbucket
for code repositories, Jenkins for automating the build and deploy stages, OpenShift
for container orchestration, and many more.
There are several possible enabling technologies for this strategy; I am going to
work on two tools with different purposes:
• Mend SCA: the SCA solution provided by Mend (formerly WhiteSource)
is one of the most popular on the market. SCA stands for Software
Composition Analysis, and the purpose of this tool is to find any vulnerability,
bug or flaw that may be present in the third-party open source libraries used
to develop a product. The tool can also look for available
patches and automate their application in the project affected by the vulnerability;
• SonarQube: a code review tool that statically analyzes the source
code of a project in order to provide code quality reports. It has also recently
implemented SAST functionalities: SAST stands for Static Application Se-
curity Testing and consists in searching for vulnerabilities that arise from
badly written source code. SonarQube provides detailed reports on the vul-
nerabilities present in the project, their severity and how they could possibly
be fixed.
Both tools can be easily integrated in the pipelines, so that the execution of
their analyses is automated like the rest of the pipeline.
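As an illustration, the following hedged sketch shows how a pipeline step might invoke the sonar-scanner CLI and block the pipeline on failure; the project key, server URL and the SONAR_TOKEN environment variable are placeholders to be adapted to the real setup.

    import os
    import subprocess
    import sys

    # Placeholders: the project key and server URL must match the real
    # setup, and SONAR_TOKEN is expected to be injected as a CI secret.
    result = subprocess.run([
        "sonar-scanner",
        "-Dsonar.projectKey=my-project",
        "-Dsonar.sources=src",
        "-Dsonar.host.url=https://sonarqube.example.com",
        f"-Dsonar.login={os.environ['SONAR_TOKEN']}",
    ])
    if result.returncode != 0:
        sys.exit("SonarQube analysis failed: blocking the pipeline")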
Chapter 2
Introduction to DevOps and DevSecOps
The aim of Application Lifecycle Management (ALM) is to put together and integrate
all those product management methodologies and steps that were once considered
separate from one another, such as requirements management, software development,
testing and quality assurance, deployment and maintenance. Merging these activities
into one single workflow allows better project management overall, thanks to the
enhanced collaboration of all the involved teams.
ALM supports agile methodologies and DevOps, thus allowing continuous
deployment of the software, with up to several deployments in a single day
instead of releases every few months or once a year.
As Chappell reports in [1], we can identify three main components of the ALM:
• Governance: it is the only part that spans the whole life of the product, from
the requirements definition up to maintenance and eventual decommissioning.
It includes the business case development, after which the project portfolio
management starts along with the development of the application. Finally,
when the application is deployed and enters the organization's portfolio, we
enter the application portfolio management phase, in which all the different
applications of the company must be managed;
These three phases all run in parallel, as we can see in Figure 2.1, with
governance being the most important part, running from the initial concept to the end
of life of the product.
As we can see in the figure, there are three main steps that determine the
lifecycle of the product:
• The idea, which marks the beginning of the product’s life and its governance
phase;
• The deployment, which marks the end of the first development phase and
the start of the operations one;
• The end of life, which ends all of the phases regarding the given product.
Each of the three main phases has a series of sub-tasks to be managed inside.
In Figure 2.2 we can see what these sub-tasks are and the relations between the
different phases.
As we previously said, the governance phase is crucial in order to manage
and orchestrate the work of the other two phases, but the development and
operations phases must also collaborate and coordinate in order to provide
the best possible experience to the final user of the product.
For example, when the development team has finished an update, coordination
with the operations team is needed in order to ensure a zero-downtime deployment
of the update. Since the operations team is also in charge of monitoring and
collecting customers' feedback, collaboration with the development team is needed
to report the required changes and develop an update that responds to those needs.
A problem is that this integration has to be carried out mainly in a manual way, since
we have isolated databases whose information needs to be put together in some way.
ALM 2.0 comes to face some of these issues by moving towards a more
integrated approach: instead of choosing the best-of-breed standalone product
for each single phase, the main idea is to have a single vendor provide the
entire suite of tools needed to manage the product lifecycle, so that the
integration between the artifacts of the different phases is made much simpler and
can be carried out in an automated way, which is easier to manage and less prone to
errors. Integration is made even easier thanks to the presence of a centralized
data repository where all the steps of the lifecycle store their data and results. In
Figure 2.3 we can see a graphical representation of the differences between the two
approaches.
It is very important for a company that wants to create and deliver successful
products to choose a proper set of tools, be it a collection of standalone products
for each phase or, better, a suite of integrated products that allows easy
collaboration among the different teams by providing visibility of all needed
information, in real time, to everyone who needs it. Having the right tools
to create new products to be placed in a given market can help the company a lot
in terms of productivity and increased revenues.
Meanwhile, an ALM 2.0 approach is much more preferable for large companies,
since the integration effort saved by adopting an integrated suite makes
life much easier for the teams involved in the lifecycle of the products, and that
translates into improved productivity.
Overall, adopting an ALM strategy brings some important advantages [3].
2.2 DevOps
The ALM process provides an overview of the phases of an application's
lifecycle, of how they relate to each other and of the types of tools that can
be used to implement it. DevOps builds on those definitions by providing an
approach to the Agile development and operation of IT applications: this is done
by employing automation and strong collaboration between the different teams
that must work together in order to develop and deploy high quality software
frequently.
The story of DevOps starts in 2009, when Patrick Debois coined the
term to define a new methodology helping companies to have frequent releases while
still maintaining high quality standards, in order to keep up with the
market's demands and with the competitors.
The term DevOps stems from the union of the words Development and Op-
erations, making clear from the beginning what its main target is: this software
development methodology aims at breaking down the walls that historically exist
in companies with siloed organizations, where each team performs the tasks for
which it is qualified and has very limited collaboration with other teams. In this
context, the difficult collaboration between development and IT operations teams
leads to an overall worse experience both for the final customer and for the involved
employees, who find themselves managing the contrasting needs
of the different teams: development teams are in fact driven by the customers'
requirements to be implemented in the product, while operations teams are
more concerned with managing the availability and reliability of the IT
infrastructure, as well as containing the related costs.
This cultural shift brings several advantages, other than the productivity im-
provement: the fading boundaries among the involved teams and the adoption of
new tools imply that members of one team are somehow forced to acquire new
skills outside of their original scope. Developers must collaborate with testing
teams, thus acquiring skills in Test-Driven Development (TDD) and Continuous
Integration (CI); testers must ensure automation of test cases; and operations
people, in turn, must broaden their skills as well.
The whole DevOps methodology is based on some best practices that enable
the adoption of this approach at its full potential: in the following we
will present some of them.
This practice fits very well with the DevOps purpose of having shorter develop-
ment cycles and more frequent releases: CI solves the problem of several
developers working independently on a project and then having to integrate
work of possibly several months, which made integration an infrequent but very
time-consuming task that implied many reviews and reworks and added significant
delays to the whole project.
The idea of DevOps is that these tasks that require a lot of time have to be
performed more frequently: committing code once a day allows reviews
to be much shorter and issues to be identified and resolved as soon as they are
introduced. This requires the complete automation of the building and testing phases,
since they are time-consuming and error-prone activities if performed manually:
tools that help with this, such as CI servers, exist and should be adopted. Collabora-
tion with the Testing and QA team is essential in this phase in order to be sure to
implement it effectively and improve productivity.
Metrics collected during a first partial rollout of an update help decide whether
it can be deployed to a larger set of users. This step too can be automated by
introducing thresholds on these metrics that determine either the continuation
of the deployment or the rollback of the update due to the presence of any issue.
Measurement of the metrics continues in the monitoring phase, after the update
has been completely deployed, in order to be ready to respond to any bug that
slipped through the review process.
2.2.3 Microservices
As [7] reports, microservices are an architectural pattern that structures a software
application as a collection of services with given characteristics:
• Loosely coupled;
• Independently deployable;
The microservices architecture enables the rapid, frequent and reliable delivery of
large and complex applications, thus being a perfect fit for the DevOps methodology;
a minimal sketch of a single microservice is shown below. There are also some drawbacks
to be carefully taken into consideration when choosing to adopt a microservices-based
architecture for an application.
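As a minimal illustration of the pattern, the sketch below (assuming the Python Flask library; the endpoints and data are hypothetical) shows a single, independently deployable service exposing its own API plus a health endpoint that an orchestrator can probe for self-healing.

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/health")
    def health():
        # polled by the orchestrator (e.g. OpenShift) to drive self-healing:
        # if the service stops answering, its container is restarted
        return jsonify(status="ok")

    @app.route("/customers/<customer_id>")
    def get_customer(customer_id: str):
        # hypothetical endpoint: each service owns its own data and logic
        return jsonify(id=customer_id, name="placeholder")

    if __name__ == "__main__":
        app.run(port=8080)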
In the following we present the phases of the DevOps cycle related to an application.
Plan
This is the phase that takes place before developers start writing code, in which
the Product Owner and the involved stakeholders express their requirements: these
are then formalized into a product backlog, which is used as a basis to plan the
activities for the sprints, where tasks are allocated to developers to
work on a particular feature.
Code
In this phase developers work on the assigned tasks using their tools, such as IDEs
integrated with proper plugins that support them and make this step more efficient
by helping them write good quality code from the very beginning.
Build
The Build phase is the one in which the DevOps approach really starts to make a
difference with respect to other approaches: at the end of the Code stage, developers
commit their work to the common repository shared with the whole team
by opening a pull request to be reviewed. By employing CI tools, this step triggers
an automated build process including the execution of unit, integration and
end-to-end tests: if any of the build or test steps fails, the pull request fails and
the developer is notified of the issue to be solved before opening another one; a toy
sketch of such a gate follows.
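A toy sketch of such a gate is shown below: each step runs in sequence and a non-zero exit code blocks the merge. The build and test commands are placeholders for the project's real ones.

    import subprocess
    import sys

    # Placeholder commands: replace with the project's real build/test steps.
    STEPS = [
        ["make", "build"],
        ["pytest", "--quiet", "tests/unit"],
        ["pytest", "--quiet", "tests/integration"],
    ]

    for step in STEPS:
        if subprocess.run(step).returncode != 0:
            # the developer is notified and the pull request is rejected
            sys.exit(f"CI gate failed at: {' '.join(step)}")

    print("all checks passed: the pull request can be merged")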
Test
After the build is completed successfully, it is automatically deployed into a testing
environment, where a series of manual and automated tests are performed: the
former include User Acceptance Testing (UAT), where potential customers test the
application in order to point out possible corrections to be made before
deploying it in production. Automated tests can instead include security testing
and load testing.
Release
This is the crucial step in which a system that passed the build and test phases
can finally be planned for deployment in the production environment: in an ideal
DevOps process, this step would also be performed automatically by using CD tools
and IaC, making it possible to have up to several releases per day. If a company is
not ready to embrace total automation of this process, this step can still be subject
to a manual review and approval of the systems to be promoted into production.
Deploy
When the moment comes for the approved build to be deployed, the proper
environment is set up and configured for the release to be available to users and
customers. It is important, for example when deploying an update of an existing
application, to ensure a deployment that affects the availability of the provided
service as little as possible; there are techniques to do so, such as blue-green
deployment.
Operate
At this moment the system is up and providing its services to the customers; during
this phase the operations team makes sure that the infrastructure is properly
provisioned and adequately supports the application. Feedback from the customers
must also be collected to understand how the product can be further improved in
the following stages.
Monitor
In this final stage, the feedback collected from customers must be gathered in
the form of data, providing analytics about performance, errors, user
satisfaction, and so on. The results of these analyses must be fed back to the
Product Owner and the development team, who will then start working on new
features and updates, reiterating the DevOps process.
CI/CD Pipelines
All of these stages are put together in practice in what we call CI/CD pipelines:
these pipelines are composed of a set of tools, each of them responsible for one or
more of the phases explained above, and their main feature is of course the strong
usage of automation, which makes the process faster and less error-prone. Very
popular tools, such as Jenkins, allow defining such pipelines; in more
recent times, cloud-based pipelines have also become widely used, due to their ease
of infrastructure provisioning, their use of containers and, in general, the fact that
many complex management factors are outsourced to companies that can provide
the required services in a fast and secure way.
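The following toy sketch illustrates the idea of a pipeline as code: stages corresponding to the phases above are chained, and a failure stops the flow. In practice a tool such as Jenkins expresses this in its own pipeline DSL; the Python below is only a conceptual model.

    # Toy illustration of a pipeline as code: each stage corresponds to one
    # of the phases above, and a failing stage stops the flow before deploy.
    def build():
        print("building...")
        return True

    def test():
        print("running automated tests...")
        return True

    def security_scan():
        print("running SAST/SCA analyses...")
        return True

    def deploy():
        print("deploying to production...")
        return True

    PIPELINE = [("build", build), ("test", test),
                ("security scan", security_scan), ("deploy", deploy)]

    for name, stage in PIPELINE:
        if not stage():
            print(f"pipeline stopped: stage '{name}' failed")
            break
    else:
        print("pipeline completed: the release is live")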
2.3 DevSecOps
While the DevOps methodology aims at speeding up the SDLC while still maintaining
high quality standards, another critical aspect of software products that must always
be considered is their security: nowadays security incidents in IT systems can cause
very significant problems, mainly in terms of sensitive data losses and leakages and
unavailability of provided services, which, depending on the context in which
the system is applied, could also imply financial penalties or even legal issues. In any case,
such events damage the company in terms of economic revenues and
reputation.
• Build security in rather than bolting it on: security must be thought out and
implemented along with the product (security by design); the same care taken
with product functionalities has to be put into security ones;
2.3.1 Shift-left
One of the main principles on which the DevSecOps methodology is based is
the shift-left principle: this concept was first applied in the testing scope,
where applying tests only at the end of the development phase often resulted in
a bottleneck for the whole process, causing significant costs and delays for the
product release.
The shift-left approach in security is based on a very similar idea: the aim
is to integrate security and the related analyses and assessments into all of the
phases of the DevSecOps process.
As we can see in Figure 2.7, the aim is to maintain the structure and the efficiency
of the DevOps approach, with its phases and their sequence, and to integrate security
into all of them: in this sense automation must be a key feature of the process. There
is no space for manual and lengthy security reviews and audits here;
every analysis must be automated through proper tools in order to maintain the
speed and efficiency of the Agile software development process and to avoid the errors
that humans are likely to introduce.
The target of this approach is achieved not only by adopting the proper technical
tools, but by implementing a real cultural shift in the company, in how its software
development teams are organized and in the roles and duties of every member of
the team. Following this idea, we can distinguish three main factors that are needed
to effectively implement a DevSecOps strategy:
• People: these are the human resources available to the company, those who
perform the tasks defined in the processes, possibly leveraging the available
technologies;
• Processes: they are the steps and actions to be performed in order to reach
a goal, they define how to achieve the final desired result and how people and
technology will interact to meet the business requirements;
• Technology: this includes all the tools and techniques that people use
to put the processes in place. Technology is the last element to be considered
in this framework, differently from what might commonly be thought: first of
all the required processes must be defined and the people to implement them
must be hired; finally, the proper technologies, which support the processes and
can be used by the hired employees, are to be acquired by the company.
To be compliant with the shift-left idea, the AST process has to be integrated
from the earliest phases of the SDLC, through the techniques that best fit the
particular stage taken into consideration. Furthermore, it is very important that
all of the components of the software product are security tested, not only the
ones exposed to the external world, such as APIs: if attackers succeed in
getting through the external perimeter, the company's systems will be in serious
danger if not properly protected.
There are other techniques which are not properly included in the AST ones,
but are still very important to assess the security of a system:
• Threat Modelling;
Chapter 3
Analysis and approach
The main reasons why an organization would want an on-premises infrastruc-
ture are security and privacy: if we offer services that deal with sensitive
data, for example in the healthcare or military fields, there are regulations that
forbid these data from being transmitted outside the borders of a given country
or region. In general, relying on a third-party service would imply sharing data
with the service provider, and this increases the risk of a data breach.
Other reasons for which this approach could be chosen are customization and
performance: having full control over which hardware is selected and how it is
deployed can be a determining factor for the performance of the infrastructure.
For example, having a machine that performs a given critical task as close as
possible to the company premises could significantly improve the speed at which
that service is provided, and thus the customers' satisfaction.
From the application security point of view, the state of the art is that
the required analyses are performed manually only at the end of the
development stage: this approach has several disadvantages, which most of the time
translate into a waste of time and resources. Applying security checks only after
finishing development means that if any defect is found it may be necessary to
repeat the whole process that introduced the vulnerability, and the effort that was
previously spent on developing that feature is wasted. This can even mean
a delay in the release of the product and an increase of the time-to-market, which
can cause economic and reputational damage to the company: a study by IBM,
cited by [16], reports the relative costs of fixing defects in different stages of the
product lifecycle in Figure 3.1.
Here we can see that fixing a defect in the Testing phase costs 15 times more
than in the Design phase: for example, if a defect takes one hour
to be fixed in the Design stage, the same defect needs 15 hours to be fixed after
the Testing phase. Also important is that fixing a defect once the product
has already been deployed to production costs 100 times more than in the
Design phase.
IaaS is the most flexible among the three types of offered services, because it
offers just the basic resources upon which the company can deploy various kinds of
products, thanks also to virtualization features.
There are several factors to be taken into consideration while choosing a cloud
provider:
• Reduction of costs: not having to maintain a local data centre or any other
kind of system significantly reduces costs, because there is no need
to buy hardware, nor to maintain it in proper rooms and hire dedicated
personnel. Most importantly, the provided services are paid for based on what
is effectively used, avoiding any waste of economic resources;
• Security: the provider must give some guarantees about its security
measures in order to avoid incidents and data breaches; there must also be
disaster recovery procedures in place to ensure business continuity even
in case of problems;
The integration of security into each phase of the SDLC allows the people
involved in the process to find defects in artifacts as soon as they are created
and fix them before they are released to subsequent phases and environments and
become the source of further errors down the chain, which would result in higher
cost and effort for fixes.
This approach allows security to be embedded into the software product, making it
a design constraint rather than something that is eventually assessed later: this
is the “secure by design” strategy, whose aim is to ensure that a product is
secure from its foundations, by considering several security strategies from the
design stage and enforcing these constraints through suitable architectural choices.
The security patterns adopted in this approach are widely known and reusable
and provide solutions for every required security feature, such as authentication,
authorization, confidentiality, data integrity, privacy, accountability, availability,
safety and non-repudiation [18]. Some basic principles to follow in order to adopt
this strategy are:
• Expecting attacks: it must be assumed that attacks will be performed on
our system; an approach such as domain-driven design can be used to better
identify the possible threats and the proper countermeasures;
The trend for now and the coming years is to gradually migrate to cloud-based
infrastructure: Gartner states that the IaaS market in the public cloud sector grew
by 41.4% in 2021 with respect to 2020, with a total revenue of
$90.9 billion and the top five providers accounting for more than 80% of the market
share, as we can see in Figure 3.3 from [20].
Figure 3.3: Major cloud providers and their 2021 and 2020 revenues compared.
This transition is driven by the advantages that the cloud-based approach offers
compared to the on-premises infrastructure:
• Paying only for the effectively used resources: this is a big advantage
especially for those small and medium enterprises that do not have enough
resources to face the setup of an on-premises data centre. IaaS allows them to
have a small initial investment and a periodic cost, which is always limited to
the resources that are effectively used;
• Security: this is another duty that now falls to the service provider; providers
are usually big companies that can afford dedicated security teams
dealing both with the physical security of the machines and networks and with the
security of the information stored inside them. This is a huge advantage for
those companies that cannot afford a dedicated IT security team.
The continuum phases are divided into two main blocks: first we have a CI block,
which includes the planning, coding, building and testing phases, the ones in which
the software product is actually devised and developed. Following the CI block,
the Release stage leads into the CD block, where the application is
deployed and its functioning is continuously monitored, so that there can be proper
responses to incidents and planning of future improvements and updates.
• Conducting retrospective meetings to analyse what went well, what did not,
and what lessons are to be learned from the security point of view;
• Including in the product backlog user stories that cover security aspects;
• Complying with the Secure by Design guidelines and integrating any component
or control needed to mitigate security threats and enforce any required security
policy.
• Searching for vulnerabilities before the code is pushed into production and
following secure coding best practices to reduce the introduction of flaws and
vulnerabilities;
The things to consider from the security point of view during these stages are:
• Usage of Dynamic Application Security Testing (DAST) tools in the test and
staging environments to check pushed code for flaws and vulnerabilities;
• Usage of security-approved tools to perform analyses on the cloud environments
to verify configuration changes, check compliance, run performance and load
tests;
• Usage of container scanning tools to detect possible misconfigurations;
• Perform penetration tests (pentests) with the support of the Secure by Design
team;
• Usage of feature flags to turn on or off features that are or are not ready to be
released to customers (a minimal sketch follows below).
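The following minimal sketch illustrates the feature-flag mechanism: the new code path is deployed but only exposed when its flag is switched on, decoupling deployment from release. The in-memory flag store is a simplification; real systems read flags from a configuration service.

    # Minimal feature-flag sketch: the flag store here is a plain dict,
    # standing in for a real configuration service.
    FLAGS = {"new_checkout": False, "dark_mode": True}

    def is_enabled(flag: str) -> bool:
        return FLAGS.get(flag, False)

    def checkout():
        if is_enabled("new_checkout"):
            print("running the new checkout flow")
        else:
            print("running the legacy checkout flow")

    checkout()                     # legacy flow: feature deployed but hidden
    FLAGS["new_checkout"] = True   # flipped at runtime, no redeploy needed
    checkout()                     # new flow is now released to customers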
• The definition reported by the Operational Excellence Society [22] states that
“An operating model is a visualisation (i.e., model or collection of models,
maps, tables and charts) that explains how the organisation operates so as to
deliver value to its customers or beneficiaries.”.
What all the possible definitions agree on is the fact that an operating model is
required by any company that wants to deliver value to its customers
and business partners: to do so, it is necessary to have an adequate strategy in
place, made up of clear objectives that the organization wants to pursue. The
aim of the operating model is to combine the high-level company strategy with the
available people, processes and technologies, determining the roles and responsibilities
of each involved party and how they will interact to achieve the targets defined in
the strategy.
As said above, the operating model relies on the three pillars of the PPT
framework, which was introduced in Chapter 2 and will now be further analyzed
component by component.
People
The people pillar is the one that gets things done, by performing the tasks defined
in the processes, possibly leveraging on the available technology.
It is very important for a company to hire the right people, with the correct set
of skills that will help reach the defined objectives. Along with technical
skills, effective communication among team members and with managers and
stakeholders is crucial for the success of a project: we cannot get anything
done correctly if what is required of the product or service to be developed is not
understood first.
Roles and responsibilities must be clearly defined inside a team; this will be deter-
mining in those moments in which tasks are to be assigned to a given employee and
technology is to be selected to support the processes: most of the time companies make
the mistake of thinking first about the processes to put in place and acquiring the
latest generation technology to support them, without thinking about having the proper
people to implement and use them. This kind of error brings organizations to
waste more resources than would actually be needed to put in place a
process supporting a given function.
Once the correct people are found, it is still important to give them proper
training: especially in the area of cyber security, training is one of the least expensive
and most effective tools that we have in our hands. Security training is fundamental
for every employee of the company and, in general, for whoever has access to some
company asset: the Verizon 2022 Data Breach Investigation Report [23] states that
82% of breaches involve a human element, including social engineering techniques,
errors and misuse of resources. Employees must be aware of the security threats
that could affect the company and of the policies and procedures in place both to
prevent and to respond to security incidents.
Processes
Processes define how a given objective is reached: what actions are needed,
in which order, which people have to perform them and what technology is used
to support them. Ideally, the result of a given process should be the same,
independently of who performs the single actions.
There are some key aspects to consider while defining a process [15]:
• Particular attention should be dedicated first to the key steps of the process,
those that have the most impact on the value to be delivered and on overall efficiency.
Once these steps have been properly defined, details and other supporting
processes can be defined to improve the performance of the main phases;
Finally, these processes are a fundamental piece of the DevSecOps strategy, because
only by putting them in place can we ensure that the application has been developed
in a secure way and that the result is a robust product that delivers its functionalities
without exposing the user to any security threat.
Technology
The last pillar upon which the DevSecOps Operating Model is based is
technology: companies too often make the mistake of investing lots of money in the
latest and most functional tools on the market, considering only later the people that
will use them and the processes they are intended to support. The investment in
technology must be planned after having considered the available skills in terms
of people and the tasks to be performed by the employed tools inside the defined
processes; otherwise, more time and effort will be required to train employees to use
the adopted tools and to adapt the processes to the tools' functionalities.
In general, there are several benefits that come from the adoption of an
Operating Model:
• It provides an overview of how the company operates in a given field, who the
involved parties are, their roles and responsibilities, and how they interact;
• Consistency in the adopted practices, allowing excellent results with
efficient processes;
In the following we will analyze the tools, processes and people needed
to define a proper DevSecOps Operating Model.
Chapter 4
Tools, resources and processes
• Assets: the target is to identify the most valuable assets and the threats
related to them;
• Attackers: we want to identify the possible attackers, the targets they
could aim at and gain value from, so that the most valuable assets are
identified and protected;
There are several threat modelling methodologies that provide a structured
process to identify the subject to be protected, the potential threats, the
actions that can be taken to mitigate those threats, and the validation of the model
and of the taken actions: the most popular methodologies include STRIDE, PASTA,
Trike and VAST.
4.1.2 SAST
Static Application Security Testing (SAST) is a white-box testing technique that
focuses on the application source code and scans it in order to identify possible
security vulnerabilities.
This technique is generally used early in the development process and helps
the developers, who are usually focused only on delivering software that meets
the requested functional specifications. It can be applied at several levels (function,
file/class or application level); what differentiates the levels is the
available context: a line of code that could potentially result in a vulnerability
in a function-level analysis could turn out to be non-vulnerable in an application-level
analysis, meaning that it would have been a false positive.
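As an illustration of the kind of finding a SAST tool reports, the snippet below shows a query built by string concatenation, which static analysis would flag as a potential SQL injection, together with the parameterized form that fixes it (the table and function names are hypothetical):

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, name: str):
        # FLAGGED by SAST: untrusted input flows directly into the query text
        return conn.execute(
            "SELECT * FROM users WHERE name = '" + name + "'"
        ).fetchall()

    def find_user_safe(conn: sqlite3.Connection, name: str):
        # FIX: the driver binds the value, so input cannot alter the query
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)
        ).fetchall()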
SAST has several advantages, such as the early detection of issues directly in the
source code, as well as disadvantages, such as the false positives mentioned above.
4.1.3 DAST
Dynamic Application Security Testing (DAST) is a black-box testing technique
that aims at testing an executing application in order to find security weaknesses
and vulnerabilities. It can be carried out both manually and through automated tools,
the manual analysis being more precise but time-consuming and the automated
one more efficient but less precise. Being a black-box technique, DAST tools do
not have access to the source code, but only to the runtime environment of the
application under test.
Unlike SAST tools, these tools do not depend on the particular language or
framework adopted to develop the application, since they analyze its runtime
behaviour, and scans can be constantly updated based on the most recent
vulnerabilities and attacks.
A drawback is that analyses are performed on a running application, which requires
care to avoid, for example, inserting test data into the production databases. Also,
the coverage of DAST analyses may not be optimal, and the set of attacks and
payloads provided by the tool may not be the best one for the particular application
under test, which would then need particular attention for those test cases not
considered by the DAST tool; a toy example of a DAST-style probe follows.
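The sketch below assumes the Python requests library: with no access to the source code, a script payload is sent to the running application and the response is checked for unescaped reflection, a hint of a possible XSS issue. The target URL and parameter are placeholders.

    import requests

    TARGET = "http://localhost:8080/search"   # placeholder application URL
    PAYLOAD = "<script>alert(1)</script>"

    # Send the payload exactly as an external tester would, then observe
    # the response: no source code access is needed.
    response = requests.get(TARGET, params={"q": PAYLOAD}, timeout=5)
    if PAYLOAD in response.text:
        print("possible reflected XSS: payload returned unescaped")
    else:
        print("payload not reflected, or output is properly encoded")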
4.1.4 IAST
Interactive Application Security Testing (IAST) is a technique that brings together
features of SAST and DAST analyses: IAST tools perform analyses in which
they are able to correlate the vulnerabilities found during dynamic testing with
particular lines of code, as static testing would do, thus making life much easier for
the developers that have to fix the source code in order to remove the vulnerability.
To do so, IAST tools clearly need access to both the application source code and
its runtime environment.
4.1.5 SCA
Software Composition Analysis (SCA) is an automated process that aims at identi-
fying the open-source components in a codebase, tracking down their vulnerabilities
and managing their licenses.
SCA tools analyze every component of a project: source code files, manifest
files, package managers, binary files, container images, and so on. The open
source components found are tracked in the Bill of Materials (BOM) and
compared against the most popular vulnerability databases, such as the NVD.
SCA is important because of the very heavy usage of open source libraries
and dependencies in software products, and tracking them is not an easy task
to perform manually: that is where SCA tools come to help, by finding
which known vulnerabilities are present in the developed products and which ones
are effectively used in the codebase, thus making it vulnerable and in need of
remediation, which has to be planned based on the results of the analysis reported
by the tool; a minimal sketch of such a dependency check follows.
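The sketch below shows the core SCA step: one dependency taken from a project's BOM is checked against a public vulnerability database, here the OSV API (osv.dev). The package name and version are placeholders.

    import json
    import urllib.request

    # Placeholder package/version, as they would be read from a project BOM.
    query = {
        "package": {"name": "jinja2", "ecosystem": "PyPI"},
        "version": "2.4.1",
    }
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    # The service answers with the known vulnerabilities for that version.
    with urllib.request.urlopen(req) as resp:
        vulns = json.load(resp).get("vulns", [])

    for v in vulns:
        print(v["id"], "-", v.get("summary", "no summary available"))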
4.1.6 VA and PT
Vulnerability Assessment (VA) and Penetration Testing (PT) are activities that
must be periodically performed on the company assets and systems in order to
find any vulnerability that could be exploited by attackers. A company could be
equipped with the best prevention and protection systems, such as latest generation
firewalls, IDSs and IPSs, but even a small configuration error could compromise
the functioning of those systems: this is why periodic VAs are needed to check that
the company assets are still not vulnerable to attacks.
PTs can be classified based on various characteristics, such as the type of system
to be tested, whether it is performed manually or in an automated way and the
amount of information available to the attackers. Based on the latter, we have
three types of PT:
• Black-box: testers do not know anything about the internal structure of the
system, exactly as an external attacker would, so the system is probed for any
possible vulnerability;
• Gray-box: testers have some knowledge about the internal structure of
the system, such as architectural documents, source code, algorithms and
data structures, so particular test cases can be designed based on this
information;
• White-box: testers have access to all of the resources of the project, sometimes
including the servers on which the application runs. This is the approach that
provides the highest amount of security in the smallest time.
Penetration tests are also designed specifically for the particular type of tested
system, be it an application, a network, a set of APIs, an IoT device, and many
more. They also usually include a series of defined steps, such as planning and
reconnaissance, scanning, exploitation and reporting.
4.1.7 RASP
Runtime Application Self-Protection (RASP) is a technology that protects
the system from external threats by leveraging runtime data and the context of
the application to be protected, differently from technologies like firewalls, which can
rely only on network information to detect and block possible attacks. When a
possible threat is detected, RASP tools can perform several actions, such as termi-
nating a user's session, sending a warning to the user and alerting the security teams.
The target of RASP tools is to close the gap left by AST techniques and
network perimeter controls, which do not have enough visibility of what really happens
in an application's behaviour in real time and thus do not provide protection against
those vulnerabilities that slip through the review processes or those threats that
were not foreseen during the design phases. RASP tools can be compared with IAST
tools, with the main difference that the latter's target is to detect vulnerabilities
inside the application, while RASP aims at blocking attacks while the
application is in its operation phase; a toy sketch of the idea follows.
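The sketch below conveys the RASP idea: the check lives inside the running application and inspects the actual statement about to be executed, context that a network device never sees. The detection logic is deliberately simplistic and purely illustrative.

    class AttackDetected(Exception):
        pass

    def rasp_guard(sql: str) -> None:
        # inspect the exact statement the application is about to execute
        suspicious = ["' or '1'='1", ";--", " union select "]
        if any(marker in sql.lower() for marker in suspicious):
            # a real RASP agent could also terminate the user's session
            # and alert the security teams
            raise AttackDetected(f"blocked suspicious statement: {sql!r}")

    def run_query(sql: str) -> None:
        rasp_guard(sql)
        print(f"executing: {sql}")

    run_query("SELECT * FROM users WHERE id = 42")        # allowed
    try:
        run_query("SELECT * FROM users WHERE name = '' OR '1'='1'")
    except AttackDetected as alert:
        print(alert)                                      # blocked at runtime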
There are many types of security analysis, each with its own target, and for each
type a number of different tools from different vendors exist: it is therefore important
to understand the environment in which we are developing our applications, how
they are developed and the level of security that we want and can achieve, so that
the proper tools are selected. Also note that the main target of Agile methodologies
and CI/CD is to speed up the development and deployment of new functionalities
and updates, while security analyses are an additional step that may be significantly
time-consuming: thus a balance between the security and efficiency needs must be
found.
To approach this phase we need to define some metrics and factors that will
influence our choices.
First of all we need to remember that the SDLC is a complex process composed of
many phases, each of which takes up some time: security checks, analyses and
verifications are additional steps that require additional time, so it is essential to
correctly identify the context of the product in order to find a balance between
security and performance, applying the correct measures to achieve a good level
of security without slowing down the development and deployment processes.
If for any reason we do not have access to the source code of the application
to be tested, we can use a DAST tool to perform dynamic analysis on the
running application, trying to exploit possible vulnerabilities and verify whether they
are there to be eventually exploited by a malicious user, who could then perform
some kind of attack against our system.
Another factor that can be taken into consideration for almost every system to be
developed is the usage of third-party open source components and libraries: since
the source code of these elements is by definition available to everyone, it is easy
for developers to scan those components to find out whether they contain any known
vulnerability. On the other hand, it is also easy for attackers to find ways
to exploit weaknesses in components that are sometimes used in many
different products developed by very important companies; see the Log4j case that
happened in December 2021. It is for this reason that it may also be very important
to use an SCA tool to identify and eventually update those libraries that have known
vulnerabilities inside them, before they can be exploited causing important damage
to the company that used them to implement its products.
Scanlon in [24] proposes a list of factors to take into consideration while choosing
the proper security analyses and the tools to perform them, spanning from factors
regarding the developed product's nature to the restrictions that are common to
the selection of any kind of tool to be used by the company for reaching its business
targets.
As we can see in Figure 4.1 from [24], some factors relate to the product
to be developed and its most suitable types of security analyses, while other
factors relate to the particular tool and vendor selection: the latter are the same
factors considered for any kind of technology tool selection and are not related to
the particular application to analyze.
It is clear that the best scenario would be the one in which all of the suitable
and possible security analyses are performed, but if for some reason (budget,
efficiency) only one type of security tool can be used, we must make sure to choose
the best tool for our context:
• If all or most of the application was developed in-house, we should choose a
SAST tool, which performs the analysis that gives the most accurate
and detailed results about security issues;
• Instead, if the application comes from a third party, a DAST tool could be
the choice to test the provided executable for vulnerabilities that could be
exploited at runtime. The usage of SCA tools should also be considered if
many open-source external libraries have been used. Also, we may consider
asking the third party that owns the source code to provide some kind of
certification that the code has been security tested and does not contain any
kind of flaw or vulnerability.
The flowchart in Figure 4.2 represents the steps to consider when choosing
the proper security tools.
If source code is not available, we can rely mostly on DAST tools and on techniques
such as fuzzing and negative testing:
• Fuzzing is a technique that aims at testing as many as possible of the inputs
that an application can receive, observing the application's behavior in response
to these inputs. Most of the time it is not feasible to test every single input
of the application because it would take too much time, so only significant
inputs are taken into consideration (a minimal sketch follows below);
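The following minimal fuzzing sketch generates many random inputs, feeds them to the component under test and records any input producing an unexpected crash as a finding. Real fuzzers, for example coverage-guided ones, choose inputs far more cleverly; the target function here is a hypothetical toy.

    import random
    import string

    def parse_age(text: str) -> int:
        # hypothetical toy target with a latent bug: an input like "0"
        # parses fine but triggers a ZeroDivisionError further down
        return 100 // int(text)

    crashes = []
    for _ in range(1000):
        length = random.randint(0, 5)
        sample = "".join(random.choices(string.digits + " -x", k=length))
        try:
            parse_age(sample)
        except ValueError:
            pass                      # expected: input correctly rejected
        except Exception as exc:      # unexpected crash: this is a finding
            crashes.append((sample, exc))

    print(f"{len(crashes)} unexpected crash(es) found")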
If source code is available, the best choice is to perform SAST analysis. If
external factors allow it, it would be even better to apply both SAST and DAST
tools: in this case we may consider using IAST tools, which analyze the runtime
behavior of the application but can also correlate it with the proper part of the
source code, so that we have a better identification of the vulnerability and of how
to fix it. Build-and-compare tools could also be a good option: with these tools
we can check whether the delivered source code can be built into an executable
identical to the provided one, ensuring that the external party did not
inject anything after compiling and building the source code; a sketch of this
check follows.
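A hedged sketch of such a build-and-compare check: the delivered source is rebuilt and the resulting artifact is compared byte-for-byte (via hashes) with the shipped binary. The paths and build command are placeholders, and the check is only meaningful if the build process is reproducible.

    import hashlib
    import subprocess

    def sha256(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Placeholder build command: rebuild the artifact from the sources.
    subprocess.run(["make", "build"], check=True)

    # Compare the rebuilt artifact with the binary the third party shipped.
    if sha256("build/app.bin") == sha256("delivered/app.bin"):
        print("artifacts match: nothing was injected after the build")
    else:
        print("MISMATCH: the delivered binary differs from the rebuilt one")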
Third-party components
As previously mentioned, it is very common for an application to make use of
external libraries to add new functionalities and features: those third-party
components could contain vulnerabilities that leave the products using them
exposed to different kinds of attacks, and if the vulnerable library is a
very popular and extensively used one, there can be very serious problems.
Considering also the factors mentioned up to this point, the suggestion would
be to use a combination of SAST and SCA tools if we have extensive access to the
source code and several external libraries were used, or DAST and SCA
tools if the application was developed by a third party and we want to make sure
that no vulnerable component has been included in our product.
Development Model
The methodology chosen to manage the lifecycle of the product also impacts the choice of the most suitable type of security analysis:
• In a standard Waterfall SDLC model, with long and sequential phases, SAST and DAST tools fit well and they should be integrated as early as possible in the development process;
• In Agile and DevOps models, with short and frequent release cycles, the analyses must instead be automated and integrated into the CI/CD pipeline, so that they can keep up with the pace of the releases.
Target Platform
The type of platform on which the product will run (e.g. web, mobile, embedded) also drives the selection of the proper tools and techniques to perform AST.
Integration level
This factor relates to how early in the development process AST tools can be integrated. The general rule here is the shift-left idea: when possible, we should integrate security analysis tools as early as possible in the software lifecycle, so that bugs, flaws and vulnerabilities can be located and resolved before proceeding to the subsequent phases and environments, where they become increasingly expensive to fix.
Compliance
Depending on the target of the developed product and on its context of application (e.g. healthcare, banking, etc.), it may be necessary to comply with policies and standards regarding information security and privacy: these include regulations defined by the government bodies of the countries in which a company operates, as well as internal policies and guidelines defined by the company itself, with which the employees and the products they work on must comply.
For policies and regulations that are internationally known and adopted, AST tools may already offer integrated features that allow companies to be compliant with such regulations. For compliance with internal policies, instead, more custom rules and controls may be needed, and those have to be defined and integrated in the overall process.
Compliance mostly regards how data are processed and stored, which is why database security scanners may be useful in this case. Also, there may be cases in which the rules to be followed become very numerous and complex, so we may need correlation and ASTO (AST Orchestration) tools to have a complete overview of what is compliant, of what needs to be fixed and of how to fix it.
• If code reviews point out bad coding practices, consider adopting stricter rules in SAST tools, so that vulnerabilities embedded in the source code are spotted and resolved before the code goes into production.
Summary
As already said, the factors just presented have to be considered altogether: an ideal solution would of course include all of the different types of analysis, in order to eliminate as many vulnerabilities, bugs and flaws as possible, always remembering that the complete absence of security issues can never be achieved; thus the adoption of the measures mentioned so far does not exclude the adoption of further tools to ensure a higher level of security.
The considerations just made help prioritize one type of analysis over another when, for any reason, it is not possible to integrate all of them in the development process, so that the security of the product is always optimized as far as possible. As a general rule it is always better to first understand and integrate the tools that perform the analyses common to all types of application, independent of their target platform, that is SAST, DAST and SCA tools, eventually adding IAST and hybrid tools that help conduct these types of analyses. Then, on top of these, further tools specific to the product's target platform and context can be added.
We must not forget that there are also many free and open-source products, which should be tried out before choosing to buy a commercial solution: trying these solutions helps understand which aspects should be prioritized and whether part of the budget can be saved by finding a solution that satisfies the business requirements without having to buy an additional product.
It is also important that the selected tools can be easily integrated with the development tools currently used in the different phases of the SDLC: for example, it could be very useful to have a SAST tool that scans code as soon as it is pushed into the code repository, so integration between the chosen security tool and the code repository platform already in use is important. A similar idea applies to IDEs and their plugins: as the shift-left approach suggests, it would be great to have a tool that points out vulnerabilities as soon as they are introduced in the source code. This helps the developer both to avoid adding flaws to the application and to develop a more mature security culture that, in the long run, will lead to more secure applications overall.
Technical Objectives
Among the product requirements there may be technical requirements such as controls related to particular kinds of vulnerabilities, like SQL injection, cross-site scripting, input validation and sanitization, ACLs and access management: we must make sure that the adopted AST tools cover the requirements of the developed products. A sketch of one of these controls is shown below.
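To make the first of those controls concrete, the following hedged Java sketch shows the classic SQL injection pattern that SAST tools are expected to flag, together with its parameterized remediation; the table and column names are purely illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserDao {

    // Vulnerable: user-controlled input concatenated into the query,
    // a typical SQL injection finding reported by SAST tools.
    public ResultSet findUserUnsafe(Connection conn, String username) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + username + "'");
    }

    // Remediated: a parameterized query keeps data separated from SQL code.
    public ResultSet findUserSafe(Connection conn, String username) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}
```

Checking that a candidate tool actually reports the unsafe variant, and stays silent on the safe one, is a quick way to validate it against this kind of technical objective.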
It must also be considered that the initial implementation and integration of AST tools requires time and effort from specialized people, who need to learn how the tool works, how to properly integrate it and how to tune it to the specific requirements of the product, in order to avoid false positives and false negatives and to continuously improve the finding and remediation of vulnerabilities.
4.3 Tools
We will now present some of the tools that have been selected to be used in the DevOps pipeline, starting from the on-premises tools already in place, up to those that will be used in the cloud-based pipeline. Each phase of the DevOps process has a set of supporting tools from which we can pick one or more; on the other hand, a single tool may support one or more phases of the development process.
There are several hosting services that allow creating and managing Git repositories online, among which we find GitHub and GitLab.
Maven
Maven is a build automation tool by the Apache Software Foundation that offers several functions to automate the build process, making life easier for the developer and the process itself less error prone.
Some of the phases handled by Maven are code compilation, binary packaging, running tests on the source code, deployment on target systems and, finally, generation of the project documentation. One of the tool's main components is the Project Object Model (POM), stored in the pom.xml file, which defines the structure of the project, its dependencies, tests, documentation, and so on.
Jenkins
Jenkins is an open-source tool written in Java that provides CI/CD features; it is distributed both as an application to be executed on a web server, and then accessed remotely through a web browser, and as a Docker image to run in virtual environments.
A big plus of this tool is the wide selection of community-developed plugins that allow easy integration of many different tools into the development pipeline.
Nexus
Nexus Repository is a tool that allows managing build artifacts through a central repository, from which any kind of stored artifact, such as binaries, components, libraries and containers, can be retrieved whenever needed, for example during the build process of a product. The artifacts in the repositories are versioned, as in other types of repository managers, and they provide a single source of truth for anyone in the company.
This tool can be integrated with the most popular IDEs and CI/CD tools, such as Eclipse, IntelliJ IDEA, Jenkins and Visual Studio.
Docker
Docker is an open-source tool that allows creating, testing and distributing applications in a very easy way by using containers: compared to virtual machines, each of which needs its own operating system, Docker allows deploying several containers on top of the same OS; the only component that needs to run is the Docker Engine, which is in charge of executing the containers.
What Docker creates are Docker images: each image is a read-only file containing everything that an application needs to run, including its code and dependencies. Containers are live running instances of Docker images, operating on temporary data that exist only as long as the container exists.
Containers are much easier and faster to deploy than virtual machines; for this reason they are increasingly used in Agile development models.
Kubernetes
The main purpose of this tool is to manage distributed environments and their resources, keeping containerized applications always available. This is done through a number of features, such as:
• load balancing across different containers if the traffic on one of them is too high;
The desired state of the containerized application is described in configuration files, in YAML or JSON format, which are read by the machine and translated into proper configurations, making the system move from the actual state to the desired state through a series of transitions.
OpenShift
OpenShift is a container orchestration platform by Red Hat, built on top of Kubernetes, which adds developer-oriented and enterprise-oriented features such as integrated build pipelines and stricter default security policies.
JUnit
JUnit is a framework for unit testing of Java applications; it had a very important role in the spread of Test-Driven Development (TDD), emphasising the importance of tests in the development of stable software and in the reduction of time-consuming debugging activities later on.
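As an illustration, a minimal JUnit 5 test class could look like the following sketch; PriceCalculator and its applyDiscount() method are hypothetical names used only for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical class under test, kept here to make the example self-contained.
class PriceCalculator {
    double applyDiscount(double price, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent must be in [0, 100]");
        }
        return price * (100 - percent) / 100.0;
    }
}

class PriceCalculatorTest {

    private final PriceCalculator calc = new PriceCalculator();

    @Test
    void discountIsAppliedCorrectly() {
        assertEquals(90.0, calc.applyDiscount(100.0, 10), 1e-9);
    }

    @Test
    void invalidDiscountIsRejected() {
        assertThrows(IllegalArgumentException.class, () -> calc.applyDiscount(100.0, -5));
    }
}
```

Run by Maven's test phase, failures in classes like this one stop the build before a defective artifact is produced.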
Ansible
Ansible is an open-source automation tool that allows configuration management, application deployment and task automation on remote systems, described through declarative playbooks.
Selenium
Selenium is a range of tools for functional testing in web browsers: it allows authoring tests through Selenium IDE and simulating the behaviour of a real user through Selenium WebDriver and Selenium Grid. It also provides a domain-specific language (DSL) called Selenese, and test bindings for a number of popular programming languages, including Java, Python, PHP, C# and JavaScript.
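A minimal WebDriver sketch in Java could look like the following; the URL and element locators are illustrative assumptions, and a local chromedriver installation is assumed:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

// A minimal WebDriver sketch simulating a user performing a search on a page.
public class SearchSmokeTest {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // requires a local chromedriver
        try {
            driver.get("https://example.org/");
            WebElement searchBox = driver.findElement(By.name("q"));
            searchBox.sendKeys("devsecops");
            searchBox.submit();
            // A real test would assert on the resulting page content here.
            System.out.println("Title after search: " + driver.getTitle());
        } finally {
            driver.quit(); // always release the browser session
        }
    }
}
```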
JMeter
Apache JMeter is an open-source tool for load and performance testing, allowing to simulate heavy workloads on applications and services and to measure how they behave under stress.
Amazon Web Services
Amazon Web Services (AWS) is a company of the Amazon group that offers cloud computing services, with over 200 offered products. Their services are offered on an on-demand basis, with high availability, redundancy and security features: the final cost of the used services is based on the combination of the type and quantity of utilized resources, usage time and desired performance.
Terraform
Terraform is an Infrastructure as Code tool by HashiCorp that allows defining and provisioning infrastructure resources through declarative configuration files, with support for the major cloud providers.
Datadog
Datadog is a cloud infrastructure monitoring service, providing dashboards, alerting systems and visualization of metrics. It supports the main cloud platforms, including AWS, Google Cloud Platform, Microsoft Azure and Red Hat OpenShift.
SonarQube
Code analysis with SonarQube is one of the steps of the common development process, as we can see in Figure 4.4.
The cycle starts with the developer writing code in the chosen IDE (possibly equipped with the SonarLint plugin, see later). Code is then committed and pushed onto the repository, and finally merged when the work is completed. The files are then checked out by the continuous integration tool, the project is built, unit tests are run and finally the SonarQube scan is executed. The scanner then publishes the results on the SonarQube server, so that they can be analysed from the web interface and/or developers can receive notifications through e-mail or directly in their IDEs: based on these results, either some remediation action may need to take place or the developer can continue his/her work on the source code.
We will now give an overview of the tool and its features by looking at the web interface, starting from the Projects homepage.
As we can see in Figure 4.5, this page shows all the projects that have been onboarded onto the platform and analyzed at least once, and allows applying filters based on some metrics, such as the rating for Reliability, Security, Security Review and Maintainability, the size of the project, the coverage of the tests, and so on. Projects can be created mainly in two ways:
• manually, by choosing the desired CI platform in which we want to integrate SonarQube, be it Jenkins, GitHub Actions, Bitbucket Pipelines, GitLab CI, Azure Pipelines or another CI platform, also having the possibility to run the analysis locally;
If we go into the project details, we find a view like the one in Figure 4.6.
Here we find an overview of the results of the source code analysis performed on the project. There are two views: one on the New Code, which reports the results of the analysis on the code added or modified since the last analysis, and one on the Overall Code, which reports the results on all of the source code of the project.
SonarQube evaluates a few metrics and measures on the project and, for some of them, a rating expressed as a grade from A to E is given. The evaluated metrics are:
• Bugs: the total number of bugs found in the source code; based on this metric, a rating about the Reliability of the project is given;
The ratings seen above are assigned based on precise thresholds for each characteristic, as reported in the tables below.

Reliability rating (based on Bugs):
  A   0 Bugs
  B   At least 1 Minor Bug
  C   At least 1 Major Bug
  D   At least 1 Critical Bug
  E   At least 1 Blocker Bug

Security rating (based on Vulnerabilities):
  A   0 Vulnerabilities
  B   At least 1 Minor Vulnerability
  C   At least 1 Major Vulnerability
  D   At least 1 Critical Vulnerability
  E   At least 1 Blocker Vulnerability

Security Review rating (based on the percentage of reviewed Security Hotspots):
  A   At least 80% reviewed
  B   From 70% to 80% reviewed
  C   From 50% to 70% reviewed
  D   From 30% to 50% reviewed
  E   Less than 30% reviewed

Maintainability rating (based on the technical debt ratio, i.e. the estimated remediation effort divided by the estimated development effort):
  A   Ratio from 0 to 0.05
  B   Ratio from 0.06 to 0.1
  C   Ratio from 0.11 to 0.2
  D   Ratio from 0.21 to 0.5
  E   Ratio from 0.51 to 1

For example, a project requiring an estimated 4 days of remediation effort against 100 days of development effort has a ratio of 0.04 and therefore obtains an A Maintainability rating.
If we switch to the Issues tab of the project, we find a page like the one in Figure 4.7.
Here we can see the details about all of the Bugs, Vulnerabilities and Code Smells found in the project; on the left are the filters that can be applied to the issues, for example:
• Resolution: an issue may have been resolved in different ways and, depending on its status, may have different Resolutions: Closed issues may be Fixed or Removed, while Resolved issues may be False Positive or Won't Fix;
The two main components on which SonarQube analyses are based are Quality Profiles and Quality Gates.
Quality Profiles are sets of Rules, defined for each language, that are used during the analysis of a project: rule violations raise Issues. SonarQube defines a default Quality Profile for each supported language and, ideally, that profile should be used for the analysis of all projects developed in that language, but custom Quality Profiles can be defined based on the features of the particular project to be analysed.
The page for defining Quality Profiles is shown in Figure 4.8.
Quality Gates are sets of conditions that determine whether a project is ready for release or not: each Quality Gate is composed of one or more thresholds on given metrics and, if at least one of these thresholds is crossed, the Quality Gate fails and the project build is stopped. Each condition consists of:
• A metric;
• An operator;
• A threshold value.
The example in Figure 4.9 is the default SonarQube Quality Gate, but custom Quality Gates can be defined for our projects.
Conditions can be added to a Quality Gate by choosing the metric we want to consider and the desired threshold, as in Figures 4.10 and 4.11 below. Each condition can apply either to the New Code only or to the Overall Code.
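As a sketch of how a pipeline step could consume the Quality Gate outcome, the following Java snippet queries SonarQube's documented api/qualitygates/project_status Web API endpoint; the server URL, token variable and project key are placeholder assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class QualityGateCheck {

    public static void main(String[] args) throws Exception {
        String server = "https://sonarqube.example.com"; // placeholder server URL
        String token = System.getenv("SONAR_TOKEN");       // token used as user, empty password
        String projectKey = "my-project";                  // placeholder project key

        String auth = Base64.getEncoder().encodeToString((token + ":").getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(server + "/api/qualitygates/project_status?projectKey=" + projectKey))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body contains projectStatus.status = "OK" or "ERROR";
        // a real pipeline step would parse it and fail the build on "ERROR".
        System.out.println(response.body());
    }
}
```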
Mend SCA
Mend, formerly known as WhiteSource, is an application security company that provides different kinds of solutions for securing applications, starting from their development: indeed, it provides SAST, SCA and other solutions to be integrated in the SDLC and in the different adopted tools.
Here we focus on the Mend SCA solution, which performs security analysis on the third-party and open-source libraries and components integrated in a product: the tool analyses our product and reports whether any known vulnerabilities are present in the used libraries. To do so, the Mend Vulnerability Database is built and consulted: this database starts from the most important vulnerability databases, NVD above all, but Mend takes into consideration the facts that:
• the open-source world is very decentralized, and many times it can be very difficult to find new vulnerabilities;
• about 20% of the publicly disclosed vulnerabilities are not included in the NVD.
So, the research team at Mend came up with methods to identify vulnerabilities in open-source components as soon as they appear, include them in the vulnerability database and finally allow the users of the SCA solution to know as soon as possible whether their projects' external dependencies include any vulnerability that may need remediation in a very short time. This is done by indexing many sources besides the NVD and by analysing billions of open-source files, millions of open-source libraries and many package managers, technologies, languages, and so on.
To make life even easier for developers and security teams, Mend also provides a feature called Effective Usage Analysis (EUA), which reports whether the vulnerabilities present in the external libraries are actually reachable from the source code of the analysed project, thus excluding false positives and giving a better overview of the needed remediation actions and of how they should be prioritized, in order to prevent the product from being vulnerable to attacks.
One of the main elements on which software composition analysis is based is the Bill of Materials (BOM), which in the case of software products becomes the Software Bill of Materials (SBOM). The BOM is a concept widely adopted in all manufacturing fields: it is the list of all the components, materials, parts, etc. needed to produce the final product, and it is a very important element in supply chain management (SCM). In the field of software engineering we do not have physical parts or components to be assembled into a machine; instead we have many software libraries and tools, either open-source or proprietary, which are used together in order to produce the final product. The SBOM is used to track every component used in the development of the software product, and that turns out to be useful both for producers and customers:
• the producer can use the SBOM to easily identify the components which are out of date or vulnerable and need to be updated;
• the consumer can use the SBOM to perform vulnerability and license analyses, which can be useful for risk management.
SBOMs are used to prevent supply chain attacks, which are a major threat that software producers have to deal with: a study from Argon [26] states that the number of supply chain attacks tripled in 2021 with respect to 2020, mainly due to three factors, among which:
• the upload of bad source code into repositories, which translates into bad quality artifacts, full of defects, that require a lot of time to be fixed, thus decreasing productivity.
Mend provides a solution [27] that allows companies to easily produce an SBOM for their products, something the US federal government also requires for the software supplied to its agencies: what is required is a formal, machine-readable inventory of all components and dependencies used in building the software. The Mend SBOM not only lists all the components a software product is made of, but also provides remediation paths for those components that need to be updated due to known vulnerabilities. It is produced in the form of an SPDX document, which is machine-readable and provides an inventory of all the software components, their dependencies and their hierarchical relationships.
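To give an idea of what such an inventory looks like, the following Java sketch emits a minimal SPDX-style tag-value document; it is only an illustration under stated assumptions (the component names are taken from the examples in this chapter), and real SBOM generators produce far richer documents including licenses and relationships:

```java
import java.util.List;

// Emits a minimal SPDX-like tag-value inventory of a product's components.
public class SbomSketch {

    record Component(String name, String version) { }

    public static void main(String[] args) {
        List<Component> components = List.of(
                new Component("spring-core", "2.5"),
                new Component("log4j-core", "2.17.1"));

        StringBuilder doc = new StringBuilder();
        doc.append("SPDXVersion: SPDX-2.2\n");   // document-level metadata
        doc.append("DataLicense: CC0-1.0\n");
        doc.append("DocumentName: demo-product-sbom\n\n");
        for (Component c : components) {          // one package entry per component
            doc.append("PackageName: ").append(c.name()).append('\n');
            doc.append("PackageVersion: ").append(c.version()).append("\n\n");
        }
        System.out.print(doc);
    }
}
```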
The Mend SCA solution provides different types of interfaces and different ways to be integrated in the current development pipeline: it offers a web interface with rich dashboards and reporting tools that provide any kind of data on the analyses performed on the projects, and it can be easily integrated with the most used IDEs, such as Visual Studio, Eclipse and IntelliJ IDEA, and code repository platforms, such as GitHub, GitLab and Bitbucket.
We now analyse the main features of the web interface, starting from the product main page: each product onboarded on the Mend SCA platform has a dashboard like the one in Figure 4.12, providing a high-level overview of the results of the last analysis performed on it.
This page presents a very rich dashboard, reporting several kinds of information about the product in different forms. In the bottom-left part we see two tabs about the composition of the product:
• Project Summary: this tab reports all of the projects that compose the product and the number of libraries included in each project;
• Libraries: here are reported the number and the list of all of the libraries included in the product, with their name, their license(s), the number of projects in which they are included and the projects' names.
In the Product Alerts tab, at the top-left, we see some numbers referring to the alerts related to the overall product; focusing on the security alerts, we find two measures:
• Per-Library Alerts: how many libraries included in the product present one or more vulnerabilities;
• Per-Vulnerability Alerts: the overall sum of the vulnerabilities found in the libraries.
For example, a product using two vulnerable libraries with three and two known vulnerabilities respectively would show 2 Per-Library Alerts and 5 Per-Vulnerability Alerts.
These vulnerabilities are divided into three levels of severity, depending on their CVSS score and following the CVSS v2.0 ratings system. The CVSS v2.0 and CVSS v3.0 classifications of vulnerability severity are reported in the tables below.

CVSS v2.0 ratings:
  Low       0.0 - 3.9
  Medium    4.0 - 6.9
  High      7.0 - 10.0

CVSS v3.0 ratings:
  None      0.0
  Low       0.1 - 3.9
  Medium    4.0 - 6.9
  High      7.0 - 8.9
  Critical  9.0 - 10.0

The Effective Usage Analysis feature also classifies the security alerts according to their effectiveness:
• Effective (red flag): vulnerabilities which are actually reachable from the product's source code and thus need to be prioritized when planning remediation actions;
• Ineffective (green flag): security alerts which are not actually exercised by the source code and therefore do not need to be prioritized, but should still be fixed in order to avoid them becoming effective.
Each of the bars representing the effectiveness of the security alerts is then divided into the colors representing the severity of the vulnerabilities, so as to know the number of vulnerabilities of each severity per effectiveness level.
If we go into the details of the vulnerability analysis, we find a page like the one in Figure 4.13.
Here we have a view of the security alerts by library, that is, a list of all the libraries used in the onboarded products which have been found to contain a known vulnerability: for each library we have the total of the security alerts and the total vulnerabilities per severity. For example, in the figure above we see that the library “spring-core-2.5.jar”, used in the product “Demo Product” under the project “Demo Data”, has three medium severity vulnerabilities and one low severity vulnerability, for a total of four alerts.
If we go into the details of one of these libraries, we find the details of the vulnerabilities it contains, as in Figure 4.14.
Here we can find an overview of all the vulnerabilities of the library, a short description of each problem and the suggested remediations. Vulnerabilities are uniquely identified by their Vulnerability Id, a code that can start either with CVE, for vulnerabilities catalogued in the NVD, or with WS, for vulnerabilities found by the Mend security research team. Each vulnerability reports both the CVSS v2 and v3 scores, for the sake of completeness.
In the DAST world we find tools such as Netsparker, Acunetix and Indusface WAS: each tool has its own characteristics in terms of supported application types, integration and reporting capabilities, and more.
IAST tools are often provided by companies that already offer SAST and DAST solutions, since this type of analysis brings together several features of static and dynamic analysis. Indeed, among the providers of IAST solutions we find Veracode, HCL (AppScan) and Checkmarx.
IDE plugins are also a very important tool to be integrated in the development process: besides the technical advantages, having a tool directly used and seen by the developer that points out possible security flaws helps broaden his/her security culture, making it more likely that security aspects will be considered from the beginning of future projects. An example of this, as mentioned above, is the SonarLint plugin by SonarSource: it is free and open-source and it is available for the most popular IDEs, including Eclipse, IntelliJ IDEA, Visual Studio and VS Code.
SonarLint provides instant feedback about bugs and vulnerabilities as code is written, so that the developer is made aware of them and can immediately fix them by following the provided remediation suggestions. This lets the developer learn the best coding practices and how to apply them so as not to produce bad quality code, providing a long-term improvement for the people involved.
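As an illustration of the kind of issue such plugins report in real time, the following hedged Java sketch contains two classic findings, a hardcoded credential and a resource leak; the class and constant names are invented for the example:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ConfigLoader {

    // Typical "hardcoded credential" finding: secrets do not belong in source code.
    private static final String DB_PASSWORD = "s3cr3t";

    // Typical "resource leak" finding: the reader is not closed on all paths.
    static String readFirstLine(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        return reader.readLine(); // flagged: use try-with-resources instead
    }

    // Remediated version: try-with-resources closes the reader automatically.
    static String readFirstLineSafe(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}
```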
4.4 Resources
4.4.1 Security
The security team certainly plays a central role in the DevSecOps process: while the security culture should be as widespread as possible (for example by adopting a Security Champions model, see later), the security team is in charge of verifying that the required controls are put in place, that the developed software meets the security requirements, that remediation actions, when needed, are prioritized, planned and executed, and many more tasks. It is fundamental that the team works with developers to integrate security tools and practices seamlessly into the existing SDLC.
4.4.2 Developers
Along with the security team, the development team is another pillar of the DevSecOps process: developers are the ones primarily in charge of not introducing bugs, flaws and vulnerabilities in the products' source code and, to do so, they must be adequately trained in the DevSecOps culture. The main issue in the relationship between developers and security is that the former often see the latter as an obstacle in their journey of delivering software quickly: their target is to develop and deploy working software, possibly dealing with security issues later on or when more budget is available. What must be understood is that not preventing security problems may lead, in the worst-case scenario of a security breach, to a number of problems, including lots of additional remediation work, reputational damage and economic damage for the company.
4.4.3 IT Operations
The IT Operations team is in charge of setting up and maintaining the infrastructure upon which the tools and the final developed products and applications will run: this task has to be performed keeping into consideration the business requirements about service delivery for all users.
4.4.4 DevOps
The DevOps team has the duty of effectively implementing the homonymous methodology to develop and deploy software in an Agile way, through Continuous Integration and Continuous Deployment. This team has several tasks whose aim is to reduce the distance between the development and operations teams, trying to automate as much as possible the flow between the two environments in a seamless way through proper tools.
• Security compliance: like all of the company's systems, the DevOps ones must be compliant with the organizational security policies. CI/CD also requires security to be integrated starting from the planning phase, in order to have a secure product at the end of each release cycle, and that is the main target of DevSecOps: this integration of security in DevOps environments must also be automated, through the integration of AST tools and scripts that check at each delivery that the product is still security compliant and that no deviation has been introduced. This task is clearly conducted in collaboration with the security team.
As we see in Figure 4.16, the top-most entity is the Tribe, with its Tribe Leader: a Tribe is a group of cross-functional Squads, and the Tribe Leader is in charge of coordinating and overseeing the work of all the Squads.
The cross-functional team that has end-to-end responsibility for the development of the project is the Squad: a Squad is usually composed of six to ten people, each of them having different skills and giving his/her contribution to the final product. Each Squad may include different figures, depending on the type of developed product, but there are two which are mandatory in every team; one of them is the Security Champion.
Security Champions are part of the Security Chapter of their Tribe: each Security Chapter has a Security Chapter Lead who, unlike the Security Champion, covers a full-time role as a security expert belonging to the Vodafone Global Cyber Security Team and is in charge of:
• coaching the Security Champions on the company's latest security policies and on the current best practices to be adopted in project development;
• giving recommendations about the best mitigation actions;
• reporting on risks and compliance for the products the Tribe works on;
• defining Secure by Design blueprints, patterns and design principles.
This model ensures that a secure Agile development model is adopted throughout the company, delivering products both fast and securely.
4.5 Processes
In the Vodafone definition of the DevSecOps operating model there are two main processes to be considered and integrated in the product development phases: the SDLC and the SPDA.
We have already defined the Software Development Life Cycle as the process that manages the idea, design, implementation and maintenance of the software product; we will later deep dive into the detailed phases that compose it.
The Security and Privacy by Design Assurance (SPDA) process has been defined by Vodafone with the target of assuring that every new product, service and operation meets the company requirements in terms of compliance with security and privacy organizational policies and with applicable external laws and regulations. This process is composed of several phases, each of them mapping to one or more phases of the product development, and it can be integrated both into Waterfall and into Agile development models.
The phases that compose the SDLC process are the following:
1. Idea/Concept: this is the very starting phase, in which the concept of the new product comes to life and we want to translate it into a concrete solution to be designed, implemented and deployed;
2. Analysis: the feasibility of the proposed solution is assessed from several points of view:
• Economic: does the company have the required resources to develop this product?
• Legal: based on the product's scope, is the company able to guarantee compliance with the organizational policies and external regulations?
• Operational: can the company satisfy the requirements given the defined operational framework and workflows?
• Technical: what are the available technologies and human resources to support the SDLC process?
• Schedule: can the product be delivered in time?
4. Design: the product architecture, functions and operations are defined in detail through screen layouts, business rules, process diagrams, pseudocode and other documentation;
7. Deployment: the final product is put into production and made available to the final users;
9. Disposal: in this final phase, work is planned to dismiss the current system and transition to a new one, replacing the proper hardware and software and archiving or destroying information depending on the privacy requirements. This is a very sensitive step, because it must be ensured that no obsolete system is left available for malicious users to exploit it and eventually disclose information in an unauthorized way.
Chapter 5
DevSecOps Operating Model
5.1 Overview
As the Agile methodology requires, the main idea is to have a cross-functional team composed of several types of figures, each with its own set of skills, giving a significant contribution to the development of the product: this, as previously defined, is a Squad, a group of people that will have end-to-end responsibility for the new product, service or feature developed. Depending on the type of product, the proper professional figures will be identified and included in the Squad.
In the DevSecOps Operating Model, each phase of the Continuum will be mapped to the corresponding supporting processes, which are the SDLC and the SPDA presented in the previous chapter, and to the supporting technologies.
5.2 Plan
People
In this starting phase of the product lifecycle, everyone is involved in the definition of the product characteristics, features and requirements, from the Product Owner and the stakeholders up to the developers that will implement the product. Of course the Security and Privacy teams play a crucial role from the beginning, also by employing the Security Champions.
Processes
The phases of the SDLC and SPDA processes involved at this stage of the model are the ones that relate to the definition of the required product, starting from the customer's need to be satisfied and then coming up with the most suitable solution to fulfil it: it is at this moment that the functional and non-functional requirements are gathered from all of the involved stakeholders, analysed and checked for feasibility. Finally, the most proper architecture for the product is selected among the various possibilities, defining the needed components and how they interact to offer their services.
The phases of the SDLC process involved in this step are therefore Idea/Concept (1), Analysis (2), Requirements analysis and definition (3) and Design (4).
The results of the SDLC process must also be assessed from the security point of view during the first two phases of the SPDA process: Idea/Concept and Design. In the first phase, the Security and Privacy teams receive as input from the Business team the concept of the product, for example in the form of a High Level Design (HLD): given the provided information, an analysis is conducted to identify the risks inherent to the product and, if the risks can be mitigated, the controls needed to do so; otherwise the remaining risks must be assessed to understand whether they are tolerable or not. It is also important to identify possible flaws in the design, to be eliminated prior to dedicating resources to development: all of this can be done by employing proper Threat Modelling techniques.
Technologies
During the Plan phase there are no particular technologies that are strictly necessary to produce the requirements and design documents, but there are of course tools that help achieve better results in an efficient way: for example, planning and issue tracking tools such as the Jira suite can help a lot in the definition of the tasks and of the people assigned to them, in the planning of the sprints and in the tracking of the open issues, to check whether they are still to be reviewed or someone is already working on them.
In order to store and manage the produced documents, containing the requirements, user stories, design and so on, a VCS such as Git can be useful to track their creation and their changes, when they were performed and who performed them, making it easier to keep everyone involved responsible and accountable for what they produce.
As stated by the SPDA process, the Security and Privacy teams will be in charge of ensuring that no requirement that cannot comply with the security policies is introduced, and that the proper security controls are included based on the remaining requirements.
The Security Champion of the Squad will also be in charge of including security requirements, such as security user stories, in the defined documents.
5.3 Code
People
This is the phase in which the required functionalities and the components defined in the architecture are implemented and coded by the developers, supported by the security team, which provides training on secure coding practices, on the usage of the proper code analysis tools and on the interpretation of their results.
Processes
The phases of the processes involved at this stage are the ones related to the development of the components defined in the design documents.
For the SDLC process we have the Development (5) phase, while for the SPDA process we have the Build stage, in which the security controls identified in the previous stages are integrated into the developed components. If there are changes during the Development phase with respect to what was defined in the Design phase, a small iteration of the Design and Build phases of the SPDA process must be repeated, in order to ensure that the proper controls are still in place and that no further risk has been introduced.
Technologies
The technologies required in this stage are the ones enabling the developers to write code and build the product components: these tools can be as simple as a text editor but, in order to write complex programs, more sophisticated tools must be used, such as IDEs with the necessary plugins. Depending on the type of product, a particular IDE may be more suitable than another, or the choice may even be forced by some requirement or design constraint: for example, for developing a mobile application an IDE such as Android Studio would be an excellent choice, but if we want to make an iOS version available we may be forced to use the Xcode IDE.
The most used IDEs and code editors also provide a very large set of plugins that can support almost any need of the developer: in this Operating Model, two types of plugins are necessary, those that allow easy integration with the VCS and those that provide real-time code analysis to the developer. In the previous chapter we mentioned SonarLint, the IDE plugin by SonarSource that performs exactly this task: it helps the developer spot possible bugs and vulnerabilities as they are coded, with the benefit that the developer immediately realizes and fixes the problem, learning a lesson that will hopefully prevent him/her from introducing more flaws of that kind in the future.
Code repositories and VCSs are also essential tools in this phase: the code written by the developers is pushed onto the repositories, which may be hosted on platforms like GitHub, GitLab or Bitbucket if the chosen VCS is Git. The other developers in the team can then see the changes to the code just by pulling the latest version onto their local machines: each change is tracked both in terms of when it was performed and of who performed it. A sketch of how this traceability can be consumed programmatically is shown below.
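The following minimal sketch uses the JGit library (an assumption; any Git client would do) to list the last commits of a repository with their author and date; the repository path is a placeholder:

```java
import java.io.File;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.revwalk.RevCommit;

// Lists who changed the repository, when, and why, illustrating the
// traceability mentioned above.
public class ChangeLog {

    public static void main(String[] args) throws Exception {
        try (Git git = Git.open(new File("/path/to/repo"))) {
            for (RevCommit commit : git.log().setMaxCount(10).call()) {
                System.out.printf("%s | %s | %s%n",
                        commit.getAuthorIdent().getWhen(),   // when the change was made
                        commit.getAuthorIdent().getName(),   // who made it
                        commit.getShortMessage());           // what it was about
            }
        }
    }
}
```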
Another task of the Security team is to support the developers in the adoption of secure coding practices and in the usage of the code security tools: policies and procedures must not be “thrown over” to the developers; the introduction of security inside the coding stage must be a gradual process, otherwise developers will just continue to see security as an additional step that delays the delivery of the “real” functionalities they want to achieve. This support will also be given by setting up dedicated training sessions about secure coding, the usage of code review tools and the interpretation of their outputs.
5.4 Build
People
In the Build phase, the components coded by the developers and the third-party ones are built together to create the final system to be delivered and deployed. Here the DevOps team must also be involved, in order to set up the tools and the infrastructure needed to automate the build pipeline. The Security team is involved as well, to analyse and suggest possible security tools to be integrated in the build pipelines.
Processes
The phases of the processes involved in this stage of the model are the ones related to the build and integration of the system: the Integration, Building and Testing (6) phase for the SDLC process and the Build stage for the SPDA process.
This is the stage in which the build of the application is run and all of the modules are integrated together: here, further checks are performed to verify that the defined security controls have been implemented and that no modification potentially introducing further risks has been performed.
Technologies
Several types of tools can help in this stage: build automation tools and CI/CD servers, plus SAST and SCA tools for introducing security analyses.
Build automation tools, such as Maven, are very useful for compiling and linking the source code files into binary code, packaging the produced executables and, possibly, running automated tests.
From the security point of view, this is the stage in which SAST and SCA analyses can be performed: the corresponding tools, such as SonarQube and Mend SCA, can be either integrated in the build pipelines or run manually on the available code.
The role of the developers is to push the code they have written onto the dedicated repositories, so that the CI process can take place.
After having successfully integrated those tools in the pipelines, the results of their analyses must be monitored through proper dashboards: if any critical issues are found, such as flaws introduced in the source code or vulnerable components used inside the product, the Development team must be notified, possibly in an automated way.
5.5 Test
People
This is the phase in which the artifacts produced by the developers are taken by the testers and QA teams, to test the product against the defined requirements and verify that it is ready for release. The DevOps and Security teams are still involved in this phase, to integrate tools in the CI/CD pipelines and to check the product's compliance with the security policies.
Processes
The Test stage of the DevSecOps Continuum is still mapped to the Integration, Building and Testing (6) phase of the SDLC process: here the build artifact obtained by integrating all of the components undergoes various tests in a proper testing environment, usually made as similar as possible to the production environment. The tests are performed by a dedicated QA team, which then reports the results and states whether the product meets the functional and non-functional requirements and delivers its functionalities without problems.
Regarding the SPDA process, we enter the Test and Sign-Off for Launch phase: this is the phase in which the Security and Privacy teams verify that the defined security controls have been implemented and that the product is compliant with the company's security policies. During this stage, dynamic analyses are conducted on the running product and on the environment in which it is deployed.
Technologies
The tools employed in this phase are the ones allowing to run various kinds of tests: unit, integration and system-level tests, plus the types of tests applicable only to particular products, such as tests on the graphical interfaces of web and mobile applications.
From the security point of view, here we use the tools aimed at assessing the product security at runtime, that is, DAST and IAST tools. If a deeper security analysis is needed, a complete penetration test can be run on the product, but that is a more time-consuming activity, requiring preliminary analysis, test execution and results reporting.
To the results provided by the QA team, the Security team must add those related to the compliance of the product with the security requirements of the company and of the product itself: the two teams must work together to identify the possible remediation actions to be suggested, so that both the usability and the security of the application are improved.
5.6 Release and Deploy
Processes
These stages correspond to the Deployment (7) phase of the SDLC process, the one in which the infrastructure hosting the application is set up and the product is finally deployed in the production environment and made available to the customers.
On the SPDA process side, we are still in the Test and Sign-Off for Launch phase: based on the results coming from the tests, a go/no-go decision is made about the publication of the product and, if the outcome is positive, the product is deployed.
Technologies
The technologies to be adopted in this phase depend on the type of infrastructure chosen to deploy the product: in case of an on-premises infrastructure, the most proper tools are those allowing automatic configuration of the involved systems, such as Puppet, Chef or Ansible, which was presented in the previous chapter.
If the chosen deployment model involves the use of containers, a tool like Docker can be very useful to create and deploy the product in a containerized form.
5.7 Operate and Monitor
Processes
These stages are related to the Operation, Maintenance and Monitoring (8) phase of the SDLC process, the one in which the product is in operation and provides its services to the customers: during this phase, the product and the infrastructure it runs on must be continuously monitored and checked for any anomaly or suspicious behaviour to be further investigated. Also, analytics about user behaviour and experience can be collected, providing some metrics about the product's trend.
Regarding the SPDA process, we enter the In-life phase: during this phase, if any new feature has to be added or any significant update has to be performed on the product, the preceding phases of the SPDA process must be repeated, in order to ensure that the changes do not introduce any new risk that is not properly managed.
Technologies
The tools that can be used during these stages are those for monitoring and maintaining the health of the services and of the infrastructure they run on: for containerized applications, a tool like Kubernetes can be used, thanks to its orchestration features that allow automatic allocation of resources and containers to the applications, load balancing among the different containers and self-healing capabilities in case containers freeze or stop working correctly.
From the security point of view, a very recent and innovative solution consists in the usage of RASP tools, which rely both on data coming from the running application and on data coming from the external world in order to detect and block possible attacks.
In these phases the Security and Operations teams must collaborate in monitoring the behaviour of the application and of the services it offers: whenever an unexpected event is detected, the teams must be alerted and the issue has to be analysed. If the analysis results in the need to replace or update an infrastructure component, the Operations team must perform this task, always under the supervision and with the approval of the Security team. The teams must also be ready to respond to a possible security incident which, even if one of the main targets of this model is to avoid them, can never be completely excluded.
5.8 Improve
People
This is the phase that restarts the DevSecOps cycle: based on the metrics and feedback collected in the previous phases, during the operation of the product, any necessary fix or improvement is planned and designed, exactly as was done during the planning phase and involving the same people.
Processes
This stage concludes the cycle by going back to the beginning: once improvements have been identified, they must be planned and designed exactly as the initial product was defined. The SDLC process phases mapped to this stage are then the Analysis (2), Requirements analysis and definition (3) and Design (4) phases.
From the SPDA point of view we are still in the In-life phase, because it is the one that manages the steps to be performed when a new feature or update has to be introduced in the application, ensuring that it still meets the security and privacy requirements.
Technologies
As this stage is very similar to the Plan stage, the tools that support it are the same: tools for creating the requirements and design documents, repositories in which they are stored and tracked, and planning and issue tracking platforms such as Jira.
In this stage, all of the parties that were involved in the definition of the initial product requirements are involved again to take decisions about the improvements to be made on the product: based on the analytics collected during the product operation, the stakeholders may want to perform some modifications, which must then be discussed with the Development, Security and Privacy teams in order to ensure that these updates are feasible both in terms of required functionalities and in terms of security compliance.
5.9 Decommissioning
Even if it is not properly a part of the DevSecOps Continuum, decommissioning is a part of the software lifecycle. Every system will eventually come to a point in which the benefits of the functionalities it provides are overcome by the problems of its maintenance, which can be caused by several factors, such as the size of the project becoming too large to be properly maintained, or the components that make up the product becoming legacy and no longer supported by their producers: these may be components of strategic importance inside the product, which cannot be substituted without a major refactoring, something that companies usually do not want to deal with or cannot afford in terms of time and resources.
The consequence of these problems is that the product reaches its end-of-life stage and must be substituted by a new one that still provides the functionalities of the previous one, but using components which are either maintainable by the developers or supported by the third party they come from.
Also, the data stored and processed by these end-of-life systems must be properly treated in the ways stated by the regulations: depending on the type of data, it may be necessary to archive or even destroy them.
Chapter 6
Conclusions
In this final chapter we draw some final considerations about the performed work, its outcomes and the possible future developments needed to fully deploy a DevSecOps strategy.
The starting point of this thesis work was a set of technologies, processes and people already in place for employing a DevOps software development methodology based on an on-premises infrastructure: the final aim of the work was to identify a model in which all of the above, plus any further resource that could be needed, could be integrated in order to deploy an effective DevSecOps strategy for secure Agile software development on a cloud-based infrastructure.
This was a fundamental step in the transition of Vodafone from a Telco company to a Tech company: adopting a DevSecOps software development methodology will help keep up with the pace of the software market and of the IT market in general, where new features and functionalities are developed and deployed every day and a company that wants to be competitive must be able to quickly adapt to the required changes. In the same way, new security threats are always around the corner, and the products offered to the market must be resilient to such threats, in order not to lose the customers' loyalty nor to incur legal issues, especially when talking about products and services which are offered to millions of users and store and process potentially sensitive data.
However, it must be considered that the work carried out in these months is just a part of a multi-year strategic program whose deployment has just started and whose implementation will carry on for the next few years.
At the moment, following the definition of the DevSecOps Operating Model, the first tools to be integrated have been identified and studied, and the first training sessions have been delivered to the teams involved in the process. The first Squads are also being defined and composed for the next projects to be developed following the Agile model, in particular including the Security Champions, who will attend dedicated sessions to stay constantly updated on the security threats and on the company policies and procedures protecting against them.
Bibliography
[1] David Chappell et al. «What is application lifecycle management». In: Chappell & Associates (2008) (cit. on p. 8).
[2] Eray Tüzün, Bedir Tekinerdogan, Yagup Macit, and Kürşat İnce. «Adopting integrated application lifecycle management within a large-scale software company: An action research approach». In: Journal of Systems and Software 149 (2019), pp. 63–82 (cit. on pp. 9, 12).
[3] Carol Donnelly. 7 Critical Benefits Of Application Lifecycle Management. URL: https://www.entranceconsulting.com/7-critical-benefits-of-application-lifecycle-management/ (cit. on p. 12).
[4] DevOps. URL: https://en.wikipedia.org/wiki/DevOps (cit. on p. 13).
[5] What is Continuous Integration? URL: https://www.jetbrains.com/teamcity/ci-cd-guide/continuous-integration/ (cit. on p. 14).
[6] What is Continuous Deployment? URL: https://www.jetbrains.com/teamcity/ci-cd-guide/continuous-deployment/ (cit. on p. 15).
[7] What are microservices? URL: https://microservices.io/ (cit. on p. 16).
[8] Microservices. URL: https://en.wikipedia.org/wiki/Microservices (cit. on p. 16).
[9] Infrastructure as Code. URL: https://www.hpe.com/it/it/what-is/infrastructure-as-code.html (cit. on p. 18).
[10] Rileena Sanyal. What is Continuous Monitoring in DevOps? URL: https://www.headspin.io/blog/what-is-continuous-monitoring-in-devops (cit. on p. 19).
[11] Jakob Pennington. The Eight Phases of a DevOps Pipeline. URL: https://medium.com/taptuit/the-eight-phases-of-a-devops-pipeline-fda53ec9bba (cit. on p. 20).
[12] Larry Maccherone. The DevSecOps Manifesto. URL: https://medium.com/continuous-agile/the-devsecops-manifesto-94579e0eb716 (cit. on p. 23).
[13] Parveen Bhandari. What is DevSecOps? | The Ultimate Guide. URL: https://www.xenonstack.com/insights/what-is-devsecops (cit. on p. 24).
[14] Apply Shift-Left Testing Approach to Continuous Testing. URL: https://katalon.com/resources-center/blog/shift-left-testing-approach (cit. on p. 24).
[15] People, Process, Technology: The PPT Framework, Explained. URL: https://www.plutora.com/people-process-technology-ppt-framework-explained/ (cit. on pp. 26, 43).
[16] Roger S. Pressman. Software engineering: a practitioner's approach. Palgrave Macmillan, 2005 (cit. on p. 31).
[17] What is IaaS? URL: https://www.redhat.com/en/topics/cloud-computing/what-is-iaas (cit. on p. 32).
[18] Secure by design. URL: https://en.wikipedia.org/wiki/Secure_by_design (cit. on p. 34).
[19] Eric Raymond. «The cathedral and the bazaar». In: Knowledge, Technology & Policy 12.3 (1999), pp. 23–49 (cit. on p. 35).
[20] Gartner Says Worldwide IaaS Public Cloud Services Market Grew 41.4% in 2021. URL: https://www.gartner.com/en/newsroom/press-releases/2022-06-02-gartner-says-worldwide-iaas-public-cloud-services-market-grew-41-percent-in-2021 (cit. on p. 35).
[21] Operating Model. URL: https://www.gartner.com/en/information-technology/glossary/operating-model (cit. on p. 41).
[22] Andrew Campbell. What is an operating model? URL: https://opexsociety.org/body-of-knowledge/operating-model/ (cit. on p. 42).
[23] 2022 Data Breach Investigations Report. URL: https://www.verizon.com/business/resources/reports/dbir/ (cit. on p. 43).
[24] T. Scanlon. Decision-Making Factors for Selecting Application Security Testing Tools. Carnegie Mellon University's Software Engineering Institute Blog, Aug. 2018. URL: http://insights.sei.cmu.edu/blog/decision-making-factors-for-selecting-application-security-testing-tools/ (cit. on p. 54).
[25] SonarQube Documentation. URL: https://docs.sonarqube.org/latest/ (cit. on p. 73).
[26] Software Supply Chain Attacks Tripled in 2021: Study. URL: https://www.securityweek.com/software-supply-chain-attacks-tripled-2021-study (cit. on p. 77).