CRC Press Practical Security For Agile and DevOps 2022
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify it in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any
form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming,
and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright
Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC
please contact [email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification
and explanation without intent to infringe.
DOI: 10.1201/9781003265566
This book is dedicated to the next generation of application security professionals to help alleviate the
struggle to reverse the curses of defective software no matter where it shows up.
Preface

This book was written from the perspective of someone who began his software security career in 2005, long before the industry began focusing on it. Making all the rookie mistakes one tends to make without any useful guidance quickly turns what's supposed to be a helpful process into one that creates endless chaos and lots of angry people. After a few rounds of these rookie mistakes, it finally dawned on me that we were going about it all wrong. Software security is actually a human factors issue, not a technical or process issue alone. Throwing technology into an environment that expects people to deal with it, while failing to prepare them technically and psychologically with the knowledge and skills they need, is a certain recipe for bad results.
Think of this book as a collection of best practices and effective implementation recommendations that are proven to work. I've taken the boring details of software security theory out of the discussion as much as possible to concentrate on practical, applied software security for practical people.
This is as much a book for your personal benefit as it is for your organization's benefit. Professionals who are skilled in secure and resilient software development and related tasks are in tremendous demand today, and this demand will only grow for the foreseeable future. As you integrate these ideas into your daily duties, your value increases to your company, your management, your community, and your industry.
Practical Security for Agile and DevOps was written with the following people in mind:
• Project managers
• Application security auditors
• Agile coaches and trainers
• Instructors and trainers in academia and private organizations
How This Book Is Organized

• Chapter 13 closes the book with a call to action to help you gain access to professional education, certification programs, and industry initiatives to which you can contribute.
Each chapter builds logically on prior chapters to help you assemble a complete set of practical steps that lead to secure and resilient application software and to responsive, secure development practices that predictably and reliably produce high-quality, resilient applications.
About the Author
Mark S. Merkow, CISSP, CISM, CSSLP, works at HealthEquity, Inc., in Tempe, Arizona,
helping to lead application and IT security architecture and engineering efforts in the office of
the CISO. In addition to his day job, Mark is a faculty member at the University of Denver,
where he works on developing and instructing online courses in topics across the Information
Security spectrum, with a focus on secure software development. He also works as an advisor
to the University of Denver’s Information and Computing Technology Curriculum Team for
new course development and changes to the curriculum.
Mark has over 40 years of experience in IT in a variety of roles, including application
development, systems analysis and design, security engineering, and security management.
Mark holds a Master of Science in Decision and Information Systems from Arizona State
University (ASU), a Master of Education in Distance Education from ASU, and a Bachelor of
Science in Computer Information Systems from ASU.
Mark has authored or coauthored 17 books on IT and has been a contributing editor to
four others. Mark remains very active in the information security community, working in a
variety of volunteer roles for the Phoenix Chapters of (ISC)2®, ISACA®, and OWASP. You can
find Mark’s LinkedIn® profile at: linkedin.com/in/markmerkow
Chapter 1
Today's Software Development Practices Shatter Old Security Practices
CHAPTER OVERVIEW
Software development techniques and methodologies leapfrog over themselves constantly, making efforts to secure them a moving target that won't wait. To address this fact, it's essential that application security controls and their implementation be as agile as the development processes they support. In this chapter, you'll find some strategies to address these moving targets while applying tried-and-true security control implementations that stand the test of time.
CHAPTER TAKEAWAYS
• Examine the changes in the software development lifecycle (SDLC) since its inception.
• Explain the Agile/Scrum Framework for modern-day software development.
• Describe the Shift Left approach to implementing software development security controls.
• Understand the principles that apply to successful implementation of application security
programs.
In the decade since Secure and Resilient Software Development1 was published, the world of software development has flipped on its head; shed practices from the past; brought about countless changes; and revolutionized how software is designed, developed, maintained, operated, and managed.
These changes crept in slowly at first, then gained momentum, and have since overtaken most of what we "know" about software development and the tried-and-true security methods that we've relied on and implemented over the years. Involvement of application security (appsec) professionals—if it happened at all—happened WAY too late, after executive decisions were already made to supplant old practices and the ink was already dry on contracts with companies hired to make the change.
This late (or nonexistent) involvement in planning how to address security hobbles appsec practitioners, who are forced to bargain, barter, or somehow convince development teams that they simply cannot ignore security. Compound this problem with the nonstop pace of change, and appsec professionals must abandon old "ways" and try to adapt controls to a moving target. Furthermore, the risks from all-new attack surfaces (such as autonomous vehicles), reliance on the Internet of Things (IoT), and software that comes to life with kinetic activity can place actual human lives in real danger of injury or death.
Although we may have less work on our hands to convince people that insecure software is
a clear and present danger, appsec professionals have to work much harder to get everyone on
board to apply best practices that we are confident will work.
A decade ago, we were striving to help appsec professionals convince development organizations to—minimally—address software security in every phase of development, and for the most part, over that decade we saw far more attention being paid to appsec within the SDLC. Now, however, we're forced to adapt how we do things to new processes that may be resistant to any changes that slow things down, while the risks and impacts of defective software increase exponentially.
Here’s the definition of software resilience that we’ll use throughout the book. This defini
tion is an adaptation of the National Infrastructure Advisory Council (NIAC) definition of
infrastructure resilience:
Software resilience is the ability to reduce the magnitude and/or duration of disruptive events.
The effectiveness of a resilient application or infrastructure software depends upon its ability
to anticipate, absorb, adapt to, and/or rapidly recover from a potentially disruptive event.2
In this chapter, we're going to survey this new landscape of changes to update our own models of how to adapt to the brave new world and maintain software security, resilience, and agility.
New paradigms have rapidly replaced the Waterfall model of software development that we've used since the beginning of the software age. Agile and Scrum SDLCs have all but displaced Waterfall's rigorous (and sometimes onerous) activities, and have most certainly displaced the notion of "phase containment," which appsec professionals have counted on as a reliable means to prevent defects from creeping into subsequent phases.
This new landscape includes Agile/Scrum, DevOps, continuous integration/continuous deployment (CI/CD), and the newest revolution working its way in: site reliability engineering (SRE). To adapt to these changes, we need to understand how the rigor we've put into Waterfall-based projects and processes has been swept away by the tsunami of change that demands more software, faster and cheaper.
Today’s Software Development Practices Shatter Old Security Practices 3
Changes in the software development paradigm force changes in the software security paradigm, which MUST work hand in hand with what development teams are expected to do. While we typically had a shot at inspecting software for security issues at the end of the development cycle (because of phase containment), this control point no longer exists. The new paradigm we've had to adopt is called Shift Left; it preserves the notion that there are still phases in the SDLC while recognizing the fact that, in practice, there aren't.
It's well understood that changes in design, once an application is developed, will cost potentially hundreds of times more than if the defects were caught while architecture and engineering were underway.
Developers are affected because they're not given the luxury of time for extensive testing, as they often had under former practices. Now, developers may release new code all day and see it deployed within minutes, so it's vital that these developers "own" the responsibility for securing it, which means developing it with a defensive programming state of mind. Shifting Left in the development activity involves actively using, and appropriately responding to, security checks built directly into the integrated development environment (IDE)—for example, Visual Studio or Eclipse. Although these checks run on incomplete segments of an overall application, coding provides the first opportunity for security inspection and is needed to continue the cycle of appsec.
Testing presents a major challenge to appsec because tolerance for long-running tests has all but disappeared. Although it's true that a complete application is needed for comprehensive testing, product managers won't wait anymore while security tests are run, and vulnerable applications may be deployed (ship it now—fix it later). Shifting Left in this environment forces security testing to happen incrementally, at what we used to call integration testing—the point in development at which all the elements come together to build a new version of the software. If security testing is implemented correctly and responsively to the needs of the product managers, it can serve as a control that actually "breaks" a build and forces remediation of defects. We'll discuss this at length in Chapters 10 and 11, where we focus on testing.
Taken together, Shifting Left in the appsec space makes it possible to gain the assurance we need that our applications are appropriately secure, but it changes the role of appsec professionals from "doing" appsec to empowering everyone who touches software in the SDLC with practical and appropriate knowledge, skills, and abilities.
Although the names and accelerated pace have significantly changed how we deal with appsec, the activities of software development, as we understood them in Waterfall methodologies, are still present. Requirements are still being gathered, designs are still being built, coders are still coding, testers are still testing, and operators are still deploying and managing applications in production. We can apply what we know works to help secure applications in development, but we have to step back and let those who are intimate with the application do the heavy lifting and prove to us that they've done what they needed to do!
At the end of the day, software security is a human factors issue—not a technical issue—
and for appsec professionals to succeed in implementing application controls, it’s vital to treat
the human factor in ways we know work, rather than throwing more tools at the problem.
Before we dig into the details on how to create and maintain excellence in application security
programs, let’s cover some enduring principles that we need to live by in everything we do to
secure application software and the processes used to create it:
• Shift Left within the SDLC as much of the appsec work as you can—performing security work as close as possible to the point where defects are introduced is the surest way of eliminating them and preventing "defect creep" from one phase or activity to the next.
• AppSec tools are great but are of questionable use if the people using them don’t understand:
○ What the tool is telling them
○ Why their code is vulnerable
○ How their code is vulnerable
○ What to do about the vulnerability
• People can only deal with so much change at one time—too many changes to their processes all at once lead to chaos and, ultimately, rebellion.
• Automate everything that you can (scanning, remediation planning, retesting, etc.).
• There are only so many of us in information security departments, but there are thousands of development team staff who need to take accountability. Don't treat security as a punishment or a barrier. Convince development team members that security empowers them—makes them more valuable as employees and as members of the development community—and they'll quickly learn that it does all these things!
1.5 Summary
In Chapter 1, we surveyed the modern-day landscape of how software is developed, operated, and managed to understand the impacts these changes have had on how we design, develop, and implement control mechanisms to assure software security and resilience. We'll begin to explore how appsec professionals can use Agile practices to improve Agile practices with security controls, and how baking in security from the very start is the surest way to gain assurance that your applications can stand up to and recover from chronic attacks.
Exercises
1. Everyone feels the effects of flawed or vulnerable software when it's attacked, and you've likely seen some of the consequences of these attacks. Who is (or should be) ultimately responsible for assuring that software released into the wild of the Internet is developed and operated with security in mind?
2. You can get software cheap and fast, but it won't be very good. You can get it good and fast, but it won't come cheaply.
• What choices do you think are being made in today's international software development community?
• What do you think we can do to change this and end the scourge of bad software?
References

1. Merkow, M. S. and Raghavan, L. (2010). Secure and Resilient Software Development. Boca Raton (FL): CRC Press.
2. Department of Homeland Security (DHS). (2009, September 8). Critical Infrastructure Resilience Final Report and Recommendations. National Infrastructure Advisory Council (NIAC). Retrieved June 11, 2019, from http://www.dhs.gov/xlibrary/assets/niac/niac_critical_infrastructure_resilience.pdf
Chapter 2
Deconstructing Agile and Scrum
CHAPTER OVERVIEW
Chapter 2 provides an examination of the activities found in the Scrum Model and how to
implement security controls. The movement to Scrum and DevOps underscores the need for
changes related to People, Process, and Technology that moves an organization from silo-based
software development to community- and team-based marriages that ultimately form into
DevSecOps that “Build Security In” for all custom-developed software.
CHAPTER TAKEAWAYS
• Determine the placement and timing of security activities in Scrum sprints for architec
ture and development.
• Determine steps that lead to “Shifting Left” the performance of security controls to pre
vent impeding the natural flow of Scrum processes.
• Examine the DevSecOps environment in which automation of security controls improves
both the software development lifecycle (SDLC) process and the software that it produces.
For purposes of context setting and terminology, we’re going to deconstruct the Agile/Scrum
development methodology to discover areas in which appsec controls help in securing software
in development and also help to control the development methodology itself. We’ll look at
ways to use Agile to secure Agile.
Let’s revisit the overall scope of the Agile/Scrum process, shown in Figure 2.1 (originally
Figure 1.1).
There’s Agile/Scrum as a formal, strict, tightly controlled process, and then there’s Agile/
Scrum as it’s implemented in the real world. Implementation of Agile will vary from the fun
damentalist and purist views to various elements that appear as Agile-like processes, and every
thing in between. It’s less important HOW it’s implemented in your environment than it is to
understand WHAT your specific implementation means to your appsec efforts.
Figure 2.1 Agile/Scrum Framework (Source: Neon Rain Interactive, licensed under CC BY-ND 3.0 NZ)
Figure 2.2 A Typical User Story and Its Lifecycle (Source: Stowe, M. Going Agile: The Anatomy of a User Story. Blog. Retrieved from https://seilevel.com/requirements/the-anatomy-of-a-user-story. Used with permission of Seilevel.)

Scrum role titles are relevant in establishing each person's specific expertise, but they don't lock those who hold a role into performing only that activity. Teams are self-organizing, so expertise is shared across the team as needed to meet its objectives. The following are the roles you commonly find on a Scrum team:
• Scrum Master is the person who serves as conductor and coach to help team members carry out their duties. Scrum Masters are experts on Scrum, oversee the project throughout, and offer advice and direction. The Scrum Master most often works on one project at a time, gives it their full attention, and focuses on improving the team's effectiveness.5
• Analysts work with the product owner and Scrum Master to develop and refine the user stories that are submitted for development within a development sprint.
• Architects work on the application architecture and design needed to meet the requirements described by the user stories. Design work is conducted in a design sprint, sometimes called sprint zero.
• Designers work on the aspects of the product that relate to user interfaces and user experience with the product. Essentially, designers are translators for the following aspects of the product6:
○ Translate users' desires and concerns for product owners.
○ Translate features for users—how the product will actually work and what it looks like.
○ Translate experiences and interfaces for engineers.
• Engineers/Lead Engineers/Developers work to build (code) the features for the applications, based on the user story requirements for each sprint.
• Testers/Quality Assurance (QA) Leads work to determine whether the application being delivered meets the acceptance criteria for the user stories and help provide proof of the Definition of Done (DoD) for those user stories and, ultimately, the product.
As you’ll see in Chapter 3, each of these roles require specialized application security train
ing to help them to gain the skills they need for ownership and responsibility of security for
their product.
Security touchpoints appear throughout a user story's lifecycle: the design stage, the development stage, the testing phases, and, ultimately, the acceptance phase prior to deployment of that release.
As constrained user stories enter a sprint, business systems analysts will refine them into specifications that are suitable for architecture and design work. As that work progresses and a final draft of a system design becomes available, threat modeling and attack surface analysis help to remove the design defects that could otherwise lead to the most expensive and hardest-to-remediate vulnerabilities. Performing this work while still in the design sprint enables development teams to discover and fix the design, or add controls that may have been missed, and serves as a phase-containment control to prevent defect creep. Threat modeling and other techniques for risk assessment are covered in Chapter 7.
Once an application design goes into the development activity, developers can use integrated development environment (IDE)-based security testing tools to help them identify and remove unit-level defects, such as the use of insecure functions, failures to sanitize inputs, and the like.
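To make the idea concrete, here's a minimal sketch in Python (the table and column names are hypothetical) of the kind of unit-level defect an IDE checker typically flags, alongside its fix:

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Typical IDE/SAST finding: attacker-controlled input concatenated
    # directly into a SQL statement (SQL injection, CWE-89).
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The fix: a parameterized query, so the driver binds the value and
    # the input is treated as data, never as SQL syntax.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```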
As the product comes together from the various developers working on units of it, and these units are collected for the application build, you find the first opportunity to perform static application security testing (SAST). Scanning can be set up within sandboxes in the development environment to help the team eliminate defects that can only be discovered during or after integration. Teams should be encouraged to use the IDE-based checker and the sandbox testing continuously as the application gains functionality. Open source components and libraries used in the application can also be inspected for known vulnerabilities, using software composition analysis (SCA), and updated as needed with newer versions that patch those issues. Once static code scanning is complete, the latest clean scan can be promoted as a policy gate scan, as proof that the application meets at least one security element of its DoD acceptance criteria.
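As a sketch of what such a policy gate might look like, the Python snippet below fails a build when findings exceed policy. The findings-file name, its format, and the severity thresholds are all assumptions for illustration, since scanner outputs vary widely:

```python
import json
import sys

# Illustrative policy: maximum allowed open findings per severity.
POLICY = {"critical": 0, "high": 0, "medium": 5}

def policy_gate(findings_path: str = "scan-results.json") -> int:
    # Load scanner output, assumed here to be a JSON list of findings,
    # each carrying a "severity" field.
    with open(findings_path) as f:
        findings = json.load(f)
    counts = {}
    for finding in findings:
        severity = str(finding.get("severity", "low")).lower()
        counts[severity] = counts.get(severity, 0) + 1
    violations = {
        sev: n for sev, n in counts.items()
        if n > POLICY.get(sev, float("inf"))
    }
    if violations:
        print(f"Policy gate FAILED: {violations}")
        return 1  # a nonzero exit status breaks the build
    print("Policy gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(policy_gate())
```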
As the methodology describes, this process repeats with each new sprint until a functionally complete, high-quality application is primed for release and deployment.
With the successful rise of Scrum and proof of its viability for speeding up software development, further changes to speed up HOW software is deployed came on the scene with the marriage of development and operations.
In the "old" days, development teams would prepare change requests to throw the application over the wall for deployment and operations. These change requests went through multiple levels of review and approval, across multiple departments, before teams could launch their software into production or make it available to the world. This process alone often took weeks or months.
Now, development teams and operations teams work together as partners for ongoing operations, enhancements, defect removal, and optimization of resources as they learn how their product operates in the real world.
As appsec professionals began integrating into planning for continuous integration/continuous deployment (CI/CD), DevOps, and new models for data center operations, DevOps began to transform into what we'll call DevSecOps. It's also referred to as Rugged DevOps, SecDevOps, and just about any other permutation you can think of.
Essentially, DevOps (and DevSecOps) strives to automate as much as possible, leaving time
for people to perform quality-related activities that help to optimize how the application works.
From the time a version of software (a feature branch) is integrated and built (compiled and packaged) for release, automation takes over. This automation is often governed by a gatekeeper function that orchestrates the process, runs suites of tests on the package, and allows the application to be released only if all the gate control requirements have been met. If a test reports an outcome that the gatekeeper's policy flags as a failure, the gatekeeper function can stop, or break, the build, necessitating attention and remediation from the product team. Testing automation might include a broad set of tools that perform a wide variety of tests, such as functional tests, code quality and reliability tests, and technical debt analysis.
This is also an opportunity to include security-related tests, but such testing in the CI/CD pipeline must complete in a matter of seconds—or, at worst, a few minutes. Otherwise, it won't be included as a gate for gatekeeper purposes or, worse, may not be run at all. Five minutes is a good rule of thumb for the maximum amount of extra time you may be allotted for testing in the CI/CD pipeline. This specific constraint on testing is a primary driver of the Shift Left paradigm for adapting security controls within the SDLC. Figure 2.4 is a simple depiction of how Agile and DevOps work in unison.8
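A simple way to honor that budget is to wrap the security stage in a hard timeout so it can never stall the pipeline. Below is a minimal Python sketch; the five-minute figure comes from the rule of thumb above, and the test command is a placeholder, not a prescribed tool:

```python
import subprocess
import sys
import time

TIME_BUDGET_SECONDS = 300  # the five-minute rule of thumb

def run_security_stage(cmd) -> int:
    # Run the security test command with a hard timeout so the CI/CD
    # stage can never exceed its agreed time budget.
    start = time.monotonic()
    try:
        result = subprocess.run(cmd, timeout=TIME_BUDGET_SECONDS)
    except subprocess.TimeoutExpired:
        print(f"Security stage exceeded {TIME_BUDGET_SECONDS}s; failing fast.")
        return 1  # treat an overrun as a gate failure
    print(f"Security stage finished in {time.monotonic() - start:.1f}s.")
    return result.returncode

if __name__ == "__main__":
    # Placeholder command; substitute your own fast security test suite.
    sys.exit(run_security_stage(["pytest", "tests/security", "-q"]))
```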
Figure 2.5 shows what the marriage of Dev and Ops teams looks like when comprehensive
security controls transform DevOps into DevSecOps.9
Throughout the rest of the book, we’ll look at how these controls can be implemented into
your own environment to operate seamlessly with your existing practices.
Figure 2.5 DevSecOps Cycle (Source: Retrieved from https://published-prd.lanyonevents.com/published/rsaus20/sessionsFiles/17851/2020_USA20_CXO-W09-The-Impact-of-Software-Security-Practice-Adoption-Quantified.pdf. Used with permission of L. Maccherone, Jr.)

2.6 Summary

In Chapter 2, we took a deeper dive into the new and improved software development world to see what's changed and what's stayed the same as we explored areas of opportunity to effectively implement security controls and practices. We examined the overall Agile/Scrum SDLC, its roles, activities, and responsibilities. Next, we saw how the marriage of development and operations teams provides opportunities for appsec professionals to "ruggedize" how applications are managed and operated to yield high quality and resilience every time.
Chapter Quick Check

3. Software security is most effective when it's addressed in which SDLC activity?
a) Design sprint
b) Development sprint
c) Sprint planning
d) All Scrum activities
Exercises

1. Consider the merits of using User Story acceptance criteria as "guardrails" for how a feature is designed and implemented. In addition, consider using the Definition of Done (DoD) for User Stories to require appropriate use of appsec development controls as an implementation of Building Security In. What argument can you make to development team managers that this is the best chance for end-to-end application security and operational security?
2. What would you consider to be some of the barriers to bringing about a fully secured Agile SDLC that feeds into a well-defined and well-secured DevOps practice? What are some high-level suggestions for how these barriers or blockers can be overcome?
References

1. Trapani, K. (2018, May 22). What Is AGILE?—What Is SCRUM?—Agile FAQ's. Retrieved from https://www.cprime.com/resources/what-is-agile-what-is-scrum/
2. Cohn, M. (n.d.). User Stories and User Story Examples by Mike Cohn. Retrieved from https://www.mountaingoatsoftware.com/agile/user-stories
3. Atlassian. (n.d.). User Stories. Retrieved from https://www.atlassian.com/agile/project-management/user-stories
4. User Story. (n.d.). Retrieved from https://milanote.com/templates/user-story-template
5. Understanding Scrum Methodology—A Guide. (2018, January 11). Retrieved from https://www.projectmanager.com/blog/scrum-methodology
6. Tan Yun (Tracy). (2018, July 3). Product Designers in Scrum Teams? Part 1. Retrieved from https://uxdesign.cc/design-process-in-a-scrum-team-part-1-d5b356559d0b
7. A Project Management Methodology for Agile Scrum Software Development. (2017, October 31). Retrieved from https://www.qat.com/project-management-methodology-agile-scrum/
8. Agile vs DevOps: Demystifying DevOps. (2012, August 3). Retrieved from http://www.agilebuddha.com/agile/demystifying-devops/
9. Maccherone, L. (2017, March 19). DevSecOps Cycle [Diagram]. Retrieved from https://twitter.com/lmaccherone/status/843647960797888512
Chapter 3
Learning Is FUNdamental!
CHAPTER OVERVIEW
Chapter 3 presents the foundational concepts for creating a role-based training program for
all members on a development team. You’ll learn the rationale for building an awareness and
education program for appsec and how to apply the principles to an effective program that
prepares the workforce to Build Security In on all development projects.
CHAPTER TAKEAWAYS
• Evaluate the principles of role-based training for development staff for effective design of
a culturally appropriate appsec training and education program.
• Develop a role-specific training curriculum for all roles on a traditional Scrum develop
ment team.
• Determine the appropriate methods for delivering and tracking progress of learners as
they progress through the curriculum.
As it turns out, throwing technology at defective software is likely the worst way to address appsec; it ignores the basic tenet that software security is a human factors issue, not a technical issue. Tools are seductive because of their coolness factor, their ease of acquisition and use, and their ability to produce quick results that—in fact—tell you that you do have an issue with software security. Taking tools to the next step is where things quickly fall apart.
Suddenly, development teams are bombarded with reams of proof that their software is defective and, amid finger-pointing from security teams, are left in a state of upset and overall chaos. Furthermore, these development team members often don't understand what this proof is telling them and are completely unprepared to address the defects in any meaningful way.
Agile leads to an environment in which the incentives for developing new applications are found when the software is delivered quickly and as inexpensively as possible. Goodness or quality (or resilience or security) is not directly rewarded, and development teams often aren't given the extra time and work required to address software goodness.
Making matters worse, the traditional college and private education that prepares programmers and IT roles for new technologies, new languages, and new platforms doesn't arm students with the skills they need to meet the demands of organizations that require resilient, high-quality applications constructed quickly at acceptable cost. Many development team members enter the workforce never having heard the term nonfunctional requirement.
Each organization then finds that it owns the responsibility to break old bad habits, instill new good habits, and educate the workforce adequately to fill these gaps. To start the process, awareness of software security as an institutional issue is needed to set the stage for everything that follows. Awareness drives interest and curiosity and places people on the path to wanting to learn more. This awareness greases the skids for smooth engagement in software security education and ongoing involvement in the appsec-related activities that keep the drumbeat alive throughout the year.
In this chapter, we’re going to explore ways to bootstrap an awareness program that leads
development team members into role-specific training to gain the right knowledge, skills,
and abilities to succeed as defensive development teams.
• Executive management sets the mandate. With management mandates for secure application development that are widely communicated, you're given the appropriate license to drive a program from inception forward to continuous improvement. You'll need this executive support for establishing a program, acquiring an adequate budget and staff, and keeping the program going in the face of setbacks or delays.
• Awareness and training must be rooted in company goals, policies, and standards for software security. Establishing, then using, documented organizational goals, policies, and controls for secure application development as the basis for your awareness and training program creates a strong connection to developer actions that lead to compliance and bring "defense in depth" to life.
• Learning media must be flexible and tailored to the specific roles within your SDLC. Not everyone can attend an in-person, instructor-led course, so alternatives should be provided, such as computer-based training, recorded live presentations, and so forth.
• Learning should happen as close as possible to the point where it's needed. A lengthy course that covers a laundry list of problems and solutions won't be useful when a specific issue crops up and the learner can't readily access whatever was mentioned related to the issue.
• Learning and practicing go hand in hand. As people personally experience the "how to" of new skills, they ask better questions, and the knowledge more quickly becomes regular practice.
• Use examples from your own environment. The best examples of security problems come from your own applications. When people see issues with code and systems they're already familiar with, the consequences of exploiting the code's vulnerabilities hit close to home and become more real and less theoretical. Furthermore, demonstrating where these examples stray from internal standards for secure software helps people make the connection between what they should be doing and what they've been doing.
• Add learning milestones to your training and education program. People are less motivated to learn and retain discrete topics and information if learning is treated as a "check box" activity. People want milestones in their training efforts that show progress, help them gain recognition, and demonstrate achievement. As you prepare a learning curriculum for your various development roles, build in a way to recognize people as they successfully advance through the courses, and make sure everyone knows about it.
• Make your program relevant to your company culture. Find icons or internally well-known symbols in your organization that resonate with employees and incorporate them into your program, or build your program around them.
• BOLO. Be On the Look Out for people who participate in your awareness and training program and seem more enthusiastic or engaged than others. These people are your candidates for becoming internal application security evangelists or application security champions. People love thought leaders, especially when they're local, and you can harness their enthusiasm and interest to help you advance your program and your cause.
When we’re honest with ourselves, we know that software security is not the most exciting
or engaging topic around. Apathy is rampant, and too many conflicting messages from prior
attempts at software security awareness often cause people’s eyes to glaze over, which leads to
even further apathy and disconnection from the conversation.
Peter Sandman, who operates a risk communication practice, has identified a strategy for communication that's most appropriate for software security awareness, as well as for other issues where apathy reigns but the hazards are serious (e.g., radon poisoning, employee safety). The strategy, called Precaution Advocacy,1 is geared toward motivating people by overcoming boredom with the topic. Precaution Advocacy is used in high-hazard, low-outrage situations in Sandman's Outrage Management Model. The advocacy approach arouses some healthy outrage and uses this attention to mobilize people to take, or demand, precautions.
Software security is a perfect example of an issue for which it's difficult to overcome apathy and disinformation and to motivate people to address the problems that only people can solve. Precaution Advocacy suggests four ways to get people to listen, and then learn.
The last thing you want to do is frighten people or lead them to believe the sky is falling, but you do want to motivate them to change their behavior in positive ways that improve software security and contribute to the organization's success. As your program progresses, metrics can show how improvements in one area lead to reduced costs in other areas, simpler and less frequent bug fixing, improved pride of code ownership and, eventually, best practices and reusable code components that are widely shared within the development community.
Selecting one of these strategies, or a hybrid of them, will depend on several factors that are specific to your organization. These factors include the geographical dispersion of your development teams, the separation or concentration of groups who are responsible for mission-critical applications, existing infrastructures for educating employees, the number of people available to conduct training, the number of people needing training, and so on. Learning programs come in all shapes and sizes. Some programs are suited to in-person training, others to online, computer-based training (CBT) or hybrids of the two. Gamification of learning has entered the field and includes the use of cyber ranges (discussed later) and computer-based learning delivered in a game-like environment.
lead to training-related issues. At other times, there are managers of teams who stand out as excellent in how their teams complete the training in a reasonably quick period of time (3–4 months after it's assigned).
What some of these managers do is set aside time in the Agile process itself to run a "training sprint," in which the 2–4 weeks allotted for a sprint are used for everyone to complete their training. Other managers set aside a work day and take their staff off-site for a training day, bring in lunch, and make it easier for team members to concentrate on training rather than the issue of the moment. You can even turn the effort into a kind of friendly competition, using learning modules based on gamification of training.
Conduct security training for staff that highlights application security in the context of each role's job function. Generally, this can be accomplished via instructor-led training in one to two days or via computer-based training with modules taking about the same amount of time per person. For managers and requirements specifiers, course content should feature security requirements planning, vulnerability and incident management, threat modeling, and misuse/abuse case design. Tester and auditor training should focus on teaching staff to understand and more effectively analyze software for security-relevant issues. As such, it should feature techniques for code review, architecture and design analysis, runtime analysis, and effective security test planning. Expand technical training targeting developers and architects to include other relevant topics, such as security design patterns, tool-specific training, threat modeling, and software assessment techniques. To roll out such training, it's recommended to mandate annual security awareness training and periodic training on specialized topics. Courses should be available (either instructor-led or computer-based) as often as required, based on head count per role.
At Level III maturity of Education and Guidance in OpenSAMM, the notion of certification for development roles appears. These types of certifications may be available internally, through a custom-developed certification process, or in the marketplace, through such programs as (ISC)2®'s CSSLP®6 and/or SANS' GIAC7 programs.
3.12 Summary

AppSec security education, training, and awareness (SETA) programs are an all-encompassing and difficult problem to address and solve; they require dedication, effort, patience, and time to build an effective program. Awareness and education are vital for success and require a many-hats approach that includes psychology, creativity, engaging materials, formal structures for learners to navigate, and a solid grounding in how people learn and apply new skills in their jobs. As you apply these concepts and plan activities, events, and media for your program's ongoing communications, you will be well on the way to building the best program possible for yourself, your development teams, and your organization.
Chapter Quick Check

a) Provide context
b) Inform everyone about the program and roadmap
c) Provide common understanding and terminology
d) All of the above
2. The strategy for communication meant to overcome apathy toward hazards that are significant is called:
a) Outrage management
b) Crisis communication
c) Precaution advocacy
d) Crisis management
Exercises
1. Since education and awareness are the cornerstones of a successful secure software development lifecycle (SDLC), where would you begin educating those who are charged with designing and developing custom software for your organization? Who should be trained, and what kinds of training should they receive?
2. Research some of the training providers for software security (a list can be found in Appendix B). Document the merits and disadvantages of using a commercial provider of appsec training for your role-based training program over developing and delivering your own internally developed content. Can a balance between the two approaches be achieved?
3. What are some ways, direct or indirect, to measure the success of your role-based training program? What changes do you expect to find after development teams complete their foundational and role-based courses? What metrics related to the training can be identified and tracked over time to determine effectiveness?
References

1. Snow, E. (n.d.). Motivating Attention: Why People Learn about Risk . . . or Anything Else (Peter Sandman article). SnowTao Editing Services. Retrieved from http://www.psandman.com/col/attention.htm
2. Cognitive Dissonance. (2007, February 5). Retrieved from https://www.simplypsychology.org/cognitive-dissonance.html
3. BITS Software Security Framework. (2012). Retrieved from http://www.bits.org/publications/security/BITSSoftwareAssurance0112.pdf
4. Security Innovation. (n.d.). Rolling Out an Effective Application Security Training Program. Retrieved from https://web.securityinnovation.com/rolling-out-an-effective-application-security-training-program/thank-you?submissionGuid=8c214b9b-e3fe-4bdb-8c86-c542d4cf1529
Chapter 4
Product Backlog Development—Building Security In

CHAPTER OVERVIEW
Security features don’t magically appear in custom-developed applications—they must be
specified first. Starting with vague requirements that state, “The system must be developed
securely,” won’t bring about the desired effect. While functional requirements state what the
software must do, nonfunctional requirements (NFRs) constrain how the software does what
it’s supposed to do and help to prevent the software from doing what it is not supposed to do.
Security requirements and other NFRs (e.g., maintainability, scalability, portability, etc.) must
be specified by the designers so there’s half a chance they’ll be considered during development.
CHAPTER TAKEAWAYS
• Describe the distinctions between functional and nonfunctional requirements.
• Examine the families of NFRs that need expressing in project Definition of Done (DoD)
and feature user stories as acceptance criteria.
• Determine the attributes of well-written and well-understood NFRs for acceptance crite
ria and verification.
Chapter 4 shifts the focus to the beginning steps of product development, in which features are selected, written up as user stories, and added to the product backlog. We'll first examine the classes and families of constraints on the product that need to be specified for purposes of resilience. We'll then look at ways to apply these constraints as acceptance criteria and DoD attainment.
With a clear understanding of the nonfunctional requirements that constrain how a user story feature will be designed, developed, and tested, everyone on the Scrum team is working from the same playbook. By specifying these constraints up front, you've added key ingredients to the product development lifecycle that not only Build Security In but also enable various other desirable aspects, such as scalability, portability, reliability, and so on. Because one of the Agile goals for user stories is a change from specifying what needs to be present to talking about what needs to be present, you can neatly include elements of performance, reliability, uptime, security, and so forth in those conversations.
We’ll examine 15 categories of nonfunctional requirements to help you to decide which
characteristics are essential or desirable as you discuss user stories. From there we’ll look at
some concrete examples on how to use NFRs as acceptance criteria and DoD.
• Operate it
• Maintain it
• Oversee the governance of the software development lifecycle (SDLC)
• Serve as security professionals
• Represent legal and regulatory compliance groups who have a stake in assuring that the
software is in compliance with local, state, and federal laws.
Although functional requirements state what the system must do, NFRs constrain how the
system must accomplish the what.
In commercial software, you don't see these features or aspects of software advertised or even discussed on the package or in marketing literature for the software. Developers won't state that their program is more secure than their competitors' products, nor do they tell you much about the environment under which the software was developed. As purchasers of software, we don't tend to ask about the characteristics related to uptime, reliability, accuracy, or speed. We simply assume those characteristics are present. But providing these features is not free, cheap, or automatic. Someone has to build them in from the moment a user story is written!
Figure 4.1 illustrates what happens when requirements are ill-understood, poorly documented, or just assumed by development and support teams. Although this comic has been around for four decades, it's as relevant today as when it first came out.

Figure 4.1 Software Development Pitfalls
The 15 categories of NFRs we'll examine are:

• Availability
• Capacity
• Efficiency
• Extensibility
• Interoperability
• Manageability
• Maintainability
• Performance
• Portability
• Privacy
• Recoverability
• Reliability
• Scalability
• Security
• Serviceability
You may also hear NFRs called design constraints, quality requirements, or "ilities," after the last part of many of their names. You'll also see that there is some overlap among NFRs: some requirements address more than one aspect of quality and resilience, and it's not important where an aspect shows up, so long as it winds up as part of acceptance criteria or DoD (or both), is accounted for in all development activities, and is tested to assure its presence and correct operation.
Here we’ll examine these various areas and discuss some broad and some specific steps and
practices to assure their inclusion in the final product.
4.3.1 Availability

Availability shows up again later as a goal of security, but other availability requirements address the specific needs of the users who access the system. These include maintenance windows during which the software might be stopped for various reasons. To help users determine their availability requirements, experts recommend asking the questions discussed below.
The answers to these questions can help you identify times when the system or application must be available. Normally, responses coincide with users' regular working hours. For example, users may work with an application primarily from 8:00 a.m. to 5:00 p.m., Monday through Friday. However, some users want to be able to access the system for overtime or telecommuting work. Depending on the number of users who access the system during off-hours, you can choose to include those times in your normal operating hours. Alternatively, you can set up a procedure for users to request off-hours system availability at least three days in advance.
When external users or customers access a system, its operating hours are often extended well beyond normal business hours. This is especially true of online banking, Internet services, e-commerce systems, and other essential utilities such as electricity, water, and communications. Users of these systems usually demand availability 24 hours a day, 7 days a week, or as close to that as possible.
How often can you tolerate system outages during the times that you're using the system or application? Your goal is to understand the impact on users if the system becomes unavailable when it's scheduled to be available. For example, a user may be able to afford only two outages a month. This answer tells you whether you can ever schedule an outage during times when the system is committed to be available. You may want to do so for maintenance, upgrades, or other housekeeping purposes. For instance, a system that should be online 24 hours a day, 7 days a week may still require a scheduled downtime at midnight to perform full backups.
How long can an outage last, if one does occur? This question helps identify how long the user is willing to wait for restoration of the system during an outage, or to what extent outages can be tolerated without severely affecting the business. For example, a user may say that any outage can last no longer than three hours. Sometimes a user can tolerate longer outages if they are scheduled.1
Depending on the answers to the questions above, you should be able to specify which category of availability your users require and then proceed with design steps accordingly. The higher the availability requirements, the more costly the implementation will be, because of the work needed to remove single points of failure and increase redundancy.
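As a quick worked example, the downtime a given availability target allows can be computed directly. The "nines" tiers shown here are common industry shorthand, used only for illustration:

```python
# Annual downtime permitted by common availability targets ("nines").
for target in (0.99, 0.999, 0.9999):
    downtime_hours = (1 - target) * 365 * 24
    print(f"{target * 100:g}% availability allows about "
          f"{downtime_hours:.2f} hours of downtime per year")
```

Running this shows why each additional "nine" is so much more expensive: 99% allows about 87.6 hours of downtime per year, 99.9% about 8.8 hours, and 99.99% under one hour.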
4.4 Capacity

When software designs call for the ability for support personnel to "set the knobs and dials" on a software configuration, instrumentation is the technique that's used to implement the requirement. In a well-instrumented program, variables affecting the runtime environment for the program are external to the program (not hard-coded) and saved in a file separate from the executing code. When changes are needed to add additional threads for processing, programmers need not become involved if system support personnel can simply edit a configuration file and restart the application. Capacity planning is made far simpler when runtime environments can be changed on the fly to accommodate changes in user traffic, changes in hardware, and other runtime-related considerations.
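A minimal sketch of this instrumentation idea in Python follows; the file name and the settings themselves are hypothetical. The runtime knobs live in an external file that support staff can edit, and the program simply reads them at startup:

```python
import json

# Hypothetical external configuration file, e.g. app-config.json containing:
#   {"worker_threads": 8, "queue_depth": 1000}
# Support personnel edit this file and restart the application; no code
# change or developer involvement is needed.
with open("app-config.json") as f:
    config = json.load(f)

WORKER_THREADS = config.get("worker_threads", 4)  # safe default if absent
QUEUE_DEPTH = config.get("queue_depth", 500)

print(f"Starting with {WORKER_THREADS} worker threads, "
      f"queue depth {QUEUE_DEPTH}")
```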
4.5 Efficiency

Efficiency refers to the degree to which a system uses scarce computational resources, such as CPU cycles, memory, disk space, buffers, and communication channels.2 Efficiency can be characterized along several dimensions. NFRs for efficiency should describe what the system should do when its limits are reached or its use of resources becomes abnormal or falls out of pattern. Some examples might be to alert an operator to a potential condition, limit further connections, throttle the application, or launch a new instance of the application.
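A minimal sketch of one such behavior, assuming a Linux host (the threshold and the alert mechanism are illustrative, not prescribed):

```python
import resource  # POSIX-only standard library module

MEMORY_LIMIT_MB = 512  # illustrative policy threshold

def check_memory() -> None:
    # Peak resident set size so far; ru_maxrss is reported in kilobytes
    # on Linux (bytes on macOS), so this sketch assumes Linux.
    peak_mb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
    if peak_mb > MEMORY_LIMIT_MB:
        # Stand-in for a real response: page an operator, shed load,
        # throttle new connections, or spawn another instance.
        print(f"ALERT: peak memory {peak_mb:.0f} MB exceeds "
              f"{MEMORY_LIMIT_MB} MB limit")

check_memory()
```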
4.6 Interoperability

Interoperability is the ability of a system to work with other systems or software from other developers without any special effort on the part of the user, the implementers, or the support personnel. Interoperability affects data exchanges at a number of levels: the ability to communicate seamlessly with an external system or trading partner, semantic understanding of the data that's communicated, and the ability to work within a changing environment of hardware and support software. Interoperability can only be implemented when everyone involved in the development process adheres to common standards. Standards are needed for communication channels (e.g., TCP/IP), encryption of the channel when needed (e.g., SSL/TLS), databases (e.g., SQL), data definitions (e.g., XML with standard Document Type Definitions, or JSON objects), interfaces between common software functions and microservices (e.g., application programming interfaces [APIs]), and so on. Interoperability requirements should dictate which standards must be applied to these elements and how designers and developers can get their hands on them to enable compliant application software.
Interoperability is also concerned with the use of internal standards and tools for development. When possible, new systems under development should take advantage of any existing standardized enterprise tools to implement specific features and functions—for example, single sign-on, cryptographic libraries, and common definitions of databases and data structures for internal uses.
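As a small illustration of standards-based data exchange (the record type and field names here are invented for the example), encoding a record as JSON keeps the payload self-describing and independent of either party's internal representation:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PaymentRecord:
    # Illustrative fields; a real exchange would follow an interchange
    # schema agreed upon with the trading partner.
    transaction_id: str
    amount_cents: int
    currency: str  # ISO 4217 currency code, e.g., "USD"

def to_wire(record: PaymentRecord) -> str:
    # A standard encoding (JSON, typically carried over an agreed channel
    # such as TLS) is what lets independently built systems interoperate.
    return json.dumps(asdict(record))

print(to_wire(PaymentRecord("txn-001", 1999, "USD")))
```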
4.7 Manageability

Manageability encompasses several other areas of NFRs but is focused on easing support personnel's ability to manage the application. Manageability allows support personnel to move the application around available hardware as needed or to run the software in a virtual machine, which means that developers should never tie the application to specific hardware or to unsupported external software. Manageability features require designers and developers to build software that is highly cohesive and loosely coupled. Coupling and cohesion are used as software quality metrics, as defined by Stevens, Myers, and Constantine in an IBM Systems Journal article.3
4.7.1 Cohesion

Cohesion is increased when the responsibilities (methods) of a software module have many common aspects and are focused on a single subject, and when these methods can be carried out across a variety of unrelated sets of data. Low cohesion can lead to the following problems:

• Increased difficulty in maintaining a system, because logical changes in the domain may affect multiple modules, and changes in one module may require changes in related modules
• Increased difficulty in reusing a module, because most applications won't need the extraneous sets of operations that the module provides
4.7.2 Coupling
Strong coupling happens when a dependent class contains a pointer directly to a concrete
class that offers the required behavior (method). Loose coupling occurs when the dependent
class contains a pointer only to an interface, which can then be implemented by one or many
concrete classes. Loose coupling provides extensibility and manageability to designs. A new
concrete class can easily be added later that implements the same interface without ever having
to modify and recompile the dependent class. Strong coupling prevents this.
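A minimal Java sketch makes the distinction visible. The names below (PaymentProcessor, CheckoutService) are illustrative only; the structure is what matters: the dependent class holds a reference to an interface, never to a concrete class.

    // Loose coupling: the dependent class sees only this interface.
    public interface PaymentProcessor {
        void charge(String accountId, long cents);
    }

    class CardProcessor implements PaymentProcessor {
        public void charge(String accountId, long cents) {
            // card-network logic would go here
        }
    }

    // A new concrete processor can be added later without modifying
    // or recompiling this dependent class.
    class CheckoutService {
        private final PaymentProcessor processor; // interface, not concrete class

        CheckoutService(PaymentProcessor processor) {
            this.processor = processor;
        }

        void completeOrder(String accountId, long totalCents) {
            processor.charge(accountId, totalCents);
        }
    }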
4.8 Maintainability
Software maintenance refers to the modification of a software application after delivery to
correct faults, improve performance or other attributes, or adapt the product to a modified
environment, including a DevSecOps environment.4 Software maintenance is an expensive
and time-consuming aspect of development. Software system maintenance costs are a substantial part of life-cycle costs and can cause other application development efforts to be stalled
or postponed while developers spend inordinate amounts of time maintaining their own or
other developers’ code. Maintenance is made more difficult if the original developers leave the
application behind with little or no documentation. Maintainability within the development
process requires that the following questions be answered in the affirmative:
1. Can I find the code related to the problem or the requested change?
2. Can I understand the code?
3. Is it easy to change the code?
4. Can I quickly verify the changes—preferably in isolation?
5. Can I make the change with a low risk of breaking existing features?
6. If I do break something, is it easy to detect and diagnose the problem?
4.9 Performance
Performance (sometimes called quality of service) requirements generally address three areas. The end users of the system determine these requirements, and the requirements must be clearly documented if there's to be any hope of meeting them.
4.10 Portability
Software is considered portable if the cost of porting it to a new platform is less than the cost of
rewriting it from scratch. The lower the cost of porting software, relative to its implementation
cost, the more portable it is. Porting is the process of adapting software so that an executable
program can be created for a computing environment that is different from the one for which
it was originally designed (e.g., different CPU, operating system, mobile device, or third-party
library). The term is also used in a general way to refer to the changing of software/hardware
to make them usable in different environments. Portability is most possible when there is a
generalized abstraction between the application logic and all system interfaces. When there's a requirement that the software under development be able to run on several different computing platforms—as is the case with Web browsers, email clients, etc.—portability is a key issue for reducing development cost. Sufficient time must be allowed to determine the optimal languages and development environments for meeting the requirement; otherwise, teams risk developing differing versions of the same software for different environments, which can increase the costs of development and maintenance exponentially.
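One common way to achieve that generalized abstraction is to define an interface for each system-facing capability and keep platform specifics behind it. The sketch below is illustrative, with hypothetical names; porting then means writing one new implementation rather than rewriting the application logic.

    import java.util.HashMap;
    import java.util.Map;

    // Application logic codes against this abstraction, never against a
    // platform-specific storage API.
    public interface Storage {
        void save(String key, byte[] data);
        byte[] load(String key);
    }

    // One implementation per platform; this in-memory version keeps the
    // sketch self-contained.
    class InMemoryStorage implements Storage {
        private final Map<String, byte[]> data = new HashMap<>();
        public void save(String key, byte[] bytes) { data.put(key, bytes); }
        public byte[] load(String key) { return data.get(key); }
    }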
4.11 Privacy
Privacy is related to security in that many privacy controls are implemented as security controls, but privacy also includes nonsecurity aspects of data collection and use. When designing
a Web-based application, it’s tempting to collect whatever information is available to help with
site and application statistics, but some of the practices used to collect this data could become
a privacy concern. Misuse or overcollection of data should be prevented, with specific requirements on what data to collect, how to store it, how long to retain it, and what's permitted for
use of the data, as well as letting data providers (users in most cases) determine if they want
that data collected in the first place.
The US Federal Trade Commission offers specific guidance on fair information practice
principles that are related to four areas, along with other principles for collecting information
from children6:
1. Notice/awareness
2. Choice/consent
3. Access/participation
4. Integrity/security
1. Notice/awareness—In general, a website should tell the user how it collects and handles user
information. The notice should be conspicuous, and the privacy policy should clearly state
what information the site collects, how it collects it (e.g., forms, cookies), and how it uses it
(e.g., Is information sold to market research firms? Available to meta-search engines?). Also, the
policy should state how the site provides the other “fair practices”: choice, access, and security.
2. Choice/consent—Websites must give consumers control over how their personally identifying information is used. This includes marketing directly to the consumer, activities such
as “purchase circles,” and selling information to external companies such as market research
firms. The primary problems found here involve collecting information for one purpose and
using it for another.
In 2018, the European Union (EU) General Data Protection Regulation (GDPR) took
effect as a legal framework that sets guidelines for the collection and processing of personal
information from individuals who live in the EU. Since the Regulation applies regardless of
where websites are based, compliance is required of all sites that attract European visitors,
even if they don’t specifically market goods or services to EU residents. The GDPR mandates
that EU visitors be given a number of data disclosures. The site must also take steps to facilitate
such EU consumer rights as timely notification in the event of personal data being breached.
Adopted in April 2016, the Regulation came into full effect in May 2018, after a two-year
transition period.7
Depending on the market for your product, you may need to pay much closer attention to
privacy than ever before.
4.12 Recoverability
Recoverability is related to reliability and availability but is extended to include requirements on how quickly the application must be restored in the event of a disaster or other unexpected outage.
Business impact analysis (BIA) can help to tease out these details, and when the process
is applied across the entire population of business units, applications, and systems, it helps a
company determine the overall priority for restoring services to implement the company's business continuity plan.
Table 4.1 outlines one possible set of application criticality levels that can be used for planning, along with some possible strategies for recovering applications at each level.
4.13 Reliability
Reliability requirements are an entire field of study all on their own, but reliability generally
refers to a system’s ability to continue operating in the face of hostile or accidental impacts to
related or dependent systems. Reliability is far more critical when lives are at stake (e.g., aircraft, life-support software, autonomous vehicles, medical devices) than it is for business software. However, users and analysts need to consider and document how they expect the
software to behave when conditions change. Reliability may be defined in several ways:
• The ability of a device or system to perform a required function under stated conditions
for a specified period of time
• The probability that a functional unit will perform its required function for a specified
interval under stated conditions
• The ability of something to “fail well” (i.e., fail without catastrophic consequences)
Even the best software development process results in some software faults that are nearly undetectable until the software is tested.
4.14 Scalability
Scalability is the ability of a system to grow in its capacity to meet the rising demand for the
services it offers and is related to capacity NFRs.9 System scalability criteria might include the
ability to accommodate an increasing number of:
• Users
• Transactions per second
• Database commands that can run and provide results simultaneously
The idea behind supporting scalable software is to force designers and developers to create
functions that don’t prevent the software from scaling. Practices that might prevent the soft
ware from scaling include hard coding of usage variables into the program that require manual
modification and recompilation for them to take effect. A better choice is to include these con
straints in an editable configuration file so that developers do not need to get involved every
time their program is moved to a new operating environment.
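A minimal sketch of that choice in Java follows; the property names and defaults are hypothetical, and the app.properties file is assumed to live alongside the deployment.

    import java.io.FileInputStream;
    import java.util.Properties;

    public class AppLimits {
        public static void main(String[] args) throws Exception {
            Properties config = new Properties();
            // Editable configuration file: limits can be retuned per
            // environment with no recompilation.
            try (FileInputStream in = new FileInputStream("app.properties")) {
                config.load(in);
            }
            int maxUsers = Integer.parseInt(config.getProperty("max.users", "500"));
            int maxTps = Integer.parseInt(config.getProperty("max.tps", "100"));
            System.out.println("maxUsers=" + maxUsers + ", maxTps=" + maxTps);
        }
    }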
4.15 Security
Security NFRs are needed to preserve the goals of confidentiality, integrity, and availability.
Confidentiality is concerned with keeping data secure from those who lack “need to know.”
This is sometimes referred to as the Principle of Least Privilege. Confidentiality is intended
primarily to assure that no unauthorized access is permitted and that accidental disclosure is
not possible. Common signs of confidentiality controls are user ID and password entry prior
to accessing data or resources.
Integrity is concerned with keeping data pure and trustworthy by protecting system data
from intentional or accidental changes. Integrity NFRs have three goals: preventing unauthorized users from modifying data, preventing authorized users from making improper modifications, and maintaining the internal and external consistency of the data.
Availability is concerned with keeping data and resources available for authorized use when they're needed. Two common elements of availability as a security control are usually addressed.
The following is a basic set of non-negotiable requirements that are needed for software
that’s expected to be secure and resilient:
• Ensure that users and client applications are identified and that their identities are properly verified.
• Ensure that all actions that access or modify data are logged and tracked.
• Ensure that internal users and client applications can only access data and services for
which they have been properly authorized.
• Detect attempted intrusions by unauthorized persons and client applications.
• Ensure that unauthorized malicious programs (e.g., viruses) do not infect the application
or component.
• Ensure that communications and data are not intentionally corrupted.
• Ensure that parties to interactions with the application or component cannot later repudiate (deny participation in) those interactions.
• Ensure that confidential communications and data are kept private.
• Enable security personnel to audit the status and usage of the security mechanisms.
• Ensure that applications can survive an attack or fail securely.
• Ensure that system maintenance does not unintentionally disrupt the security mechanisms of the application, component, or system.10
To ensure that these objectives will be met, you’ll need to document specific and detailed
security requirements for the following:
• Identification requirements
• Authentication requirements
• Authorization requirements
• Immunity requirements
• Integrity requirements
• Intrusion-detection requirements
• Nonrepudiation requirements
• Privacy requirements
• Security auditing requirements
• Survivability requirements
• System maintenance security requirements11
Clearly, there are hundreds of individual requirements or constraints that may be needed to
support these categories, and we’ll look at some examples later in this chapter. You can also
find a collection of 93 prewritten security-related functional and nonfunctional requirements in Secure and Resilient Software: Requirements, Test Cases, and Testing Methods.12 You can easily recast these as user stories, acceptance criteria, or DoD constraints. As you collect and document them, they become readily reusable on subsequent development projects.
4.16 Serviceability/Supportability
Serviceability and supportability refer to the ability of application support personnel to install,
configure, and monitor computer software, identify exceptions or faults, debug or isolate faults
to perform root-cause analysis, and provide hardware or software maintenance to aid in solving
a problem and restoring the software to service. Incorporating serviceability NFRs results in
more efficient software maintenance processes and reduces operational costs while maintaining business continuity.
Requirements that facilitate serviceability and supportability, like all NFRs, should exhibit the characteristics of good requirements, described next.
4.17 Characteristics of Good Requirements
Well-formed requirements are:
• Specific—Is it without ambiguity, using consistent terminology, simple, and at the appropriate level of detail?
• Measurable—Can you verify that this requirement has been met? What tests must be
performed, or what criteria must be met, to verify that the requirement is met?
• Attainable—Is it technically feasible? What is your professional judgment of the technical "doability" of the requirement?
• Realistic—Do you have the right resources? Is the right staff available? Do they have the
right skills? Do you have enough time?
• Traceable—Is it linked from its conception through its specification to its subsequent
design, implementation, and test?14
Good requirements also exhibit the following characteristics:
• Cohesive—The requirement addresses one and only one thing.
• Complete—The requirement is fully stated in one place with no missing information.
• Consistent—The requirement does not contradict any other requirement and is fully consistent with all authoritative external documentation.
• Correct—The requirement meets all or part of a business or resilience need as authoritatively stated by stakeholders.
• Current—The requirement has not been made obsolete by the passage of time.
• Externally observable—The requirement specifies a characteristic of the product that is externally observable or experienced by the user.
• Feasible—The requirement can be implemented within the constraints of the project.
• Unambiguous—The requirement is stated concisely, without unnecessary technical jargon, acronyms, or other esoteric terms or concepts. The requirement statement expresses objective fact, not subjective opinion. It is subject to one and only one interpretation. Vague subjects, adjectives, prepositions, verbs, and subjective phrases are avoided. Negative statements and compound statements are not used.
• Mandatory—The requirement represents a stakeholder-defined characteristic or constraint.
• Verifiable—Implementation of the requirement can be determined through one of four possible methods: inspection, analysis, demonstration, or test. If testing is the method needed for verifiability, the documentation should contain a section on how a tester might go about testing for it and what results would be considered passing.
While some Agile and Scrum purists prefer user stories for every kind of function the product must perform, including performance needs, capacity needs, etc., it turns out that it's more
practical to use these needs as constraints expressed as acceptance criteria for each user story.
This approach honors the principle of Building Security In, and it uses the Agile methodology
itself for built-in, focused attention to quality and resilience for all the “ilities.”
User Story: As a financial analyst, I want to see the monthly transactions for my customers so
that I can advise them on their financial health.
Acceptance criteria:
• System displays all transactions meeting the search parameters within 10 seconds of
receiving the request.
• Transactions for customers tagged as confidential are only displayed to users with Level 2
security.
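Acceptance criteria written this way translate directly into automated checks. The sketch below, assuming JUnit 5 and an entirely hypothetical TransactionService, shows how a QA tester might verify both criteria; the stub implementation exists only to keep the example self-contained.

    import static org.junit.jupiter.api.Assertions.*;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    class MonthlyTransactionsAcceptanceTest {

        // Hypothetical service interface standing in for the real application.
        interface TransactionService {
            List<String> monthlyTransactions(String customerId, int securityLevel);
        }

        // Stub implementation so the sketch is self-contained and runnable.
        private final TransactionService service = (customerId, level) ->
            level >= 2 ? List.of("txn-1", "txn-2") : List.<String>of();

        @Test
        void displaysTransactionsWithinTenSeconds() {
            long start = System.nanoTime();
            List<String> results = service.monthlyTransactions("C-42", 2);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            assertTrue(elapsedMs <= 10_000, "results must appear within 10 seconds");
            assertFalse(results.isEmpty());
        }

        @Test
        void confidentialDataHiddenBelowLevelTwoSecurity() {
            assertTrue(service.monthlyTransactions("C-42", 1).isEmpty());
        }
    }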
Some NFRs are applicable across the entire product, so you may choose to express those
requirements in the team’s DoD for the product.
In this case, the DoD is a consistent set of acceptance criteria that applies to all backlog items.
It’s a comprehensive checklist indicating what “Done” looks like—both in terms of function
ality and NFR quality attributes, including accessibility, performance, security, or usability.15
In Secure and Resilient Software: Requirements, Test Cases, and Testing Methods,11 the 93 prewritten requirements are organized in the traditional way, by families of security functional and nonfunctional requirements. You can reuse these documented requirements (available as
MS Word documents) as a set of prewritten acceptance criteria and DoD constraints. This will
spare you from the time it takes to develop solid criteria for reuse across teams and products.
Furthermore, you can use the test cases tied to each requirement to help QA testers in their
efforts to verify acceptance criteria and DoD needs.
4.20 Summary
There’s no question that deriving nonfunctional requirements (NFRs) in software develop
ment projects is a daunting and enormous task that requires dozens of labor hours from a cross
section of people who have a stake in the computing environment. Although some people
may consider the exercise of gathering NFRs as wasted time, the fact remains that ignoring
NFRs or making a conscious decision to eliminate them from software designs only kicks the
problem down the road, where maintenance, support, and operational costs quickly negate any
benefits the software was planned to provide.
In this chapter, we discussed 15 categories of NFRs that can serve as food for thought during the requirements-gathering and analysis phases. We covered some of the best practices for
eliciting requirements and found some effective ways of elaborating them for use in the earliest
stages of the project. The influence of NFRs on the entire SDLC cannot be overemphasized.
Chapter Quick Check
2. __________ __________ mandate the qualities of a system, describing how well the program should do what it does.
a) Security requirements
b) Nonfunctional requirements
c) Performance requirements
d) Functional requirements
Exercises
1. Review the provisions of Section 404 of the Sarbanes-Oxley Act (SOX) as it relates to
requirements for information security when developing software for internal uses (see
https://www.sec.gov/info/smallbus/404guide/intro.shtml).
2. Review the provisions of Requirement 6: Develop and maintain secure systems and
applications of the Payment Card Industry Data Security Standard (PCI-DSS) as
it relates to requirements for information security when developing software for internal
uses (see https://www.pcisecuritystandards.org/documents/PCIDSS_QRGv3_1.pdf).
For the next two exercises, the templates found in Chapter 4 and the sample acceptance criteria
in Appendix B may be helpful.
3. Try your hand at documenting security requirements as acceptance criteria for an external user login user story or another security-relevant function or process.
4. Try your hand at developing a Definition of Done (DoD) user story for a security champion's responsibility related to secure designs and threat modeling.
References
1. Harris Kern Enterprise Computing Institute. (n.d.). Managing User Service Level Expectations.
Retrieved from http://www.harriskern.com/wp-content/uploads/2012/05/Managing-User-Service
-Level-Expectations.pdf
2. Chung, L. (2006). Non-functional Requirements. Retrieved from http://www.utdallas.edu/~chung
/RE/2.9NFR.pdf
3. Stevens, W., Myers, G., and Constantine, L. (1974). Structured Design. IBM Systems Journal, 13(2),
115–139.
4. Canfora, G. and Cimitile, A. (2000, November 29). Software Maintenance. Retrieved from https://
www.academia.edu/2740259/Software_maintenance
5. April, A., Hayes, J. H., Abran, A., and Dumke, R. (2005, May). Software Maintenance Maturity
Model (SMmm): The Software Maintenance Process Model. Journal of Software Maintenance and
Evolution: Research and Practice, 17(3), 197–223. Retrieved from https://onlinelibrary.wiley.com
/doi/abs/10.1002/smr.311
6. Federal Trade Commission. (1998, June). Privacy Online: A Report to Congress. Retrieved from
https://www.ftc.gov/sites/default/files/documents/reports/privacy-online-report-congress/priv
-23a.pdf
7. Investopedia.com. (2020, November 11). General Data Protection Regulation (GDPR). Retrieved
from https://www.investopedia.com/terms/g/general-data-protection-regulation-gdpr.asp
8. GIAC Certifications. (n.d.). GIAC Forensics, Management, Information, IT Security Certifica
tions. Retrieved from http://www.giac.org/resources/whitepaper/planning/122.php
9. Weinstock, C. B. and Goodenough, J. B. (2006, March). On System Scalability. Retrieved from
https://pdfs.semanticscholar.org/00d3/17340a32f2dace4686b7b988761abc1bfd43.pdf
10. Firesmith, D. (2003, January–February). Engineering Security Requirements. Journal of Object Technology, 2(1), 53–68. Retrieved from http://www.jot.fm/issues/issue_2003_01/column6
11. Merkow, M. S. and Raghavan, L. (2012). Secure and Resilient Software: Requirements, Test Cases, and Testing Methods, 1st Edition. Boca Raton (FL): CRC Press/Taylor & Francis Group.
12. Ibid.
13. Fricker, S. A. and Schneider, K. (2015). Requirements Engineering: Foundation for Software Quality.
Retrieved from https://books.google.com/books?id=KLlnBwAAQBAJ
14. Oracle. (n.d.). Understanding Requirements. Retrieved from https://docs.oracle.com/cd/E13214
_01/wli/docs92/bestpract/requirementsappendix.html
15. Saboe, D. (2019, April 10). Non-Functional Requirements in Agile. Retrieved from https://
masteringbusinessanalysis.com/lightning-cast-non-functional-requirements-in-agile/
Chapter 5
Secure Design Considerations
CHAPTER OVERVIEW
Application security and resilience principles and best practices are essential aids in designing new high-quality software because there are no universal recipes for sure-fire secure and resilient software development. Every environment is unique, with
unique practices and processes. Principles help designers and developers to “do the right
things,” even when they have incomplete or contradictory information. This chapter provides
details on some of these critical concepts related to Web application security and distills them
into 10 principles and practices (outlined below) that you can use to help design high-quality
systems and to educate others in their pursuit of secure and resilient application software.
CHAPTER TAKEAWAYS
• Understand the basic principles and rationale behind them for securing software and
applications
• Demonstrate how application security principles show up in designs and architectures for
secure Internet applications
• Demonstrate how application security principles are used for implementing a defense-in-depth strategy.
Up to this point we have examined the steps to enhance the Scrum process with security
activities that lead to secure and resilient application software. We have seen how new activities
must make their way into existing processes to account for deliberate actions that lead to high-
quality software. In Chapter 5 we’ll overlay basic principles and practices atop the acceptance
criteria and Definition of Done (DoD) from Chapter 4.
Consider the physical security perimeter that protects a typical corporate property, which might include:
• A locked gate
• A fence or high wall around the property
• A security guard at the entrance
• Security cameras
• Automated security monitoring and alarm systems
We see the security perimeter concept in our everyday lives—an airport is one good example. The US Transportation Security Administration (TSA) creates a security perimeter with
physical barriers and scans every person who crosses the security perimeter to enter into what
we call a “secure zone” or “sterile area.” The TSA scans not only people but also the objects they
carry (both hand and checked-in baggage) that wind up in the secure zone.
This concept of a trusted and secured zone and a security check for whatever enters that zone
is applicable to software and networks of today's businesses. However, it is becoming increasingly difficult to define the ever-expanding and ambiguous security perimeter of an organization, for several reasons. For years, we believed and behaved as if anything behind an enterprise's firewalls was secure and trusted. We were wrong then and we're wrong now . . .
Let’s look at an example of a typical three-tier Web application. We can define a security
perimeter around the application only. The application has control over the elements that are
inside the application perimeter:
• Web server(s)
• Application server(s)
• Database server(s)
The application has no control over the elements outside the application perimeter:
• Web browsers
• Other applications
• External databases
The Web application is responsible for ensuring that the proper controls are in place to
protect itself from malicious activity, and it is the last line of defense.
User input coming from the user's browser is never under the application's control.
Data emanating from other applications or external databases is also beyond the control of
the application. In many cases, the application will have to verify and appropriately encode
the data coming from its own trusted database before it presents it to an end user. The bottom
line is that the application must assume that nothing outside its security perimeter or trust
boundary can be trusted. We’ll discuss this in more detail relative to how static-code scanners
operate in Chapter 8.
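For the verify-and-encode step, a sketch using the OWASP Java Encoder library (one common choice, not the only one) looks like this; the method and field names are hypothetical.

    import org.owasp.encoder.Encode;

    public class ProfileView {
        // Even data from the application's own trusted database is encoded
        // before display, because it originated outside the trust boundary.
        static String renderDisplayName(String nameFromDatabase) {
            return "<span>" + Encode.forHtml(nameFromDatabase) + "</span>";
        }

        public static void main(String[] args) {
            System.out.println(renderDisplayName("<script>alert(1)</script>"));
            // Prints: <span>&lt;script&gt;alert(1)&lt;/script&gt;</span>
        }
    }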
Enterprises today cannot afford to deploy IT resources like candy shells or eggs—hard on
the outside but soft and mushy on the inside. There are several zones of trust and security within
computer networks, operating systems, and applications. The Principle of Defense in Depth plays
a significant role in securing them appropriately, as we’ll discover later in this chapter.
An application's attack surface comprises:
• All the Web pages the attacker can access, either directly or forcibly
• Every point at which the attacker can interact with the application (all input fields, hidden fields, cookies, or URL variables)
• Every function provided by the application
The exact attack surface depends on who the attacker is (internal versus external presence).
The attack surface is usually larger than a typical application developer or software architect imagines, and it can be exhaustively identified using attack surface mapping techniques; several such techniques are in common use for Web applications. Research on attack surfaces in general, and on ways to quantify and reduce them, is increasing. An attack surface metric was proposed by researchers at Carnegie Mellon University during research sponsored by the US Army Research Office.1
Sometimes attackers target the implementation (typically in cryptosystems) rather than an actual theoretical weakness in a system. These attacks, called side channel attacks, are a critical concern for designers and developers who are building and deploying secure hardware and software systems.
A simple example of a side channel attack is the timing attack. A “smart card” that is used
for cryptographic purposes has embedded integrated circuits that can store and process data.
The cryptographic keys are stored securely in the card and never physically leave the card.
Some of them even have a physical booby trap that will zero the memory if the smart card
circuitry is physically tampered with to access the keys or force it into an insecure state. On
the surface, this seems to be a highly resistant and secure system for storing keys and performing cryptographic operations. However, by watching (monitoring) the data movement into and out of the smart card, some attackers have been able to reconstruct the key that is securely stored in the smart card.
Here is a simple, if odd, real-world example to help you understand this type of timing attack. If a person is asked to pick and retrieve different items, one at a time, at a supermarket,
it’s possible to measure the time it takes for each item to be brought back to determine the
relative positions of the different areas of the store and to guess the location of other related
items in the store. Through iterative monitoring and analysis, a side channel attack can force
information to “leak,” increasing the likelihood of success with subsequent attacks.
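One standard defense against timing side channels when software compares secrets is a constant-time comparison, so that the time taken does not depend on how many leading bytes match. A minimal Java sketch follows, using the JDK's constant-time MessageDigest.isEqual comparison; tokensMatch is a hypothetical method name.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class TokenCheck {
        // A naive equals() returns as soon as bytes differ, leaking timing
        // information; MessageDigest.isEqual compares in constant time.
        static boolean tokensMatch(String presented, String expected) {
            byte[] a = presented.getBytes(StandardCharsets.UTF_8);
            byte[] b = expected.getBytes(StandardCharsets.UTF_8);
            return MessageDigest.isEqual(a, b);
        }
    }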
With an understanding of the security perimeter and attack surface, we can begin to look at security and resilience principles and practices that can help minimize problems amid the frenzy of work that goes on to extend and enrich an organization's Internet presence.
The Principle of Defense in Depth emphasizes that security is increased markedly when it is
implemented as a series of overlapping layers of controls and countermeasures that provide
three elements needed to secure assets: prevention, detection, and response.
Figure 5.1 Defense in Depth Illustrated
The positive security model, often called "whitelisting," defines what is allowable and rejects everything that fails to meet the criteria. This positive model should be contrasted with a "negative" (or "blacklist") security model, which defines what is disallowed while implicitly allowing everything else.
One of the more common mistakes in application software development is the urge to "enumerate badness," or begin using a blacklist. Antivirus (AV) programs work this way: signatures of known bad code (malware) are collected and maintained by AV program developers and redistributed whenever there's an update (which is rather often). This can cause massive disruption of operations and personnel while signature files are updated and rescans of the system are run to detect anything that matches a new signature. We all know that badness is infinite and resists enumeration!
Whitelisting, on the other hand, focuses on "enumerating goodness," which is a far easier and more achievable task. Programmers can employ a finite list of the values a variable may contain and reject anything that fails to appear on the list. For example, a common vulnerability in Web applications is a failure to check for executable code or HTML tags when input is entered into a form field. If only alphabetic and numeric characters are expected in a field on the form, the programmer can write code that cycles through the input character by character to determine whether only letters and numbers are present. If there's any input other than numbers and letters, the program should reject the input and force a reentry of the data, as in the sketch below.
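A minimal Java sketch of such a whitelist check follows; a regular expression replaces the character-by-character loop, but the effect is the same: enumerate goodness and reject everything else.

    import java.util.regex.Pattern;

    public class InputValidator {
        // Enumerate goodness: only letters and digits are acceptable.
        private static final Pattern ALPHANUMERIC = Pattern.compile("^[A-Za-z0-9]+$");

        static boolean isValid(String field) {
            return field != null && ALPHANUMERIC.matcher(field).matches();
        }

        public static void main(String[] args) {
            System.out.println(isValid("Order42"));          // true
            System.out.println(isValid("<script>alert(1)")); // false: reject and force reentry
        }
    }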
The positive security model can be applied to a number of different application security areas. The benefit of using a positive model is that new attacks that have not been anticipated by the developer—including zero-day attacks—can be prevented.
Handling errors securely is a key aspect of secure and resilient applications. Two major types of errors require special attention: exceptions raised within a security control itself, and security-relevant exceptions in other code. It is important that these exceptions do not enable behavior that a software countermeasure would normally not allow. As a developer, you should consider that there are generally three possible outcomes from a security mechanism: allow the operation, disallow the operation, or raise an exception.
In general, you should design your security mechanism so that a failure will follow the same execution path as disallowing the operation. For example, security methods such as "isAuthorized" or "isAuthenticated" should all return false if there is an exception during processing, as the sketch below illustrates. If security controls can throw exceptions, they must be very clear about exactly what that condition means.
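A minimal Java sketch of this fail-closed pattern follows; the policy lookup and logging methods are hypothetical stand-ins.

    public class AccessControl {
        // On any exception, follow the same execution path as disallowing
        // the operation: fail closed, never open.
        public boolean isAuthorized(String userId, String resource) {
            try {
                return lookupPermission(userId, resource); // hypothetical lookup
            } catch (Exception e) {
                logSecurityEvent("authorization check failed for " + userId, e);
                return false;
            }
        }

        private boolean lookupPermission(String userId, String resource) throws Exception {
            // Stand-in for a real policy-store query.
            throw new Exception("policy store unreachable");
        }

        private void logSecurityEvent(String message, Exception e) {
            System.err.println("SECURITY: " + message + " (" + e.getMessage() + ")");
        }
    }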
The other type of security-relevant exception is in code that is not part of a security control.
These exceptions are security relevant if they affect whether the application properly invokes
the control. An exception might cause a security method not to be invoked when it should, or
it might affect the initialization of variables used in the security control.
The Principle of Least Privilege recommends that user accounts have the least amount of privilege required to perform their basic business processes. This encompasses user rights and resource permissions such as:
• CPU limits
• Memory
• Network permissions
• File system permissions
Security by obscurity, as its name implies, describes an attempt to maintain the security of a
system or application based on the difficulty in finding or understanding the security mechanisms within it. Security by obscurity relies on the secrecy of the implementation of a system
or controls to keep it secure. It is considered a weak security control, and it nearly always fails
when it is the only control.
A system that relies on security through obscurity may have theoretical or actual security
vulnerabilities, but its owners or designers believe that the flaws are not known and that attackers are unlikely to find them. The technique stands in contrast to security by design.
An example of security by obscurity is a cryptographic system in which the developers wish to keep the algorithm that implements the cryptographic functions a secret, rather than keeping the keys a secret and publishing the algorithm so that security researchers can determine if it is bulletproof enough for common security uses. This is in direct violation of Kerckhoffs' principle from 1883, which states that, "In a well-designed cryptographic system, only the key needs to be secret; there should be no secrecy in the algorithm."5 Any system that tries to keep its algorithms secret for security reasons is quickly dismissed by the community and is usually referred to as "snake oil" or worse.6
Keeping security simple means avoiding overly complex approaches to coding when relatively straightforward, simple code would be easy for someone to read and understand. Developers should avoid double negatives and complex architectures when a simpler approach would be faster. Complexity is the enemy of security!
Keeping security simple is related to a number of other resilience principles, and using it as
a principle or guideline will help you to meet the spirit of several of the other principles.
One way to keep security simple is to break security functions and features down into these
discrete objectives:
1. Keep services running and information away from attackers—related to deny access by
default.
2. Allow the right users access to the right information—related to least privilege.
3. Defend every layer as if it were the last layer of defense—related to defense in depth.
Sometimes log entries can reveal a problem with software that you can't detect at runtime, but only if you log enough information to make that review possible and useful. In particular, any use of security mechanisms should be logged, with enough information to help track down an offender, as in the sketch below. In addition, the logging functionality in the application should provide a method of managing the logged information to prevent tampering or loss.
If a security analyst is unable to parse through the event logs to determine which events are actionable, then logged events provide little to no value. Logging provides a forensic function for your application or site.
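A minimal sketch of a security-event log entry, using the JDK's built-in logging and hypothetical field names, might look like this; in practice the log stream should also be shipped to an append-only store to prevent tampering or loss.

    import java.util.logging.Logger;

    public class SecurityAudit {
        private static final Logger AUDIT = Logger.getLogger("security.audit");

        // Log every use of a security mechanism with enough context
        // (who, from where, why) to make the event actionable.
        static void logAuthFailure(String userId, String sourceIp, String reason) {
            AUDIT.warning(String.format(
                "event=auth_failure user=%s source=%s reason=%s",
                userId, sourceIp, reason));
        }
    }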
Respond to Intrusions
Detecting intrusions is important because otherwise you give the attacker unlimited time to
perfect an attack. If you detect intrusions perfectly, then an attacker will get only one attempt
before being detected and prevented from launching more attacks.
Should an application receive a request that a legitimate user could not have generated, it is
an attack, and your program should respond appropriately.
Never rely on other technologies to detect intrusions. Your code is the only component of
the system that has enough information to truly detect attacks. Nothing else will know what
parameters are valid, what actions the user is allowed to select, etc. These must be built into
the application from the start.
You’ll never know exactly what hardware or operating environment your applications will run
on. Relying on a security process or function that may or may not be present is a sure way to
have security problems. Make sure that your application’s security requirements are explicitly
Secure Design Considerations 61
provided through application code or through explicit invocation of reusable security functions
provided to application developers to use for the enterprise. We’ll cover this more in Chapter 7:
Defensive Programming.
Services can refer to any external system. Many organizations use the processing capabilities of
third-party partners who likely have different security policies and postures, and it’s unlikely
that you can influence or control any external third parties, whether they are home users or
major suppliers or partners. Therefore, implied trust of externally run systems is not warranted.
All external systems should be treated in a similar fashion.
For example, suppose a loyalty program provider supplies data that is used by Internet banking: the number of reward points and a small list of potential redemption items. Within your program that obtains this data, you should check the results to ensure that they are safe to display to end users (containing no malicious code or actions) and that the reward points are a positive number that is not improbably large (data reasonableness), as the sketch below shows.
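A sketch of that reasonableness check in Java, with a hypothetical plausibility cap, follows.

    public class LoyaltyFeed {
        // Hypothetical cap; choose a bound that fits your business reality.
        private static final int MAX_PLAUSIBLE_POINTS = 10_000_000;

        // Validate data from the external partner before using or displaying it.
        static int sanitizePoints(long reportedPoints) {
            if (reportedPoints < 0 || reportedPoints > MAX_PLAUSIBLE_POINTS) {
                throw new IllegalArgumentException(
                    "implausible reward balance: " + reportedPoints);
            }
            return (int) reportedPoints;
        }
    }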
Every application should be delivered secure by default out of the box! Leave it up to users to decide whether to reduce their security, if your application allows it. Secure by default means that the default configuration settings are the most secure settings possible—not necessarily the most user friendly. For example, password aging and complexity should be enabled by default. Users may be allowed to turn these two features off to simplify their use of the application, accepting increased risk based on their own risk analysis and policies, but the application doesn't force them into an insecure state by default.
5.6 Summary
In Chapter 5 we explored the critical concepts of security perimeter and attack surface, which
led to a list of design and development best practices for secure and resilient application software.
With these 10 best practices in mind, you can approach any system design and development
[Table: Mapping of the 15 NFR categories (Availability, Capacity, Efficiency, Extensibility, Interoperability, Manageability, Maintainability, Performance, Portability, Privacy, Recoverability, Reliability, Scalability, Security, and Serviceability) to the 10 secure design practices: Use Positive Security Model, Apply Defense in Depth, Fail Securely, Least Privilege, Avoid Security by Obscurity, Keep Security Simple, Detect Intrusions, Don't Trust Infrastructure, Don't Trust Services, and Establish Secure Defaults.]
problem and understand that security and application resilience—like many other aspects of
software engineering—lends itself to a principle-based approach, in which core principles can
be applied regardless of implementation technology or application scenario. These principles
will serve you well throughout the software development lifecycle (SDLC).
Chapter Quick Check
3. The term that describes all possible entry points that an attacker can use to attack an application or system is:
a) Security perimeter
b) Attack surface
c) Attack perimeter
d) Gateway
Exercises
1. Least privilege is a pervasive principle in Information Security and is vital for software
security. How does least privilege work when applied to applications (not people)? What
configurations are needed to assure that the application will implement least privilege
from end to end?
2. Positive security models attempt to help people gain confidence in a product or tool by
providing successive proof that the product was developed using security principles at
the very beginning of development and throughout the lifecycle. The negative security
model is based on testing the security of the product post-development until the testing
process is exhausted. Which model is most likely to produce secure products? What
would you do to convince an advocate of the negative testing model that the positive
security model produces more secure products?
3. Complexity is the enemy of security. Can you think of situations in the physical world
that prove this notion? Have you ever encountered a security process that seems so overly
complex it could never produce a secure outcome?
References
1. Manadhata, P. K., Tan, K. M., Maxion, R. A., and Wing, J. M. (2007, August). An Approach to
Measuring a System’s Attack Surface. Retrieved from http://www.cs.cmu.edu/~wing/publications
/CMU-CS-07-146.pdf
2. Category:Principle. (n.d.). Retrieved from https://www.owasp.org/index.php/Category:Principle
3. Ibid.
4. Sjouwerman, S. (n.d.). Great “Defense-in-Depth” InfoGraphic. Retrieved from https://blog
.knowbe4.com/great-defense-in-depth-infographic
Chapter 6
Security in the Design Sprint
CHAPTER OVERVIEW
Topics you’ll find covered in Chapter 6 include details on how to design applications to
help meet (nonfunctional requirement) NFR constraints, how to perform application threat
modeling to expose design defects so that they are mitigated or countered with different design
choices and other security controls, and some rules of thumb that you can use to help you
decide where to focus your attention during the design phase. You'll also find a handy checklist at the end of the chapter to encapsulate all the elements of desirable design attributes for
application software.
CHAPTER TAKEAWAYS
• Demonstrate the value of threat modeling and application risk analysis to identify and
remove the most expensive types of software defects.
• Prepare to conduct a threat modeling exercise using the Seven Step Model for Threat
Modeling and Application Risk Analysis.
• Determine the compensating controls and countermeasures needed to remediate vulnerabilities identified through threat modeling and application risk assessments.
In Chapter 5, you found 10 best practices and principles for secure and resilient application
software development that are used throughout the software development lifecycle (SDLC). In
this chapter, you’ll see how these principles and best practices are applied in the design efforts
of the SDLC, in which the constrained user stories from the earlier work become concrete elements of an overall solution that meets both functional and nonfunctional requirements.
Even when security and resilience requirements are determined and documented, they
often run the risk of being dropped from the backlog or being lost in translation owing to the
constraints of time and budget and/or a lack of understanding of their importance by the business or client. Product owners and Scrum masters should plan and allow for time and budget
to ensure that these constraints are included in the design work.
One technique OWASP recommends is writing "evil user stories" that view the product from an attacker's point of view:
• Example #1. "As a hacker, I can send bad data in URLs, so I can access data and functions for which I'm not authorized."
• Example #2. “As a hacker, I can send bad data in the content of requests, so I can access
data and functions for which I’m not authorized.”
• Example #3. “As a hacker, I can send bad data in HTTP headers, so I can access data
and functions for which I’m not authorized.”
• Example #4. “As a hacker, I can read and even modify all data that is input and output
by your application.”
The authors of the OWASP (Open Web Application Security Project) guidance suggest
sneaking these and others like them into the product backlog to spur discussion, but you
can achieve the same outcome using positive security engineering ideas that build security in,
rather than late-stage testing to successively work out defects. That is not to say that software
shouldn’t be tested—it means that “testing into compliance” is a poor approach for Agile, and
a terrible approach if your goal is secure and resilient applications representing your brand and
your reputation.
The exercise of mapping how users might interact with an application provides a good
understanding of the potential for abuse. A thorough user interaction analysis will not only
identify normal use and security attributes but also uncover scenarios that the system may not
be designed to handle. This is especially useful when users of a system have malicious intent.
If potential abuses (or unsupported uses) are not properly considered, vulnerabilities will exist
and can be exploited. Use the knowledge you gain to add important information to the acceptance criteria and Definition of Done (DoD).
For security architects and engineers, these scenarios, along with a detailed model for the
application’s design, are excellent starting points for developing a threat model. In addition,
testers can substantially benefit from conducting more robust security tests if they understand
all the potential uses of a system.
Another possibly useful tool is a requirements traceability matrix to assist in tracking the
misuse cases to the features of the application. This can be performed formally or informally
but is helpful in discovering common desirable elements that can be documented as reusable
snippets of acceptance criteria across user stories.
Newly uncovered threats can then be sent into a phase of planning countermeasures and/or changing the design to remove defects.
An article from MSDN entitled “Lessons Learned from Five Years of Building More Secure
Software”3 underscores the fact that many software security vulnerabilities are actually design
defects. When people are exclusively focused on finding security issues in code, that person
runs the risk of missing entire classes of vulnerabilities. Security issues in design, such as busi
ness logic flaws, cannot be detected in code and need to be inspected by performing threat
models and application risk assessments during the design sprint.
Threat modeling is an iterative, structured technique used to identify the threats to the software under design. Threat modeling breaks the software into physical and logical constructs, generating software artifacts that include data flow diagrams, end-to-end deployment scenarios, documented entry and exit points, protocols, components, identities, and services.
Attack surface analysis, as you saw in Chapter 5, is a subset of threat modeling and can be
performed when generating the software context to zero in on the parts of the software that
are exposed to untrusted users. These areas are then analyzed for security issues. Once the
software context is generated, pertinent threats and vulnerabilities can be identified.
Threat modeling is performed during design work so that necessary security controls and
countermeasures can be defined for the development phase of the software. OWASP4 describes the benefits of threat modeling in its guidance.
SANS Institute Cyber Defense offers a handy seven-step recipe5 for conducting threat modeling and application risk analysis. Detailed directions for each of the steps are provided at the site, along with a spreadsheet template you can use for data collection and analysis. We'll look at some of the details behind how Steps 4, 5, and 6 are performed.
Step 4 of the recipe calls for brainstorming threats from your adversaries. One widely used brainstorming technique, popularized by Microsoft®, is called STRIDE. STRIDE stands for:
• Spoofing
• Tampering
• Repudiation
• Information disclosure
• Denial of service
• Elevation of privilege
The central idea behind STRIDE is that you can classify all your threats according to one
of the six STRIDE categories. Because each category has a specific set of potential mitigations,
once you have analyzed the threats, categorized them, and prioritized them, you will know
how to mitigate or eliminate the defect that could lead to an exploit.
• Spoofing. A spoofing attack occurs when an attacker pretends to be someone they are
not. An attacker using DNS hijacking and pretending to be www.microsoft.com would
be an example of a spoofing attack.
• Tampering. Tampering attacks occur when the attacker modifies data in transit. An
attacker that modified a TCP stream by predicting the sequence numbers would be
tampering with those data flows. Obviously, data stores can be tampered with—that
is what happens when the attacker writes specially crafted data into a file to exploit a
vulnerability.
• Repudiation. Repudiation occurs when someone performs an action and then claims
that they did not actually perform it. Primarily this shows up on activities such as credit
card transactions: A user purchases something and then claims that they did not. Another
way that this shows up is in email—if I receive an email from you, you can claim that you
never sent it.
• Information disclosure. Information disclosure threats are usually quite straightforward: Can the attacker view data that they are not entitled to view? When you are transferring data from one computer to another over a network, if the attacker can pull the
data off the wire, then your component is subject to an information disclosure threat.
Data stores are also subject to information disclosure threats: If an unauthorized person
can read the contents of the file, it is an information disclosure issue.
• Denial of service. Denial of service threats occur when an attacker can degrade or deny
service to users. If an attacker can crash your component, redirect packets into a black
hole, or consume all the CPU on the box, you have a denial of service situation.
• Elevation of privilege. An elevation of privilege threat occurs when an attacker can gain privileges that they would not normally have. One of the reasons that classic buffer overflows are so important is that they often allow an attacker to raise their privilege level—for instance, a buffer overflow in any Internet-facing component may allow an attacker to elevate their privilege level from anonymous to local user.
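Because each category maps to a family of mitigations, some teams keep the mapping at hand during brainstorming. The sketch below is one such aid; the mitigations listed are common examples, not an exhaustive or authoritative catalog.

    public enum Stride {
        SPOOFING("verify identities (strong authentication)"),
        TAMPERING("protect integrity (signatures, MACs, input validation)"),
        REPUDIATION("keep secure, attributable audit trails"),
        INFORMATION_DISCLOSURE("encrypt data in transit and at rest; authorize reads"),
        DENIAL_OF_SERVICE("rate-limit, add redundancy, fail gracefully"),
        ELEVATION_OF_PRIVILEGE("enforce least privilege and authorization checks");

        private final String typicalMitigation;

        Stride(String typicalMitigation) {
            this.typicalMitigation = typicalMitigation;
        }

        public String typicalMitigation() {
            return typicalMitigation;
        }
    }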
The STRIDE process is really a brainstorming activity conducted in person or via media-sharing channels (e.g., WebEx®, Zoom®, or MS Teams). Participants use the documentation
to identify assets in the architecture, their purpose, and their relative value. These become the
targets for would-be attackers.
Next, identify possible attackers and what they would want from the system.
As you collect these across projects, you can also build a reusable catalog of attack profiles
with enough detail to make them suitable for anyone who wants to conduct threat modeling.
Next, imagine that you are one of those adversaries and try to see your network through
their eyes. You know what you want; how would you try to get at it by misusing the application?
Just like the product backlog and user stories, a threat model is a living document—as
you change the design, you need to go back and update your threat model to see if any new
threats appear.
Step 5 of the recipe rates each identified threat using the DREAD model: Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability. Each factor is rated on a numeric scale, as the damage potential ratings below illustrate.
6.5.1 Damage Potential
0 = Nothing.
3 = Individual user data is compromised or affected, or its availability denied.
5 = A subset of data is compromised or affected, or its availability denied.
7 = All data is compromised or affected, or its availability denied.
7 = Availability of a specific component/service is denied.
6.5.2 Reproducibility
6.5.3 Exploitability
6.5.4 Affected Users
6.5.5 Discoverability
Table 6.27 gives an example of what this might look like for a tampering and privilege
escalation threat scenario.
You use this approach for each of the threats you identified in Step 4, then sort the outcomes in descending order of DREAD score and address the highest risks first.
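A small sketch of the arithmetic, assuming the common convention of averaging the five DREAD factor ratings, follows; the ratings shown are hypothetical.

    public class DreadScore {
        // Each factor is rated on a numeric scale (e.g., 0-10); the composite
        // DREAD score is commonly taken as the average of the five ratings.
        static double score(int damage, int reproducibility, int exploitability,
                            int affectedUsers, int discoverability) {
            return (damage + reproducibility + exploitability
                    + affectedUsers + discoverability) / 5.0;
        }

        public static void main(String[] args) {
            // Hypothetical tampering threat: high damage, easily reproduced.
            System.out.println(score(7, 8, 6, 7, 8)); // 7.2, near the top of the list
        }
    }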
A complete threat model should also consider:
• Threats and vulnerabilities that exist in the project's environment or that result from interaction with other systems.
• Code that was created by external development groups in either source or object form.
It’s vitally important to carefully evaluate any code from sources external to your team,
including purchased libraries and open source components and libraries that may be
present. Failing to do so might cause security vulnerabilities the team does not know
about or learns about too late.
• Threat models should include all legacy code if the project is a new release of an existing
program. Such code could have been written before much was known about software
security, and therefore it likely contains vulnerabilities.
• A detailed privacy analysis to document your project’s key privacy aspects. Important
issues to consider include:
○ What personal data is collected?
○ What is the compelling customer value proposition and business justification?
○ What notice and consent experiences are provided?
○ What controls are provided to both internal and external users of the application?
○ How is unauthorized access to personal information prevented?
Some rules of thumb can help you decide where to focus your attention:
• If the data has not crossed a trust boundary, you do not really need to care about it.
• If the threat requires that the attacker is already running code on the client at your privilege level, you do not really need to care about it.
• If your code runs with any elevated privileges, you need to be concerned.
• If your code invalidates assumptions made by other entities, you need to be concerned.
• If your code listens on the network, you need to be concerned.
• If your code retrieves information from the Internet, you need to be concerned.
• If your code deals with data that came from a file, you need to be concerned.
• If your code is marked as safe for scripting or safe for initialization, you need to be
concerned.
A thorough threat-modeling effort helps verify that the design provides:
• Assurance that users and client applications are identified and that their identities are properly verified
• Assurance that users and client applications can only access data and services for which
they have been properly authorized
• Ability to detect attempted intrusions by unauthorized people and client applications
• Assurance that unauthorized malicious programs (e.g., viruses) do not infect the application or component
• Assurance that communications and data are not intentionally corrupted
• Assurance that parties to interactions with the application or component cannot later
repudiate those interactions
• Assurance that confidential communications and data are kept private
• Ability for security personnel to audit the status and usage of the security mechanisms
• Assurance that applications can survive attack or operate in a degraded mode
• Assurance that system maintenance does not unintentionally disrupt the security mechanisms of the application or any of its components
Once your threat identification and prioritization steps are completed, you should have the sets of information needed for the next steps: identifying different design choices, countermeasures that should be added, and improvements in the design based on the reviews. Completed threat-model documentation should be carefully controlled and reviewed:
• Confirm that threat model data and associated documentation (functional/design specifications) are stored using the document control system used by the development team.
• Consider having threat models and referenced mitigations reviewed and approved by at least one developer, one tester, and one program or project manager. Ask architects, developers, testers, program managers, and others who understand the software to contribute to the threat models and to review them. Solicit broad input and reviews to ensure that the threat models are as comprehensive as possible.
And remember that threat modeling is never complete as long as the application continues to
gain features, is ported to other operating environments (e.g., the cloud), or is rewritten as Web
services and microservices to take advantage of modern computing practices.
Figure 6.1 OWASP Top 10 Proactive Controls (Source: OWASP, licensed under Creative
Commons BY-SA)
The OWASP Top 10 Proactive Controls document is delivered as a PDF that's useful to the Scrum team in helping to formulate alternative design choices to reduce risks or add compensating controls that mitigate a threat, should it actually be exploited. The document serves as guidance and advice that can speed up the process of researching controls to counter threats. Figure 6.2 shows the structure and content for each of the controls in the catalog.
Figure 6.2 Structure of OWASP Top 10 Proactive Controls Documentation (Source: OWASP,
licensed under Creative Commons BY-SA)
Figure 6.3 OWASP Secure Application Design Project Checklist (Source: OWASP, licensed under Creative Commons BY-SA)
• Authentication
• Authorization
• Configuration management
• Sensitive data
• Session management
• Cryptography
• Parameter manipulation
• Exception management
• Auditing and logging
Use this checklist (Figure 6.3) to help you conduct architecture and design reviews to
evaluate the security of your Web applications and to implement the design guidelines we
described in Chapter 5.
This checklist should evolve based on the experience you gain from performing reviews
and may need to be extended now and then to accommodate new approaches to application
development and operations, including microservices, Web services, IoT, and cloud migrations.
6.11 Summary
Chapter 6 offers a number of recommendations and tools to use for software design to help meet NFRs related to security and resilience. You were also offered some reasons and tips on how to conduct threat modeling and application risk analysis, along with its process steps and tools for exercises. Finally, you were provided a useful checklist to use when conducting architecture and design analysis activities.
In Chapter 7, we will use the outcomes from these now-secured design models, patterns, and choices as the basis for developing secure and resilient software consistently.
2. Risk is calculated for each identified threat to prioritize findings and needs for remediation using which technique?
a) RISKCOMPUTE
b) DANGER
c) DREAD
d) THREATORDER
3. Benefits derived from threat modeling include all but which of the following?
a) Identifies threats and compliance requirements and evaluates their risk
b) Defines the need for required controls
c) Balances risks, controls, and usability
d) Guarantees that secure designs are sent for development
Exercises
1. Sometimes a defined countermeasure or added control would be prohibitively expensive
to implement. In this case, what can security architects do to help reduce the risk to an
acceptable level if a missing control cannot be implemented?
2. The example in Figure 6.4 depicts a high-level, simplified data flow diagram (DFD) for
making an omelet. Use this diagram to complete the three threat modeling steps, a), b),
and c), below:
Figure 6.4 Simplified Data Flow Diagram (Source: Stack Overflow, licensed under Creative
Commons BY-SA)
a) Using the STRIDE Model, brainstorm on some of the possible threats in the process of moving eggs from a chicken to an omelet for a hungry person. What can get in the way of this flow, and what kind of damage can each threat create?
b) Using the DREAD Model, prioritize the possible threats you exposed above from the most to the least dangerous.
c) Using the OWASP Proactive Controls as a reference, determine which controls may
be appropriate to mitigate or counter the threats you identified and prioritized. Can
you identify any changes to the design to reduce the threats?
References
1. Paul, M. (2009, January). (ISC)2: Software Security: Being Secure in an Insecure World. White
paper. Global Security Magazine. Available at https://www.globalsecuritymag.fr/Mano-Paul-ISC
-2-Software-Security,20090122,7114.html
2. Agile Software Development: Don’t Forget EVIL User Stories. (n.d.). Retrieved from https://www
.owasp.org/index.php/Agile_Software_Development:_Don%27t_Forget_EVIL_User_Stories
3. Threat Modeling Again, Pulling the Threat Model Together. (2007, September 14). Retrieved
from http://blogs.msdn.com/larryosterman/archive/2007/09/14/threat-modeling-again-pulling
-the-threat-model-together.aspx
4. Application Threat Modeling. (n.d.). Retrieved from https://www.owasp.org/index.php/Applica
tion_Threat_Modeling
5. Cyber Defense. (2009, July 11). Retrieved from https://cyber-defense.sans.org/blog/2009/07/11
/practical-risk-analysis-spreadsheet/
6. Security/OSSA-Metrics. (n.d.). Retrieved from https://wiki.openstack.org/wiki/Security/OSSA
-Metrics#DREAD
7. Ibid.
8. OWASP Proactive Controls. (n.d.). Retrieved from https://www.owasp.org/index.php/OWASP
_Proactive_Controls
Chapter 7
Defensive Programming
CHAPTER OVERVIEW
You’ve seen how to select and apply concepts and principles of security and resilience from
the very start of product development. You saw how to map the best practices to nonfunctional requirements (NFRs) to prove that minding the security of an application brings along
for the ride most of the other characteristics you find desirable in high-quality software. In
Chapters 5 and 6, you saw how to apply these practices in the design work of the software
development lifecycle (SDLC) to set the stage for programming best practices and techniques
found in this chapter.
CHAPTER TAKEAWAYS
• Understand threat and vulnerability taxonomies that are created, maintained, and used
throughout the worldwide software development community.
• Standardize the use of the Common Weaknesses Enumeration (CWE™) as the lingua
franca for software security defects.
• Be able to recognize potential security vulnerabilities in code and how to prevent them.
Defensive programming is exactly what it sounds like. Before you're handed the keys to your first car (one would hope), someone insisted that you take a driver's education and responsibilities class, usually in a simulator, and when on the road, always with an instructor who had a steering wheel and brake to keep people from getting killed while you learned to drive in the real world. When you're ticketed for relatively minor offenses, you're usually offered the chance to make the ticket go away if you successfully complete a defensive driving course to remind you of traffic laws and reinforce your duty and responsibility to drive defensively at all times.
Programming is not that much different. Programs today are under regular attack if they
happen to have any Internet access. Programs cannot defend themselves unless they’re taught
how to from the very first line of code typed. Role-based education in defensive programming techniques, as you saw in Chapter 3, is the avenue to reliably gaining those skills, and its
importance cannot be overemphasized!
Programmers today have an awesome responsibility to get it right the first time, because
their work could adversely affect life and limb. Not that long ago, before software was used to
control millions of kinetic machines and devices on our streets, in our homes, and in our bodies, when an application crashed or its control was lost, a restart usually fixed the issue. Today
when software crashes, it could indeed lead to a real crash that kills real people.
Ethics also play an important role in today's software development world. In the case of autonomous vehicles, how should a program behave in the face of an imminent, unavoidable threat: protect the passenger in the vehicle, or minimize the damage outside the vehicle? Choices made in software dictate this type of behavior. Will consumers and users of self-driving cars ever know what those choices are before it's too late?
With that sense of awesome responsibility in mind, Chapter 7 offers guidance and paradigms for secure programming practices that improve software quality while enhancing its
resilience features. This chapter is primarily intended for development (coding) staff, but it is
useful for appsec architects to use as guidance to Scrum teams who need specific details on
recognizing and remediating security-related programming defects and to supplement their
formal training.
Intended for both the development community and the community of security practitioners,
Common Weakness Enumeration (CWE)4 is a formal list or dictionary of common software
weaknesses that can occur in a software’s architecture, design, code, or implementation and
can lead to exploitable security vulnerabilities.
CWE was created to serve as a common language to describe software security weaknesses;
serve as a standard measuring stick for software security tools targeting these weaknesses; and
provide a common baseline standard for weakness identification, mitigation, and prevention
efforts.
Software weaknesses are flaws, faults, bugs, vulnerabilities, and other errors in software
implementation, code, design, or architecture that, if left unaddressed, could result in systems
and networks being vulnerable to attack.
Some example software weaknesses include:
• Buffer overflows
• Format strings
• Structure and validity problems
• Common special element manipulations
• Channel and path errors
• Handler errors
• User interface errors
• Pathname traversal and equivalence errors
• Authentication errors
• Resource management errors
• Insufficient verification of data
• Code evaluation and injection
• Randomness and predictability
What’s important to know about CWE at this point is that it’s the “language” that you’ll
most likely encounter as you work with software security scanners and other tools. CWE is
not only an awareness tool, it’s also a set of recommended practices to prevent or remediate
the defect it refers to. Think of CWE as a superset of known weaknesses in programming and
design, collected as a dictionary. We’ll talk much more about CWEs in Chapters 8 and 9 in
the sections on testing.
The Open Web Application Security Project (OWASP) is an open community dedicated to
enabling organizations to develop, purchase, and maintain applications that can be trusted.
“Community” includes corporations, educational organizations, and individuals from around
the world with the focus on creating freely available articles, open methodologies, documenta
tion, tools, and technologies to improve Web software security.
The OWASP Top 105 is a list of the 10 most severe Web security issues as defined and
regularly updated by the OWASP community:
The OWASP Top 10 is a powerful awareness document for Web application security.
The current version (at the time of writing this chapter) is the 2017 OWASP Top 10. The
OWASP Top 10 has always been about risk, but the 2017 update is clearer than previous editions and provides additional information on how to assess these risks in your applications.
The 2017 OWASP Top 10 Most Critical Web Application Security Risks are:
• A1: 2017-Injection. Injection flaws, such as SQL, NoSQL, OS, and LDAP injection,
occur when untrusted data is sent to an interpreter as part of a command or query. The
attacker’s hostile data can trick the interpreter into executing unintended commands or
accessing data without proper authorization.
• A2: 2017-Broken Authentication. Application functions related to authentication
and session management are often implemented incorrectly, allowing attackers to compromise passwords, keys, or session tokens or to exploit other implementation flaws to
assume other users’ identities temporarily or permanently.
• A3: 2017-Sensitive Data Exposure. Many Web applications and Application Program
Interfaces (APIs) do not properly protect sensitive data, such as financial, healthcare, and
Personally Identifiable Information (PII). Attackers may steal or modify such weakly
protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data
may be compromised without extra protection, such as encryption at rest or in transit,
and requires special precautions when exchanged with the browser.
• A4: 2017-XML External Entities (XXE). Many older or poorly configured XML processors evaluate external entity references within XML documents. External entities can
be used to disclose internal files using the file URI handler, internal file shares, internal
port scanning, remote code execution, and denial-of-service (DoS) attacks.
• A5: 2017-Broken Access Control. Restrictions on what authenticated users are allowed
to do are often not properly enforced. Attackers can exploit these flaws to access
unauthorized functionality and/or data, such as access other users' accounts, view sensitive files, modify other users' data, change access rights, etc.
Notice that the Top 10 is just a small subset of CWEs mentioned earlier. The OWASP Top
10 addresses the most impactful application security risks currently facing organizations. It’s
based primarily on over 40 data submissions from firms that specialize in application security, and an industry survey that was completed by over 500 individuals. This data spans vulnerabilities gathered from hundreds of organizations and over 100,000 real-world applications and APIs. The Top 10 items are selected and prioritized according to this prevalence data, in combination with consensus estimates of exploitability, detectability, and impact.
Although static code scanners provide filters to screen out defects (CWEs) that are not represented in the Top 10, these scanners are NOT looking specifically for these vulnerabilities. What the scanners do is attribute the vulnerabilities they find back to the common taxonomies, such as the CWE and OWASP Top 10, but they will find most known vulnerabilities present in the code. Scanner output is made manageable and usable by filtering and sorting to find those vulnerabilities you care about the most and need to address.
With a language and taxonomy to talk about defects and vulnerabilities, conversations lead
to common understanding that, in turn, leads to positive discussions on addressing and treating defects. Programmers are well advised to gain a good understanding of CWE and OWASP
Top 10 as they begin to address appsec concerns.
In general, you should consider any data coming from outside the application security perimeter (trust boundary) as a potential threat. This includes anything coming directly from the user's
browser and anything coming from other applications or external databases or files because the
security of these elements is beyond the application’s control. Even if data coming from external
sources could be trusted to a certain degree, the “fail-safe” approach is to validate all input data.
In general, the term input handling is used to describe functions such as validation, cleansing,
sanitizing, filtering, and encoding and/or decoding of input data. Applications receive input
from various sources, including human users, software agents (browsers), files, and network/peripheral devices, to name a few. In the case of Web applications, input can be transferred in
various formats (name value pairs, JavaScript Object Notation [JSON], Simple Object Access
Protocol [SOAP], Web services, etc.) and obtained via URL query strings, POST data, HTTP
headers, cookies, etc. We can obtain non–Web application input via application variables,
environment variables, the registry, configuration files, etc. Regardless of the data format or
source/location of the input, all input from outside the application’s security perimeter or trust
boundary should be considered explicitly untrusted and potentially malicious. Applications
that process untrusted input may become vulnerable to attacks such as buffer overflows, SQL
injection, OS commanding, and denial of service, just to name a few.
One of the key aspects of input handling is validating that the input satisfies a certain
criterion. For proper validation, it is important to identify the form and type of data that is
acceptable and expected by the application. Defining an expected format and usage of each
instance of untrusted input is required to accurately define restrictions.
Validation can include checks for variable-type safety (e.g., integer, floating point, text)
and syntax correctness. String input should be checked for length (minimum and maximum
number of characters) and “character set” validation, by which numeric input types such as
integers and decimals can be validated against acceptable upper and lower bounds of values.
When combining input from multiple sources, validation should be performed on the concatenated result and not only against the individual data elements alone. This practice helps avoid
situations in which input validation may succeed when performed on individual data items but
fail when done on a concatenated string from all the sources.8
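As a minimal sketch of that last point (the class, method, and limits here are illustrative, not from the original text), a Java routine might validate both the individual pieces and the concatenated result:

import java.util.regex.Pattern;

public class PathInputValidator {

    // Allow only short alphanumeric segments; the combined value gets its own check.
    private static final Pattern SEGMENT = Pattern.compile("^[A-Za-z0-9]{1,20}$");
    private static final int MAX_COMBINED_LENGTH = 30;

    static boolean isValidReportPath(String folder, String fileName) {
        // Each piece may pass on its own...
        if (!SEGMENT.matcher(folder).matches() || !SEGMENT.matcher(fileName).matches()) {
            return false;
        }
        // ...but the concatenated result must also satisfy the overall constraint,
        // because combining inputs can violate limits the parts individually meet.
        String combined = folder + "/" + fileName;
        return combined.length() <= MAX_COMBINED_LENGTH;
    }
}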
Figure 7.1 Burp Suite Features (Source: PortSwigger Web Security. Reproduced with
permission.)
A common mistake, mentioned earlier, that developers make is to include validation routines
in the client side of an application using JavaScript functions as a sole means of performing
bounds checking. Validation routines that are beneficial on the client side cannot be relied
upon to provide a security control because all data accessible on the client side is modifiable by
a malicious user or attacker, as you saw with the browser proxy tool. This is true of any client-
side validation checks in JavaScript and VBScript or external browser plug-ins such as Flash,
Java, or ActiveX.
The HTML Version 5 specification has added a new attribute “pattern” to the INPUT tag
that enables developers to write regular expressions as part of the markup for performing validation checks.9 Although this feature makes it even more convenient for developers to perform
input validation on the client side without having to write any extra code, the risk from such
a feature becomes significant when developers use it as the only means of performing input
validation for their applications. Relying on client-side validation alone is bad practice.
Although client-side validation is great for user interface (UI) and functional validation, it
is not a substitute for server-side security validation. Performing validation on the server side
is the ONLY way to assure the integrity of your validation controls. In addition, server-side
validation routines will always be effective, regardless of the state of JavaScript execution on
the browser.
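As a hedged illustration of that pairing (the field name and regex are hypothetical), the same rule can be expressed once in markup for convenience and enforced again on the server for security:

import java.util.regex.Pattern;

public class ZipCodeValidator {

    // Client side, for user convenience only:
    //   <input type="text" name="zip" pattern="[0-9]{5}">
    // The server must repeat the check, because an attacker can bypass the
    // browser entirely (e.g., with an intercepting proxy such as Burp Suite).
    private static final Pattern US_ZIP = Pattern.compile("^[0-9]{5}$");

    static boolean isValidZip(String zip) {
        return zip != null && US_ZIP.matcher(zip).matches();
    }
}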
Sanitizing input can be performed by transforming input from its original form to an acceptable form via encoding or decoding. Common encoding methods used in Web applications
include HTML entity encoding and URL encoding schemes. HTML entity encoding serves
the need for encoding literal representations of certain meta-characters to their corresponding character entity references. Character references for HTML entities are predefined and have the format "&name;", where "name" is a case-sensitive alphanumeric string. A common example of HTML entity encoding is where "<" is encoded as "&lt;" and ">" is encoded as "&gt;".
URL encoding applies to parameters and their associated values that are transmitted as part
of HTTP query strings. Likewise, characters that are not permitted in URLs are represented
using their Unicode character set code point value, where each byte is encoded in hexadecimal as "%HH". For example, "<" is URL-encoded as "%3C" and ">" is URL-encoded as "%3E".
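A small Java sketch of both schemes follows (standard library calls only; production code would more likely rely on a vetted encoder library such as the OWASP Java Encoder):

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodingDemo {

    // Minimal HTML entity encoding for the meta-characters discussed above.
    static String htmlEncode(String s) {
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;");
    }

    public static void main(String[] args) {
        String payload = "<script>alert(1)</script>";
        // "<" becomes "&lt;" and ">" becomes "&gt;"
        System.out.println(htmlEncode(payload));
        // URL encoding: "<" becomes "%3C" and ">" becomes "%3E"
        System.out.println(URLEncoder.encode(payload, StandardCharsets.UTF_8));
    }
}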
There are multiple ways that input can be presented to an application. With Web applications and browsers supporting multiple character encoding types, it has become commonplace for attackers to try to exploit inherent weaknesses in encoding and decoding routines.
Applications requiring internationalization are a good candidate for input sanitization. One of the common forms of representing international characters is Unicode. Unicode transformations use the Universal Character Set (UCS), which consists of a large set of characters to cover symbols of almost all the languages in the world. From the most novice developer to the most seasoned security expert, rarely do programmers write routines that inspect every character within a Unicode string to confirm its validity. This gap enables attackers to spoof expected values by replacing them with visually or semantically similar characters from the UCS.
7.4.3 Canonicalization
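Canonicalization reduces input to its simplest, standard form before validating it, so that encoded or relative variants of the same value cannot slip past a check. As a hedged Java sketch (the base directory and method names are hypothetical), a file-retrieval routine might canonicalize a user-supplied path before testing it against an allowed base directory:

import java.io.File;
import java.io.IOException;

public class SafeFileAccess {

    private static final String BASE_DIR = "/var/app/reports";

    // Canonicalize first, validate second: "../../etc/passwd" and other
    // relative or obfuscated forms all reduce to one canonical path, which
    // is then checked against the allowed base directory.
    static boolean isWithinBase(String userSuppliedName) throws IOException {
        File requested = new File(BASE_DIR, userSuppliedName);
        String canonical = requested.getCanonicalPath();
        return canonical.startsWith(BASE_DIR + File.separator);
    }
}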
7.5.1 Buffer Overflow
A buffer overflow may occur when the length of the source variable input is not validated before being copied to a destination variable that's not sized to accommodate it. The weakness is exploited when the size of "input" (source) exceeds the size of the destination, causing an overflow of the destination variable's address space in memory. Sometimes a buffer overflow (or overrun) error can force an application to stop operating and yield information about the error that can help an attacker formulate more effective future attacks.
7.5.2 OS Commanding
A classic example involves a CGI script that reads a template file named in the URL:
http://example/cgi-bin/showInfo.pl?name=John&template=tmp1.txt
If the script passes the template parameter to a careless Perl open() call, an attacker can replace the filename with a pipe-terminated command and have it executed instead:
http://example/cgi-bin/showInfo.pl?name=John&template=/bin/ls|
The following two code snippets demonstrate how to validate a variable named gender against
two known values:
Java example:
// Whitelist check against the two known values
static boolean validateGender(String gender) {
    if (gender.equals("Female"))
        return true;
    else if (gender.equals("Male"))
        return true;
    else
        return false;
}
.NET example:
// Mirrors the Java version above
static bool validateGender(string gender) {
    return gender.Equals("Female") || gender.Equals("Male");
}
• Whitelist validation.
• Data is validated against a list of allowable characters.
• Requires the definition of all characters that are accepted as valid input.
• Typically implemented using regular expressions (regex) to match known good data
patterns.
The following code snippets demonstrate how to validate a variable against a regular expression representing the proper expected data format (10 alphanumeric characters):
Java example:
import java.util.regex.*;
static boolean validateUserFormat(String userName){
    boolean isValid = false; //Fail by default
    try{
        // Verify that the UserName is 10-character alphanumeric
        if (Pattern.matches("^[A-Za-z0-9]{10}$", userName))
            isValid = true;
    }catch(PatternSyntaxException e){
        System.out.println(e.getDescription());
    }
    return isValid;
}
.NET example:
using System.Text.RegularExpressions;
static bool validateUserFormat(string userName){
    // Verify that the UserName is 10-character alphanumeric
    return Regex.IsMatch(userName, @"^[A-Za-z0-9]{10}$");
}
• Blacklist validation (think in terms of signatures for previously identified malware and viruses).
• Data is validated against a list of characters that are deemed to be unacceptable.
• Requires the definition of all characters that are considered dangerous to the application.
• Useful for preventing specific characters from being accepted by the application.
• Highly susceptible to evasion using various forms of character encoding.
• Is the weakest method of validation against malicious data.
The following code snippets demonstrate how to validate a variable against a regular expression of known bad input strings:
Java example:
import java.util.regex.*;
static boolean checkMessage(String messageText) {
    boolean isValid = false; //Fail by default
    try{
        Pattern p = Pattern.compile("<|>", Pattern.CASE_INSENSITIVE |
            Pattern.MULTILINE);
        Matcher m = p.matcher(messageText);
        if (!m.find())
            isValid = true;
    }catch(Exception e){
        System.out.println(e.toString());
    }
    return isValid;
}
.NET example:
using System.Text.RegularExpressions;
static bool checkMessage(string messageText){
    bool isValid = false; //Fail by default
    // Verify input doesn't contain any <, >
    isValid = !Regex.IsMatch(messageText, @"[><]");
    return isValid;
}
Once you detect bad input using any of the above techniques, there are a couple of ways to handle it, again with varying levels of security, as illustrated in Figure 7.3 and in the short code sketch following the list.
• Escaping bad input: The application attempts to fix the bad input data by encoding the
malicious data in a “safe” format.
• Rejecting bad input: The application rejects (discards) the input data and displays an
error message to the user.
○ Rejecting bad input is always considered better than escaping.
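A small sketch of the two strategies in Java (method names are hypothetical) might look like this, with rejection preferred:

public class MessageHandler {

    // Escaping: attempt to defuse the data and continue processing it.
    static String escapeAngleBrackets(String input) {
        return input.replace("<", "&lt;").replace(">", "&gt;");
    }

    // Rejecting: discard the data outright and report an error,
    // which is generally the safer choice.
    static String acceptMessage(String input) {
        if (input.contains("<") || input.contains(">")) {
            throw new IllegalArgumentException("Invalid characters in message");
        }
        return input;
    }
}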
Detailing all the other specifics on how to remediate the vulnerabilities in the OWASP Top 10 is beyond the scope or intent of this book—there are plenty of secure programming books on the market specific to popular languages—but it's important to understand the fundamental problem and resolutions for unsanitized and unvalidated inputs that underlie several of the injection-related Top 10 vulnerabilities. Because languages and platforms are changing all the time, refer to language-specific guidance you can find for the programming platforms in use at your organization.
The OWASP Secure Coding Practices Quick Reference Guide11 is another useful companion; its opening sections include:
• Table of contents
• Introduction
• Software Security Principles Overview
7.8 Summary
Chapter 7 covered the importance of secure application development and programming best
practices. We examined some of the most pernicious programming issues—injection attacks—
and recommended a number of defensive programming techniques to protect applications
from those attacks.
In Chapter 8, we turn our attention to security testing activities based on static code analysis. And in Chapter 9, we'll look at dynamic testing that mimics how an attacker might try to attack your product.
2. What is the attack technique used to exploit websites by altering backend database queries
through inputting manipulated queries?
a) LDAP Injection
b) XML Injection
c) SQL Injection
d) OS Commanding
3. Which attack can execute scripts in the user’s browser and is capable of hijacking user
sessions, defacing websites, or redirecting the user to malicious sites?
a) SQL Injection
b) Cross-site scripting (XSS)
c) Malware uploading
d) Man in the middle
4. What is the type of flaw that occurs when untrusted user-entered data is sent to the
interpreter as part of a query or command?
a) Insecure Direct Object References
b) Injection
c) Cross Site Request Forgery
d) Insufficient Transport Layer Protection
Exercises
1. Pick a threat, vulnerability, or programming error from the ones you read about and
consider what can be done to counter the threat, remediate the vulnerability, or prevent
the error.
2. Unvalidated input is the root of some of the worst software vulnerabilities. Develop a list of best practices for validating input on each platform: Web, mobile, and Internet of Things (IoT). You may find this article useful for your research, "The
Importance of Multistage Validation to Successful IoT Solutions Development,” from
IoTforall.com.
References
1. An Illustrated Guide to the Kaminsky DNS Vulnerability. (2008, August 7). Retrieved from
http://unixwiz.net/techtips/iguide-kaminsky-dns-vuln.html
2. The DNS Vulnerability. (n.d.). Retrieved from http://www.schneier.com/blog/archives/2008/07
/the_dns_vulnera.html
3. djbdns: Domain Name System Tools. (n.d.). Retrieved from https://cr.yp.to/djbdns.html
4. Frequently Asked Questions (FAQ). (2019, April 29). Retrieved from http://cwe.mitre.org/about
/faq.html
5. Category: OWASP Top Ten Project. (n.d.). Retrieved from https://www.owasp.org/index.php/Top
_10
6. The Web Application Security Consortium/Improper Input Handling. (n.d.). Retrieved from http://projects.webappsec.org/Improper-Input-Handling
7. Burp tools. (n.d.). Retrieved from https://portswigger.net/burp/documentation/desktop/tools
8. CWE—CWE-20: Improper Input Validation (3.3). (2019, June 20). Retrieved from http://cwe
.mitre.org/data/definitions/20.html
9. HTML Standard. (n.d.). Retrieved from http://www.w3.org/TR/html5/forms.html#the-pattern
-attribute
10. Canonicalization, locale and Unicode. (n.d.). Retrieved from http://www.owasp.org/index.php
/Canoncalization,_locale_and_Unicode
11. OWASP Secure Coding Practices—Quick Reference Guide. (n.d.). Retrieved from https://www
.owasp.org/index.php/OWASP_Secure_Coding_Practices_-_Quick_Reference_Guide
Chapter 8
Testing Part 1: Static
Code Analysis
CHAPTER OVERVIEW
In Chapter 8, we’ll begin exploring how to test the resilience of custom application code and
find ways to further improve it. Chapter 8 begins to build the foundation for using testing tools and performing review techniques that apply throughout Agile Development sprints
while software is in the construction stage.
CHAPTER TAKEAWAYS
• Determine the various types of software testing during development that Build Security
In to all processes to maximize software security assurance.
• Understand the true costs of waiting to find and eradicate software flaws.
• Plan and conduct manual and automated source code reviews.
• Shift Left with code analysis tools.
Organizations first need to learn the actual costs of developing software. There are direct and indirect costs to finding and
fixing security defects. If a vulnerability is found and exploited in a production application, the
brand damage that results cannot be easily measured or repaired.
There are direct costs that we can certainly measure. One of the easiest to measure is the
average cost to code a fix:
Average cost to code a fix = (number of developer man-days * cost per man-day)/
number of defects fixed
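To make the formula concrete (with purely illustrative numbers): if a team spent 12 developer man-days at $1,000 per man-day to fix 30 defects, the average cost to code a fix would be (12 × 1,000) / 30 = $400 per defect.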
Apart from this cost, there are additional costs we need to consider:
• Unit testing
• Integration testing
• Quality assurance testing
• User acceptance testing
Developers drive and conduct unit tests on the code that they write and own. Unit testing is a
best practice from an overall code-quality perspective and has security advantages. Unit testing
helps prevent defects from finding their way into the larger testing phases. Because developers
understand their own code better than anyone else, simple unit testing ensures the effectiveness of the test.
Developers need to make sure that they also document what they test because it is very easy
to miss a test that is performed by hand. Some of the key issues a developer can find in unit
testing include:
• Boundary conditions
○ Integer over/underflows
○ Path length (URL, file)
○ Buffer overflows
• When writing code in the C language with custom memory management routines, developers should unit test all arithmetic pertaining to those routines as well.
Developers can also conduct direct security testing using fuzzing techniques. Fuzzing, in
simplest terms, is sending random data to the interfaces that the program uses to determine
what, when, and how it might break the software.
Fuzzing is usually done in several iterations (100,000+) and can be made smarter by doing
targeted variations in key parts of data structures (length fields, etc.). Fuzzing is an effective
test that most developers can perform themselves. It is one of the cheapest, fastest, and most
effective ways to identify security bugs, even in organizations that have mature SDLC security
and resilience programs.
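A toy Java illustration of the idea follows (the parser under test and the iteration count are placeholders; real fuzzers mutate structured inputs far more intelligently):

import java.util.Random;

public class TinyFuzzer {

    // Stand-in for the routine under test; any parser or input handler works.
    static void parseUnderTest(String input) {
        Integer.parseInt(input.trim());
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so failures are reproducible
        for (int i = 0; i < 100_000; i++) {
            byte[] junk = new byte[rng.nextInt(64)];
            rng.nextBytes(junk);
            String input = new String(junk);
            try {
                parseUnderTest(input);
            } catch (NumberFormatException expected) {
                // Anticipated rejection of malformed input -- not a finding.
            } catch (Exception e) {
                // Anything else is the interesting case worth recording.
                System.out.println("Iteration " + i + " broke the parser: " + e);
            }
        }
    }
}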
Manual source code reviews can begin when there is sufficient code from the development process to review. The scope of a source code review is usually limited to finding code-level problems that could potentially result in security vulnerabilities; it is not intended to reveal issues that live outside the code itself.
Source code reviews typically do not worry about the exploitability of vulnerabilities.
Findings from the review are treated just like any other defects found by other methods, and
they are handled in the same ways. Code reviews are also useful for non-security findings
that can affect the overall code quality. Code reviews typically result in the identification of
not only security problems but also dead code, redundant code, unnecessary complexity, or
any other violation of the best practices that we’ve covered throughout the book. Each of the
findings carries its own priority, which is typically defined in the organization’s “bug priority
matrix.” Bug reports often contain a specific remediation recommendation by the reviewer so
that the developer can fix it appropriately.
Manual code reviews are expensive because they involve many manual efforts and often
involve security specialists to assist in the review. However, manual reviews have proven their
value repeatedly when it comes to accuracy and quality. They also help identify logic vulnerabilities that typically cannot be identified by automated static code analyzers.
Source code reviews are often called “white box” analysis. This is because the reviewer has
complete internal knowledge of the design, threat models, and other system documentation for
the application. “Black box” analysis, covered in Chapter 9, on the other hand, is performed
from an outsider’s view of the application, with no access to specifications or knowledge of the
application’s inner workings. “Gray box” analysis is somewhere in between white box and black
box analysis.
The code review process begins with the Scrum team making sure that there is enough time
and budget allocated in the SDLC to perform these reviews. Tools that are helpful in performing these reviews should be made available to all developers and reviewers.
The code review process consists of four high-level steps, illustrated in Figure 8.1.
• The first step in the code review process is to understand what the application does (its
business purpose), its internal design, and the threat models prepared for the application.
This understanding greatly helps in identifying the critical components of the code and
assigning priorities to them. The reality is that there is not enough time to review every
single line of code in the entire application every time. Therefore, it is vital to understand
the most critical components and ensure that they are reviewed completely.
• The second step is to begin reviewing the identified critical components based on their
priority. This review can be done either by a different Scrum team who were not originally involved in the application's development or by a team of security experts. Another
approach is to use the same Scrum team who built the application to perform peer reviews
of each other’s code. Regardless of how code reviews are accomplished, it is vital that they
cover the most critical components and that both developers and security experts have
a chance to see them. All the identified defects should be documented using the enterprise's defect management tool and assigned the appropriate priority. The reviewers
must document these defects along with their recommended fix approaches to make sure
that they do not creep into final production code.
• The third step of a code review is to coordinate with the application code owners and
help them implement the fixes for the problems revealed in the review. These may involve
the integration of an existing, reusable security component available to developers (e.g.,
reusable libraries for Single Sign-On or cryptography functions), or it may require simple-to-complex code changes and subsequent reviews.
• The final step is to study the lessons learned during the review cycle and identify areas for
improvement. This makes sure the next code review cycle is more effective and efficient.
Critical components, such as authentication, authorization, session management, and cryptography routines, warrant this kind of deep-dive review and analysis.
Because manual analysis is time consuming and expensive, enterprises often implement automated source code analysis tools to complement—not replace—manual reviews.
Typical software development priorities are schedule, cost, features, and then quality—in
most cases, in that order. The pressure from a time-to-market perspective can negatively affect
software quality and resilience and sometimes cause the postponement of adding features to
the software.
As Philip Crosby said, "Quality is free,"2 and this is most true of the software development process. However, managers in organizations that do software development often believe otherwise: They appear to think that a focus on software quality increases costs and delays projects. Studies of software quality (not necessarily software security) have consistently proven this belief wrong. Organizations with a mature SDLC process usually face little extra overhead because attention to software quality and resilience, and the corresponding cost savings from process improvements, far exceed the cost of added developer activities.
Static application security testing (SAST) supports the secure development of programs
in an organization by finding and listing the potential security bugs in the code base; this is
one of the most powerful processes in implementing a Shift Left model. SAST tools offer a
wide variety of views/reports and trends on the security posture of the code base and can be
used as an effective mechanism to collect metrics that indicate the progress and maturity of
the software security activities. Source code analyzers operate relatively quickly compared to
the several thousands of man-hours needed to complete the analysis if it were done manually.
Automated tools also provide risk rankings for each vulnerability, which helps the organization
to prioritize its remediation strategies.
Most important, automated code analyzers help an organization uncover defects earlier in the SDLC, enabling the kinds of cost and reputation savings we discussed earlier in this chapter. SAST brings a number of other tangible and intangible benefits, but it also carries caveats:
• Automated tools sometimes report a high number of false positives. Sometimes it will
take an organization several months to fine-tune the tool to reduce these false positives,
but some level of noise will always remain in the findings.
Worse still, consider what typically happens when SAST lands in an unprepared organization:
• Applications are selected for scanning, and SAST scans are run.
• Reports of the scan results are prepared and shared with the team responsible for the
application.
• Developers don't understand what the reports are telling them and settle into bewilderment that leads to inaction or analysis paralysis.
• When developers do react, it’s often with incredulous disbelief that their program is
capable of being defective, and they assume that the security team must be wrong or
crazy or is picking on them.
• Appsec architects responsible for this new processing are left holding the ever-growing bag of software defects that cannot be addressed properly, and they wait while management resolves what have now become human factors–based incidents requiring attention from managers across multiple areas.
This situation plays out everywhere whenever a workforce is unprepared for new tools and is
being required to do something they’re unprepared to do. This results in ill regard by developers
for the security team, the management team, and the entire SDLC process. Always remember,
appsec is a people—not a technical—issue.
The good news is there’s a better way.
Treating SAST as a developer-oriented and developer-based tool usually transforms the
process from a bad implementation to a helpful enrichment of the SDLC. The best first step
to making that happen is, once again, education. Appsec practitioners should not make the
mistake of thinking that SAST tools were developed for security professionals. They were
mostly written by developers for developers to help resolve the scourge of bad software. SAST
tools offer plenty to the security team, but their primary purpose is to help developers become
better developers.
Teaching people what SAST is all about, how SAST tools work, and how the scanner data
is useful for improving software quality fortifies the skills that developers need to meet the
acceptance criteria for the user stories they’re working on.
According to Gartner,3 SAST is a set of technologies designed to analyze application code, byte code, and binaries for coding and design conditions that are indicative of security vulnerabilities. SAST solutions analyze an application from the "inside out" in a nonrunning state.
Without getting into the heavy computer science behind SAST tools, suffice it to say that a
SAST tool suite consists of a scanning engine, an interface to process scan results, and special
processing to identify security issues in open source and third-party code libraries, and it may
include dynamic testing and other appsec-related components within the suite. Some tools
are run on premises and others are offered as software as a service (SaaS)-based cloud services.
Some tools are free for the taking from open source projects, whereas others cost millions of
dollars, depending on your application portfolio.
In essence, the SAST engine receives a copy of the compiled modules for the application
and begins a reverse-engineering process that builds a database to represent the application
and its control and data flows. The engine then runs queries against the database to trace how
variables and control (branching) flows through the application.
The basic premise is this:
Trust Boundary (security perimeter). In the case of SAST, only the packaged application binaries are considered within the trust boundary. Everything outside this perimeter is explicitly distrusted.
Source. The point of entry of data at which variables are set with values that come from outside the trust boundary.
Sink. The point of exit for data at which variables are used in computing processes or are
sent to a file, a database, a screen, or a command or are subsequently used in another external
process.
Taint. The condition ascribed to every variable that’s found at a source. Tainted is the label
that’s affixed to that variable as data comes in and control flow proceeds. Any data that enters
the trust boundary is tagged as tainted as the implementation of explicit distrust.
Vulnerability. The condition in which a variable that still is tagged as tainted reaches a sink
with no intervening function that “removes” the taint.
Cleanser or Taint Removal. A software routine that the scanner recognizes as a viable function that removes the taint tag when the variable is output from the process. It can be a data validation routine (whitelist) or a built-in framework service, such as the XSS prevention suite in Microsoft®'s .NET.
Taint Propagation. This happens when another variable is set to the value of a variable that's still tagged as tainted. The new variable is also tagged as tainted, as is any other variable that uses it in a calculation or process. One tainted variable can propagate throughout the application, and when any of those variables reaches a sink, a vulnerability is declared.
If a tainted variable passes through a taint removal or cleansing function, the taint tag for that variable is removed. If a variable reaches a sink with the tainted tag still attached, a vulnerability of some sort is declared.
The process occurring at the sink determines the exact vulnerability. If the sink is an SQL
statement that’s constructed using a tainted variable, SQL injection is reported. If the variable
is reflected to another Web page or service, cross-site scripting (XSS) is reported. If the vari
able is used in a command string, command injection is reported, and so forth, as shown in
Figure 8.3. In this case, a cleansing function was invoked for var2, which then shows as clean
with the taint tag removed. In the case where var2 was not cleansed, the reported vulnerability
would be XSS.
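A compact Java sketch of the source/cleanser/sink flow follows (the class, pattern, and query are illustrative; whether a scanner credits a custom routine as a cleanser depends on the specific tool):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.regex.Pattern;

public class AccountLookup {

    private static final Pattern ACCOUNT_ID = Pattern.compile("^[0-9]{1,12}$");

    // Source: the parameter arrives from outside the trust boundary,
    // so a scanner tags it as tainted on entry.
    static void lookup(Connection db, String accountIdFromRequest) throws SQLException {
        // Cleanser: whitelist validation that removes the taint tag
        // before the value can reach any sink.
        if (!ACCOUNT_ID.matcher(accountIdFromRequest).matches()) {
            throw new IllegalArgumentException("Invalid account id");
        }
        // Sink: the database call. A parameterized query plus the check
        // above keeps tainted data out of the SQL text; concatenating the
        // raw value into the statement instead would be reported as a
        // tainted variable reaching a sink (SQL injection).
        try (PreparedStatement ps =
                 db.prepareStatement("SELECT balance FROM accounts WHERE id = ?")) {
            ps.setString(1, accountIdFromRequest);
            ps.executeQuery();
        }
    }
}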
Scanners typically report the CWE™ number of each weakness found, along with its severity (critical, high, medium, low, informational). When the final scanning is
done, a report is produced, and the submitter of it is notified. The report will contain a score
or an outcome—pass, conditional pass, fail. This process is shown in the control flow analysis
in SAST, depicted in Figure 8.3.
SAST tools often include what you can think of as a security spell checker. These tools come as
plug-ins to the IDE and are invoked automatically when some code is saved by the developer.
Once it's done, the checker will highlight segments of code and provide an explanation of what it found, why it's a problem, and what to do about it, such as the use of insecure functions or data found at a source that's not sanitized. A product in this family from Veracode is called Greenlight,4 and one from Synopsys® is called SecureAssist.5 Remember, a defect that's made, caught, and eliminated at the point it's introduced is a defect that won't appear in a feature branch.
These tools are not perfect, and they can’t see the entire application or even enough of it to
substitute for running a complete scan. Instead, we recommend what’s called sandbox scanning.
Typically, scanners are set up for each application that needs scanning. Scans run in the policy
area of the tool are collected and reported as official scans, which often appear in dashboards
and reports from the tool that aggregates results. Application-specific work areas are separate
from other application work areas and are flexible, so you can set up scanning to run automatically, based on some schedule for the frequency of release and deployment. You can also set up
sandboxes for scanning that you don’t want reported as Policy Scans. These sandboxes should
be used regularly by the Scrum team, and defects should be removed as quickly as is practical.
Once a clean scan (according to policy) is attained, it’s possible to promote that scan as a policy
or gated scan, as proof of meeting acceptance criteria or Definition of Done.
This raises another important issue: Not all scanner-reported defects are equal. Some are
riskier to leave untreated than others, whereas others may need to wait on the back burner until
riskier issues are addressed.
Furthermore, it’s possible to rationalize that some reported vulnerabilities cannot be
exploited (and are a reduced risk) because of some controls outside the trust boundary that
prevent tainted data from entering the boundary. Other rationalizations may involve declaring that an internal function serves as a cleanser, but the scanner does not recognize it as an
established cleanser.
It’s also possible that there’s a routine—for example, in batch processing—in which data
is previously cleansed or purified because several downstream applications need this level of
data purification. If there is evidence that this is indeed true and that the process is working
as expected, SAST tools often let you indicate that the vulnerability is mitigated (not remediated) because of external factors. Typically, a developer or Scrum team security champion
will propose this mitigation inside the scanner triage view and let the security team know a
mitigation proposal is ready for review for the specific application. The security team will meet
with the proposer and discuss the situation. If the security staff is convinced, they can approve
the proposal, and the vulnerability will be tagged as mitigated. These mitigated vulnerabilities
improve the overall outcome for the scan, but remain on the scan history for its lifetime.
For an example of how this process could work, MITRE® produced a guide called “Monster
Mitigations”6 that’s connected to the CWE Top 25 Most Dangerous Programming Errors.7
These monster mitigations are intended to reduce the severity or impact of reported vulnerabilities to help with practical risk management of appsec concerns. You can use this guidance
to help development teams understand how to propose mitigations when the situation is right
for them. These can also be reused across teams and scans as experience with the SAST tool
is gained.
Figure 8.4 Some Sample Veracode-Recognized Cleansers. (Source: Used with permission from Veracode, Inc.)
Compiled after examining the findings from the anonymized data of over 1,100 commercial codebases audited in 2017 by the Black Duck On-Demand audit services group, the report revealed that 78 percent of the codebases examined contained at least one vulnerability, with an average of 64 vulnerabilities per codebase.
Vulnerabilities on these components are maintained in a separate dictionary from the
MITRE Corporation called Common Vulnerabilities and Exposures, or CVE™.11 CVE is a
collection of the known or reported vulnerabilities in purchased, commercial off-the-shelf
(COTS) products, and free-for-the-taking open source projects found in repositories such
as GitHub and others. CVE is a list of entries, each containing an identification number, a
description, and at least one public reference for publicly known cybersecurity vulnerabilities.
CVE is a companion to the CWE but applies to commercial and freely available public sources
or reusable code.
SCA results are produced whenever a scan is run; the ability to view them depends on your license(s) for the scanner. When SCA is available, any vulnerabilities found on the specific versions of open source components inside the trust boundary appear in the scanner's reporting facilities. The report lists the version of the vulnerable library, including which CVE entries are reported against it, and provides some assistance in updating the library to a version that's not reported as vulnerable (if one exists). These results, along with custom code scan results, are available to the Scrum team developers and security champion, where they can be properly triaged and set up for the next steps to remediate or mitigate the threats.
Expect ongoing work around mitigation reviews and approvals, and plan for how to treat medium-level flaws that will be included as the policies get tighter and program maturity increases.
Adopters of SAST tools should be prepared to staff the function inside Scrum teams, nominate a person who can best represent the application to work with the security team (security
champions), and provide enough people on the security teams to help with training developers
on secure development practices and how to use the SAST tool in the way it’s intended for use.
Plan your implementation deliberately and resist the urge to crack down on compliance
with the scan results before your processes and constituents are prepared. Pick a team or two
(not a mission-critical application team, though!) and experiment with the tool and internal
processes. As you learn more about how the tool works and how your processes really work,
you’ll be able to integrate it into Agile workflows, similar to how NFRs get integrated—with
agility and practicality.
8.11 Summary
In Chapter 8, we explored the overall landscape for software testing and the areas within it.
You saw a multistep process for testing, beginning with the first elements of source code, all the
way through comprehensive static code analysis using complex SAST tools.
As you are continuing to see, testing for security and resilience requires comprehensive
tools and techniques along with skilled personnel who can collect and analyze software from a
number of points of view. The principle of "defense in depth" applies as much in the testing phases of the SDLC as it does in the design and development phases. There is
no single tool or technique that can uncover all security-related problems or issues.
In Chapter 9, we’ll turn our attention to testing using dynamic and external methods to
learn how the program behaves as it’s running.
2. Static code testing can expose all but which of the following?
a) Unvalidated/unsanitized variable used in data flow
b) Use of insecure functions
c) Vulnerable software libraries contained in the application
d) Logic flaws that affect business data
Exercises
1. A programmer has come to you with questions about the static scan for the application her team is developing. The scan report is showing 375 instances of a critical flaw, SQL Injection (CWE-89), throughout the code. You explain to her that the reason it's reporting CWE-89 is that unsanitized variables are being used in the construction of dynamic SQL statements by the program. What would you advise she do to remediate the issue?
2. A static scan of a critical business application typically runs for 5 hours before a complete
scan report is returned. The DevOps Team is considering initiation of the scan in the
Build Pipeline process for the application, which typically releases twice a week. This
scan would force the build to wait (blocking) for the scan to finish before the build continues. You strongly advise they don't proceed with that plan. What alternative(s) would
you propose?
References
1. Control–SA-11–DEVELOPER SECURITY TESTING AND EVALUATION. (n.d.). Retrieved
from https://nvd.nist.gov/800-53/Rev4/control/SA-11
2. Crosby, P. B. (1996). Quality Is Still Free: Making Quality Certain in Uncertain Times. New York
(NY): McGraw-Hill Companies.
3. Static Application Security Testing (SAST). (2019, April 12). Retrieved from https://www.gartner
.com/it-glossary/static-application-security-testing-sast/
4. Greenlight Product Page. (2019, April 25). Retrieved from https://www.veracode.com/products
/greenlight
5. SecureAssist Overview & Datasheet. (2019, April 15). Retrieved from https://www.synopsys
.com/software-integrity/resources/datasheets/secureassist.html
6. 2011 CWE/SANS Top 25: Monster Mitigations. (2011, June 27). Retrieved from https://cwe.mitre
.org/top25/mitigations.html
7. CWE/SANS Top 25 Archive. (2019, June 20). Retrieved from https://cwe.mitre.org/top25
/archive/index.html
8. Veracode. (n.d.). Supported Cleansing Functions. Retrieved August 9, 2019, from https://help
.veracode.com/reader/DGHxSJy3Gn3gtuSIN2jkRQ/y52kZojXR27Y8XY51KtvvA
9. Streamlining Scan Results: Introducing Veracode Custom Cleansers. (2017, December 7). Retrieved
from https://www.veracode.com/blog/managing-appsec/streamlining-scan-results-introducing-vera
code-custom-cleansers
10. The Percentage of Open Source Code in Proprietary Apps Is Rising. (2018, May 22). Retrieved
from https://www.helpnetsecurity.com/2018/05/22/open-source-code-security-risk/
11. CVE. (n.d.). Common Vulnerabilities and Exposures (CVE). Retrieved from https://cve.mitre.org/
Chapter 9
Testing Part 2: Penetration
Testing/Dynamic Analysis/
IAST/RASP
CHAPTER OVERVIEW
In Chapter 9, we’ll look at the other side of the coin for application testing—DAST—which
actively attacks a running application. You need both SAST and DAST for a 360-degree view
of how your application is built and how it behaves.
DAST tools are a form of penetration (pen) testing or black box testing, in that testers don't need to possess knowledge about the application, its design, structure, or requirements.
On the other hand, DAST tools are rather potent as attack tools, so their use should be well
controlled and well understood.
We’ll cover the various forms of pen testing, some popular methodologies for testing, and
the role of scanners, Gray Box tests, interactive application testing, and runtime application
self-protection (RASP).
CHAPTER TAKEAWAYS
• Determine the dynamic security testing needs for various classes of applications.
• Select an appropriate testing methodology for a portfolio of applications to identify and
address risk treatment and management.
• Determine how to include runtime security testing within the software development
lifecycle (SDLC) to mitigate risk and identify required remediation for security defects.
security experts. In this chapter, the focus shifts to dynamic application security testing (DAST),
along with some runtime security controls that serve as additional layers of defense in depth.
The Institute for Security and Open Methodologies (ISECOM) began with the release
of the OSSTMM in the early 2000s. Many researchers from various fields contributed to the
effort in developing an open document to help appsec professionals set up and operate an effective pen testing program.
ISECOM also publishes a free, regularly updated Open Source Cybersecurity Playbook for
appsec professionals to use, with 27 pages of practical advice and tactics. It’s intended to help
you to lay out a detailed game plan you can use to take control of your security and close your
gaps. You can get a copy of the OSSTMM and the Playbook from the ISECOM website.2
The Application Security Verification Standard defines three security verification levels,
with each level increasing in depth:
• ASVS Level 1 is for low assurance levels and is completely penetration testable.
• ASVS Level 2 is for applications that contain sensitive data requiring protection; it is the recommended level for most apps.
• ASVS Level 3 is for the most critical applications—applications that perform high-value transactions, contain sensitive medical data, or otherwise require the highest level of trust.
Figure 9.1 shows the context of the ASVS and its three levels.
One of the best ways to use the ASVS is as a blueprint to create or supplement a Secure
Coding Checklist specific to your application, platform, or organization. Tailoring the ASVS to your acceptance criteria will increase the focus on the security nonfunctional requirements (NFRs) that are most important to your projects and environments.
Figure 9.1 Application Security Verification Standard (ASVS) Version 4 Levels
Black box testing helps to identify potential security vulnerabilities within commercial and
proprietary Web applications when the source code is not available for review and analysis.
If you implement a DAST solution from the same provider as your SAST or SCA tools, you may be able to take advantage of an interface that's consistent for developers to use for processing reported defects, rather than needing to learn an entirely different tool. As much as possible, the processes and workflows you established to process the results from SAST tools should be reused for DAST, since adoption will be simpler than adding yet another security process to the heap.
Here are a few of the most popular black box penetration testing tools and suites:
• Veracode DAST4
• HCL AppScan®5
• Micro Focus® Fortify WebInspect6
Although it's beyond the scope of this book to offer exhaustive coverage of DAST tools and how they work, a typical DAST product looks for and reports on vulnerabilities such as injection flaws, cross-site scripting, broken authentication and session management, and server misconfiguration.
Since DAST looks at the application from an attacker's point of view on a running version of the application, it can detect defects that SAST tools can't, thereby providing you with valuable information about vulnerabilities that remain in the application, which you can remediate before a malevolent outsider discovers them for you!
For Web applications, when the application architecture and development methodology permit, providing the application's developers with restricted access to a black box scanning tool is usually recommended in the spirit of Shift Left: security vulnerabilities discovered early in the life cycle are kept from entering the integration or build testing phases, similar to the deployment of source code analyzers in the same environment. Repeated testing is often needed, since the application changes with each release. Remember, DAST runs can take hours or days to complete.
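To make this concrete, here is a minimal sketch of automating such a scan with the OWASP ZAP Python API (the zapv2 package). The target URL, API key, and polling intervals are illustrative assumptions, not recommendations for any particular product.

# Minimal DAST scan sketch using the OWASP ZAP Python API. Assumes a ZAP
# daemon is already running on 127.0.0.1:8080 and pointed at a QA target.
import time
from zapv2 import ZAPv2

TARGET = "https://qa.example.internal/app"  # hypothetical QA environment URL
zap = ZAPv2(apikey="changeme",               # illustrative API key
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

zap.urlopen(TARGET)                          # seed the site tree
spider_id = zap.spider.scan(TARGET)          # crawl before attacking
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(5)

scan_id = zap.ascan.scan(TARGET)             # active scan: real attack traffic
while int(zap.ascan.status(scan_id)) < 100:  # long-running, as noted above
    time.sleep(30)

for alert in zap.core.alerts(baseurl=TARGET):
    print(alert["risk"], alert["alert"], alert["url"])

Because the active scan stage generates genuine attack traffic, a script like this belongs only in QA environments, behind the access restrictions described above.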
Apart from providing developers access to the black box tool, the QA team or knowledgeable testers on a Scrum team should also have access to these tools. The testing carried out by this independent team might also serve as gating criteria for promoting the application to the QA and production environments. The results from these tests should be shared with the developers quickly after the tests are run, so they can develop strategies for fixing the problems that are uncovered. Once the criteria for moving to production are met, the QA team should sign off on the security vulnerability testing, along with the other test results (functional testing, user acceptance testing, etc.).
Centralized pen testing also ensures that minor feature additions and bug fixes are tested for security defects before they too are deployed to production.
and functions. Most products allow you to configure the credentials needed, but it's vital that the test accounts used for logging in reflect real-life roles and data. As a result, the testing environment should mirror the production environment as much as possible; and since testing in production is a universal violation of best practices and most regulations, you have little choice but to ensure that your QA test environment behaves nearly identically to your production environment, without the risks of using real-life data for testing purposes.
RASP builds security into a running application wherever it resides on a server, usually through agents. It intercepts calls from the application to the system, making sure they're secure, and validates data requests directly inside the application. Both Web and non-Web apps can be protected by RASP. The technology doesn't affect the design of the application, because RASP's detection and protection features run independently on the server that hosts the application.
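Commercial agents differ in how they hook the runtime, but the interception concept can be shown with a deliberately simplified, hypothetical sketch: a wrapper that validates a database call from inside the running application before allowing it to proceed. This is a toy illustration of the idea, not how any vendor's product is implemented.

# Toy sketch of RASP-style interception: validate calls from inside the app.
import re
import sqlite3

TAUTOLOGY = re.compile(r"(?i)\bor\b\s*'?\d+'?\s*=\s*'?\d+'?")  # crude SQLi check

class GuardedCursor:
    """Wraps a DB cursor and applies a block policy before each execute."""
    def __init__(self, cursor):
        self._cursor = cursor

    def execute(self, sql, params=()):
        if TAUTOLOGY.search(sql):
            # A real agent would log, alert, and block or allow per policy.
            raise PermissionError("RASP policy: suspected SQL injection blocked")
        return self._cursor.execute(sql, params)

conn = sqlite3.connect(":memory:")
cur = GuardedCursor(conn.cursor())
cur.execute("CREATE TABLE users (name TEXT)")
try:
    cur.execute("SELECT * FROM users WHERE name = 'x' OR '1'='1'")
except PermissionError as exc:
    print(exc)  # the tautology attack is intercepted inside the application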
Some of the popular RASP technologies include:
• Signal Sciences RASP13
• Contrast Protect14
• Micro Focus Application Defender15
• Imperva RASP16
9.11 Summary
In Chapter 9, you saw various ways that professionals address software development and running software, using testing and runtime tools for security and resilience. We looked at tools for manual and automated penetration (pen) testing, Dynamic Application Security Testing (DAST), Interactive Application Security Testing (IAST), and Runtime Application Self-Protection (RASP).
In Chapter 10, we'll take a deeper look into securing the DevOps environment and examine an ecosystem that brings together these tools and techniques into an orchestrated process that helps to assure security and resilience across every element of the SDLC.
3. Which of the following are the potential benefits of using tools for testing?
i. Reducing the repetitive work
ii. Increasing consistency and repeatability
iii. Over-reliance on the tool
a) i and ii only
b) ii and iii only
c) i and iii only
d) All i, ii, and iii
4. Which of the following are the success factors for the deployment of the tool within an
organization?
i. Assessing whether the benefits will be achieved at a reasonable cost
ii. Adapting and improving processes to fit with the use of the tool
iii. Defining the usage guidelines
a) i and ii only
b) ii and iii only
c) i and iii only
d) All i, ii, and iii
Exercises
1. A raging debate about using "reformed hackers" to participate in or conduct a pen test in an official engagement has spread throughout the information security community.
What’s your view on this? Can a hacker really be reformed? Can a former hacker really
be trusted with sensitive or protected data and resources?
2. Once you’ve implemented a DAST tool in your environment, how should you plan on
managing access to it? Should its use be controlled? What constraints for using it would
you impose on the development community and the security team?
References
1. Open Source Security Testing Methodology Manual (OSSTMM). (n.d.). Retrieved from http://www.isecom.org/research/osstmm.html
2. The Open Source Cybersecurity Playbook. (n.d.). Retrieved from http://www.isecom.org/research/playbook.html
3. Category: OWASP Application Security Verification Standard Project. (n.d.). Retrieved from https://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project
4. DAST Product Page. (2019, February 21). Retrieved from https://www.veracode.com/products/dynamic-analysis-dast
5. IBM Knowledge Center. (n.d.). Retrieved from https://www.ibm.com/support/knowledgecenter/en/SSW2NF_9.0.0/com.ibm.ase.help.doc/topics/c_intro_ase.html
6. Dynamic Application Security Testing (DAST): Web Dynamic Analysis Tool. (n.d.). Retrieved from https://www.microfocus.com/en-us/products/webinspect-dynamic-analysis-dast/overview
7. What Is IAST? Interactive Application Security Testing. (2019, March 13). Retrieved from https://www.synopsys.com/software-integrity/resources/knowledge-database/what-is-iast.html
8. Introduction to IAST. (n.d.). Retrieved from https://dzone.com/refcardz/introduction-to-iast?chapter=1
9. Contrast Security. (n.d.). Contrast Assess | Interactive Application Security Testing | IAST. Retrieved from https://www.contrastsecurity.com/interactive-application-security-testing-iast
10. Interactive Application Security Testing (IAST). (n.d.). Retrieved from https://www.whitehatsec.com/products/dynamic-application-security-testing/interactive-application-security-testing/
11. An Introduction to IAST. (2017, July 13). Retrieved from https://www.checkmarx.com/2017/07/13/an-introduction-to-iast/
12. Mello, J. P., Jr. (2016, March 17). What Is Runtime Application Self-Protection (RASP)? Retrieved from https://techbeacon.com/security/what-runtime-application-self-protection-rasp
13. Signal Sciences. (n.d.). RASP—Server Module. Retrieved from https://www.signalsciences.com/rasp-runtime-application-self-protection/
14. Contrast Security. (n.d.). Contrast Protect | Runtime Application Self-Protection | RASP. Retrieved from https://www.contrastsecurity.com/runtime-application-self-protection-rasp
15. Runtime Application Self-Protection (RASP) Security Solutions. (n.d.). Retrieved from https://www.microfocus.com/en-us/products/application-defender/overview
16. RASP Market Leader | Secure All Applications by Default | Imperva. (n.d.). Retrieved from https://www.imperva.com/products/runtime-application-self-protection-rasp/
Chapter 10
Securing DevOps
CHAPTER OVERVIEW
In Chapter 2, we introduced the concept of DevSecOps as an implementation of the marriage between development, security, and operations. In Chapter 10, we'll dig deeper into DevSecOps and find ways to help you apply these activities to your own secure software development lifecycle (SDLC) and to measure the maturity of your practices.
Figure 10.1, introduced in Chapter 2, is what DevOps looks like when comprehensive
security controls transform it into DevSecOps.1
Getting your SDLC to this point is rather challenging: it requires fundamental changes in attitudes and culture, enhanced skills, and uprooting old ways of how an organization conducts software development and operates software within its environment.
CHAPTER TAKEAWAYS
• Develop an appropriate set of automated security control recommendations to harden a
DevOps practice.
• Apply the “Three Ways” in process improvements that lead to enhanced security testing.
• Determine appropriate levels of assurance across a portfolio of applications to prioritize
and manage risk.
Figure 10.1 DevSecOps Cycle (Source: Retrieved from https://published-prd.lanyonevents.com/published/rsaus20/sessionsFiles/17851/2020_USA20_CXO-W09-The-Impact-of-Software-Security-Practice-Adoption-Quantified.pdf. Used with permission of L. Maccherone, Jr.)
Digital organizations aspire to be agile like Amazon and Netflix, to innovate, to adapt, and to remain resilient against the cybersecurity challenges we face in today's digital world. Security needs to keep pace to enable an organization's accelerated IT delivery initiatives.2
• Educate. Embed security into the design and cultivate a collaborative, “security-enabling
business” mindset.
• Automate. Automate the SDLC pipeline and its security at every opportunity.
• Monitor. Take a risk-based approach to code review, application testing, and monitoring.
• Iterate. Strive for continuous security improvements through achievable iterations.
• Culture
○ Break down barriers between development, security, and operations through education
and outreach.
• Automation
○ Embed self-service automated security scanning and testing in continuous delivery.
• Lean
○ Value stream analysis on security and compliance processes to optimize flow.
• Measurement
○ Use metrics to shape design and drive decisions.
• Sharing
○ Share threats, risks, and vulnerabilities by adding them to engineering backlogs.
As you see, culture plays a vitally important role that needs addressing first. These cultural
changes are required for any shift to DevOps. McKinsey, a global management consulting
firm, identifies five key characteristics3 that are required for a successful transformation to a
DevOps culture:
1. Push change from the top. Start it from the bottom. Change, especially cultural change,
doesn’t happen without top-down sponsorship but won’t take hold until it’s executed at
the smallest unit possible. Implementing DevOps at the team level, for example, makes it possible to demonstrate what can be achieved, locate obstacles, and break through them while the issues are still small enough to handle. Successful transformations are usually
a continuous improvement journey rather than a big bang execution.
2. Reimagine trust. Traditionally, organizations establish trust through audit-based control frameworks designed to improve quality, assurance, security, compliance, and risk
mitigation via checklists and audits of activity. DevOps doesn’t work that way. It requires
control functions to trust that product teams can and will be responsible stewards of
organization-wide principles and requirements. Clearly, trust needs to be earned, but this
usually happens quickly when teams collaborate and demonstrate success through small
pilots before scaling initiatives. This trust leads to empowering product teams to execute
changes that are right and safe for the organization.
3. Design for autonomy and empowerment. DevOps requires engineering teams to own
control responsibilities formerly owned by other functions. Engineering teams empowered
to push change through to production must embed automated controls in their processes to give the organization confidence that testing, risk management, and escalation
protocols are in place. Control must be designed into the process right from the start.
It’s about reimagining how controls are implemented to ensure they happen by default
within the process without the external interference that usually causes bottlenecks.
4. Crave improvement through testing. The hunger to improve—the process, the quality, the speed, the impact of every single person—must pervade every corner of the
organization. That requires changing mindsets from “Let’s make it perfect” to “Good
enough, let’s see how it works and continue to iterate.” Supporting this cultural change
requires putting in place flexible systems and ways of working to identify issues and
opportunities, rapidly make adjustments, and test again.
5. Measure and reward the result, not process compliance. Cultures change when people are measured and rewarded for the right things. Everything, from performance contracts at the C-level to weekly objectives for sysadmins, needs to be aligned with strategic
business outcomes and the behaviors required to achieve them.
Culture changes in organizations take time and effort, but early wins are possible: as shifting paradigms produce positive, visible results, others hop on board to take part in the transformation and gain their own. In other words, small successes breed larger successes over time.
Figure 10.2 The Three Ways for DevOps—The First Way: Systems Thinking (Source: Kim, G., Humble, J., Debois, P., and Willis, J. [2016]. The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations. IT Revolution Press.6 Used with permission.)
can be continually made. Outcomes of The Second Way include understanding and responding to all customers, internal and external; shortening and amplifying all feedback loops; and embedding knowledge where we need it.
The Third Way is about creating a culture that fosters two things: continual experimentation, which requires taking risks and learning from failure, and the understanding that repetition and practice are the prerequisites to mastery.
Figure 10.3 The Three Ways for DevOps—The Second Way: Amplify Feedback Loops
(Source: Kim, G., Humble, J., Debois, P., and Willis, J. [2016]. The DevOps Handbook: How to
Create World-Class Agility, Reliability, and Security in Technology Organizations. IT Revolution
Press.6 Used with permission.)
Figure 10.4 The Three Ways for DevOps—The Third Way: Culture of Continual Experimentation and Learning (Source: Kim, G., Humble, J., Debois, P., and Willis, J. [2016]. The
DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology
Organizations. IT Revolution Press.6 Used with permission.)
Both of these are equally needed. Experimentation and taking risks ensure that people will regularly push to improve, even if it means going deeper into danger zones they've never entered before. Mastery of these skills also helps teams retreat from a danger zone when they've gone too far. The outcomes of The Third Way include allocating time for the improvement of daily work, creating rituals that reward the team for taking risks, and introducing faults into the system to increase resilience.
An important lesson for those on security teams comes from The DevOps Handbook6:
One of the top objections to implementing DevOps principles and patterns has been, “Information
security and compliance won’t let us.” And yet, DevOps may be one of the best ways to better
integrate information security into the daily work of everyone in the technology value stream.
builds approved for use in development (such as secure versions of OpenSSL with correct configurations), as well as toolchains, the deployment pipeline, and standards.
• Integrating security into your deployment pipeline
○ Automate as many security tests as possible to run alongside the other automated tests in your deployment pipeline. Run these tests at every major commit of code by development or operations, even at very early stages. The goal is to provide short feedback loops, so development and operations teams are notified of any potential security issues in code commits. This allows teams to detect and correct security problems quickly as a part of daily work instead of waiting until the end of the SDLC, when fixes are often complex, time-consuming, and expensive. (A minimal sketch of such a pipeline gate appears after this list.)
• Ensuring the security of the application
○ As you automate your tests, generate tests to run continuously in your deployment pipeline, instead of performing unit or functional tests manually. This step is critical for the quality assurance (QA) team, which will want to include static and dynamic analysis tests (SAST and DAST), software composition analysis (SCA), interactive application security tests (IAST), and more. Many of these testing processes can be part of a continuous integration (CI) or continuous delivery/deployment (CD) pipeline.
• Ensuring the security of the software supply chain
○ Up to 90% of modern applications are constructed from open source components, making them a fundamental part of the software supply chain today. When using open source components and libraries, DevOps teams must consider that applications inherit both the functionality of open source code and any security vulnerabilities it contains. Detecting known vulnerabilities in open source helps developers choose which components and versions to use. Integrating vulnerability checking during the CI process or within the binary repository or IDE helps ensure the security of the software supply chain.
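As promised above, here is a minimal sketch of a pipeline security gate. It assumes a Python codebase and uses the open source tools bandit (SAST) and pip-audit (SCA); the tool choices, paths, and failure thresholds are illustrative assumptions rather than a recommended standard.

# Minimal CI security gate sketch: run SAST and SCA, fail the build on findings.
import json
import subprocess
import sys

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True)

# SAST: bandit emits JSON with an "issue_severity" per finding.
sast = run(["bandit", "-r", "src", "-f", "json"])
high = [f for f in json.loads(sast.stdout)["results"]
        if f["issue_severity"] == "HIGH"]

# SCA: pip-audit exits nonzero when known-vulnerable dependencies are found.
sca = run(["pip-audit"])

if high or sca.returncode != 0:
    print(f"Security gate failed: {len(high)} high-severity SAST findings; "
          f"pip-audit exit code {sca.returncode}")
    sys.exit(1)
print("Security gate passed")

Wired into the commit stage of a CI server, a gate like this gives developers the short feedback loop described above.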
The SANS Institute published a Secure DevOps Toolchain poster8 to help appsec professionals select and evaluate potential tools for use in their own environments. Every implementation of Agile and DevSecOps is unique in many ways, so there is no single environment that addresses every need for everyone. There are dozens of open source projects that provide free resources to help you with your efforts. The poster details the sets of open source tools that apply to these activities:
• Pre-commit
○ Activities before code is checked in to version control
• Commit (continuous integration)
○ Fast, automated security checks during build and continuous integration steps
• Acceptance (continuous delivery)
○ Automated security acceptance, functional testing, and deep out-of-band scanning during continuous delivery
• Production (continuous deployment)
○ Security checks before, during, and after code is deployed to production
• Operations
○ Continuous security monitoring, testing, audit, and compliance checking
Establishing, using, and maintaining a DevSecOps environment can yield the kinds of efficiencies and improvements you expect, but these efforts won't come easily, quickly, or correctly the first few times through. As you gain experience and collect data, you can rapidly adjust to reduce the problems you encounter and add controls to move into higher areas of maturity.
To help with measuring that progress, we turn to a DevSecOps Maturity Model from OWASP.
• Build
• Deployment
• Education and guidance
• Culture and organization
• Process
• Monitoring
• Logging
• Infrastructure hardening
• Patch management
• Dynamic depth for applications
• Static depth for applications
• Test-intensity
• Consolidation
• Application tests
• Dynamic depth for infrastructure
• Static depth for infrastructure
For each dimension under each maturity level, you'll find a link that describes the risk and opportunity for that dimension, along with related details of exploitation, to help you determine where your practices for that dimension stand. The model is also useful in understanding the dimensions of the next level of maturity to help you plan your program's future activities.
10.6 Summary
In Chapter 10, we explored the roots of DevOps and The Three Ways that describe any of the DevOps implementation techniques. We then looked at the complexity of enriching DevOps with controls, tools, and processes to transform it into DevSecOps, where in-control pipelines for software development and deployment are hardened to help assure security and resilience along every process that touches custom-developed software. Finally, we learned about the OWASP DevSecOps Maturity Model and how it's applied, and the OWASP DevSecOps Studio, with which you can put your ideas to the test before you commit to durable changes.
Exercises
1. Pick a security activity that you learned about from Chapters 3 to 9 and describe how you would implement it in a continuous integration/continuous delivery (CI/CD) pipeline to assure its inclusion in an overall DevSecOps environment.
2. What changes are needed to internal information security standards to support a DevSecOps environment? How can the fundamental security principles of Least Privilege and Separation of Duties be preserved and enforced with automated pipelines that continuously build and deliver new software versions?
3. Secrets management is one of the first testing suites that can be implemented in
DevSecOps. This kind of testing will reveal problems related to hardcoded passwords,
poorly managed API keys and credentials, and poor security of password storage. Upon
revealing these issues, what are some strategies you can think of to address them? How
can these strategies be implemented to prevent these issues as new applications are being
developed?
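As a starting point for Exercise 3, the sketch below illustrates the kind of check a secrets-testing suite automates: scanning source files for patterns that look like hardcoded credentials. The two patterns are illustrative only; real scanners ship far richer rule sets and entropy checks.

# Minimal sketch of a hardcoded-secrets check. Patterns are illustrative.
import pathlib
import re

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded password": re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
}

def scan(root="src"):
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    yield str(path), lineno, label

for path, lineno, label in scan():
    print(f"{path}:{lineno}: possible {label}")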
References
1. Maccherone, L. (2017, March 19). DevSecOps Cycle [Diagram]. Retrieved from https://twitter.com/lmaccherone/status/843647960797888512
2. Capgemini. (2019, June 12). DevSecOps—Security in Fast Digital. Retrieved from https://www.capgemini.com/gb-en/service/cybersecurity-services/devsecops-security-in-fast-digital/
3. Das, L., Lau, L., and Smith, C. (2017, February 26). Five Cultural Changes You Need for DevOps to Work. Retrieved from https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/digital-blog/five-cultural-changes-you-need-for-devops-to-work
4. Kim, G., Humble, J., Debois, P., and Willis, J. (2016). The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations. IT Revolution Press.
5. Kim, G. (2012, August 22). The Three Ways: The Principles Underpinning DevOps. Retrieved from https://itrevolution.com/the-three-ways-principles-underpinning-devops/
6. Kim, G., Humble, J., Debois, P., and Willis, J. (2016). The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations. IT Revolution Press.
7. Synopsys®. (2019, May 23). DevSecOps: The Intersection of DevOps & Security. Retrieved from https://www.synopsys.com/software-integrity/solutions/by-security-need/devsecops.html
8. SANS Institute. (n.d.). Secure DevOps Toolchain. Retrieved from https://www.sans.org/security-resources/posters/appsec/secure-devops-toolchain-swat-checklist-60
9. OWASP. (n.d.). DevSecOps Maturity Model. Retrieved from https://www.owasp.org/index.php/OWASP_DevSecOps_Maturity_Model
10. OWASP. (n.d.). DevSecOps Activities Overview. Retrieved from https://dsomm.timo-pagel.de/
11. OWASP. (n.d.). DevSecOps Studio Project. Retrieved from https://www.owasp.org/index.php/OWASP_DevSecOps_Studio_Project
12. OWASP. (n.d.). DevSecOps Studio. Retrieved from https://2018.open-security-summit.org/outcomes/tracks/devsecops/working-sessions/owasp-devsecops-studio/
Chapter 11
Metrics and Models for
AppSec Maturity
CHAPTER OVERVIEW
All roads lead to Rome. It makes no difference what path you take—as long as you continue
to strive for improvement, your efforts will be rewarded. Although any methodology to get
there will do, you have undoubtedly noticed by now that metrics and measurement are vital to
assure that you are headed in the right direction for secure and resilient systems and software.
In Chapter 11, you will find a detailed examination of two measurement and metrics models intended to help you baseline the maturity of secure development integration in your software development lifecycle (SDLC) and chart the pathways to further improve the maturity of your program.
We'll take a look at the two leading software security maturity approaches: OpenSAMM and the Building Security In Maturity Model (BSIMM).
CHAPTER TAKEAWAYS
• Compare and contrast how OpenSAMM and BSIMM are used to measure appsec pro
gram maturity.
• Define the process for adopting a maturity model to measure an appsec program’s progress.
• Apply the most appropriate maturity model based on the organization’s readiness for
measurements.
So how do security maturity models like OpenSAMM and BSIMM fit into this picture?
Both have done a great job cataloging, updating, and organizing many of the “rules of
thumb” that have been used over the past few decades for investing in software assurance. By
defining a common language to describe the techniques we use, these models will enable us
to compare one organization to another and will help organizations understand areas where
they may be more or less advanced than their peers. . . . Since these are process standards, not
technical standards, moving in the direction of either BSIMM or OpenSAMM will help an
organization advance—and waiting for the dust to settle just means it will take longer to catch
up with other organizations. . . . [I]n short: do not let the perfect be the enemy of the good. For
software assurance, it’s time to get moving now.
SAMM was defined with flexibility in mind so that it can be utilized by small, medium,
and large organizations using any style of SDLC. The model can be applied organization-wide,
for a single line of business, or even on an individual project.
OpenSAMM was beta released under a Creative Commons Attribution Share-Alike license.
The original work was donated to OWASP and is currently being run as an OWASP project.
OpenSAMM starts with the core activities that should be present in any organization that
develops software:
• Governance
• Construction
• Verification
• Deployment
In each of these core activities, three security practices are defined, for 12 practices in total that are used to determine the overall maturity of your program. The security practices cover all areas relevant to software security assurance, and each provides a "silo" for improvement. The three security practices for each core activity are shown in Figure 11.1.
Figure 11.1 OpenSAMM Model (Source: OpenSAMM by OWASP is licensed under CC-BY-SA)
Objectives under each of the 12 practice areas define how each practice area can be improved over time and establish a maturity level for any given area. The three maturity levels for a practice correspond to:
• (1) An initial understanding and ad hoc provision of the security practice
• (2) Increased efficiency and/or effectiveness of the security practice
• (3) Comprehensive mastery of the security practice at scale
In this section, we’ll break down each of the practice areas into specific practices within it.
• Strategy and Metrics (SM) involves the overall strategic direction of the software assurance program and instrumentation of processes and activities to collect metrics about an organization's security posture.
• Policy and Compliance (PC) involves setting up a security and compliance control and
audit framework throughout an organization to achieve increased assurance in software
under construction and in operation.
• Education and Guidance (EG) involves increasing security knowledge among personnel in software development through training and guidance on security topics relevant to individual job functions.
• Objective
• Activities
• Results
• Success metrics
• Costs
• Personnel
• Related levels
11.3.1 Objective
The objective is a general statement that captures the assurance goal of attaining the associated
level. As the levels increase for a given practice, the objectives characterize more sophisticated
goals in terms of building assurance for software development and deployment.
11.3.2 Activities
The activities are core requisites for attaining the level. Some are meant to be performed
organization-wide, and some correspond to actions for individual project teams. In either case,
the activities capture the core security function, and organizations are free to determine how
they fulfill the activities.
11.3.3 Results
The results characterize capabilities and deliverables obtained by achieving the given level. In
some cases, these are specified concretely; in others, a more qualitative statement is made about
increased capability.
11.3.4 Success Metrics
The success metrics specify example measurements that can be used to check whether an organization is performing at the given level. Data collection and management are left to the choice of each organization, but recommended data sources and thresholds are provided.
11.3.5 Costs
The costs are qualitative statements about the expenses incurred by an organization attaining the given level. Although specific values will vary for each organization, these are meant to provide an idea of the one-time and ongoing costs associated with operating at a particular level.
11.3.6 Personnel
These properties of a level indicate the estimated ongoing overhead in terms of human resources
for operating at the given level.
11.3.7 Related Levels
The related levels are references to levels within other practices that have some potential overlaps, depending on the organization's structure and progress in building an assurance program.
Functionally, these indicate synergies or optimizations in activity implementation if the related
level is also a goal or already in place.
11.3.8 Assurance
Because the 12 practices are each a maturity area, the successive objectives represent the "building blocks" for any assurance program. OpenSAMM is designed for use in improving an assurance program in phases by:
• Selecting security practices to improve in the next phase of the assurance program
• Achieving the next objective in each practice by performing the corresponding activities
at the specified success metrics
• Gap analysis
○ Capturing scores from detailed assessments versus expected performance levels
• Demonstrating improvement
○ Capturing scores from before and after an iteration of assurance program build-out
• Ongoing measurement
○ Capturing scores over consistent time frames for an assurance program that is already in place
Figure 11.2 Sample OpenSAMM Assessment Worksheet Extract (Source: OpenSAMM by OWASP is licensed under CC-BY-SA)
An example of a scorecard for each of the 12 practice areas, with before-and-after measurements, is shown in Figure 11.3.
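Producing such a scorecard is easy to automate once assessment answers are captured. The sketch below compares before-and-after scores per practice; the practice abbreviations and score values are invented sample data, not official SAMM results.

# Illustrative before/after OpenSAMM-style scorecard with invented sample data.
baseline = {"SM": 0.5, "PC": 1.0, "EG": 0.0, "TA": 1.0}  # start of phase
current = {"SM": 1.0, "PC": 1.5, "EG": 1.0, "TA": 1.0}   # end of phase

for practice in sorted(baseline):
    delta = current[practice] - baseline[practice]
    trend = f"+{delta:.1f}" if delta > 0 else "no change"
    print(f"{practice}: {baseline[practice]:.1f} -> {current[practice]:.1f} ({trend})")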
One of the main uses of OpenSAMM is to help organizations build software security
assurance programs. That process is straightforward and generally begins with an assessment
if the organization is already performing some security assurance activities.
Several roadmap templates for common types of organizations are provided, so many organizations can choose an appropriate match and then tailor the template to their needs. For other types of organizations, it may be necessary to build a custom roadmap. Templates are provided for independent software vendors, online service providers, financial services organizations, and government organizations.
Roadmaps consist of phases in which several practices are each improved by one level.
Building a roadmap entails selecting practices to improve in each planned phase. Organizations
are free to plan into the future as far as they want, but they are encouraged to iterate based on
business drivers and organization-specific information to ensure that the assurance goals are
commensurate with their business goals and risk tolerance.
Once a roadmap is established, the build-out of an assurance program is simplified.
• An organization begins the improvement phases and works to achieve the stated levels by
performing the prescribed activities.
• At the end of each phase, the roadmap should be adjusted based on what was actually
accomplished, and then the next phase can begin.
primary objective was to build a maturity model based on actual data gathered from nine large-scale software development initiatives. Representatives from Cigital® and Fortify® conducted interviews and collected data from nine original companies, including Adobe®, Dell® EMC®, Google™, Microsoft®, and five others. Using this data and conducting in-person executive interviews, the team developed a Software Security Framework (SSF) that creates buckets
tive interviews, the team developed a Software Security Framework (SSF) that creates buckets
and three maturity levels for the 116 activities that they observed being performed in software
development organizations. BSIMM has been updated over the course of its lifetime. Now in its 11th edition, it includes new activities that clearly show that appsec in the cloud is becoming mainstream, and it indicates that activities observed among independent software vendors, Internet of Things (IoT) companies, and cloud firms have begun to converge, suggesting that common cloud architectures require similar software security approaches.
BSIMM is meant for use by anyone responsible for creating and executing a software security initiative (SSI). The authors of BSIMM observed that successful SSIs are typically run by a senior executive who reports to the highest levels in an organization. These executives lead an
internal group that BSIMM refers to as the software security group (SSG), charged with directly
executing or facilitating the activities described in the BSIMM. The BSIMM is written with
the SSG and SSG leadership in mind.
The SSF organizes its activities into 12 practices grouped into four domains:
• Governance
• Intelligence
• Software security development lifecycle (SSDL) touchpoints
• Deployment
BSIMM indicates that SSGs should emphasize security education and mentoring rather
than policing for security errors. BSIMM is not explicitly intended for software developers.
Instead, it’s intended for people who are trying to teach software developers how to do proper
software security.
Properly used, BSIMM can help you determine where your organization stands with
respect to real-world software security initiatives, what peers in your industry are doing, and
what steps you can take to make your approach more effective.
A maturity model is appropriate because improving software security almost always means changing the way an organization works—something that never happens overnight. BSIMM provides a way to assess the state of an organization, prioritize changes, and demonstrate progress. Not all organizations need to reach the same security goals, but by applying BSIMM, all organizations can be measured with the same yardstick.
11.7.1 Governance
Governance includes those practices that help organize, manage, and measure a software security initiative. Staff development is also a central governance practice. In the governance domain, the strategy and metrics practice encompasses planning, assigning roles and responsibilities, identifying software security goals, determining budgets, and identifying metrics and gates. The compliance and policy practice focuses on identifying controls for compliance regimens, developing contractual controls to help manage commercial software risk, setting organizational software security policy, and auditing against that policy.
11.7.2 Intelligence
Intelligence includes those practices that result in collections of corporate knowledge used in
carrying out software security activities throughout the organization. Collections include both
proactive security guidance and organizational threat modeling.
The intelligence domain is meant to create organization-wide resources. Those resources
are divided into three practices:
• Attack models capture information used to think like an attacker: threat modeling,
abuse-case development and refinement, data classification, and technology-specific
attack patterns.
• The security features and design practice is charged with creating usable security patterns for major security controls (meeting the standards defined in the next practice), building middleware frameworks for those controls, and creating and publishing other proactive security guidance.
• The standards and requirements practice involves eliciting explicit security requirements (nonfunctional requirements [NFRs] as acceptance criteria) from the organization, determining which COTS software to recommend, building standards for major security controls (such as authentication, input validation, etc.), creating security standards for technologies in use, and creating a standards review board.
11.7.3 SSDL Touchpoints
SSDL touchpoints include those practices associated with analysis and assurance of particular software development artifacts and processes. All software security methodologies include these practices.
The SSDL touchpoints domain is probably the most familiar of the four domains. This
domain includes essential software security best practices that are integrated into the SDLC.
The two most important software security practices are architecture analysis and code review.
Architecture analysis encompasses capturing software architecture in concise diagrams, applying lists of risks and threats, adopting a process for review (such as STRIDE or architectural risk analysis), and building an assessment and remediation plan for the organization.
The code review practice includes use of code review tools, development of customized rules, profiles for tool use by different roles (e.g., developers versus analysts), manual analysis, and tracking/measuring results. The security testing practice is concerned with prerelease testing, including integrating security into standard QA processes. The practice includes use of black box security tools (including fuzz testing) as a smoke test in QA, risk-driven white box testing, application of the attack model, and code coverage analysis. Security testing focuses on vulnerabilities in construction.
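To ground the idea of customized rules, the sketch below implements a homegrown check with Python's ast module that flags calls many organizational standards ban. Commercial review tools express rules in their own formats; this is purely a conceptual illustration.

# Conceptual custom code review rule using Python's ast module: flag eval/exec
# and subprocess calls made with shell=True.
import ast
import sys

BANNED = {"eval", "exec"}

class RuleVisitor(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        if isinstance(node.func, ast.Name) and node.func.id in BANNED:
            self.findings.append((node.lineno, f"use of {node.func.id}()"))
        for kw in node.keywords:
            if kw.arg == "shell" and getattr(kw.value, "value", False) is True:
                self.findings.append((node.lineno, "call with shell=True"))
        self.generic_visit(node)

visitor = RuleVisitor()
visitor.visit(ast.parse(open(sys.argv[1]).read()))
for lineno, message in visitor.findings:
    print(f"{sys.argv[1]}:{lineno}: {message}")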
11.7.4 Deployment
Deployment includes those practices that interface with traditional network security and software maintenance organizations. Software configuration, maintenance, and other environment issues have direct impacts on software security.
By contrast, in the deployment domain, the penetration testing practice involves more standard outside-in testing of the sort carried out by security specialists. Penetration testing focuses on vulnerabilities in the final configuration and provides direct feeds to defect management and mitigation. The software environment practice concerns itself with operating system and platform patching, Web application firewalls, installation and configuration documentation, application monitoring, change management, and, ultimately, code signing. Finally, the configuration management and vulnerability management practice is concerned with patching and updating applications, version control, defect tracking and remediation, and incident handling.
In preparation for that formal assessment, you'll need to consider which application development teams to engage for the interviews. BSIMM assessments are conducted as a series of interviews with subject matter experts (SMEs) and knowledgeable people in your organization who are involved with your SSI. You can slice the potential assessment population into any cross sections that you'd like, but try to select those teams that represent actual outcomes from your efforts to roll out appsec. In other words, you want teams who have engaged in appsec practices you've helped to implement and are showing positive outcomes from those engagements. To gain a representative view of the SSI itself, you'll need a good cross section of those whose lives you've touched with your program and who have a good understanding of your mission and objectives for appsec.
11.12 Summary
In Chapter 11, you saw two approaches to developing, collecting, and assessing metrics to help determine the overall maturity of your secure development implementation efforts and programs. Although both models should lead you to improved and measurable processes, the one you use must be determined by your own organization's structure, its internal development processes, and your own good judgment. Although we won't recommend one
approach over the other, you should be able to see the overlaps between them and use the one
that best fits your purposes. As we mentioned early in this chapter, don’t let the perfect be the
enemy of the good. For software assurance, the time to get moving is now!
2. The Maturity Model you select for your own organization makes no difference so long
as your measurement system is:
a) Repeatable
b) Actionable
c) Consistent
d) All the above
Exercises
1. BSIMM does not advertise itself as a series of prescriptions for software security; rather,
it looks at families of activities in various specialty areas and ranks them based on how
frequently these activities were found in the participating organizations. How can this
help you with your own software security program?
2. How can you best decide which software assurance maturity model you’d prefer to
use, and once you make that decision, how will you communicate to the development
community about your efforts to build a baseline measurement and process for ongoing
measurement?
References
1. Epstein, J. (n.d.). Jeremy Epstein on the Value of a Maturity Model. OpenSAMM. Retrieved from https://www.opensamm.org/2009/06/jeremy-epstein-on-the-value-of-a-maturity-model/
2. SAMM. (n.d.). Software Assurance Maturity Model (SAMM): A Guide to Building Security into Software Development. Retrieved from http://www.opensamm.org/downloads/SAMM-1.0.pdf
3. BSIMM. (2020, September 1). About the Building Security In Maturity Model. Retrieved from https://www.bsimm.com/about.html
Chapter 12
Frontiers for AppSec
CHAPTER OVERVIEW
The stakes for software development teams (and the rest of us, actually) are already rather high
and getting higher. On top of the countless changes in how software is developed, we have new and unexpected places where software shows up. The attack surface for software is growing exponentially as new technologies and new ways of using software emerge and flourish.
Today, development teams are responsible not just for security, but for safety as well.
Imagine trying to live with yourself knowing software you wrote was the cause of someone’s
death . . .
Those are the stakes, and the people most responsible for meeting them need the right kinds of preparation.
In Chapter 12, we’ll survey a potpourri of new technologies and new ways software is being
packaged, deployed, and used across the world.
CHAPTER TAKEAWAYS
• Assess various application methods of new technology for security assurance.
• Determine best practices for different classes of technology that lead to "Building Security In."
• Evaluate organizational standards for implementing security controls in new technology.
Someone is writing all the software for these things, but do they truly appreciate the awesome responsibility that goes along with it?
Because of IoT, software is forced into a state in which a recall of a physical device may be needed to remediate basic software defects or, worse, security defects in the software. How will IoT firms respond? Who's willing to ship their refrigerator back for a software update?
The IoT Security Foundation is a non-profit organization that has been dedicated to driving security excellence since 2014. It is a vendor-neutral, international initiative whose mission is to serve as an expert resource for sharing knowledge, best practices, and advice. Its program is designed to propagate good security practice, increase adopter knowledge, and raise user confidence in IoT.1
The foundation publishes a series of best practice guides2 on how to plan and develop IoT
applications and devices with these goals in mind:
• Aid confident adoption of secure IoT solutions, enabling their technology benefits
• Influence the direction and scope of any future necessary regulation
• Influence IoT procurement requirements, including by governments
• Increase the levels of security expertise throughout the IoT sector
• Deliver business value to members by building an eminent, diverse, and international IoT
security network
In July 2019, the US National Institute of Standards and Technology (NIST) published a roadmap for IoT security: The Core Cybersecurity Feature Baseline for Securable IoT Devices: A Starting Point for IoT Device Manufacturers as NISTIR 8259.3 The document defines a core baseline of cybersecurity features that manufacturers may voluntarily adopt for the IoT devices they produce. It also provides information on how manufacturers can identify and implement features beyond the core baseline that are most appropriate for their customers. Draft NISTIR 8259 builds upon NIST's previous work, Considerations for Managing Internet of Things (IoT) Cybersecurity and Privacy Risks.4 The roadmap covers core capabilities such as device identification, device configuration, data protection, logical access to interfaces, software and firmware updates, and cybersecurity event logging.
12.2 Blockchain
Blockchain, as a distributed ledger technology5 that makes Bitcoin and hundreds of other
cryptocurrencies possible, is touted as a tremendous advance in computer security. As people
enter the mix, however, blockchain often turns into a computer security liability.
Creative applications of blockchain increase its appeal, but sometimes a system is turned
against itself, as is the case with cryptomining bot-controlled malware that steals computing
resources for brute-force computations. We're all placed at risk when insecure user practices and defective application software allow malware infections that may be impossible to remove. In addition, the private cryptographic keys that are needed to identify ownership or transaction outcomes are often the target of theft, which is only made easier when the software wallets that manage these keys are written with defective code.
vulnerabilities indicates three areas in which the creation and use of blockchains may be vulnerable to problems related to computer security.
Although blockchain may be one of the most secure data protection technologies available for use today, taking its security for granted is dangerous and risky. As blockchain technology evolves, so will its vulnerabilities, and it's only a matter of time before hackers find a way to breach blockchain networks. Organizations need to secure their blockchain right from the start (build security in by Shifting Left) by implementing strong authentication, employing cryptography key vaulting mechanisms, and, of course, implementing secure treatment of software within every step of the SDLC.
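The key vaulting advice can be as simple as refusing to store a private key in cleartext. Below is a minimal sketch using the Python cryptography package, with a key-wrapping key derived from a passphrase; the passphrase handling and parameters are illustrative, and production systems would prefer a hardware security module or a managed vault service.

# Minimal sketch of vaulting a private key at rest (Python "cryptography"
# package). Illustrative only; prefer an HSM or managed vault in production.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _wrapping_key(passphrase: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def vault(private_key: bytes, passphrase: bytes):
    salt = os.urandom(16)
    return salt, Fernet(_wrapping_key(passphrase, salt)).encrypt(private_key)

def unvault(salt: bytes, token: bytes, passphrase: bytes) -> bytes:
    return Fernet(_wrapping_key(passphrase, salt)).decrypt(token)

salt, token = vault(b"-----BEGIN PRIVATE KEY-----...", b"long passphrase here")
assert unvault(salt, token, b"long passphrase here").startswith(b"-----BEGIN")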
obvious that a lift and shift strategy was not going to fly. Rather, these new ways of operating data centers required rethinking how applications are built, deployed, and used. Large, monolithic applications started undergoing a refactoring process to break them up using some of the principles that we discussed in Chapters 5 and 6. This activity involves isolating functions (services) around a central concept or entity—for example, one microservice might deal with all the functions needed to create a client entity, another microservice might represent the sales entity, yet another might represent order processing, and so on. These microservices are activated through an application programming interface (API) that describes what functions are available and what attributes are involved. These APIs are most often developed using the Representational State Transfer,8 or REST, architectural style.
Application development using these microservices entails developing a series of API invocation processes that perform useful work without concern for how the services are implemented, while permitting enhancements and changes to occur without interface changes, to isolate the need to rebuild every application that uses those services. This presents a challenge to application security testing. API code itself may be run through a scanner, but this likely won't find and report on vulnerabilities. As we saw in Chapter 8, an application needs data input sources and control flows for the scanner to detect vulnerabilities. An API—as code on its own—won't have those elements, and static code scanners might prove useless for API testing. Point solutions in the market may address these scanning needs but will run as developer-oriented tools that work inside the API development environment. It may prove challenging to establish a security policy on how these tools should be used and how they detect and risk-rate discovered defects.
The restfulapi.net website, established to collect, present, and distribute information deemed
vital while building next-generation RESTful APIs, offers security design advice9 for architects
and designers:
• Least privilege. An entity should only have the required set of permissions to perform
the actions for which they are authorized, and no more. Permissions can be added as
needed and should be revoked when no longer in use.
• Fail-safe defaults. A user's default access level to any resource in the system should be "denied" unless they've been explicitly granted a "permit." (This principle and least privilege are illustrated in the sketch following this list.)
• Economy of mechanism. The design should be as simple as possible. All the component
interfaces and the interactions between them should be simple enough to understand.
• Complete mediation. A system should validate access rights to all its resources to ensure
that they’re allowed and should not rely on a cached permission matrix. If the access level
to a given resource is being revoked, but that isn’t reflected in the permission matrix, it
would violate the security.
• Open design. This principle highlights the importance of building a system in an open
manner—with no secret, confidential algorithms.
• Separation of privilege. Granting permissions to an entity should not be purely based on a single condition; a combination of conditions based on the type of resource is a better idea.
• Least common mechanism. This concerns the risk of sharing state among different components. If one can corrupt the shared state, it can then corrupt all the other components that depend on it.
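As promised above, the first two principles translate directly into code. The sketch below is a minimal, hypothetical authorization check that denies by default and grants only permissions that are explicitly listed; the principals, resources, and actions are invented for illustration.

# Minimal illustration of fail-safe defaults and least privilege in an API
# authorization check. All names here are hypothetical.
GRANTS = {
    ("billing-service", "invoices"): {"read"},
    ("admin-console", "invoices"): {"read", "write"},
}

def is_allowed(principal: str, resource: str, action: str) -> bool:
    # Fail-safe default: anything not explicitly granted is denied.
    return action in GRANTS.get((principal, resource), set())

assert is_allowed("billing-service", "invoices", "read")
assert not is_allowed("billing-service", "invoices", "write")  # least privilege
assert not is_allowed("unknown-service", "invoices", "read")   # default deny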
They also list these security best practices for both designers and developers:
• Keep it simple.
• Use password hashes.
• Never expose information on URLs.
• Consider OAuth instead of basic authentication.
• Add timestamps to requests.
• Validate input parameters.
• Ensure psychological acceptability—security should not make the user experience worse.
API and microservices security have caught the attention of the global appsec community. New research and guidance publications on API security abound, but the one most used is the OWASP API Security Top 10 (2019). Its primary goal is to educate those involved in API development and maintenance—for example, developers, designers, architects, and managers. Because software is written as asynchronous request-response mechanisms rather than as session-oriented applications, a number of issues arise in design and development, mostly related to access control, authentication, and authorization. You can obtain a copy of the OWASP API Security Top 10 from https://raw.githubusercontent.com/OWASP/API-Security/master/2019/en/dist/owasp-api-security-top-10.pdf
12.4 Containers
The next step beyond microservices and APIs is containerization of applications. A container10 is a standard unit of software that packages code and its dependencies so that the application runs quickly and reliably from one computing environment to another. Docker is the present-day standard for creating container images. These images are lightweight, standalone, executable packages of software that include everything needed to run an application: code, runtime, system tools, system libraries, and settings. Containers isolate software from its environment and ensure that the application works uniformly.
An article from Container Journal11 outlines four areas of security that need attention:
• Container images
○ Container technology depends on images as the building blocks for containers. The technology enables developers to easily create their own images or download public files from sites such as Docker Hub. However, images are not always easy to trust from a security perspective. Images must be signed and originate from a trusted registry to ensure high-quality protection. They must also be properly vetted and their code validated; if not, the images are vulnerable to cyberthreats.
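Vetting images can be automated at build time. The sketch below wraps the open source Trivy scanner and blocks promotion on critical findings; the image reference and severity threshold are assumptions, and any comparable scanner could be substituted.

# Illustrative gate that vets a container image with the open source Trivy
# scanner before promotion. Image name and threshold are assumptions.
import json
import subprocess
import sys

IMAGE = "registry.example.internal/app:1.4.2"  # hypothetical image reference

result = subprocess.run(["trivy", "image", "--format", "json", IMAGE],
                        capture_output=True, text=True, check=True)

critical = [vuln["VulnerabilityID"]
            for target in json.loads(result.stdout).get("Results", [])
            for vuln in (target.get("Vulnerabilities") or [])
            if vuln["Severity"] == "CRITICAL"]

if critical:
    print(f"Blocking promotion: {len(critical)} critical CVEs, e.g. {critical[:3]}")
    sys.exit(1)
print("Image vetted: no critical vulnerabilities found")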
The NIST Special Publication 800-190, Application Container Security Guide (SP 800-190),12 explains the security concerns associated with container technologies and makes practical recommendations for addressing those concerns when planning for, implementing, and maintaining containers. SP 800-190 offers recommendations in these areas to help appsec professionals address container risks before they manifest themselves:
• Image countermeasures
○ Image vulnerabilities
○ Image configuration defects
○ Embedded malware
○ Embedded clear text secrets
○ Use of untrusted images
• Registry countermeasures
○ Insecure connections to registries
such as cross-site scripting (XSS) and other injection attacks, and take some action before those are passed on to the Web application for processing. The actions they take are based on a policy the WAF is given to enforce. This policy can simply be to alert and report the possible attack, or it can block the traffic and prevent the exploit from succeeding. Because WAFs operate outside the knowledge or involvement of the Web application, the solution acts as a double-edged sword. Developers may use its presence as an excuse for not paying sufficient attention to the code-level defects they introduce, or as a crutch, believing that the WAF will take care of their application's security. Both situations work against your efforts to secure the SDLC.
As an element in defense in depth, WAFs can be an excellent control to help prevent exploits, but they should not get in the way of efforts to drive the human behavior that leads to defensive programming. One way to prevent this problem is to run the WAF in enforcement mode only in production segments, and not allow the WAF to block the work of pen testers in QA as they try to find defects in applications; it does you no good if a WAF prevents pen testers from proving that a defect reported by a scanning tool exists. Testing the effectiveness of the WAF in the QA environment after your application security testing steps are complete is one possible way to incorporate its use as an effective layer of defense.
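That final verification step is easy to automate. The sketch below replays a classic reflected XSS probe against a WAF-protected QA endpoint and checks that the request is blocked; the endpoint, parameter, and 403 block status are assumptions that vary with WAF configuration.

# Minimal sketch of verifying WAF enforcement in QA after appsec testing.
# The endpoint, parameter, and expected 403 block status are assumptions.
import requests

ENDPOINT = "https://qa.example.internal/search"  # hypothetical protected app
PROBE = {"q": "<script>alert(1)</script>"}       # classic reflected XSS probe

response = requests.get(ENDPOINT, params=PROBE, timeout=10)
if response.status_code == 403:
    print("WAF blocked the probe as expected")
else:
    print(f"WAF did not block (HTTP {response.status_code}); review the policy")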
By machine learning, we usually mean algorithms and mathematical models that are capable of learning and acting without human intervention and of progressively improving their performance. In computer security, various machine learning methods have long been used in spam filtering, traffic analysis, fraud prevention, and malware detection. In a sense, this is a game where, by making a move, you expect a reaction from the adversary. Therefore, while playing this game, one has to constantly update and correct models, feeding them with new data or even changing them completely. . . . As a result, future preference will be given to tools that use more intelligent techniques, such as machine learning methods. These can identify new features automatically, quickly process large amounts of information, generalize, and make quick and correct decisions. It is important to note that machine learning can be used for protection, on the one hand, and for more intelligent attacks, on the other.
The article classifies attacks against machine learning models along three dimensions:
• By influence type
○ Causative attacks affect the learning of the model through interference with the training data sample.
○ Exploratory attacks use classifier vulnerabilities without affecting the training data set.
• By security violation
○ Integrity attacks compromise the system through type II (false negative) errors.
○ Availability attacks cause the system to be shut down, usually based on type I and type II
errors.
• By specificity
○ A targeted attack is aimed at changing the prediction of the classifier when working
with a particular class.
○ An indiscriminate attack is aimed at changing the response/decision of the classifier to
any class, except the correct one.
To prevent or reduce the chance of such attacks, the article offers this advice:
To deliberately undermine the quality of your big data analysis, cybercriminals can fabricate data and "pour" it into your data lake. For instance, if your manufacturing company uses sensor data to detect malfunctioning production processes, cybercriminals can penetrate your system and make your sensors show fake results—say, wrong temperatures. This way, you can fail to notice alarming trends and miss the opportunity to solve problems before serious damage is caused. Such challenges can be solved through applying a fraud detection approach.
As big data is collected, it undergoes parallel processing. One of the methods used here is the MapReduce (MapR) paradigm. When the data is split into numerous bulks, a mapper processes them and allocates them to particular storage options. If outsiders have access to your mapper's code, they can change the settings of the existing mappers or add "alien" ones. This way, your data processing can be effectively ruined: cybercriminals can make mappers produce inadequate lists of key/value pairs, leading to faulty results brought up by the Reduce process. Gaining such access may not be too difficult, because big data technologies generally don't provide an additional security layer to protect data and tend to rely on perimeter security systems.
Despite the opportunity to—and the need to—encrypt big data, cryptography is often ignored.
Sensitive data is generally stored in the cloud without any encrypted protection. Although
cryptographic countermeasures may slow down big data processing, negating its speed, data
protection must remain paramount, especially when working with large data sets.
Related to the lack of cryptography that would prevent unauthorized access to sensitive data, other lacking or weak security controls may permit corrupt insider IT specialists or evil business rivals to mine unprotected data and sell it for their own benefit. Here, data can be better protected by adding appropriate security and data access controls.
Sometimes, data items fall under tight restrictions, and very few people have the authorization to view the data, such as personal information in medical records (name, email, blood sugar, etc.). But some unrestricted data elements could theoretically be helpful for users with no access to the secret data, such as medical researchers. With finer-grained access controls, people can access needed and authorized data sets but can view only the elements (attributes) they are allowed to see. Such access is difficult to grant and control, simply because big data technologies aren't initially designed to do so. One solution is to copy permissible data sets and data elements to a separate big data warehouse and provide access to a particular user as a group. For medical research, for instance, only the medical information (without the names, addresses, and so on) would be copied.
Data provenance—or historical records about your data—complicates matters even more. Data
provenance is a broad big data concern, but from a security perspective, it is crucial because:
• Unauthorized changes in metadata can lead you to the wrong data sets, which will make
it difficult to find needed information.
• Untraceable data sources can be a huge impediment to finding the roots of security
breaches and fake data-generation cases.
NoSQL databases (such as MongoDB) are hugely popular in big data science, but security is often the last thing considered—if it's considered at all. These databases were never built with security in mind, so attempts to bolt it on are likely to lead to loss of access or loss of data.
Big data security audits help companies gain awareness of their security gaps, though very
few companies bother with them. There are several reasons that those who work with big
data claim auditing is unnecessary: lack of time, resources, qualified personnel, or clarity of
business-related security requirements.
12.9 Summary
In Chapter 12 we took a 10,000-foot flyover of trends and new uses for software that we could never have anticipated even a decade ago. These trends and new uses are raising the stakes for everyone across society as our world becomes more and more connected and automated. Although we could not possibly have covered all the changes happening in the IT world, you should now have a good sense of the areas of focus for today and tomorrow. Software professionals are undergoing a renaissance period in which art, science, technology, human ethics, and the choices developers make are shaping the future.
Let’s all do our part to make it a promising one!
Chapter Quick Check
c) QA
d) Development
2. Which of these is the most logical first step in an API development project?
a) Create a functional view of the related application
b) Put API usage monitoring in place
c) Choose either an API proxy or an API gateway
d) None of the above
3. The modern standard that makes the use of application containers during runtime possible is called:
a) Transporter
b) Docker
c) Loader
d) Executor
Exercises
1. Your data scientists are asking for a complete copy of a production database that contains sensitive customer data, protected healthcare data, and payment card data. What questions do you have for them about the request so you're able to fulfill it while maintaining security compliance? How would you treat the data before it's released to them?
2. When reviewing the logs from your home Wi-Fi router, you notice that there was a brute-force access attempt on your home security doorbell. You also notice that the doorbell has reported losing its network connection much more frequently than when you first installed it. Are these incidents related? What will you do to kick out the intruders and protect all your IoT devices connected to Wi-Fi?
References
1. IoT Security Foundation. (n.d.). Our Mission. Retrieved from https://www.iotsecurityfoundation.org/about-us/
2. IoT Security Foundation. (n.d.). Best Practice Guidelines. Retrieved from https://www.iotsecurityfoundation.org/best-practice-guidelines/
3. NISTIR 8259 (DRAFT). (n.d.). Core Cybersecurity Feature Baseline for Securable IoT Devices. Retrieved from https://csrc.nist.gov/publications/detail/nistir/8259/draft
4. NISTIR 8228. (2019). Considerations for Managing Internet of Things (IoT) Cybersecurity and Privacy Risks. Retrieved from https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8228.pdf
5. Bankrate.com. (2018, August 2). Blockchain Definition. Retrieved from https://www.bankrate.com/glossary/b/blockchain/
6. Butcher, J. R. and Blakey, C. M. (2019). Cybersecurity Tech Basics: Blockchain Technology Cyber Risks and Issues: Overview. Retrieved from https://www.steptoe.com/images/content/1/8/v2/189187/Cybersecurity-Tech-Basics-Blockchain-Technology-Cyber-Risks-and.pdf
7. Gemalto. (2018, December 4). Blockchain Security: 3 Ways to Secure Your Blockchain. Retrieved from https://blog.gemalto.com/security/2018/12/04/blockchain-security-3-ways-to-secure-your-blockchain/
8. REST API Tutorial. (2017, June 5). What Is REST: Learn to Create Timeless RESTful APIs. Retrieved from https://restfulapi.net/
9. REST API Tutorial. (2018, July 20). REST API Security Essentials. Retrieved from https://restfulapi.net/security-essentials/
10. Docker. (n.d.). The Industry-Leading Container Runtime. Retrieved from https://www.docker.com/products/container-runtime
11. Bocatta, S. (2019, March 21). The 4 Most Vulnerable Areas of Container Security in 2019. Retrieved from https://containerjournal.com/2019/03/22/the-4-most-vulnerable-areas-of-container-security-in-2019/
12. Souppaya, M. P., Morello, J., and Scarfone, K. A. (2017, September 25). Application Container Security Guide. Retrieved from https://www.nist.gov/publications/application-container-security-guide
13. BSI. (n.d.). PAS 1885:2018. Retrieved from https://shop.bsigroup.com/ProductDetail/?pid=000000000030365446&_ga=2.267667464.704902458.1545217114-2008390051.1545217114
14. Sjafrie, H. (2019, December 11). Introduction to Self-Driving Vehicle Technology. Boca Raton (FL): Chapman and Hall/CRC Press/Taylor & Francis Group. Retrieved from https://www.crcpress.com/Introduction-to-Self-Driving-Vehicle-Technology/Sjafrie/p/book/9780367321253
15. Balaban, D. (2018, December 17). Security Problems of Machine Learning Models. Retrieved from https://it.toolbox.com/articles/security-problems-of-machine-learning-models
16. Techopedia. (n.d.). What Is Big Data? Retrieved from https://www.techopedia.com/definition/27745/big-data
17. ScienceSoft. (2018, April 4). Buried Under Big Data: Big Data Security: Issues, Challenges, Concerns. Retrieved from https://www.scnsoft.com/blog/big-data-security-challenges
Chapter 13
AppSec Is a Marathon—Not a Sprint!
CHAPTER OVERVIEW
We’ve covered a lot of ground in the last 12 chapters!
You saw the changes that drive how software is conceived and created, how it’s used, and what the security implications of these changes are. You learned that the drive for advancements in software propels changes in, and the maturation of, the processes used to create it, which in turn drives further advancements in software, and the perpetual cycle continues.
You learned that the world of software security is akin to a carousel ride, where the view changes with each revolution. You also learned that applying ideas, concepts, tools, and techniques from Agile is useful for building effective controls into the Agile development process itself.
Some Agile and Scrum purists may disagree with some of the advice that’s been shown to work, but throughout the book we strived to offer the best practical advice possible, given what we’ve learned about Scrum and appsec as people are added to the mix.
We hope you learned that appsec is indeed a human-based problem to solve and requires special attention, treatment, and lots of patience to empower those on the front lines of software development. With this book, you should consider yourself empowered to help effect positive changes by applying the practical advice in your own unique environments.
CHAPTER TAKEAWAYS
• Discover how to certify yourself as a software security professional
• Learn how to become involved with industry organizations that promote software security
• Find ways to engage in one or more community projects to advance processes, tools, and
technology that improve software security.
Appsec is not one of those security control frontiers where you can simply decide to enforce changes before the people who have to live with them are well prepared and on board. AppSec professionals and practitioners on the security team have only influence as their tool for change. Development teams don’t report to the security team, so there’s a lack of management authority over these people, especially when their own management creates conflicting demands on their time. You learned how to use this influence in positive ways that help to establish an Agile appsec foundation, strengthen it, and build upon it with deliberation to improve the people, processes, and technology that yield secure and resilient software and secure and resilient software development lifecycles (SDLCs).
You also discovered that appsec is never really done. Similar to painting the Golden Gate Bridge, once you’re close to the finish, you get to start all over again. There is no Definition of Done for appsec controls—they’re a living organism that requires constant attention to continuously improve, mature, and remain practical and achievable.
In wrapping up this book, it’s important to remind appsec professionals and practitioners to remain current and to continuously improve their skills in an ever-changing environment. It’s vital to explore opportunities and make the time to take an active role in the appsec industry itself, capitalizing on the connections you make, the talks and presentations you attend, volunteering efforts, and higher education.
• Flagship Projects. The OWASP Flagship designation is given to projects that have demonstrated strategic value to OWASP and application security as a whole. Projects become flagship candidates only after a major review process.
• Lab Projects. OWASP Labs projects represent projects that have produced a deliverable of value. Although these projects are typically not production ready, the OWASP community expects that an OWASP Labs project leader is producing releases that are at least ready for mainstream usage.
• Incubator Projects. OWASP Incubator projects represent the experimental playground
in which projects are still being fleshed out, ideas are still being proven, and development
is still underway. The OWASP Incubator label allows OWASP consumers to readily
identify a project’s maturity. The label also allows project leaders to leverage the OWASP
name while their project is still maturing.
You can find the list of projects and a list of local chapters at the OWASP website.3
The CSSLP CBK contains the largest, most comprehensive collection of best practices, policies, and procedures to help improve application security across all phases of application development, regardless of methodology. The CSSLP certification course and exam not only gauge an individual’s or development team’s competency in the field of application security but also teach a valuable blueprint for installing or evaluating a security plan in the lifecycle.
Table 13.1 SANS Secure Software Development Courses (Level, Course, Certification Available)

Level 1
• Secure DevOps: A Practical Introduction explains the fundamentals of DevOps and how DevOps teams can build and deliver secure software. You will learn DevOps principles, practices, and tools and how they can be leveraged to improve the reliability, integrity, and security of systems. (No certification)
• DEV522: Defending Web Applications Security Essentials is intended for anyone tasked with implementing, managing, or protecting Web applications. It is particularly well suited to application security analysts, developers, application architects, pen testers, auditors who are interested in recommending proper mitigations for Web security issues, and infrastructure security professionals who have an interest in better defending their Web applications. (Certification: GWEB)

Level 2
• SEC540: Cloud Security and DevOps Automation provides development, operations, and security professionals with a methodology to build and deliver secure infrastructure and software using DevOps and cloud services. Students will explore how the principles, practices, and tools of DevOps can improve the reliability, integrity, and security of on-premise and cloud-hosted applications. (No certification)
• DEV541: Secure Coding in Java/JEE: Developing Defensible Applications teaches students how to build secure Java applications and gain the knowledge and skills to keep a website from getting hacked, counter a wide range of application attacks, prevent critical security vulnerabilities that can lead to data loss, and understand the mindset of attackers. The course teaches the art of modern Web defense for Java applications by focusing on foundational defensive techniques, cutting-edge protection, and Java EE security features you can use in your applications as soon as you return to work. (Certification: GSSP-Java)
• DEV544: Secure Coding in .NET: Developing Defensible Applications is a comprehensive course covering a huge set of skills and knowledge. It’s not a high-level theory course; it’s about real programming. In this course, you will examine actual code, work with real tools, defend applications, and gain confidence in the resources you need for the journey to improve the security of .NET applications. Rather than teaching students to use a set of tools, this course teaches concepts of secure programming: looking at a specific piece of code, identifying a security flaw, and implementing a fix for flaws found on the OWASP Top 10 and CWE™/SANS Top 25 Most Dangerous Programming Errors. (Certification: GSSP-.NET)

Specialty Courses
• SEC542: Web App Penetration Testing and Ethical Hacking helps students move beyond push-button scanning to professional, thorough, high-value Web application penetration testing. SEC542 enables students to assess a Web application’s security posture and convincingly demonstrate the impact of inadequate security that plagues most organizations. (Certification: GWAPT)
• SEC642: Advanced Web App Penetration Testing, Ethical Hacking, and Exploitation Techniques is designed to teach the advanced skills and techniques required to test modern Web applications and next-generation technologies. The course uses a combination of lectures, real-world experiences, and hands-on exercises to teach the techniques needed to test the security of tried-and-true internal enterprise Web technologies as well as cutting-edge Internet-facing applications. The final course day culminates in a Capture the Flag competition, in which you will apply the knowledge you acquired during the previous five days in a fun environment based on real-world technologies. (No certification)
SANS offers a complete curriculum on secure software development. The curriculum consists of a set of seven courses, across three levels, that fit into an overall roadmap for professionals in IT security. Some of these courses lead to a certification, while others are electives in other overall courses of study. The courses are summarized in Table 13.1.
You can find out more about the courses and certifications on the SANS website.5
13.5 Conclusion
Appsec is a legacy problem that can’t and won’t be solved overnight. It requires everyone’s active diligence, vigorous participation, ongoing awareness and evangelism, continuing education, and determination to make any dent in the problems.
From the start, you’ve seen the folly and dangers of unleashing insecure, unreliable, and flawed software onto the Internet, but along the way you discovered how to avoid and solve most of the problems that would otherwise land you in hot water. Beyond the principles, tools and techniques, and advice offered to help you build secure and resilient software and the systems that support software development, we hope you’ve also begun to shift your thinking toward a security consciousness that will serve you and your organizations well, now and into the future.
By tackling your own software security and resilience, you’ll instill—and maintain—the right levels of trustworthiness that your customers demand and deserve. You have seen throughout this book that software security requires a holistic, comprehensive approach. It is as much a set of behaviors as it is a bundle of tools and processes that, if used in isolation, will leave you with a false sense of security and quite a mess on your hands.
Effective security requires that you educate yourself and your staff, develop manageable
security processes, and create a software development environment that reinforces the right set
of human behaviors. It also means investing in the tools and expertise that you deem necessary
to evaluate and measure your progress toward a holistic environment that rewards defensive
systems development.
Our objective in this book has been to give you the information that we feel is fundamental
for software that is considered secure and resilient. We hope that you take to heart what we
have offered here and bring it to life—improving the world for yourselves, your families, your
organizations, your customers, and your peers.
Chapter Quick Check
1. Benefits of preparing for CSSLP certification include all but which of the following:
a) Gauges an individual’s or development team’s competency in the field of application security
b) Provides a valuable blueprint to install or evaluate a security plan in the SDLC
c) Demonstrates commitment to the practice of software security
d) Increases the likelihood of gaining an interview for a secure SDLC position
2. OWASP Flagship project designation is given to projects that have demonstrated strategic value to OWASP and the application security community as a whole.
a) False
b) True
3. Demand6 for appsec practitioners has been dramatically increasing annually, and in
2021, the projected 5-year growth is estimated to be:
a) 164%
b) 95%
c) 110%
d) 2,000%
Exercises
1. As a member of the Security Team charged with implementing your corporate software
security program, you decided that you’d like to prepare and sit for the CSSLP exam.
Develop the justification you would use to convince your manager to allow you the
commitment of time and budget to gain the certification.
2. In haiku form, write up a summary related to any of the practices used in software security, based on what you learned from this course. Have some fun with it! For example:
iPhone addiction
I must overcome - oh wait...
There’s an app for that!
— Author unknown
References
1. OWASP. (n.d.). Membership. Retrieved from https://www.owasp.org/index.php/Membership#tab=Other_ways_to_Support_OWASP
2. OWASP. (n.d.). Category: OWASP Project. Retrieved from https://www.owasp.org/index.php/Category:OWASP_Project#tab=Project_Inventory
3. OWASP. (n.d.). OWASP Chapter. Retrieved from https://www.owasp.org/index.php/OWASP_Chapter
4. (ISC)2. (n.d.). CSSLP Ultimate Guide. Retrieved from https://www.isc2.org/Certifications/Ultimate-Guides/CSSLP
5. SANS Institute. (n.d.). Secure Software Development Curricula. Retrieved from https://www.sans.org/curricula/secure-software-development
6. Columbus, L. (Ed.). (2020, November 01). What Are the Fastest Growing Cybersecurity Skills in 2021? Retrieved from https://www.forbes.com/sites/louiscolumbus/2020/11/01/what-are-the-fastest-growing-cybersecurity-skills-in-2021/?sh=10f571555d73
Appendix A
Security Acceptance Criteria
Appendix A is offered as a small subset of prewritten acceptance criteria for application product user stories that have an associated business function, such as logging in to gain access to data and services. These items either cover a broad range of topics related to required security functions or describe a set of desirable security attributes that are observable as the application undergoes the testing that leads to user story or Definition of Done completion. They are useful directly or can be adapted to organization-specific security requirements and the specific tools in use. Mostly, they are offered to you as a model for writing good acceptance criteria for a variety of user stories.
Category: Authentication

Topic: Credential security
Acceptance criteria: The system stores the information used for authentication according to the firm’s standard for secure storage of user credentials.

Topic: Replay attack protection
Acceptance criteria: The authentication process of the application protects the system from replay attacks by protecting the transmitted authentication information and examining the sequence of submitted authentication information.

Topic: Protect credential guessing
Acceptance criteria: The system provides generic feedback to the user during the authentication procedure when an incorrect ID or password is entered—for example, “Either your ID or password is incorrect.”

Topic: Reauthentication
Acceptance criteria: The system reauthenticates the user after the timeout period of inactivity is reached.

Topic: Protection of credentials
Acceptance criteria: The application masks the display of the password when it’s entered or changed.

Topic: Password first use
Acceptance criteria: On first use, the system prompts the user to change the initial password and prevents access if the user does not comply. In addition, it allows users to change their own password and/or PIN at any time later on.

Topic: Password resets
Acceptance criteria: The system permits users to change their own password when they desire or are required to.

Topic: Password aging
Acceptance criteria: The system forces users to periodically change static authentication information based on administrator-configurable time frames.

Topic: Password expiry
Acceptance criteria: Prior to the expiration of the password, the system notifies the user regarding the imminence of expiration.

Topic: Preventing password reuse
Acceptance criteria: The system prevents the reuse of passwords within an administrator-defined period. For example, when updating a password, a user will be prevented from reusing a password that was already used, based on administrator configurations.

Topic: Authentication data configuration
Acceptance criteria: The system enables authentication information configuration according to administrator-specified rules for minimum length, alphabetic characters, and/or numeric and special characters.
Category: Audit

Topic: Audit log
Acceptance criteria: The system maintains an audit log that provides adequate information for establishing audit trails on security breaches and user activity.

Topic: Logging of authentication information
Acceptance criteria: The system maintains the confidentiality of authenticators (e.g., passwords) by preventing them from being included in any audit logs.

Topic: Logging of specific events
Acceptance criteria: The system allows the administrator to configure the audit log to record specified events such as:
• All sessions established
• Failed user authentication attempts
• Unauthorized attempts to access resources (e.g., software, data, process)
• Administrator actions
• Administrator disabling of audit logging
• Events generated (e.g., commands issued) to make changes in users’ security profiles and attributes
• Events generated to make changes in the security profiles and attributes of system interfaces
• Events generated to make changes in permission levels needed to access a resource
• Events generated that make changes to the system security configuration
• Events generated that make modifications to the system software
• Events generated that make changes to system resources deemed critical (as specified by the administrator)

Topic: Action on audit log failure
Acceptance criteria: The system provides the administrator with the ability to specify the appropriate actions to take (i.e., continue or terminate processing) when the audit log malfunctions or is terminated.

Topic: Archival of audit logs
Acceptance criteria: The system provides the administrator with the ability to retrieve, print, and copy (to some long-term storage device) the contents of the audit log.

Topic: Log review and reporting
Acceptance criteria: The system provides the administrator with audit analysis tools to selectively retrieve records from the audit log to perform functions such as producing reports, establishing audit trails, etc.

Topic: Logging of specific information
Acceptance criteria: The system allows the administrator to configure the audit log to record specified information such as:
• Date and time of the attempted event
• Host name of the system generating the log record
• User ID of the initiator of the attempted event
• Names of resources accessed
• Host name of the system that initiated the attempted event
• Success or failure of the attempt (for the event)
• Event type

Topic: Protection of audit log
Acceptance criteria: The system protects the audit log from unauthorized access, modification, or deletion.
Category: Authorization

Topic: Access rights
Acceptance criteria: The system prevents access to system resources without checking the assigned rights and privileges of the authenticated user.

Topic: Access restriction
Acceptance criteria: The system restricts session establishment based on time-of-day, day-of-week, calendar date of the log-in, and source of the connection.

Topic: User privileges (discretionary access control)
Acceptance criteria: The system enables the assignment of user and group privileges to a specific user ID.

Topic: User privileges, role-based access control (RBAC)
Acceptance criteria: The system permits the assignment of users to roles (e.g., regular user, privileged user, administrator) to permit or limit access to security features or other application administrative functions.

Topic: Resource control mechanism
Acceptance criteria: The system provides a resource control mechanism that grants or denies access to a resource based on user and interface privilege or role.
Category: Confidentiality

Topic: Sensitive information protection
Acceptance criteria: The system is capable of protecting system-defined, security-related, and user-sensitive or private information (e.g., nonpublic data, protected healthcare data, etc.) from unauthorized disclosure while stored or in transit.
Category: Identification

Topic: Unique user ID
Acceptance criteria: The system uniquely identifies each user of the system with a unique user ID.

Topic: Backdoor prevention
Acceptance criteria: All interfaces of the software that are accessed for performing any action have the capability to connect the activity to a user ID.

Topic: Process identifier code/accountability
Acceptance criteria: For each process running in the system that has been initiated by a user, the system associates the process with the user ID of that specific user. Service-oriented processes are associated with an identifier code indicating system ownership or service ID ownership and are tied to an individual accountable for its actions while in use.

Topic: Autodisable user IDs
Acceptance criteria: The application automatically disables an identifier if it remains inactive for an administrator-specified time period (e.g., 90 days).

Topic: Security attributes
Acceptance criteria: The system maintains the following list of security attributes for each user: user ID, group memberships (roles), access control privileges, authentication information, and security-relevant roles.
Category: Integrity

Topic: Replay attack protection
Acceptance criteria: The system provides mechanisms to detect communication security violations in real time, such as replay attacks that duplicate an authentic message.

Topic: Integrity of logs
Acceptance criteria: The system protects the integrity of audit log records by generating integrity checks (e.g., checksums or secure hashes) as the log records are created and verifies the integrity check data when the record is accessed.

Topic: Integrity checks
Acceptance criteria: The system protects data integrity by performing integrity checks and rejects the data if the integrity check fails.
Category: Nonrepudiation

Topic: Secure logging of specific information
Acceptance criteria: The system securely records information related to the receipt of specific information from a user or another system.

Topic: Time stamping
Acceptance criteria: The system securely links received information with the originator (sender) of the information and other characteristics such as time and date.
Appendix B
Resources for AppSec
These links are being provided as a convenience and for informational purposes only; they do
not constitute an endorsement or an approval by the author or Taylor & Francis Group of any
of the products, services, or opinions of the corporation or organization or individual. The
author bears no responsibility for the accuracy, legality, or content of the external site or for that
of subsequent links. Contact the external site for answers to questions regarding its content.
Training
• Security Innovation’s CMD+CTRL Training Courses: https://www.securityinnovation.com/training/software-application-security-courses/
• Synopsys Security Training and Education: https://www.synopsys.com/software-integrity/training.html
• SAFECode Training: https://safecode.org/training/
• OWASP Secure Development Training: https://www.owasp.org/index.php/OWASP_Secure_Development_Training
• Security Compass Secure Development Training: https://www.securitycompass.com/training/enterprise/
• NTT eLearning: https://www.whitehatsec.com/products/computer-based-training/
• Veracode Developer Training: https://www.veracode.com/services/developer-training
• Secure Code Warrior: https://securecodewarrior.com/
• SecurityJourney: https://www.securityjourney.com/
Cyber Ranges
• CMD+CTRL Cyber Range: https://www.securityinnovation.com/training/cmd-ctrl-cyber-range-security-training/
Threat Modeling
• MS Threat Modeling Tool: https://www.microsoft.com/en-us/securityengineering/sdl/threatmodeling
• ThreatModeler® DevOps Edition: https://threatmodeler.com/integrated-threat-modeling/
• OWASP Threat Dragon: https://threatdragon.org/login
• IriusRisk®: https://continuumsecurity.net/threat-modeling-tool/
Maturity Models
• Building Security In Maturity Model (BSIMM): https://www.bsimm.com/
IAST Tools
• Contrast Assess: https://www.contrastsecurity.com/interactive-application-security-testing-iast
• Synopsys Seeker Interactive Application Security Testing: https://www.synopsys.com/software-integrity/security-testing/interactive-application-security-testing.html
• Checkmarx CxIAST: https://checkmarx.com/product/cxiast-interactive-code-scanning/
• ImmuniWeb® Interactive Application Security Testing: https://www.immuniweb.com/products/iast/
Browser-centric Protection
• Tala Security: https://www.talasecurity.io/
Appendix C
Answers to Chapter Quick Check Questions
Appendix C provides the questions from each chapter’s Quick Check with the correct answers
shown in Bold to help you check your own understanding.
Chapter 1
1. A design principle to guide the selection of controls for an application to ensure its
resilience against different forms of attack, and to reduce the probability of a single
point of failure in the security of the system is called:
a) Defense in depth
b) Least privilege
c) Security by obscurity
d) Secure defaults
Chapter 2
3. Software security is most effective when it’s addressed in which SDLC activity?
a) Design sprint
b) Development sprint
c) Sprint planning
d) All Scrum activities
Chapter 3
1. Effective appsec programs begin with an awareness track for everyone involved in software development to:
a) Provide context
b) Inform everyone about the program and roadmap
c) Provide common understanding and terminology
d) All of the above
2. The strategy for communication meant to overcome apathy to hazards that are significant is called:
a) Outrage management
b) Crisis communication
c) Precaution advocacy
d) Crisis management
Chapter 4
Chapter 5
1. A design principle to guide the selection of controls for an application to ensure its
resilience against different forms of attack, and to reduce the probability of a single
point of failure in the security of the system is called:
a) Defense in depth
b) Least privilege
c) Security by obscurity
d) Secure defaults
3. The term that describes all possible entry points that an attacker can use to attack an
application or system is called:
a) Security perimeter
b) Attack surface
c) Attack perimeter
d) Gateway
Chapter 6
2. Risk is calculated for each identified threat to prioritize findings and needs for remediation using which technique?
a) RISKCOMPUTE
b) DANGER
c) DREAD
d) THREATORDER
3. Benefits derived from threat modeling include all but which of the following?
a) Identifies threats and compliance requirements and evaluates their risk
b) Defines the need for required controls
c) Balances risks, controls, and usability
d) Guarantees that secure designs are sent for development
Chapter 7
2. What is the attack technique used to exploit websites by altering backend database
queries through inputting manipulated queries?
a) LDAP Injection
b) XML Injection
c) SQL Injection
d) OS Commanding
3. Which attack can execute scripts in the user’s browser and is capable of hijacking user
sessions, defacing websites, or redirecting the user to malicious sites?
a) SQL Injection
b) Cross-site scripting (XSS)
c) Malware uploading
d) Man in the middle
4. What is the type of flaw that occurs when untrusted user-entered data is sent to the
interpreter as part of a query or command?
a) Insecure Direct Object References
b) Injection
c) Cross Site Request Forgery
d) Insufficient Transport Layer Protection
Chapter 8
2. Static code testing can expose all but which of the following?
a) Unvalidated/unsanitized variable used in data flow
b) Use of insecure functions
c) Vulnerable software libraries contained in the application
d) Logic flaws that affect business data
Chapter 9
3. Which of the following are the potential benefits of using tools for testing?
i. Reducing the repetitive work
ii. Increasing consistency and repeatability
iii. Over-reliance on the tool
a) i and ii only
b) ii and iii only
c) i and iii only
d) All i, ii, and iii
4. Which of the following are the success factors for the deployment of the tool within an
organization?
i. Assessing whether the benefits will be achieved at a reasonable cost
ii. Adapting and improving processes to fit with the use of the tool
iii. Defining the usage guidelines
a) i and ii only
b) ii and iii only
c) i and iii only
d) All i, ii, and iii
Chapter 10
Chapter 11
c) OWASP
d) OpenSAMM
2. The Maturity Model you select for your own organization makes no difference so long
as your measurement system is:
a) Repeatable
b) Actionable
c) Consistent
d) All the above
Chapter 12
2. Which of these is the most logical first step in an API development project?
a) Create a functional view of the related application
b) Put API usage monitoring in place
c) Choose either an API proxy or an API gateway
d) None of the above
3. The modern standard that makes the use of application containers during runtime
possible is called:
a) Transporter
b) Docker
c) Loader
d) Executor
Chapter 13
1. Benefits of preparing for CSSLP certification include all but which of the following:
a) Gauges an individual’s or development team’s competency in the field of application security
b) Provides a valuable blueprint to install or evaluate a security plan in the SDLC
c) Demonstrates commitment to the practice of software security
d) Increases the likelihood of gaining an interview for a secure SDLC position
2. OWASP Flagship project designation is given to projects that have demonstrated strategic value to OWASP and the application security community as a whole.
a) False
b) True
3. Demand for appsec practitioners has been dramatically increasing annually, and in
2021, the projected 5-year growth is estimated to be:
a) 164%
b) 95%
c) 110%
d) 2,000%
Glossary
MapR MapReduce
MRD Master Requirements Documentation
MVP Minimally viable product