
Software Vulnerability Testing

Module-2

Software Vulnerability Fundamentals


Vulnerability Management Policy
Purpose
 The purpose of the (District/Organization) Vulnerability
Management Policy is to establish the rules for the review,
evaluation, application, and verification of system updates to
mitigate vulnerabilities in the IT environment and the risks
associated with them.
Audience
The (District/Organization) Vulnerability Management Policy
applies to individuals who are responsible for Information
Resource management.
Policy
Endpoint Protection (Anti-Virus & Malware)
 All (District/Organization) owned and/or managed Information
Resources must use the (District/Organization) IT management
approved endpoint protection software and configuration.
 (District/Organization) owned workstations and laptops must
use (District/Organization) IT management approved endpoint
protection software and configuration, prior to any
connection to a (District/Organization) Information Resource.
The endpoint protection software must not be altered,
bypassed, or disabled.
 Each email gateway must utilize (District/Organization) IT
management approved email virus protection software and must
adhere to the (District/Organization) rules for the setup and use of
this software, which includes, but is not limited to, scanning of
all inbound and outbound emails.
Controls to prevent or detect the use of known or suspected
malicious websites must be implemented.
All files received over networks or from any external
storage device must be scanned for malware before use.
 Every virus that is not automatically cleaned by the virus
protection software constitutes a security incident and must be
reported to (District/Organization) IT Support.
Logging & Alerting
Logs must be generated in accordance with the (District/Organization) Logging Standard and sent to a central log management solution.
(District/Organization)will use file integrity monitoring or
change detection software on logs and critical files to alert
personnel to unauthorized modification.
Patch Management
The (District/Organization) IT team maintains overall
responsibility for patch management implementation,
operations, and procedures.
 All Information Resources must be scanned on a regular
basis to identify missing updates.
 All missing software updates must be evaluated according to
the risk they pose to (District/Organization).
 Missing software updates that pose an unacceptable risk to
(District/Organization) Information Resources must be
implemented within a time period that is commensurate with the
risk as determined by the (District/Organization) Patch and
Vulnerability Standard.
 Software updates and configuration changes applied
to Information Resources must be tested prior to widespread
implementation and must be implemented in accordance with the
(District/Organization) Change Control Policy.
Verification of successful software update deployment
will be conducted within a reasonable time period as
defined in the (District/Organization) Patch and Vulnerability
Standard.
Penetration Testing
Penetration testing of the internal network, external network,
and hosted applications must be conducted at least annually or
after any significant changes to the environment.
Any exploitable vulnerabilities found during a penetration
test will be corrected and re-tested to verify the vulnerability was
corrected.
Vulnerability Scanning
Vulnerability scans of the internal and external network must
be conducted at least quarterly or after any significant change
to the network.
 Failed vulnerability scan results rated at Critical or High will
be remediated and re-scanned until all Critical and High risks are
resolved.
 Any evidence of a compromised or exploited Information
Resource found during vulnerability scanning must be
reported to the (District/Organization) Information Security Officer
and IT support.
 Upon identification of new vulnerability issues, configuration
standards will be updated accordingly.
Auditing
Data and Security
 Every organization and individual is prone to a cyber attack
regardless of the security measures deployed.
 It is therefore important that security assessments are carried
out more often to ensure that there are proper up-to-date
controls and mitigation measures in the case of a cyber-attack.
 An organization should also conduct a special security audit
after a data breach, system upgrade or data migration, when
changes to compliance laws occur, when a new system has been
implemented, or when the business grows by more than a
defined number of users.
 These one-time audits may focus on a specific area where the
event may have opened security vulnerabilities.
 For example, if a data breach just occurred, an audit of the
affected system can help determine what went wrong.
Why IT security audit?
There are several reasons to do a security audit. The six main
goals for carrying out an IT security audit are:

Identify security problems and gaps, as well as system
weaknesses.
Establish a security baseline that future audits can be compared
with.
Comply with internal organization security policies.
Comply with external regulatory requirements.
Determine if security training is adequate.
Identify unnecessary resources.
 Security audits will help protect critical data, identify security
loopholes, create new security policies and track the effectiveness
of security strategies.
 Regular audits can help ensure employees stick to security
practices and can catch new vulnerabilities.
 Security audits measure an information system’s performance
against a list of criteria.
 Determines the Current Security Posture
 Determines the need for Change in Policies and Standards
 Protect IT System & Infrastructure against Attacks
 Evaluates the Security of Data Flow
 Verifies Compliance
 Keeps Security Measures Updated
 Formulate New Security Policies & Procedures
 Effectiveness of Security Training & Awareness
How to approach the audit?
When carrying out an audit, two different approaches can be used.
First, there is the black box security audit. It is so called
because the personnel performing the analysis have no initial
knowledge of the technological infrastructure underlying the IT system.
 They are therefore going in blind, as if they were an external attacker.
 In addition, the audit team has no user accounts with which to
interact with the applications to be analyzed.
 This type of security review is ideal for evaluating systems and
applications from the perspective of an external attacker or outsider.
 With this simulation, it’s possible to identify system vulnerabilities
and the level of exposure to an attack as well as weaknesses in
authentication and authorization.
Through this type of security audit approach, the team of
analysts will collect and analyze the available information, and from
this data will try to identify the maximum possible vulnerabilities
through manual techniques and the use of other specific tools.
 Secondly, an assessment can be carried out from another, more
comprehensive approach:
 The White Box audit. In this kind of approach, the audit team has,
from the outset, the necessary information about the assets to be
analyzed, including architectures, source code, or user or
administration documentation.
 In addition, depending on the assets, they may have other
types of data such as legitimate users.
 Because all this data is available, the audit team does not have
to focus, as in the previous approach, on the prior collection of
information, but can concentrate its efforts on the identification
of vulnerabilities and critical elements for the business.
 These two approaches to security auditing are not mutually
exclusive. On the contrary, they are complementary, so you can run an
audit by opting for a Black Box approach, getting all the information,
and preparing for the most likely attack scenarios and then perform a
White Box audit, focusing on the critical aspects.
 Finally, it’s possible to approach the security review work
from a Grey Box approach.
 In these reviews, partial information is available, such as access to
users with different roles or privileges on the platform, but not all the
existing information.
 This type of approach allows speeding up review times,
avoiding previous information gathering work, and accelerating
vulnerability identification and exploitation processes. In some
cases, it can be called post-authentication Black Box.
What type of security audit can be carried out?
The most common types are:
Web audit. This service audits web technologies in
search of existing vulnerabilities.
 Mobile app audit: As its name suggests, it is focused on
mobile applications, both Android and iOS, and ultimately any
mobile technology. It consists of a set of security tests for
applications, with the mission of analyzing how they store,
transmit and process information. It also examines the security
offered by the hardware device (the terminal), treating it as a
hostile environment.
Audits of e-Commerce platforms: It’s very focused on
analyzing the availability of the e-commerce platform, to ensure
its proper functioning, as well as protecting the confidentiality of
customer data, especially payment information. This makes it a
key tool to reduce the risk of fraud, a major issue for digital
commerce.
 Audits of cloud platforms and containers. These security
reviews focus on the security of the implementation of cloud and
container technologies.
 Audit of security baselines in operating systems and
technologies. This type of analysis studies the correct
implementation of baseline guidelines and regulatory compliance,
security policies, and the configuration of servers and workstations.
 Internal penetration test. This audit scrutinizes the weaknesses
in the access routes to confidential information from the company’s
internal infrastructure. Its mission is not to prevent external
attacks on the system, but those that may originate from within,
either by users who have access to company resources, as
employees, former employees, or third parties (insider threats) or by
attackers outside the organization, who have gained access to the
internal network.
 WiFi audit. In this audit, a series of actions are carried out to
evaluate the deployment and security of the WiFi infrastructure in
wireless networks.
 Hardware hacking. Not only software is important in the security
management of an organization. Hardware must also be tested and
analyzed. This evaluation focuses on devices with physical access,
with the aim of identifying security flaws in the different gateways:
communication routers, cable modems, IoT devices…
Auditing vs Black-Box testing
Black Box Security Audit
In the Black Box Security Audit, our team will only have access to
publicly accessible information about the target environment. This type
of test aims to simulate the real-world scenario of external attackers
targeting and attempting to compromise your systems.
Black Box testing has the benefit of perfectly simulating a motivated
external attacker that has zero-knowledge of your operations and IT
infrastructure. It gives you an insight of the robustness of your
information security controls when under targeted attack by malicious
intruders.
White Box Security Audit
In this approach our team would have as much information as possible
about the target environment, such as an actual employee would
possess. This approach is designed to prepare for a worst-case scenario
where an attacker has in-depth information about your infrastructure.
White Box testing allows you to prepare for scenarios such as insider
threats or an attacker that has obtained detailed internal information.
This process usually reveals more vulnerabilities and is much faster
since the audit team has transparent access to key information and
details required for attacking the organization. Additionally, it extends
the testing boundaries to areas such as source code audit, application
design review etc., which are not usually covered by a traditional black-box audit.
Grey Box Security Audit
In a Grey Box Security Audit our team would be given partial
information about the target environment, such as could be
identified by a motivated attacker. Documents provided could
include policy documents, network diagrams and other valuable
information. This approach aims to deliver a cost-effective audit
while focusing on areas that are important to your organization.
Grey Box testing allows you to accurately simulate the threat
from an attacker that has been able to gain partial information
about your infrastructure. The audit prepares you for a scenario
where certain details or information have been leaked by social
engineering or other offline threats.

Software Vulnerability Fundamentals Cont...
Types of Vulnerabilities (Design, Implementation, and
Operational Vulnerabilities)
Design Vulnerability
 Exploiting memory-corruption bugs to compromise computers and
gain access to organizations is all too common and relatively simple.
But design vulnerabilities in operating systems or other software
can provide other avenues of attack into an organization's network.
 Unlike memory-corruption bugs, they are typically more
complicated to patch. Furthermore, security tools that look for the
exploitation of vulnerabilities cannot always address these types of
vulnerabilities.
 Design vulnerabilities or “logic flaws” aren't typical security flaws or
bugs. They are legitimate functions or features with unintended
consequences that attackers seek out to exploit.
 By leveraging logic flaws in the functionality of existing systems,
attackers are able to masquerade as legitimate operations and code, and
thus evade detection by security solutions.
 Design vulnerabilities are common to all operating environments.
Attackers can exploit them to gain root access to highly secure
systems, leading to data theft, disruption of critical infrastructure and
other serious consequences.
 They also cannot be easily detected and typical anti-exploitation tools
will fail to prevent them.
The fact that they are designed as part of legitimate functionality
makes fixing them all the more difficult and cumbersome.
 In fact, it’s not unusual to see a “recall” on a fix for a certain
design vulnerability in order to patch the so-called fix itself.
 Take for example the Stuxnet .LNK vulnerability, which
allowed an attacker to run code using a malicious .LNK file.
 This was the vulnerability Stuxnet used to infect the machinery
of the Iranian nuclear program.
Microsoft patched the .LNK vulnerability back in 2010, or so they
thought. In 2015, Michael Heerklotz, a German student, found a
way to bypass the fix resulting in CVE-2015-0096.
There are other attacks that were enabled by design
vulnerabilities:
 In 2014, attackers exploited a design vulnerability
called Shellshock to infect thousands of machines with malware.
During that same year, the infamous Sandworm attack leveraged
a design vulnerability in Office for its cyber-espionage campaign.
Another such vulnerability, dubbed Rootpipe, enabled user
privilege escalation in OS X systems. There was even a design flaw
shipped by default as a component of Trend Micro's Password Manager.
 These are just a few of the more well-known design
vulnerabilities; there are many others, and they are dangerous.
Because they exploit legitimate functionality, security systems
often won’t catch attacks exploiting the flaws until it’s too late.
There is no stopping design vulnerabilities, and undoubtedly,
threat actors will continue to exploit them.
 To deal with sophisticated cyber-threats, we need to assume
that the threat actors are already within the network. Several
measures that organizations typically take to defend against
cyber-attackers include:
1. Segmenting the network:
 The idea here is to block off communications from a threat actor
to prevent lateral movement or, in extreme cases, to create
air-gapped networks (networks that aren't connected to the
Internet).
 As such, a threat actor residing on one device isn't able to move
to another device on another network segment. In essence, the
threat is contained to a single network segment.
 One of the downsides? Segmenting the network won’t help in
cases of ransomware where the malware can block access to the
computers on the network segment it resides on.
2. Detecting the threat:
 Several technologies now offer threat detection solutions –
whether via user or network behavior analytics or even through
deception. Unfortunately, these measures are utilized too late in the
race against the breach. For example, a threat actor’s goal may be
to retrieve sensitive documents from a CEO’s device. In such a case
where a threat actor is first detected on a CEO’s device, these
measures become futile.
3. Full Application Whitelisting
 This measure prevents the introduction of malicious code into the
system by allowing only a single pre-defined set of applications to
run on the computer.
 Unfortunately, IT and security administrators quickly run into
manageability issues trying to control all applications required by the
organization, leading to limited deployment.
 In summary, to counter design vulnerabilities, the benefits of each
approach must be weighed, with a focus on preventative
measures and diminishing the disadvantages of each
measure.
 For instance, although whitelisting the applications becomes
a tedious process, it is possible to minimize the scope to what is
really important - those applications that communicate outbound.
 Once the scope of the threat has been reduced, it is possible to then
block the malicious outgoing connections of those business-authorized
applications.
 Similarly, it is possible to deal with ransomware by blocking malicious file
handling processes.
Trust Relationships
What is the trusted relationship attack?
 Attackers can breach or leverage organizations that let third-party partners
have access to their network. It is customary in some industries and
businesses to grant their third-party partners access to network resources in
the course of their business relationship.
Examples of third parties that would be granted this access include IT
contractors, managed service/security providers and infrastructure service
contractors.
 It is this valid account (based upon a trusted relationship) used by the third-
party partner that may be lost or otherwise compromised, thereby opening
up the organization to attackers’ prying hands.
 Sometimes, third-party access is granted via Virtual Private Network (VPN)
or private network circuit. Logistically speaking, this expands the
organization’s attack surface boundary to all of their third-party partners that
have this elevated access.

MITRE and ATT&CK
 MITRE is a not-for-profit corporation dedicated to solving
problems for a safer world. Beginning as a systems engineering
company in 1958, MITRE has added new technical and
organizational capabilities to its knowledge base, including
cybersecurity.
 To this end, MITRE released the MITRE ATT&CK list as a globally
accessible knowledge base of adversary techniques and tactics
based upon real-world observations. This information can then be
used as the basis for the development of threat models and
methodologies for cybersecurity product/service community,
private sector and government use.
Real-world examples
A) Target attack
 In 2013, Target fell victim to a trusted relationship attack in
which attackers stole Target network credentials from a heating and
ventilation company that was servicing a Target store. These
credentials allowed the attackers to pivot into Target's network
with the same access that the third-party partner had.
B) MenuPass attack
 MenuPass is a China-based threat group that has targeted IT managed
service providers (MSPs), mining companies, manufacturing companies
and a university.
How to detect the trusted relationship attack
The first step to detecting a trusted relationship attack is to
create a threat model. Threat models should spell out everywhere
trust is granted between entities. These relationships with third-
party partners can then be audited.
With a threat model implemented, the best way (and possibly
the only proactive way) to detect this attack is to monitor the usage
of second-party, third-party and other trusted entities. The
trust-model-based monitoring should use frequent audits to get a
reasonably real-time picture of how these trusted parties are using
the priceless credentials that you granted to them.
Mitigation
 Nobody likes it when their trust has been violated, and without
a doubt, this extends to the world of enterprise. However, unlike
trust between two individuals that many times is broken without
leaving a trace, business relationships that end up in credentials
being handed over are often documented in relatively great
detail. With this said (and at least a shred of hope extended to
you), below are some mitigation methods that will help prevent
future trusted relationship attacks.
Threat modeling
Network segmentation and isolation
 Systems with sensitive information, especially systems that
trusted entities have no business accessing, should be isolated
with a liberal sprinkling of network segmentation.
 Think of it as like how a SCADA system should be isolated from
the general organization network as much as practicable. When
systems with sensitive information are isolated, this shrinks your
attack surface to a more manageable size with less risk to your
information security environment.
Authentication
 Ensure that your organization uses a tight authentication policy
when granting credentials to trusted entities. Make sure that
password credentials are changed regularly. You may also want to
add another layer of authentication, such as two-factor
authentication, to better ensure that the credentials will not be
compromised, or at the very least to minimize the time
available to abuse said credentials.
 The last measure you can take to mitigate a trusted relationship
attack is to vet the security policies of the trusted entities that
you have granted credentials to. Make sure that their security
policies properly cover how client network credentials are to be handled.
 As a further measure, if you have not yet established a trusted
relationship with an entity, require that your organization review a
possible trusted entity’s security policies as a matter of course
when onboarding one of these entities. Sometimes just trusting one
wrong entity is the difference between breach and safety.
On Vulnerabilities, Constraints and Assumptions
The difference between a vulnerability and an exploit:
A) An exploit consists of a vulnerability present in the software
application and a method used to take advantage of that vulnerability.
Thus, an exploit occurs when we actually apply the method to
trigger the vulnerability present in the software application.
B) A vulnerability is fundamental to exploiting a software
application; we can prevent an exploit if we are able to identify and
subsequently eliminate the vulnerabilities present in a software
application. However, identifying vulnerabilities is a very difficult
task because of a number of contributing factors.
The primary ones are:
1. Complexity of software applications
2. Number of vulnerabilities
3. Complexity of vulnerabilities
A Taxonomy
The first category of the taxonomy is main memory, which is divided
into dynamic and static memory.
Main Memory
a) Dynamic Memory:
 Dynamic memory, the first subcategory of main memory, is, as the
name suggests, the memory component whose size changes as the
process executes.
 It consists of two subcomponents, the program stack and the heap,
both of which store variables while the process is in execution.
(i) The program stack or execution stack is a contiguous block of
memory used by the operating system and the process to store process
data, such as local variables, function frames, return addresses,
environment variables, program name, and so forth.
(ii) The heap is a block of memory used to store dynamically allocated
variables. For example, in the C language the heap stores variables that
are allocated using the malloc() call.
Dynamic Memory Assumption
1. Data accepted as input by the process and assigned to a buffer
will occupy and modify only specific locations allocated to the
buffer: This assumption is violated if the software process allows
data larger than the size of the buffer to be written to the buffer.
Since the data is larger than the buffer, it will overwrite memory
that lies beyond the bounds of the buffer.
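A minimal C sketch of this violation and its fix; `copy_input` is an illustrative helper, not from the source:

```c
#include <string.h>

/* Vulnerable pattern (violates the assumption): strcpy() copies until
 * the terminating NUL byte regardless of the destination size, so input
 * longer than the buffer overwrites adjacent memory:
 *
 *     char buf[8];
 *     strcpy(buf, input);          // no bounds check: buffer overflow
 *
 * Safe pattern: check the input length against the buffer size first. */
int copy_input(char *buf, size_t buf_size, const char *input)
{
    if (strlen(input) >= buf_size)
        return -1;                  /* input too large: refuse to copy */
    strcpy(buf, input);             /* now provably within bounds */
    return 0;
}
```

Rejecting oversized input (rather than silently truncating it) also avoids subtle bugs where truncated data is later interpreted as valid.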
2. The process will not interpret data present on the dynamic
memory as executable code: This assumption is violated if the
process is made to interpret data present on the dynamic memory
as executable code, which, in turn, can be accomplished by
changing process variables it holds, such as, return addresses,
exception pointers, and so on.
3. Environment variables being used by the process have
expected format and values: This is a violable assumption,
because a hostile entity can easily change the environment
variables before the process begins execution. These variables
are provided by the operating system and define the behavior of
the process. A violation occurs when the process uses these
variables and makes assumptions regarding their format and
values.
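A sketch of defensive environment-variable handling in C; the variable name `LOG_LEVEL` and the whitelist of values are hypothetical, chosen only to illustrate validating against expected format and values:

```c
#include <stdlib.h>
#include <string.h>

/* Environment variables are attacker-controllable before the process
 * starts, so never assume their format or values. This validates the
 * (hypothetical) LOG_LEVEL variable against a small whitelist and
 * falls back to a safe default for anything unexpected. */
const char *get_log_level(void)
{
    const char *v = getenv("LOG_LEVEL");
    if (v == NULL)
        return "info";              /* variable absent: safe default */
    if (strcmp(v, "debug") == 0 || strcmp(v, "info") == 0 ||
        strcmp(v, "warn") == 0 || strcmp(v, "error") == 0)
        return v;                   /* value is on the whitelist */
    return "info";                  /* unexpected value: safe default */
}
```

The same whitelist-and-default pattern applies to any variable the process reads, such as search paths or locale settings.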
4. The process will be provided with the dynamic memory that it
requests.
5. Data present on the dynamic memory cannot be observed
while the process is in execution: This is a violable assumption,
because a hostile entity can run the process in a controlled
environment and observe the contents of the dynamic memory,
including any privileged data it holds.
6. Data owned by the process and stored on the dynamic memory
cannot be accessed after the process frees the memory: This is a
violable assumption, because the memory being used by the
process is not erased after the process frees it. Hence, another
process, if allocated the same physical memory, can access the
data left over by the previous process.
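One common mitigation is to scrub sensitive buffers before freeing them. A sketch with illustrative function names; note that a plain `memset` immediately before `free` may be removed by an optimizing compiler as a dead store, which is why the writes below go through a volatile pointer (C11's `memset_s` or `explicit_bzero`, where available, are stricter alternatives):

```c
#include <stdlib.h>

/* Overwrite `len` bytes at `p` with zeros. The volatile qualifier
 * prevents the compiler from eliminating the writes as dead stores. */
void scrub(void *p, size_t len)
{
    volatile unsigned char *b = p;
    for (size_t i = 0; i < len; i++)
        b[i] = 0;
}

/* Scrub, then release: the allocator does not erase freed memory,
 * so secrets would otherwise remain readable by whoever receives
 * the same memory next. */
void secure_free(void *p, size_t len)
{
    if (p == NULL)
        return;
    scrub(p, len);
    free(p);
}
```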
7. A pointer variable being used by the process references a legal
memory location: This is a violable assumption, because a pointer
variable can point to any memory location, including memory
locations outside of the process address space. Furthermore, it
can reference wrong variables, thereby creating illegal memory
references.
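A small C sketch of defensive pointer hygiene; `alloc_ints` is a hypothetical helper that also anticipates the zero-byte allocation case discussed next:

```c
#include <stdint.h>
#include <stdlib.h>

/* A pointer is only as valid as the code that assigned it. Refusing
 * zero-length requests (malloc(0) may legally return a non-NULL
 * pointer to zero usable bytes), guarding against size overflow, and
 * letting the caller check for NULL all reduce the chance of illegal
 * memory references. */
int *alloc_ints(size_t count)
{
    if (count == 0)
        return NULL;                      /* zero bytes: nothing legal to dereference */
    if (count > SIZE_MAX / sizeof(int))
        return NULL;                      /* count * sizeof(int) would overflow */
    return malloc(count * sizeof(int));   /* caller must still check for NULL */
}
```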
8. A memory pointer returned by the underlying operating system
does not point to zero bytes of memory: This is a violable
assumption, because the operating system can provide the
process with a pointer that points to zero bytes of memory. Using
this pointer will cause illegal memory references or overwriting of
process data.
The consequences of this violation vary from garbage value being written to
the memory location to the process going into an infinite loop.
10. Data accepted by the process will not be interpreted as a format string by
the I/O routines: This constraint is violated if a process accepts input and
interprets it as a format string. A violation will at the very least reveal the
contents of the process stack. Additionally, a hostile entity can provide the
process a specially crafted format string that allows it to write data to the
process stack.
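The format-string violation and its fix fit in a few lines of C; `render_message` is an illustrative wrapper, not from the source:

```c
#include <stdio.h>

/* Vulnerable pattern: printf(user_input). If the input contains
 * conversion specifiers such as "%x" it dumps words from the stack,
 * and "%n" lets the attacker write to memory.
 *
 * Safe pattern: supply a fixed format string and pass the external
 * data as an argument, so it is treated purely as data. */
int render_message(char *out, size_t out_size, const char *user_input)
{
    return snprintf(out, out_size, "%s", user_input);  /* input is data, never a format */
}
```

With the fixed `"%s"` format, an input like `"%x %x %n"` is copied literally instead of being interpreted.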

b) Static Memory
 Static memory, the second subcategory of the main memory, is, as the
name suggests, static in size. In other words, the size of the memory is fixed
before the process begins execution and does not change as the execution
proceeds.
 There are two major components of static memory, the data segment
and the BSS (block started by symbol) segment. The data segment stores
initialized global variables and the BSS stores uninitialized global
variables.
 The software process uses the two components in a similar way, to store
global data whose size is fixed before execution starts and does not change
during execution of the process.
 Again, the constraints and assumptions are simple, but failure to take them
into account gives rise to potential vulnerable states.

1. Data accepted as input by the process and assigned to a buffer
occupies and modifies only specific locations allocated to
buffer on the static memory: This assumption is violated if the
software process allows data larger than the size of the buffer
to be written to the buffer. Because the data is larger than the
buffer, it will overwrite memory that lies beyond the bounds of
the buffer.
2. Data held on the static memory cannot be observed while the
process is in execution: This is a violable assumption, because
a hostile (cyber) entity can run the process in a controlled
environment and observe the contents of the static memory,
thereby gaining access to any privileged data held in static
memory.
Input/Output (I/O)
 The second category of the taxonomy is I/O. The software
process uses I/O resources to store input and output, present
output, receive input, and to communicate output. I/O, as a
category of this taxonomy, is divided into two subcategories,
file system and network interface.
a) Filesystem
 The filesystem, the first subcategory of I/O, refers to the
encoding scheme used by the operating system to store information
on storage devices.
 However, a software process, while using a filesystem as a
resource, does not perceive it as an encoding scheme; instead it
views the filesystem as a hierarchy of connected directories
containing metadata about files and the files themselves. The
taxonomy adopts this view when referring to the filesystem as a
subcategory.
1. Access permissions assigned to newly created files/directories
are such that only the required principals have access to them:
This constraint is violated if a software process creates new
files and directories without assigning them proper access
permissions, which, in turn, allows a hostile entity to access
them. This results in an error condition when a hostile entity
actually reads or modifies these files and directories.
2. Access permissions of the files/directories being used by the
process are such that only the required principals have access
to them: This is a violable constraint, because if a software
process uses file/directories, already present on the filesystem,
which do not have proper access permissions, then there exists
a probability that a hostile entity has already read or modified
these files. A violation occurs when the entity actually reads or
modifies these files.
3. A file being created by the process does not have the same name
as a file already present on the filesystem: This is a violable
assumption, because a hostile entity can create a file with the
same name before the process does.
 Depending on the underlying operating system, the consequences of
this attack vary from the process using the file placed by the hostile
entity to the process terminating execution.
4. A filename (including path) being used by the process is not a link
that points to another file for which the user, executing the process,
does not have the required access permissions: This is a violable
assumption, because a hostile entity can provide the software process
with a file that is a link to another file. If the process has more privileges
than the entity, then the entity can point the link to a file to which it
does not have access, thereby resulting in the entity gaining improper
access to the file pointed to by the link.
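The link-following hazard in item 4 can be sketched in Python; `O_NOFOLLOW` is a POSIX open flag that refuses to follow a symbolic link at the final path component (the paths below are illustrative):

```python
import os
import tempfile

workdir = tempfile.mkdtemp()            # illustrative paths
target = os.path.join(workdir, "real.txt")
link = os.path.join(workdir, "evil_link")

with open(target, "w") as f:
    f.write("data")
os.symlink(target, link)                # a hostile entity could plant this link

# O_NOFOLLOW makes the open fail instead of silently following the link,
# so a privileged process cannot be redirected to a file it did not intend.
try:
    fd = os.open(link, os.O_RDONLY | os.O_NOFOLLOW)
    os.close(fd)
    followed = True
except OSError:
    followed = False

print(followed)  # False: the link was refused
```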
5. Files created/populated by a principal other than the process and
being used by the process will have expected format and data: This is a
violable assumption, because a hostile entity can provide the software
process with specially crafted files containing corrupt data. A violation
occurs when the software process uses these corrupt files.
6. Files being used by the process cannot be observed/
modified/replaced while the process is in execution: This constraint is
violated if the software process does not lock the files that it is using. A
violation occurs when a hostile entity can read, modify or replace these
files while the process is in execution.
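Item 6 suggests locking files while they are in use. A minimal POSIX sketch with `fcntl.flock` (path and contents are illustrative):

```python
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "state.txt")  # illustrative path

with open(path, "w") as f:
    # Hold an exclusive advisory lock for the duration of the update, so
    # other cooperating processes cannot read or modify the file mid-write.
    fcntl.flock(f, fcntl.LOCK_EX)
    f.write("balance=100")
    f.flush()
    fcntl.flock(f, fcntl.LOCK_UN)

with open(path) as f:
    print(f.read())
```

Note that `flock` provides advisory locking only: it constrains processes that also take the lock, not arbitrary hostile ones, so restrictive filesystem permissions are still required.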
7. Files/directories being used by the process and stored on the file-
system (information used by the process over multiple runs) cannot be
observed/modified/replaced before the process starts execution: This is
a violable assumption, because a hostile entity can observe, modify or
replace files created by the software process and stored on the
filesystem after the process stops execution and before the next run.
8. Data held by files owned/used by the process cannot be accessed
after the process deletes them: This is a violable constraint, because
files stored on the filesystem are not erased after the software process
deletes them. The operating system simply deletes their name from the
list of existing files, but anyone with proper access permissions can
access this data by directly accessing the physical storage.
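Item 8 motivates overwriting a file's contents before unlinking it. The following Python sketch (the `shred` helper is hypothetical, not a standard function) shows the idea; journaling filesystems and SSD wear levelling can still retain old blocks, so this is best effort only:

```python
import os
import tempfile

def shred(path, passes=1):
    """Overwrite a file's bytes before deleting it, so the old contents are
    not trivially recoverable from the raw storage blocks (best effort)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

path = os.path.join(tempfile.mkdtemp(), "secret.key")  # illustrative path
with open(path, "wb") as f:
    f.write(b"session key material")
shred(path)
print(os.path.exists(path))  # False
```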
9. The process will be provided with the file-system space that it
requests: This is a violable constraint, because filesystem space is a
limited resource and the process will not always be provided with the
filesystem space that it requires.
10. Files having proprietary or obscure file formats cannot be
understood and modified: This is a violable constraint, because
proprietary or obscure file formats are not enough to keep the contents
of a file secret. Tools and techniques exist that a hostile entity can use to reverse engineer these formats and reveal the contents of the files.
b) Network Interface
 The network interface, the second subcategory of I/O, refers to the
interface used by the software process to send and receive data.
 This interface is in the form of ports. A port is an end of a logical
connection, which a software process uses to communicate over the
network.
 Ports are numbered from 1 to 65535, with different ports being used
by different applications for different services.
 A software process can attach to any open port and use it to send and
receive data. Therefore, from the perspective of the software
process, the network interface is simply a resource that can be used to
send and receive data.
 The taxonomy takes this perspective when referring to network
interface as a subcategory.
1. The data received by the software process through the network
interface can be trusted to be unread and unmodified: This is a violable
assumption, because a hostile entity can intercept and read or modify
data that has been sent to the software process by sitting between the
sender and the software process. A violation occurs when a hostile
entity reads or modifies data that has been sent to the software
process.
2. The data received by the software process through the network
interface is from a legitimate client or peer or server and has
expected format and length. This is a violable assumption,
because any entity, even a hostile one, can send data using the
network interface if it knows the IP address of the host machine
and the port number being used by the software process. A
violation occurs when a hostile entity sends corrupt data to the
software process.
3. The data sent by the software process via the network interface
will not be read/modified before it reaches its destination. This is a
violable assumption, because a hostile entity can intercept and
read or modify data being sent by the software process by sitting
between the software process and receiver of data. A violation
occurs when a hostile entity reads or modifies data being sent by
the software process.
4. The software process will be able to utilize the network
interface to send and receive data. This is a violable constraint,
because there is no guarantee that the software process will always be
provided with access to the network interface. A violation
occurs if the software process does not take this constraint into
account and is not able to use the network interface.
5. The byte order of numerical data accepted from the network interface is the same as the byte order of the host machine. This is a violable assumption, because network byte order (big-endian) can differ from the host byte order, and a process that does not convert between the two will misinterpret numerical data.
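The byte-order pitfall can be demonstrated with Python's `struct` module: the same four bytes decode to different values depending on the byte order applied.

```python
import struct

# The value 1 encoded as a 32-bit unsigned integer in network byte order
# (big-endian), as it would arrive off the wire.
wire = struct.pack("!I", 1)

as_network = struct.unpack("!I", wire)[0]  # correct interpretation: 1
as_little = struct.unpack("<I", wire)[0]   # wrong byte order: 16777216

print(as_network, as_little)
```

In C-style code, `ntohl`/`htonl` (exposed in Python as `socket.ntohl` and `socket.htonl`) perform the same conversion.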
Cryptographic Resources
 Cryptographic resources, refers to the algorithms and
protocols contributed by the discipline of cryptography. In
computer systems, cryptography provides the capability for
securing data and resources in the areas of confidentiality,
authentication, integrity, and nonrepudiation.
Encryption algorithms are used to ensure confidentiality, which
means that only authorized entities are able to understand data.
 Authentication amounts to ensuring that only authorized
entities are able to access data/resources or to supply data.
 Algorithms, such as hashes, checksums and so forth, are used
to ensure data integrity, which means only authorized entities
should be able to modify data.
 Nonrepudiation, which concerns the communication of
messages, means that the sender of a message should be able to
prove that the receiver received the message; conversely the
receiver should be able to prove that the sender actually sent the
message.
 Nonrepudiation is very difficult to prove, given current cryptographic algorithms and protocols. However, digital signatures, PKI, and so forth, do solve the problem to a certain extent.
a) Randomness resources
 Randomness, the first subcategory of cryptographic resources, refers to
the generation and use of random numbers, which are a series of numbers
whose values are uniformly distributed over a set and where it is impossible
to predict the next number in the series.
 True random numbers can be generated only by using physical
phenomena like radioactive decays.
 Since using physical phenomena as a source of random numbers is not
practical, algorithms have been developed for generating random numbers.
These algorithms, called Pseudo-Random Number Generators
(PRNGs), accept input, called a seed, and use it to generate a series of
random numbers. The series generated is dependent on the seed provided
to the PRNG. To generate an unpredictable series of numbers, an
unpredictable seed should be used.
The seed for PRNGs is generated using hard-to-guess events that occur in a computer system, such as a key press on the keyboard, CPU scheduling, the difference between the CPU timer and the interrupt timer, and so forth.
The generation and use of random numbers is one of the least understood
aspects in the design and implementation of a software system.
 Consequently, software processes tend to ignore the constraints
associated with randomness and make assumptions that can be violated
easily.
1. The series of random data being produced by the PRNG is
unpredictable (assuming unpredictable seed). This is a violable
assumption, because there exist PRNGs that produce predictable
random data series. These PRNGs produce data that is random in
the sense that each number has an equal probability of being the
next number in the series, but it is computationally feasible to
predict the next number in the series. Hence, the random data
produced by these generators is predictable and cannot be used
for cryptographic purposes.
2. The seed being used by the PRNG is unpredictable. This
assumption is violated if the seed being used by the PRNG is
predictable. Random data being produced by the PRNG is
dependent upon the seed provided as input to the PRNG. Thus,
using an unpredictable seed is a key requirement for producing
unpredictable random data series. A predictable seed would
result in a predictable random data series, which cannot be used
for cryptographic purposes.
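Items 1 and 2 can be demonstrated in Python: a seeded general-purpose PRNG reproduces its series exactly, which is why a CSPRNG such as the standard `secrets` module should be used for cryptographic material (the seed value below is arbitrary).

```python
import random
import secrets

# A seeded Mersenne Twister is fully reproducible: the same seed yields
# the same series, so anyone who learns the seed predicts every output.
a = random.Random(1234)
b = random.Random(1234)
series_a = [a.randrange(2**32) for _ in range(5)]
series_b = [b.randrange(2**32) for _ in range(5)]
print(series_a == series_b)  # True: predictable from the seed

# secrets draws from the operating system's CSPRNG and is the appropriate
# source for keys, tokens and other cryptographic material.
token = secrets.token_bytes(16)
print(len(token))  # 16
```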
3. The process will have easy access to entropic data on a computer
system. This is a violable assumption, because computers are
deterministic machines. Therefore, it is very difficult to have access to
entropic data on a computer system.
4. The process will be able to accurately estimate entropy of a data set.
This is a violable assumption, because currently available approaches for
estimating entropy are very difficult to implement and provide only a
coarse approximation of the value of entropy for a data set. Hence, a
process, even while employing these approaches, should be conservative
in estimating entropy, because estimating a higher value of entropy than is
actually present leads to a false sense of security.
5. User selected passwords/keys will have a sufficient amount of entropy.
This is a violable assumption, because user selected passwords/keys do
not have sufficient amount of entropy and typically are highly predictable.
Hence, they should not be used in cryptographic algorithms or protocols
that require highly entropic keys/passwords.
6. If two different seeds are provided to the PRNG, it is computationally
infeasible to produce the same series of data both times. This is a violable
assumption because of the structure of the data series some PRNGs
produce, which can be visualized as a series of numbers on the
circumference of a circle. The seed, given as input to these PRNGs, selects
a point on this circle from where the PRNG starts to output random
numbers. It is possible, especially if the circle is small, for two different
seeds to select the same point on the circle (modulo arithmetic) and
produce the same random data series. These PRNGs are not suitable for
cryptographic purposes and can be identified from current cryptographic
literature.
7. Given that the PRNG is continuously producing random data, it
is computationally infeasible to produce the same sequence of
random data after some time. This is a violable assumption,
because it is possible that PRNGs, which adhere to the circular
structure visualized in item 6 and produce random data
continuously, will repeat the data series after some time.
b) Cryptographic algorithms and protocols
 The second subcategory of cryptographic resources,
cryptographic algorithms and protocols, refers to algorithms and
protocols that provide to the software process the services of
confidentiality, integrity, authentication and nonrepudiation.
 This subcategory includes encryption algorithms, such as DES,
RC4 and RSA, authentication protocols, such as Kerberos, and
cryptographic checksums and hashing algorithms, such as MD5
and SHA1.
 It also includes protocols and algorithms, such as digital
signatures and PKI that promote nonrepudiation.
 This taxonomy treats cryptographic algorithms and protocols as
resources, just as it does randomness.
1. Random data being used by the cryptographic algorithm/protocol is
unpredictable. This is a violable assumption, because there exist PRNGs
that produce predictable random data series. Cryptographic algorithms
and protocols require a random data series to exhibit two properties: (1)
the data series is statistically random. That is, each number in the set of
numbers has an equal probability of being the next number in the
random data series, and (2) the next number in the series is
unpredictable. There are PRNGs that produce random data that is
statistically random but not unpredictable. These PRNGs cannot be used
as a source of random data for cryptographic purposes.
2. The length of the key being used by the cryptographic algorithm and
protocol is sufficient. This assumption is violated if the minimum length
of the key, being used for a cryptographic algorithm or protocol, is less
than the current standard. Most cryptographic algorithms and protocols
use keys for a variety of purposes, such as keeping secrets, restricting
access and so on. The length of these keys is a critical factor in security
of these algorithms or protocols. Typically, if a small key is used, the
algorithm or protocol can be compromised easily. The length of the key
required for keeping the algorithm or protocol secure changes with time
and can be found in the current cryptographic literature. Using a key
that is smaller than the current standard puts the algorithm and
protocol at risk of being compromised.
3. The hashing algorithm will not produce the same hash for two
different inputs. This is a violable assumption, because there is a
possibility that a hashing algorithm produces the same hash for two
different input texts. Such algorithms are considered compromised
and cannot be used for cryptographic purposes.
Information about compromised hashing algorithms is available in
current cryptographic literature.
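As a concrete illustration (the algorithm choice is ours, not the original text's): MD5 has known collision attacks and is exactly the kind of compromised hash item 3 warns about, so SHA-256 or newer should be preferred for integrity checks.

```python
import hashlib

data = b"message"

weak = hashlib.md5(data).hexdigest()       # collision attacks are known
strong = hashlib.sha256(data).hexdigest()  # currently considered safe

print(len(weak), len(strong))  # 32 and 64 hex characters
```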
4. The process cannot use encryption to ensure data integrity.
This is a violable constraint, because encryption only ensures
data confidentiality by changing plain text into cipher text. No one
can make sense of cipher text without changing it back to plain
text. But anyone with access to cipher text can change it even if it
does not make sense. Hence, encryption just ensures data
confidentiality and not data integrity.
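Item 4 can be demonstrated with a toy xor "cipher" in Python: an attacker who never sees the plain text can still flip ciphertext bits without detection, whereas a MAC over the cipher text catches the change (the message and keys are invented for illustration).

```python
import hashlib
import hmac
import os

key = os.urandom(32)
keystream = os.urandom(16)                 # stand-in for a stream cipher
plaintext = b"PAY ALICE  100 $"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

# An attacker can flip ciphertext bits, and the change survives
# decryption undetected: confidentiality is not integrity.
tampered = bytearray(ciphertext)
tampered[11] ^= ord("1") ^ ord("9")        # turn the '1' into a '9'
decrypted = bytes(c ^ k for c, k in zip(tampered, keystream))
print(decrypted)  # b'PAY ALICE  900 $'

# A MAC over the cipher text detects the modification.
tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
ok = hmac.compare_digest(
    tag, hmac.new(key, bytes(tampered), hashlib.sha256).digest())
print(ok)  # False: tampering detected
```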
5. The process cannot use a key more than once for a stream
cipher. This is a violable constraint, because using a key more
than once for a stream cipher can compromise the cipher text.
Stream ciphers use a key to produce a stream of random data,
which they xor with the plain text to produce cipher text. If the
process uses the same key more than once, then the resultant
random data series is same, which, in turn, implies that multiple
instances of plain text data will be xored with the same random data series. Xoring two such cipher texts together cancels out the random data series and reveals the xor of the plain texts, thereby compromising confidentiality.
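The key-reuse failure in item 5 is easy to reproduce with a toy xor-based stream cipher in Python (the messages and keystream are invented for illustration):

```python
import os

def stream_encrypt(keystream, plaintext):
    # Toy stream cipher: xor the plaintext with the keystream.
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

keystream = os.urandom(16)   # reused for two messages -- the bug

c1 = stream_encrypt(keystream, b"attack at dawn!!")
c2 = stream_encrypt(keystream, b"retreat at noon!")

# Xoring the two cipher texts cancels the keystream entirely, leaving the
# xor of the two plain texts -- recovered without knowing any key.
leaked = bytes(x ^ y for x, y in zip(c1, c2))
expected = bytes(x ^ y for x, y in zip(b"attack at dawn!!", b"retreat at noon!"))
print(leaked == expected)  # True
```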
6. The process cannot use one time pads to encrypt a large
quantity of data. This is a violable constraint, because one time
pads require truly random or close to truly random data that has
bit length equal to that of the plain text. It is very difficult to have
access to large quantity of high quality random data. Hence, one
time pads cannot be used to encrypt large quantity of data.
7. The process cannot use keys that are self reported by a client
or a server. This is a violable constraint, because a hostile entity
can masquerade as a client or a server and send its own keys to
the process. Hence, any key that is self reported by a client or
server cannot be trusted without some form of validation.
8. The process cannot use obfuscation instead of encryption to
ensure confidentiality. This is a violable constraint, because there
exist tools and techniques that we can use to reverse engineer
obfuscated data to reveal any privileged data. Obfuscation is the
process of changing data so as to make it difficult to perceive or
understand. In this way, it can be visualized as a much weaker
form of encryption, which can be compromised easily. Thus, a
process should not use obfuscation to ensure data confidentiality.
9. The process cannot store keys/passwords in clear text. This
constraint is violated if a process stores keys/passwords in plain
text. Keys/passwords are very important for the security of cryptographic algorithms and protocols; if they are stored in clear text, any entity that gains access to the storage can read and misuse them. Hence, a process should store keys/passwords only in encrypted or hashed form.
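Constraint 9 can be sketched with the standard `hashlib.pbkdf2_hmac`: store a salted, slow hash rather than the clear text (the helper names, passwords, and iteration count here are illustrative).

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store a salted, slow hash of the password, never the clear text."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```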
The vulnerabilities classified by the taxonomy pertain to any
software process executing above the level of the operating
system.
 However, the taxonomy does not directly classify
vulnerabilities; instead, it classifies violable constraints and
assumptions.
 These constraints and assumptions are sources of
vulnerabilities, and a vulnerability exists if they can be violated.
 This taxonomy is grounded in a theoretical model of computing.
The model provides not only the foundation for characterizing
vulnerabilities, but also the taxonomy with its classification
scheme, which uses resources utilized by the software process as
categories and subcategories.
Exceptional Conditions
 Exceptional conditions are things that occur in a system that
are not expected or are not a part of normal system operation.
 When the system handles these exceptional conditions
improperly, it can lead to failures and system crashes.
Exception failures are estimated to cause two thirds of system crashes and fifty percent of computer system security vulnerabilities.
Exception handling is a mechanism for handling runtime errors such
as ClassNotFoundException, IOException, SQLException,
RemoteException, and so forth.
 The core advantage of exception handling is to maintain the
normal flow of the application.
Robust exception handling in software can improve software fault
tolerance and fault avoidance, but no structured techniques exist for
implementing dependable exception handling.
 However, many exceptional conditions can be anticipated when
the system is designed, and protection against these conditions can
be incorporated into the system.
Traditional software engineering techniques such as code
walkthroughs and software testing can illuminate more exceptional
conditions to be caught, such as bad input for functions and
memory and data errors.
 However, it is impossible to cover all exceptional cases. It is also
difficult to design a dependable system that can tolerate truly
unexpected conditions. In these cases, some form of graceful
degradation is necessary to safely bring down the system without
causing major hazards.
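The idea of anticipating exceptional conditions and degrading gracefully can be sketched in Python (the config format, path, and defaults are invented for illustration):

```python
def read_config(path):
    """Fall back to safe defaults when the expected file is missing or
    corrupt, instead of crashing: a small form of graceful degradation."""
    defaults = {"retries": 3, "timeout": 30}
    try:
        with open(path) as f:
            for line in f:
                key, _, value = line.strip().partition("=")
                defaults[key] = int(value)
    except FileNotFoundError:
        pass            # anticipated: file absent, run with defaults
    except ValueError:
        pass            # anticipated: corrupt entry, abandon parsing
    return defaults

print(read_config("/nonexistent/app.cfg"))  # {'retries': 3, 'timeout': 30}
```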