Master of Sciences
Nurul Haszeli Ahmad
Student Id: 2009625912
Supervisor: Dr Syed Ahmad Aljunid (FSMK, UiTM Shah Alam)
Co-Supervisor: Dr Jamalul-Lail Ab Manan (MIMOS Berhad)
 Work on reliable and trustworthy systems started in the late 70s
 It intensified after the Morris Worm
 After three decades, reliable and trustworthy systems are yet to be achieved,
with software vulnerabilities still being reported and C overflow
vulnerabilities still among the top-ranked vulnerabilities
 This thesis focuses on understanding vulnerabilities via taxonomy as one
way to improve system reliability and trustworthiness
 The thesis covers:
◦ Taxonomy and criteria of a well-defined taxonomy
◦ Method to evaluate taxonomy
◦ Method to evaluate OS reliability
◦ Result of evaluating static analysis tools
Introduction
Review of Literature
Research Methodology
Results and Discussion
Conclusion and Recommendation
Q & A
 Background of Study
 Problem Statement
 RQ & RO
 Research Significance
 Assumptions
 Scope and Limitations
Introduction – Background of Study
• The first vulnerability was discovered unintentionally by Robert Morris in
1988
• However, the hype around vulnerabilities only started after 1996, following an
article written and published by a hacker known as Aleph One
• Since then, vulnerabilities and their exploitation have moved to a different stage:
• 87% increase in exploitation of vulnerabilities (CSM, 2009)
• rising intensity of attacks on web application vulnerabilities (Cenzic, 2009)
• 90% of web applications are vulnerable, with Adobe ranked the top
contributor (Cenzic, 2010)
• the first sophisticated malware exploiting a Windows vulnerability reported
(Symantec Corporation, 2010), (Falliere, et al., 2011), (Chen, 2010)
• the malware has since evolved
Introduction – Background of Study
[Chart] Number of vulnerabilities reported per year – 2006: 4,842; 2007: 4,644; 2008: 5,562; 2009: 4,814; 2010: 6,253
References: Symantec Corporation, Internet Security Threat Report, Volume 16, 2011
Introduction – Background of Study
[Chart] Number of vulnerabilities by year
References: Stefan Frei (NSS Labs), Analyst Brief: Vulnerability Threat Trends – A decade in view,
Transition on the way, Feb 04, 2013
Introduction – Background of Study
References: Microsoft Corporation, Software Vulnerability Exploitation Trends, Technical Report,
2013. http://www.microsoft.com/en-us/download/details.aspx?id=39680
Introduction – Background of Study
References: MITRE Corporation, CVE Details – Vulnerabilities by type and Year [Online], Published at
http://www.cvedetails.com/vulnerabilities-by-types.php, 2014. Retrieved on April 20th, 2014.
Introduction – Background of Study
• Vulnerabilities in applications are higher in number compared to hardware and network vulnerabilities
(SANS Institute, Top Cyber Security Risks - Vulnerability Exploitation
Trends, 2009)
• 80% are ranked with high or medium severity
(Microsoft Corporation, Microsoft Security Intelligence Report, 2011)
• 50% are contributed by major, widely used applications compared to web-based
vulnerabilities
(IBM, 2011; HP, 2011)
• 3–15% (or more) of vulnerabilities are capable of triggering memory overflow
(Cenzic, 2010; Cisco, 2010; HP, 2011; SANS Institute, 2010; NIST,
2011)
Introduction – Background of Study
References: Stefan Frei (NSS Labs), Analyst Brief: Vulnerability Threat Trends – A decade in view,
Transition on the way, Feb 04, 2013
Introduction – Background of Study
Reference: MITRE Corporation, Current CVSS Score Distribution For All Vulnerabilities, CVE
Details [online]. 2014, Accessed on April 20, 2014 at http://www.cvedetails.com/
Introduction – Background of Study
Reference: MITRE Corporation, Current CVSS Score Distribution For All Vulnerabilities, CVE Details
[online]. 2014, Accessed on April 20, 2014 at http://www.cvedetails.com/
Introduction – Background of Study
Reference: Microsoft Corporation, Software Vulnerability Exploitation Trends, 2013
Introduction – Problem Statement
• Vulnerabilities continue to persist
• Only 0.1% of the total vulnerabilities is required to cause problems to computer systems
(Symantec Corporation, 2013)
Introduction – Problem Statement
• Vulnerabilities exist in software, hardware and networks
• Software vulnerabilities are due to programming errors, misconfiguration, invalid
process flows and the nature of the programs themselves (Beizer, 1990), (Aslam,
1995), (Longstaff et al, 1997), (Howard and Longstaff, 1998), (Krsul, 1998),
(Piessens, 2002), (Vipindeep and Jalote, 2005), (Alhazmi et al, 2006), (Moore,
2007)
• There are many kinds of programming errors
• The impact of these errors is abnormal software behaviour, integrity errors, or being
utilized for exploitation
• The most persistent are the classical C overflow vulnerabilities
• How serious is it?
• Viega & McGraw dedicated a chapter to it
• Howard, LeBlanc and Viega discussed it in two books
• MITRE Corporation and SANS released a report
• Microsoft and Apple stress this in their development processes
• Work to resolve it was started by Wagner and has progressed since then
• But C overflow vulnerabilities continue to persist
Introduction – Problem Statement
C overflow vulnerabilities continue to
persist (for more than three decades) despite the
many works that have been done (in the areas of
analysis tools, policies and knowledge
improvement).
Therefore, there are still gaps in these few
areas which require further analysis and
proposed solutions, with the objective of
solving this prolonged security nightmare.
Introduction … continue
Research Question | Research Objective

RQ 1: Why do C overflow vulnerabilities still persist although they are common knowledge, and there are numerous methods and tools available to overcome them?
RO 1: To identify the reasons why C overflow vulnerabilities, despite more than three decades, still persist although there are various methods and tools available.

RQ 2: How to improve the understanding and knowledge of software developers on C overflow vulnerabilities from a source code perspective?
RO 2: To construct a well-defined C overflow vulnerabilities exploit taxonomy from a source code perspective.

RQ 3: How to evaluate the well-defined C overflow vulnerabilities taxonomy from a source code perspective?
RO 3: To evaluate the constructed taxonomy against the well-defined criteria.

RQ 4: What is the effectiveness of static analysis tools in detecting C overflow vulnerabilities exploits based on the well-defined taxonomy?
RO 4: To evaluate the effectiveness of static analysis tools in detecting C overflow vulnerabilities based on the classes in the constructed taxonomy.

RQ 5: Which Windows-based operating systems are critical and vulnerable to exploits using C overflow vulnerabilities?
RO 5: To evaluate the criticality of Windows-based operating systems concerning their capability to avoid C overflow vulnerabilities exploits.
Introduction … continue
Research Significance:
C overflow vulnerabilities are still relevant
The C language is widely used in various mission-critical applications
It eases the understanding of software developers in writing
secure code and of security analysts in analysing code
It directs security research initiatives toward the most critical and
relevant types of C overflow vulnerabilities
It determines the identified static analysis tools' capabilities, strengths
and weaknesses
Introduction … continue
Assumptions:
1. Most exploits happen on 32-bit OS, especially Windows
2. The number of exploits on Linux or Unix vulnerabilities is relatively small
3. The major concern is on 32-bit OS, as 64-bit OS is claimed to be more secure
4. Other programming languages are not highly critical
Introduction … continue
Scope and
Limitations:
1. Focus on Windows 32-bit
2. Limited to Windows XP and 7, based on market share
3. Focus on the C programming language
4. Evaluation is limited to five freely available open-source applications
5. Evaluation is limited to five freely available static analysis tools
6. Focus on static analysis
 Software Vulnerabilities
 C Overflow vulnerabilities
 Static Analysis and Methods
 Dynamic Analysis: Another Perspective
 Structuring the Knowledge of C Overflow
Vulnerabilities
 Conclusion
Review of Literature – Software Vulnerabilities
• Software vulnerabilities are security flaws that exist within software due to the use of
unsafe functions, insecure coding, insufficient validation, and/or improper exception
handling (Stoneburner, Goguen, & Feringa, 2002), (Viega & McGraw, 2002), (Seacord R.,
2005) and (Kaspersky Lab ZAO, 2013)
• Vulnerable software can cause harm or be maliciously used to cause harm, especially to
human beings (Kaspersky Lab ZAO, 2013), (OWASP Organization, 2011), as happened in
Poland (Baker & Graeme, 2008), Iran (Chen T. M., 2010), and in the case of the Toyota
brake failure (Carty, 2010)
• There are many types of software vulnerabilities depending on perspective (Beizer,
1990), (Aslam, 1995), (Longstaff, et al., 1997), (Krsul, 1998), (Piessens, 2002),
(Vipindeep & Jalote, 2005), (Alhazmi, Woo, & Malaiya, 2006), (Moore H. D., 2007),
(Howard, LeBlanc, & Viega, 2010), (SANS Institute, 2010), (OWASP Organization, 2011),
(Kaspersky Lab ZAO, 2013), (MITRE Corporation, 2013), (Wikipedia Foundation, Inc,
2011)
• The most mentioned and most critical type – programming errors (Cenzic Inc., 2010),
(Secunia, 2011), (NIST, 2014), (OSVDB, 2014)
• Impact of programming errors – abnormal behaviour, incorrect results,
Denial of Service (DoS) and/or memory violations (Chen T. M., 2010), (IBM X-Force,
2011), (HewlettPackard, 2011), (Baker & Graeme, 2008), (Carty, 2010), (Telang &
Wattal, 2005)
References: (Microsoft, 2009), (Android Developer, 2012), (Stack Exchange Inc, 2012), (Oracle Corporation, 2012), (Apple Inc, 2012), (Symantec Corporation,
2013), (Kaspersky Lab, 2013), (Gong, 2009), (Fritzinger & Mueller, 1996), (Chechik, 2011), (Mandalia, 2011), (Kundu & Bertino, 2011),
Review of Literature – Software Vulnerabilities
• There are many ways of exploiting programming errors – XSSi, SQLi, command injection, path
traversal and overflow exploitation (Martin B., Brown, Parker, & Kirby, 2011),
(Siddharth & Doshi, 2010), (Shariar & Zulkernine, 2011), (IMPERVA, 2011)
• Overflows happen in PHP, Java and, persistently, in C (Cenzic Inc, 2009), (MITRE
Corporation, 2012), (Mandalia, 2011), (TIOBE Software, 2012), (Tiwebb Ltd., 2007),
(DedaSys LLC, 2011) and (Krill, 2011)
Comparison Matrix: C vs Java
• Exploitation Impact – C: used in mission-critical systems, with a proven track record of exploitation; Java: used in various applications but yet to see any serious exploitation impact
• Persistence and Dominance – C: since the late 80s; Java: since the early 2000s, with a low number of exploitations
• Severity Perspective – C: 60% is medium to critical (MITRE, IBM, Secunia & NIST); Java: 70% is medium to critical (OSVDB)
• Defense and Prevention – C: insecure functions, limitations in defensive and preventive mechanisms, all preventive tools developed to detect and prevent overflow vulnerabilities; Java: JVM, applets & type safety, frequent patches/fixes, all preventive tools focus on coding standards and policies
Review of Literature – C Overflow Vulnerabilities
• C overflow vulnerabilities are persistent, with a proven track record
• The first known C overflow vulnerabilities exploit technique – the format string attack (One, 1996),
(Longstaff et al, 1997)
• Well known for more than 3 decades BUT still persisting (SANS Institute, 2010), (Martin, Brown, Paller,
& Kirby, 2010) and (MITRE Corporation, 2013)
• The first identified C overflow vulnerabilities exploit class – unsafe functions (Wagner D. A., 2000),
(Zitser, 2003), (Chess & McGraw, 2004), (Kratkiewicz K. J.), (Seacord R., 2005), (Howard, LeBlanc, &
Viega, 2010), etc.
Review of Literature – C Overflow Vulnerabilities
Expert | UF | AOB | RIL | IO | MF | FP | VTC | UV | NT
(UF = Unsafe Function, AOB = Array Out-of-Bound, RIL = Return-into-libc, IO = Integer Overflow, MF = Memory Function, FP = Function Pointer, VTC = Variable Type Conversion, UV = Uninitialized Variable, NT = Null Termination)
Wagner et. al. (2000) ϴ
Viega et. al. (2000) ϴ √
Grenier (2002) ϴ √ √ √
Zitser (2003) ϴ √
Chess and McGraw (2004) √
Tevis and Hamilton (2004) ϴ √ √ √
Zhivich (2005) ϴ √
Kratkiewicz (2005) ϴ √
Sotirov (2005) ϴ √ √
Tsipenyuk et. al. (2005) ϴ √ √ ϴ √ √ √
Alhazmi et. al. (2006) √
Kolmonen (2007) √ √ √ √
Moore (2007) ϴ √ √
Akritidis et. al. (2008) √ √ √
Pozza and Sisto (2008) √ √ √ √ √
Nagy and Mancoridis (2009) √ √
Shahriar and Zulkernine (2011) ϴ √
Review of Literature – C Overflow Vulnerabilities
• Most of these 9 classes carry at least medium severity and impact.
Review of Literature – C Overflow Vulnerabilities
Class of C Overflow Vulnerability | Severity | Complexity | Impact
Unsafe Function | Critical | Low | Complete
Array out-of-bound | Critical | Low | Complete
Return-into-libc | Critical | High | Partial
Integer Overflow | Critical | Low | Complete
Memory Functions | Critical | Medium | Partial
Function Pointer | Moderate | Medium | Partial
Variable Type Conversion | Moderate | Medium | Complete
Uninitialized Variable | Moderate | Low | Complete
Null Termination | Moderate | Low | Partial
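As a hedged sketch of two of the classes tabled above, array out-of-bound and integer overflow, using hypothetical function names:

```c
#include <stdlib.h>

/* Array out-of-bound: the loop bound trusts the caller-supplied count,
   so any count greater than 10 writes past the end of values[]. */
void fill_values(const int *src, size_t count)
{
    int values[10];
    for (size_t i = 0; i < count; i++)
        values[i] = src[i];              /* out-of-bound write when count > 10 */
    (void)values;
}

/* Integer overflow: on a 32-bit size_t, n * sizeof(int) can wrap to a small
   value, so the allocation is tiny while the copy loop still runs n times. */
int *copy_ints(const int *src, size_t n)
{
    int *dst = malloc(n * sizeof(int));  /* size computation may wrap */
    if (dst == NULL)
        return NULL;
    for (size_t i = 0; i < n; i++)       /* heap overflow when the size wrapped */
        dst[i] = src[i];
    return dst;
}
```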
Review of Literature – C Overflow Vulnerabilities
• An additional class is Pointer Scaling/Mixing (NIST, 2007), (Seacord R. C., 2008), (Black et al, 2011),
(OWASP, 2012)
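Since pointer scaling/mixing is the least self-explanatory class, a small hypothetical illustration may help: pointer arithmetic is scaled by the pointed-to type, so mixing byte offsets with typed pointers lands outside or across the intended elements.

```c
void pointer_scaling_example(void)
{
    int table[4] = {0, 1, 2, 3};
    int *p = table;

    /* Intended: advance 4 bytes to reach table[1].
       Actual: p + 4 advances 4 * sizeof(int) bytes, i.e. one element past
       the end of table, so the write is out of bounds. */
    *(p + 4) = 99;

    /* Mixing pointer types has the reverse problem: a byte-wise offset
       applied through char* can land mid-element (misaligned write that
       straddles table[0] and table[1]). */
    *(int *)((char *)table + 2) = 77;
}
```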
Review of Literature – Static Analysis Tools and Methods (Progress over the Years)
• King, 1970 – Debugging and understanding programs; pattern-matching / lexical analysis (first method); implemented in grep
• Cousot & Cousot, 1977 – Implements the Abstract Syntax Tree (Abstract Interpretation); purpose – debug & understand
• Andersen, 1994 – Introduces Inter-Procedural & Pointer Analysis; purpose – debug & understand
• Wagner, 2000 – Integer Range Analysis; array out-of-bound / unsafe functions; BOON
• Viega et al, 2000 – Lexical Analysis; brief data flow; unsafe functions; ITS4
• Larochelle & Evans, 2001 – Annotation-based; all overflow classes
• Ball & Rajamani, 2002 – AI (utilizes CFG, SDG and PDG); all overflow classes; SLAM
• Cousot et al, 2005 – Advance of AI (includes complex mathematical models); all overflow classes; ASTREE
• Venet, 2005 – Symbolic Analysis + IRA; only utilizes CFG; all overflow classes
• Burgstaller, 2006 – Symbolic Analysis + new rules and algorithm; all overflow classes
• Ball, 2006 – Software Model Checking (based on SA); utilizes CFG + SDG and builds a software model; all overflow classes
• Akritidis et al, 2008 – AI (points-to analysis); embeds identified potential vulnerabilities with protection during compilation; all overflow classes
• Pozza and Sisto, 2008 – DFA + IRA; combines static and dynamic; all overflow classes
• Wang et al, 2009 – DFA + AB; combines static and dynamic; all overflow classes
• NEC Lab, 2004–2014 – AI, IRA, SMC, Bounded Model Checking (SAT capability); all overflow classes
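For illustration (hypothetical code, not drawn from any tool above): a purely lexical analyser flags every strcpy by name, while proving the second call safe requires the data-flow / integer-range reasoning introduced later in the timeline.

```c
#include <string.h>

void lexical_vs_dataflow(const char *input)
{
    char a[8], b[64];

    /* A lexical analyser (grep/ITS4-style) flags this strcpy by pattern
       matching alone; it is genuinely unsafe because input may exceed 8 bytes. */
    strcpy(a, input);

    /* A lexical analyser flags this call too, but data-flow / integer-range
       analysis can prove the copy fits in b (at most 63 bytes plus the
       terminator), so reporting it would be a false positive. */
    if (strlen(input) < sizeof(b)) {
        strcpy(b, input);
    }
}
```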
Review of Literature – Static Analysis Tools and Methods
Technique per researcher (LA = Lexical Analysis, Inter-PA = Inter-Procedural Analysis, Intra-PA = Intra-Procedural Analysis, AI = Abstract Interpretation, DFA = Data Flow Analysis, SA = Symbolic Analysis, IRA = Integer Range Analysis, AB = Annotation-Based, TM = Type Matching, SMC = Software Model Checking, BMC = Bounded Model Checking)
Researcher | LA | Inter-PA | Intra-PA | AI | DFA | SA | IRA | AB | TM | SMC | BMC
Wagner (2000) | √
Viega et. al. (2000) | √ √ √
Larochelle and Evans (2001) | √
Venet (2005) | √ √
Ball et. al. (2006) | √ √
Akritidis et. al. (2008) | √
Pozza and Sisto (2008) | √ √ √ √
Wang et. al. (2009) | √ √
Nagy and Mancoridis (2009) | √ √ √ √
NEC Lab. (2004-2014) | √ √ √ √ √
Review of Literature – Static Analysis Tools and Methods
No. | Tool | Latest Version | Latest Release Date | Technique | Language | COV Coverage
1 ITS4 (Cigital,
2011)
1.1.1 February 17, 2000 LA C / C++ Unsafe Function
Return-into-libc
2 BOON (Wagner
D., 2002)
1.0 July 5, 2002 IRA C Array Out-of-bound
Unsafe Function
3 ARCHER Not available 2003 SA, DFA, InterPA C Array Out-of-bound
Unsafe Function
4 CSSV (Dor,
Rodeh, & Sagiv,
2003)
Not Available 2003 DFA, InterPA, IRA, AB C Array Out-of-bound,
Unsafe Function
5 CQual (Foster,
2004)
0.981 June 24, 2004 DFA, AB, InterPA C Unsafe Function,
Function Pointer,
6 MOPS (Chen &
Wagner, 2002)
0.9.1 2005 SMC C No specific (rules
dependent)
7 PScan (DeKok,
2007)
1.3 January 4, 2007 LA C Unsafe Function
(limited to printf)
8 FlawFinder
(Wheeler, 2007)
1.27 January 16, 2007 LA C / C++, Java No specific (list
dependent)
9 UNO (Bell Labs,
2007)
2.13 October 26, 2007 DFA, SA, SMC C (ANSI C) Uninitialized variables
Function pointer
Array Out-of-bound
Memory Function
10 Saturn (Aiken,et
al., 2008)
1.2 2008 InterPA, Intra-PA, DFA,
SA
C Function Pointer
Review of Literature – Static Analysis Tools and Methods
No. | Tool | Latest Version | Latest Release Date | Technique | Language | COV Coverage
11 GCC Security
Analyzer (Pozza
& Sisto, 2008)
Not available 2008 DFA, InterPA, IntraPA,
AB, IRA
C Unsafe Function,
Array Out-of-Bound,
Integer Overflow,
Variable Type
Conversion, Memory
Function
12 C Global
Surveyor (Brat &
Thompson, 2008)
Not Available 2008 AI, DFA, Program
Slicing
C Uninitialized Variable,
Function Pointer,
Array Out-of-bound
13 BLAST
(Henzinger,
Beyer, Majumdar,
& Jhala, 2008)
2.5 July 11, 2008 DFA, SMC C No specific (model
dependent)
14 PC-Lint (Gimpel
Software, 2013)
9.0 September, 2008 LA, DFA, AB C / C++ No specific (rule
dependent)
15 Splint (National
Science
Foundation,
2010)
3.1.2 August 5, 2010 AB, InterPA ANSI C No specific
(annotation
dependent)
16 RATS (HP Fortify,
2011)
2.3 2011 LA C / C++, Perl,
PHP, Python
Array Out-of-bound
Unsafe Function
Review of Literature – Static Analysis Tools and Methods
No. | Tool | Latest Version | Latest Release Date | Technique | Language | COV Coverage
17 PolySpace
(MathWorks, Inc,
2014)
V8.2 (R2011b) 2011 AI C Integer Overflow
Array Out-of-bound,
Uninitialized Variable,
Function Pointer /
Pointer Aliasing, and
Variable Type
Conversion
18 F-Soft (platform)
(NEC
Laboratories,
2012)
Not Available 2011 AI, DFA, BMC, SA, SMC C / C++ Function Pointer,
Memory Function,
Return-into-libc,
Variable type
conversion,
Null termination,
Unsafe function
19 VARVEL
(Hashimoto &
Nakajima, 2009)
Not Available 2011 DFA, SMC C / C++ Function Pointer,
Array Out-of-bound,
Unsafe Function,
Memory Function
Review of Literature – Static Analysis Tools and Methods
No. | Tool | Latest Version | Latest Release Date | Technique | Language | COV Coverage
20 CodeSonar
(GrammaTech,
Inc, 2011)
Not Available September 27,
2011
DFA, SMC, InterPA C Function pointer,
Integer overflow,
Memory function,
Array out-of-bound,
Unsafe function,
Variable Type
Conversion,
Uninitialized Variable
21 CodeWizard
(Parasoft, 2011)
9.2.2.17 December 12, 2011 LA, DFA C / C++ All (No specific)
22 ASTREE (Cousot
P. , Cousot, Feret,
Miné, & Rival,
2006)
11.12 December 15, 2011 AI, InterPA C / C++ Integer Overflow,
Array Out-of-bound,
Function Pointer,
Uninitialized Variable
23 SLAM (Microsoft
Corporation,
2011)
Not Available 2011 SMC C/C++ No specific (rule
dependent)
Review of Literature – Static Analysis Tools and Methods
No. Methods Strength Weaknesses
1. LA Basic and simple method Ignore program semantics and pattern/rules
dependent
2. InterPA Understand function relationship Dependent on other method
3. IntraPA Understand the internal process Ignore program semantics and dependent on
other
4. AI 1. Semantics based
2. Use approximation to reduce
analysis time
1. Complex algorithm and process hence ignore
some properties during the analysis.
2. Approximation may lead to false alarm.
3. Is not scalable for large programs.
5. DFA 1. Understand the flow
2. Provide a clear direction of
vulnerabilities flow path and reduce
false alarm
1.Suffer overhead due to extensive in-depth
analysis,
2.Ignore some properties or vulnerabilities and
hence reduce precision.
6. SA Represent the actual program in
symbolic execution form and
implements refinement process.
1. Input and model dependent
2. Overhead due to many refinement processes.
7. IRA 1.Straight forward.
2.Strong constraint-based algorithm.
1.Ignores program semantics
2.Limited scope on C overflow vulnerabilities.
References: (Aho, Lam, Sethi, & Ullman, 1986), (Andersen, 1994), (Bae, 2003), (Ball, et al., 2006), (Balakrishnan, Sankaranarayanan, Ivančić, & Gupta, 2009), (Bakera, et al.,
2009), (Biere, Cimatti, Clarke, & Strichman, 2003), (Burgstaller, Scholz, & Blieberger, 2006), (Chess & McGraw, 2004), (Clarke E. M., 2006), (Clarke E., 2009), (Clarke, Biere,
Raimi, & Zhu, 2001), (Cousot & Cousot, 1977), (Cousot P. , et al., 2005), (D’Silva, Kroening, & Weissenbacher, 2008), (Deursen & Kuipers, 1999),
(Dor, Rodeh, & Sagiv, 2001), (Erkkinen & Hote, 2006), (Evans, Guttag, Horning, & Tan, 1994), (Ferrara, 2010), (Ferrara, Logozzo, & Fanhdrich, 2008), (Gopan & Reps, 2007),
(Haghighat & Polychronopoulos, 1993), (Holzmann G. J., 2002), (Ivančić, et al., 2005), (Jhala & Majumdar, 2009), (Kolmonen, 2007), (Kunst, 1988), (Ku, Hart, Chechik, &
Lie, 2007), (Larochelle & Evans, 2001), (Li & Cui, 2010), (Lim, Lal, & Reps, 2009), (Logozzo, 2004), (Parasoft Corporation, 2009), (Pozza & Sisto, 2008),
(Qadeer & Rehof, 2005), (Rinard, Cadar, Dumitran, Roy, & Leu, 2004), (Sotirov, 2005), (Tevis & Hamilton, 2004), (Vassev, Hinchey, & Quigley, 2009, (Venet, 2005), (Viega J.
, Bloch, Kohno, & McGraw, 2000), (Wang, Guo, & Chen, 2009), (Wang, Zhang, & Zhao, 2008), (Wagner, Foster, Brewer, & Aiken, 2000), (Wenkai, 2002), (Zafar & Ali, 2007),
(Zaks, et al., 2006)
Review of Literature – Static Analysis Tools and Methods
No. Methods Strength Weaknesses
8. AB 1.Utilize the annotation written in
the code and hence reduce the
complexity to convert and analyse.
1.Dependent on correct syntax of annotation
2.Depends on willingness of developer to
annotate.
9. Type
Matching
Semantically understand the type
conversion
1. Depends on valid source code abstraction.
2. Limited to two types of C overflows
vulnerabilities that relates to type conversion.
10. SMC Semantically understand the
programs.
Precise analysis with counterexample
algorithm.
1. Suffer state explosion and model dependent.
2. Not scalable for large or complex source code.
3. Limited coverage on C overflow vulnerabilities
11. BMC 1.Semantically understand the
programs.
2.Precise analysis with
counterexample capability.
3.Less state to analyse compare to
software model checking.
4.Reduce state explosion
1.Still has the possibility of state explosion.
2.Model dependent.
3.Not scalable for large and complex source code.
4.Limited coverage on C overflow vulnerabilities
Review of Literature – Static Analysis Tools and Methods
No. Tool Strength Limitation COV Coverage
1 FlawFinder Faster and supported with risk level
assessment
High false alarms and limited type
of C overflows vulnerabilities
Unsafe functions
(database dependent)
2 RATS Faster, supported multiple language,
and memory location identification
High false alarms and limited type
of C overflows vulnerabilities
Unsafe functions
(database dependent)
3 BOON Efficient in range analysis of variable
and does not dependent on database or
list for detection.
High false alarm due to ignorance
of program semantics
Unsafe functions and array
out-of-bound.
4 ARCHER Symbolically understand the source
code and ability to analyze large source
code.
Still produce false alarm, does not
understand C language completely,
and limited vulnerabilities
Unsafe functions and array
out-of-bound
5 MOPS Analyzing model of program which
constructed to the closest possible of
execution form for precise detection
Complexity of modelling No specific C overflows
vulnerabilities (model
dependent)
6 UNO Efficient in analyzing complex program
and low false alarm with
counterexample capability
State explosion and limited
vulnerabilities
Uninitialized variables,
function pointer, and array
out-of-bound
7 F-SOFT Combines many techniques that increase
detection capability, and continuous
improvement from NEC Labs.
Still suffers state explosion and
produces false alarms.
Time consuming due to many
analysis stages.
Unsafe functions, function
pointers, memory
functions, return-into-libC,
variable type conversion,
and null termination.
References: (Tevis & Hamilton, 2004), (Kolmonen, 2007), (Zafar & Ali, 2007),(Viega J. , Bloch, Kohno, & McGraw, 2002), (Sotirov, 2005), (HP Fortify, 2011), (Chess & McGraw, 2004), (Zitser,
Lippmann, & Leek, 2004), (Xie, Chou, & Engler, 2003), (Lim, Lal, & Reps, 2011), (Pozza & Sisto, 2008), (Engler & Musuvathi, 2004), (Li & Cui, 2010), (Holzmann G. J., 2002), (Jhala & Majumdar,
2009), (Balakrishnan, Sankaranarayanan, Ivančić, & Gupta, 2009), (Ganai, et al., 2008), (Balakrishnan, et al., 2010) and (Ivančić, et al., 2005), (Xie & Aiken, 2005), (Wang, Gupta, & Ivančić, 2007),
(Ku, Hart, Chechik, & Lie, 2007), (Jhala & McMillan, 2005), (Henzinger T. A., Jhala, Majumdar, & Qadeer, 2003), (Beyer, Henzinger, Jhala, & Majumdar, 2007), (Marchenko & Abrahamsson,
2007), (Plösch, Gruber, Pomberger, Saft, & Schiffer, 2008) and (Anderson J. L., 2005), (Wang, Ding, & Zhong, 2008), (Evans & Larochelle, 2002), (Karthik & Jayakumar, 2005), (Venet, 2005), (Ball
& Rajamani, 2002), (Microsoft Corporation, 2014), (Microsoft Corporation, 2011), (NIST, 2012).
Review of Literature – Static Analysis Tools and Methods
No. Tool Strength Limitation COV Coverage
9 GCC Security
Analyzer
Reduce time by implementing the
analysis in compiler and high
detection rate.
Compilers overhead and still
produce false alarm.
Limited type of C overflows
vulnerabilities.
Unsafe function, array out-of-
bound, integer overflow,
variable type conversion, and
memory function
10 BLAST Combines variety of abstraction
method, refinement methodology,
and analysis algorithm.
Suffer state explosion, produce
false alarm, and dependent on
defined properties.
No specific C overflows
vulnerabilities (model
dependent)
11 PC-Lint Lots of error messages to help users
understand the errors found, and
implementation of data flow
analysis and annotation-based checks
to reduce false alarms.
Faster analysis time.
Dependent on rules and standard
defined to verify the source code.
Limited capability due to
inheritance of lexical analysis
weaknesses.
No specific C overflows
vulnerabilities (rule
dependent)
12 SPLINT Very fast and low false alarm (with
proper implementation of code
annotation)
Limited vulnerabilities (dependent
on rule and annotation).
False alarm can be very high when
code is incorrectly annotated or
limited rule.
No specific C overflows
vulnerabilities (rule and
annotation dependent)
13 ASTREE Precisely detecting four types of C
overflows vulnerabilities.
Low false alarm.
Scalability issues and limited to only
four types of C overflows
vulnerabilities.
Integer overflow, array out-of-
bound, function pointer, and
uninitialized variable
14 SLAM Very precise in detecting violation
of device drivers.
Model dependent and limited to
small size programs.
No specific C overflows
vulnerabilities (model
dependent)
Review of Literature – Static Analysis Tools and Methods – In Summary
1. There are 11 methods known so far
2. More than 40 static analysis tools have been developed
3. C overflow vulnerabilities continue to persist
1. All static analysis methods and tools focus on solving the
symptoms of C overflow vulnerabilities rather than the root cause.
2. Although there are tools that interpret the source code, due to failure
to understand the root cause and the structures of C, the tools are yet
to address the actual problem.
3. This impacts the precision and effectiveness of the methods and tools.
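A hedged illustration of the symptom-versus-root-cause point (hypothetical code, not taken from the reviewed tools): swapping a banned function for its "safer" counterpart still overflows, or drops the terminator, if the underlying length assumption is never examined.

```c
#include <string.h>

void copy_name(char *dst, size_t dst_size, const char *src)
{
    /* Symptom-level fix: strcpy replaced by strncpy, which many tools accept.
       Root cause untouched: the bound comes from the source, not the
       destination, so long input still overflows dst ... */
    strncpy(dst, src, strlen(src));

    /* ... and even with the right bound, strncpy omits the '\0' when src
       fills the buffer exactly, feeding the null-termination class. */
    strncpy(dst, src, dst_size);

    /* A root-cause fix checks the destination size and terminates explicitly. */
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0';
}
```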
Can we solve the problem with Dynamic Analysis?
Review of Literature – Dynamic Analysis
Dynamic analysis:
1. Analyzes a program during the program's execution (Li and Chiueh, 2007)
2. Due to that, it analyzes only the executed paths (Ernst, 2003)
3. Tools that implement dynamic analysis – SigFree (Wang et al 2008), StackGuard (Lhee &
Chapin, 2002), StackShield (Lhee & Chapin, 2002), CCured (Sidiroglou & Keromytis, 2004), etc.
Strengths:
1. Source code independent (Ernst, 2003), (Goichi et al, 2005), (Cornell, 2008), (Wang et al,
2008)
2. Because of that, and because only executed paths are analyzed, it analyzes only the important paths (Ernst, 2003).
Hence, it reduces the analysis time and false positives (Graham, Leroux, & Landry, 2008), (Cornell,
2008) and (Goichi et al, 2005).
3. Input-dependent analysis and therefore not specific to any C overflow vulnerabilities classes
(Haugh & Bishop, 2003) and (Liang & Sekar, 2005)
Review of Literature – Dynamic Analysis
Weaknesses:
1. Frame-pointer dependent; for instance LibSafe (Lhee & Chapin, 2002) and (Sidiroglou &
Keromytis, 2004).
2. Shares the same issues as static analysis, such as:
1. Annotation dependent (Newsome & Song, 2005) and (Zhivich, Leek & Lippmann, 2005)
2. Limited to certain C overflow vulnerabilities classes (Sidiroglou & Keromytis, 2004),
(Akritidis et al, 2008), (Zhivich et al, 2005), (Wang et al, 2008), (Newsome & Song, 2005),
(Lhee & Chapin, 2002), (Liang & Sekar, 2005),
(Kratkiewicz & Lippmann, 2005), (Haugh & Bishop, 2003) and (Cornell, 2008).
3. Analysis overhead (Li & Chiueh, 2007), (Wang et al, 2008), (Rinard et al, 2004), (Lhee &
Chapin, 2002), (Liang & Sekar, 2005), (Sidiroglou & Keromytis, 2004), (Aggarwal & Jalote,
2006), (Akritidis et al, 2008), (Graham et al, 2008), (Ernst, 2003), (Goichi et al, 2005) and
(Zhivich et al, 2005).
4. Contributes to DoS or DDoS attacks (Wang et al, 2008) and (Evans and Larochelle, 2002).
5. Test-case dependent (Aggarwal & Jalote, 2006), (Graham et al, 2008) and (Cornell, 2008)
6. Detection is based on known execution paths (Ernst, 2003), (Goichi et al, 2005)
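A small hypothetical example of the path-coverage weakness above: a dynamic tool observes the overflow only if a test case drives execution into the vulnerable branch, whereas static analysis inspects both branches without running the program.

```c
#include <string.h>

void process(const char *input, int admin_mode)
{
    char buf[16];

    if (admin_mode) {
        /* Vulnerable branch: only exercised when admin_mode is non-zero.
           A dynamic analysis run whose test cases never set admin_mode
           will not observe this overflow at all. */
        strcpy(buf, input);
    } else {
        /* Commonly exercised branch: bounded copy, no overflow. */
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
    }
}
```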
Review of Literature – Static versus Dynamic Analysis
No. | Static | Dynamic | Chosen
1. Able to analyze unfinished code, a fraction of a complete system, or perform unit tests
Able to analyze complete compiled code or a finished system
Static
2. Detects at an early stage, so code can be modified before completion (Shapiro, 2008)
Detects after the system is released and requires a complete cycle of system modifications
Static
3. Cost effective (Shapiro, 2008), (Verifysoft Technology GmbH, 2013) and (Terry & Levin, 2006)
Cost of changing code is 100 times higher (Shapiro, 2008) and (Graham et al, 2008)
Static
4. Detection effectiveness is still an issue, especially with numerous weaknesses in the methods
Yet to prove its ability and there is no guarantee of its effectiveness (Goichi et al, 2005) and (Ernst, 2003)
Draw
• Static analysis is the preferred analysis technique, hence this research focuses on the issues behind
static analysis
[Diagram] Methods – Tools – Understanding/Knowledge
Review of Literature – Taxonomy and Classifications
1. Vulnerabilities understanding is the process of educating and building knowledge on
vulnerabilities (Krsul, 1998).
2. It is a major step towards the enhancement of tools and implementations for a better defense mechanism
(Krsul, 1998) and (Tsipenyuk, Chess, & McGraw, 2005).
3. Three areas are related to improving understanding and knowledge:
1. Guidelines
2. Books
3. Taxonomy
4. Taxonomy and classification of software vulnerabilities (previously known as bugs) started
with the RISOS (Research in Secure OS) project by the National Bureau of Standards (Abbot, et al., 1976)
and PA (Program Analysis) by the Information Science Institute, University of California (Bisbey &
Hollingworth, 1978).
5. A well-defined taxonomy fulfills the set of well-defined criteria and is able to define the objects and
field of study without doubt (Igure & Williams, 2008) and (Axelsson, 2000)
A well-defined taxonomy has a significant impact in producing
good guidelines and books, especially for understanding C
overflow vulnerabilities (Igure & Williams, 2008)
Review of Literature – Taxonomy and Classifications
- Criteria of well-defined taxonomy
1996
Bishop & Bailey
1. Deterministic
2. Specificity
1998
Howard & Longstaff
1. Mutually Exclusive
2. Exhaustive
3. Unambiguous
4. Repeatable
5. Accepted
6. Useful
Ivan Krsul
1. Objectivity
2. Determinism
3. Repeatability
4. Specificity.
1999
Bishop
1. Mutually exclusive
2. Exhaustive
3. Unambiguous
4. Repeatable
5. Accepted
6. Useful
2001
Lough
1. Accepted [Howa1997]
2. Appropriateness [Amor1994]
3. Based on the code, environment, or other
technical details [Bish1999]
4. Comprehensible [Lind1997]
5. Completeness [Amor1994]
6. Determinism [Krsu1998]
7. Exhaustive [Howa1997, Lind1997]
8. Internal versus external threats [Amor1994]
9. Mutually exclusive [Howa1997, Lind1997]
10. Objectivity [Krsu1998]
11. Primitive [Bish1999]
12. Repeatable [Howa1997, Krsu1998]
13. Similar vulnerabilities classified similarly
[Bish1999]
14. Specific [Krsu1998]
15. Terminology complying with established
security terminology [Lind1997]
16. Terms well defined [Bish1999]
17. Unambiguous [Howa1997, Lind1997]
18. Useful [Howa1997, Lind1997]
2003
Vijayaraghavan
1. Appropriate
2. Comprehensible
3. Specific
4. Useful
5. Expandable
Hannan, Turner & Broucek
1. Specificity
2. Exhaustive
3. Deterministic
4. Accepted
5. Terminology compliant
Hansmann
1. Accepted
2. Comprehensible
3. Completeness
4. Determinism
5. Mutual Exclusive
6. Repeatable
7. Terminology compliant
8. Terms well-defined
9. Unambiguous
10. Useful
2004
Killourhy, Maxion and Tan
1. Mutual Exclusivity
2. Exhaustivity
3. Replicability
(based on Lough)
2005
Polepeddi
1. Mutually exclusive
2. Exhaustive
3. Unambiguous
4. Repeatability
5. Acceptability
6. Utility / Useful
Tsipenyuk et al
1. Simplicity
2. Mutually exclusive
3. Intuitive
4. Category name is less generic
5. Specific
Berghe et al
1. Mutually exclusive
2. Deterministic
3. Exhaustive
4. Objective
5. Useful
2006
Alhazmi, Woo and Malaiya
1. Mutual Exclusiveness
2. Clear and Unique definition
3. Repeatability
4. Covers all vulnerabilities
(Comprehensive)
2008
Igure and Williams
1. Specificity
2. Ambiguous
3. Layered or hierarchy
4. Unique definition on each level
Review of Literature – Taxonomy and Classifications
- Criteria of well-defined taxonomy
No. Researchers Definition of well-defined taxonomy Comments
1 Bishop and Bailey
(1996)
A taxonomy that is well-structured with well-defined procedures. Too few
2 Krsul (1998) A taxonomy that has characteristics to ensure classifications can be
successfully done
Lack of completeness
3 Howard and Longstaff (
1998)
Not available There are redundant and arguable criteria
4 Bishop (1999) Not available Criteria conflict with each other, hence
inconsistent.
5 Lough (2001) Refers to Krsul's definition of a well-defined taxonomy A collection of previous criteria, some of which are
repetitive or irrelevant to certain fields of
taxonomy.
6 Vijayaraghavan (2003) A good taxonomy should have characteristics specific to its
purposes.
Specific to the e-commerce environment and
ignores the mutually exclusive criterion.
7 Hannan, Turner, and
Broucek (2003)
A good forensic taxonomy should be able to leverage the strengths of
multiple disciplines on any forensic issue.
Focuses on forensic issues and ignores
repeatability and mutual exclusivity.
8 Hansmann (2003) Not available. Based on Lough's criteria. Same issues as Lough
9 Killourhy et. al. (2004) Should have sensible criteria consistent with the general terms of
taxonomy.
Lacks objectivity and specificity.
10 Polepeddi (2005) Should reduce users' difficulties in studying and managing the objects
of study
Lacks determinism and objectivity.
11 Tsipenyuk et. al. (2005) A well-defined taxonomy must be simple and easy to understand. Lacks determinism, objectivity and
repeatability.
12 Berghe et. al. (2005) Not Available Lacks specificity, repeatability and the accepted
criterion.
13 Igure and Williams A good taxonomy provides a common language for the study of the Allowed ambiguity which causing non-
Review of Literature – Taxonomy and Classifications
- Previous works on taxonomy or classifications
No Experts Purpose Criterions failed to
fulfil
Impact C overflows
vulnerabilities
1 Lough (Lough,
2001)
To ease security experts to
design security protocol for
network communications.
Unambiguous and
repeatability
Confusing to user and hence
resulting in different
understanding for the same
vulnerabilities.
Not available
2 Piessens (2002) As guidelines to software
developers, tester and
security implementer.
Unambiguous,
specificity and
repeatability
Confusing to user and hence
resulting in different
understanding for the same
vulnerabilities.
Not available
3 Vijayaraghavan
(2003)
To educate and help security
practitioners in developing a
suitable test case
Unambiguous,
specificity, useful
and repeatability,
Does not specify web
application vulnerabilities and is
not practical due to confusion
and ambiguity
Not available
4 Hannan et al
(2003)
To leverage multi-disciplines
in information security for
forensic process.
Obviousness and
completeness.
Not useful and difficult to apply in
forensic computing
Not applicable.
5 Hansman
(2003)
As guidelines for security
expert
Specificity and
repeatability
Confusing and causes non-
repeatable results.
Generic
6 Seacord and
Householder
(2005)
As guidelines to classify
vulnerabilities based on
behaviour for
countermeasures
Specificity,
repeatability,
unambiguous and
useful.
Not repeatable in the vulnerability
classification process and does
not improve understanding of
vulnerabilities.
Generic
7 Weber et al
(2005)
As reference for security
developers to develop security
analysis tool.
Unambiguous,
useful and
repeatability
Difficult to understand and to
repeat the results.
Generic
8 Tsipenyuk et al
(2005)
As reference for software
developers to avoid
vulnerabilities in coding
Unambiguous,
repeatability and
specific.
Too general and covers too many
languages and therefore affecting
repeatability.
Generic but covers
almost all C overflow
vulnerabilities classes
Review of Literature – Taxonomy and Classifications
- Previous works on taxonomy or classifications
No. Experts Purpose Criterions failed to
fulfil
Impact C overflows vulnerabilities
9 Sotirov
(2005)
To construct and evaluate
static analysis tools.
Unambiguous,
repeatability and
completeness
Failed to improve
understanding on
vulnerabilities and behaviour
that triggers exploitation.
Out-of-bound, format string,
and integer overflow, memory
function, variable conversion
(signed to unsigned)
10 Berghe et al
(2005)
To evaluate constructed
methodology for
producing vulnerabilities
taxonomy
Unambiguous,
repeatability and
completeness
Different users produce
different results, affecting the
usefulness of the taxonomy.
Generic
11 Gegick and
Williams
(2005)
To identify and classify
vulnerabilities in report or
advisories
Repeatability and useful. Users will have difficulties in
classifying vulnerabilities as
there is no specific class, which
affects classification
consistency.
Generic
12 Alhazmi et.
al. (2006)
To identify and classify
exploitable vulnerabilities
for security enhancement
Completeness Users tend to wrongly classify
the studied object or fail to
classify it at all. This affects
their understanding and
knowledge.
Generic
13 Bazaz and
Arthur
(2007)
To analyze software
security
Repeatability,
completeness,
unambiguous and
obvious.
Different users produce
different results, affecting the
usefulness of the taxonomy.
Generic
14 Shahriar
and
Zulkernine
(2011)
To ease monitoring
program vulnerability
exploitation
Repeatability,
completeness,
unambiguous and
obvious.
Different users produce
different results, affecting the
usefulness of the taxonomy.
Generic
Review of Literature – Taxonomy and Classifications
- Previous works on taxonomy or classifications
No Experts Purpose Criterions failed to
fulfil
Impact C overflows
vulnerabilities
1 M. Zitser (2003) To evaluate static
analysis tool
Unambiguous,
completeness and
repeatability
Confusing to user and hence
resulting in different
understanding for the same
vulnerabilities.
Array out-of-
bound and unsafe
functions
2 K. Kratkiewicz
(2005)
To evaluate static
analysis tool
Unambiguous,
completeness and
repeatability
Confusing to user and hence
resulting in different
understanding for the same
vulnerabilities.
Array out-of-
bound and unsafe
functions
3 M. A. Zhivich
(2005)
To evaluate
dynamic analysis
tool.
Unambiguous,
completeness and
repeatability
Limited vulnerabilities and for
the purpose of assessing
dynamic tools and not for
understanding C overflow
vulnerabilities.
Array out-of-
bound and unsafe
functions
4 A. I. Sotirov
(2005)
To evaluate static
analysis tool
Repeatability and
completeness.
Lacks classes and has a class
for unknown vulnerabilities,
which results in inconsistency
and hence does not ease
understanding.
Array out-of-
bound, integer
overflow and
unsafe functions
5 H. D. Moore
(2007)
For studies and
understanding of
overflow
vulnerabilities
Specificity,
completeness and
repeatability
Confusing and causes non-
repeatable results.
Array out-of-
bound and integer
overflow
Review of Literature – Summary of Review
1. There are gaps in software security areas pertaining to software vulnerabilities issues, specifically
C overflow vulnerabilities:
1. Analysis methods and tools
2. Security implementation and policies
3. Understanding/knowledge of software vulnerabilities
2. As mentioned by Krsul (1998) and Tsipenyuk et al (2005), improving knowledge and
understanding of vulnerabilities is a major step towards the enhancement of tools and
implementations for a better defence mechanism. Hence, the focus is on taxonomy and
classification.
1. There is still no well-defined taxonomy constructed from a source code perspective which
considers the developer's point-of-view and covers all C overflow vulnerabilities classes
Research Question | Research Objective
RQ 1: Why do C overflow vulnerabilities still persist
although they are common knowledge, and there
are numerous methods and tools available to
overcome them?
RO 1: To identify the reasons why C overflow
vulnerabilities, despite more than three
decades, still persist although there are various
methods and tools available.
 Research Framework and Phases
 Research Activities
Research Methodology – Research Framework & Phases
[Diagram] Phase 1: Theoretical Studies → Phase 2: Taxonomy Construction → Phase 3: Taxonomy Evaluation
Research Methodology – Research Framework & Phases
Phases
Section
Phase 1 – Theoretical
Studies
Phase 2 – Taxonomy
Construction
Phase 3 – Taxonomy
Evaluation
Research
Question
(RQ)
RQ 1: Why C overflow
vulnerabilities still persist
although it is common and
known for more than two
decades?
RQ 2: How to identify and
improve understanding of C
overflow vulnerabilities and
prevent from occurring
again?
RQ 4: Is the taxonomy
effective in improving
understanding of C overflow
vulnerabilities?
RQ 5: Is the taxonomy
comprehensive and useful?
Research
Objectives
(RO)
RO 1: To identify strength
and weaknesses of current
mechanisms in detecting and
preventing C overflow
vulnerabilities from occurring
RO 2: To identify list of C
overflows vulnerabilities
class
RO 3: To compile and
construct criteria for well-
defined taxonomy
RO 4: To construct taxonomy
specifically addressing C
overflow vulnerabilities
RO 5: To evaluate taxonomy
based on criteria
RO 6: To evaluate the
criticality and relevancies of C
overflows vulnerabilities
based on the taxonomy
Phase
Deliverables
/ Output
(RR)
RR 1: Strength and
weaknesses of current
detection and prevention
mechanism
RR 2: List of C overflow
vulnerabilities classes
RR 3: Criteria of well-defined
taxonomy
RR 4: Taxonomy of C
overflow vulnerabilities
exploit
RR 5: Taxonomy validated
RR 6: Significant findings of
the research
Research Methodology – Research Activities
Phase 1: Theoretical Studies
1. Pre-analysis on vulnerabilities and information security issues
2. In-depth review on software vulnerabilities
3. Critical review:
3.1 C overflow vulnerabilities and exploitation
3.2 C overflow vulnerabilities defensive mechanisms
3.3 C overflow vulnerabilities detection and prevention mechanisms
4. Critical review:
4.1 Static analysis techniques and tools
4.2 Dynamic analysis techniques and tools
5. Critical review:
5.1 Vulnerabilities taxonomy
5.2 Criteria of a well-defined taxonomy
Decision: Is the theoretical study comprehensive? If no, repeat the reviews; if yes:
6. Conclude the theoretical studies phase
Research Methodology – Research Activities
Phase 2: Taxonomy Construction – Construct well-defined criteria
1. Critical review on relevant publications
2. Extract the criteria for constructing the taxonomy
3. Detailed analysis of the identified criteria
4. Construct the criteria for a well-defined taxonomy
5. Review the constructed criteria for a well-defined taxonomy
Decisions: Is the review completed and are the criteria satisfied? If no, repeat; if yes, end.
Research Methodology – Research Activities
Phase 2: Taxonomy Construction – Construct Taxonomy
1. Critical review on relevant reports
2. Formation of classes
3. Detailed analysis of related publications
4. Organize and construct the taxonomy
5. Review the constructed taxonomy against all reports and publications
Decision: Is the taxonomy satisfactory? If no, repeat; if yes, end.
Research Methodology – Research Activities
Phase 3: Taxonomy Evaluation
1. Evaluate the taxonomy against the constructed well-defined criteria
2. Measure the criticality and significance of each identified class
3. Measure the criticality of the OS and the vulnerabilities' exploitation impact
4. Evaluate the static analysis tools' effectiveness in detecting the identified classes
5. Evaluate and verify the static analysis tools' effectiveness and efficiency in analyzing open-source code for comparative analysis against earlier works
Decision: Is the evaluation satisfactory? If no, repeat; if yes, end.
Research Methodology – Research Activities
Phase 3: 3.1.1 Evaluation of the Taxonomy against the well-defined criteria – Measurement of Effectiveness and Completeness
1. A suitable tester is selected
2. Select 100 advisories/reports
3. First iteration: 3.1.1 share the taxonomy with the tester; 3.1.2 the tester matches the vulnerability in the advisories with a class in the taxonomy
4. Second iteration: 3.2.1 explain and educate the tester on the taxonomy; 3.2.2 the tester matches the vulnerability in the advisories with a class in the taxonomy
5. Third iteration: 3.3.1 select a different set of advisories or reports; 3.3.2 the tester uses the same taxonomy and matches the vulnerabilities in the report, with no guidance provided
6. After each iteration, collect and compile the result, then measure and present the result
Research Methodology – Research Activities
Phase 3: 3.1.2 Evaluation of the Taxonomy against the well-defined criteria – with Selected Well-known Online Vulnerability Databases and Organizations
1. Critical review on relevant reports
2. Formation of classes
3. Detailed analysis of related publications
4. Organize and construct the taxonomy
5. Review the constructed taxonomy against all reports and publications
Decision: Is the taxonomy satisfactory? If no, repeat; if yes, end.
Research Methodology – Research Activities
Phase 3: 3.2 Evaluation of the Significance and Relevance of the Classes Defined in the Taxonomy of C Overflow Vulnerabilities Exploit
1. Get the class and its characteristics from the taxonomy
2. Search the selected databases and organizations based on the class name and its characteristics
3. If an advisory or report is found, critically review it; otherwise try more search synonyms
4. Conclude and map the result in a table (repeat for 10 iterations)
5. Compile and analyze the result (repeat for all classes and for all databases or organizations)
Research Methodology – Research Activities
Phase 3: 3.3 Evaluation of the Significance and Relevance of C Overflow Vulnerabilities Classes and Their Impact on OS Criticality
1. Identify the relevant OS
2. Develop vulnerable programs based on the classes in the taxonomy
3. Construct test activities for OS evaluation
4. Execute the activities (repeat until all test cases are executed)
5. Compile the result and perform analysis
Research Methodology – Research Activities
Phase 3: 3.4 Evaluate the static analysis tools' effectiveness in detecting the identified classes
1. Critically review and identify static analysis tools
2. Select a static analysis tool from the identified list
3. Select a program to analyze
4. Execute the selected static analysis tool with the selected program as input
5. Compile the result and perform analysis (repeat until all programs are analyzed and all static analysis tools are used)
Research Methodology – Research Activities
Phase 3: 3.5 Evaluate the Static Analysis Tools' Efficiency in Analysing Open-source Code for Comparison with Previous Works
1. Critically review and identify open-source codes/applications
2. Prepare the machine to be used for the testing
3. Select the static analysis tool from the identified list
4. Select a program to analyze from the identified list
5. Execute the selected static analysis tool with the selected program as input
6. Get the result and normalize the machine
7. Compile the result and perform analysis (repeat until all programs are analyzed and all static analysis tools are used)
Research Methodology – Summary
Theoretical Studies:
1. Pre-analysis of vulnerabilities and information security issues/cases.
2. In-depth review of software vulnerabilities.
3. Critical review of C overflow vulnerabilities and their current detection and prevention mechanisms.
4. Critical review of program analysis techniques and tools, focusing on static analysis.
5. Critical review of vulnerability taxonomies.
Taxonomy Construction:
1. A thorough, careful and significant review of vulnerability taxonomies covering: (i) the criteria of a well-defined taxonomy, (ii) vulnerability taxonomies, and (iii) C overflow vulnerability taxonomies.
2. Identify and formulate the criteria of a well-defined taxonomy.
3. Construct the 'C Overflow Vulnerabilities Attack' taxonomy from a source-code perspective.
Taxonomy Evaluation:
1. Evaluate the taxonomy against the well-defined criteria.
2. Measure the criticality and significance of each identified class.
3. Measure the criticality of the OS and the impact of vulnerability exploitation.
4. Evaluate the static analysis tools' effectiveness in detecting the identified classes.
5. Evaluate and verify the static analysis tools' effectiveness and efficiency in analyzing open-source code for comparative analysis against earlier works.
6. Analyze the findings.
 Taxonomy Construction
 Taxonomy Evaluation
Result and Discussions: Taxonomy Construction – Criteria for Well-Defined Taxonomy
1. Simplicity – Simplified collection of the studied objects or subjects into a readable diagram or structure. Purpose: to ease understanding of the studied objects or subjects.
2. Organized structure – Organized into a viewable, readable, and understandable format. Purpose: to demonstrate the relationships between objects and ease the process of understanding.
3. Obvious – The objective is clear, measurable, and observable without doubt; the process flow is clear and easily followed; the structure or format is easily understood and able to stand on its own. Purpose: to ease the process of classification.
4. Repeatability – The result of classifying a studied object by any independent user can easily be duplicated by others. Purpose: for consistency and reliability of the result.
5. Specificity / Mutually exclusive / Primitive – The value of an object must be specific and explicit; an object of study must belong to only one class. Purpose: to remove ambiguity, ease the classification process and support repeatable results.
6. Similarity – Objects in the same class must have similar behaviour or characteristics. Purpose: for consistency, repeatability and ease of understanding of the studied objects.
7. Completeness – Able to capture all studied objects without any doubt, no matter when it is applied. Purpose: to ensure users are able to classify any new object whenever required, without doubt.
8. Knowledge compliant – Built using known, existing terminology. Purpose: to ease learning and classifying.
Result and Discussions: Taxonomy Construction – C
Overflow Vulnerabilities Exploit Taxonomy
Taxonomy of C Overflow Vulnerabilities Exploit
Unsafe Functions
Array Out-of-Bound
Integer Range/Overflow
Return-into-libc
Memory Function
Function Pointer / Pointer Aliasing
Variable Type Conversion
Pointer Scaling / Pointer Mixing
Uninitialized Variable
Null Termination
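To make the classes above concrete, here is a minimal, hypothetical sketch of the first class, Unsafe Functions; the buffer size, input source and function choice are illustrative assumptions rather than the thesis' actual test programs.

```c
/* Hypothetical sketch of the "Unsafe Functions" class: an unchecked
 * strcpy() copies externally supplied input into a fixed-size buffer,
 * overflowing it when the input is longer than the buffer.
 * Buffer size and input source are illustrative only. */
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char buffer[16];                  /* fixed-size destination */

    if (argc > 1) {
        strcpy(buffer, argv[1]);      /* no bounds check: overflows for inputs of 16+ characters */
        printf("copied: %s\n", buffer);
    }
    return 0;
}
```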
Result and Discussions: Taxonomy Construction – C Overflow Vulnerabilities Exploit Taxonomy
Characteristics of three example classes:
Unsafe Functions: (1) uses unsafe C/C++ functions; (2) input supplied by the user or the system is passed into the function; (3) the unsafe function is executed to trigger the overflow.
Array Out-of-bound: (1) uses an array; (2) involves an index supplied by an operation in the program; (3) the overflow is triggered when the index used is beyond the upper or lower bound of the array.
Return-into-libc: (1) uses a pointer to a string and requires neither unsafe functions nor arrays; (2) uses a pointer to point to a string of characters; (3) the string contains a new system or function call, a new memory address, or a string large enough to overflow the adjacent reference address until it reaches the intended memory address, which contains a call to a function available in libc.
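Building on the characteristics above, a minimal, hypothetical sketch of the Array Out-of-Bound class follows; the array size and the operation supplying the index are illustrative assumptions.

```c
/* Hypothetical sketch of the "Array Out-of-Bound" class: the index is
 * produced by an operation in the program (a length computation) and is
 * used without checking it against the bounds of the array.
 * Sizes are illustrative only. */
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int values[8] = {0};
    const char *input = (argc > 1) ? argv[1] : "";
    size_t count = strlen(input);          /* operation that supplies the index */

    for (size_t i = 0; i <= count; i++)    /* loop bound is not checked against the array size */
        values[i] = (int)input[i];         /* writes past values[7] once count >= 8 */

    printf("first stored value: %d\n", values[0]);
    return 0;
}
```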
Result and Discussions: Taxonomy Evaluation
Evaluation 1: Result of evaluating the effectiveness and completeness of classifying vulnerabilities using the taxonomy (results given as S / M / F; successful mapping rate = S out of 100 reports)
Method 1 – Five selected testers performed the classification using the taxonomy, without guidance, on 100 vulnerability reports:
Tester 1: S 60, M 5, F 35 – rate 0.60
Tester 2: S 51, M 28, F 21 – rate 0.51
Tester 3: S 71, M 14, F 15 – rate 0.71
Tester 4: S 68, M 5, F 27 – rate 0.68
Tester 5: S 80, M 9, F 11 – rate 0.80
Average successful rate of using the taxonomy to classify vulnerabilities: 0.66
Method 2 – The same reports and testers, but with guidance and an explanation of the taxonomy:
Tester 1: S 92, M 0, F 8 – rate 0.92
Tester 2: S 85, M 10, F 5 – rate 0.85
Tester 3: S 98, M 0, F 2 – rate 0.98
Tester 4: S 93, M 5, F 2 – rate 0.93
Tester 5: S 89, M 3, F 8 – rate 0.89
Average successful rate of using the taxonomy to classify vulnerabilities: 0.914
Method 3 – The same testers, but given a different set of vulnerability reports from the previous test:
Tester 1: S 90, M 4, F 6 – rate 0.90
Tester 2: S 78, M 7, F 15 – rate 0.78
Tester 3: S 96, M 0, F 4 – rate 0.96
Tester 4: S 85, M 8, F 7 – rate 0.85
Tester 5: S 92, M 1, F 7 – rate 0.92
Average successful rate of using the taxonomy to classify vulnerabilities: 0.882
Overall average success rate: 0.819
Result and Discussions: Taxonomy Evaluation
Evaluation 2: Result of comparing the taxonomy with advisories/reports for completeness measurement
(Databases, in order: NIST, OWASP, OSVDB, Symantec, Microsoft, Secunia, Dept. of Homeland Security; NA = not applicable; % Matches at the end)
Unsafe Functions – Yes for all seven – 100%
Array Out-of-bound – Yes for all seven – 100%
Integer Range/overflow – Yes for all seven – 100%
Return-into-LibC – Yes, Yes, NA, NA, Yes, NA, Yes – 100%
Memory Function – Yes for all seven – 100%
Function Pointer / Pointer Aliasing – Yes, Yes, NA, Yes, Yes, Yes, Yes – 100%
Variable Type Conversion – Yes, Yes, NA, NA, Yes, Yes, Yes – 100%
Pointer Scaling / Pointer Mixing – Yes, Yes, Yes, NA, Yes, Yes, Yes – 100%
Uninitialized Variable – Yes, Yes, Yes, Yes, Yes, NA, Yes – 100%
Null Termination – Yes for all seven – 100%
Unknown class (unable to match with the taxonomy) – No for all seven – 0%
Result and Discussions: Taxonomy Evaluation
Evaluation 3: Evaluation of the relevance and significance of the classes in the C Overflow Vulnerabilities Exploit Taxonomy – Severity
(Databases, in order: NIST, OWASP, OSVDB, Symantec, Microsoft, Secunia, Dept. of Homeland Security; NA = not applicable)
Unsafe functions – High for all seven
Array out-of-bound – High for all seven
Integer range/overflow – High for all seven
Return-into-LibC – High, Medium, NA, NA, Medium, NA, Medium
Memory Function – High, Medium, Medium, High, Medium, Medium, High
Function Pointer / Pointer Aliasing – Medium, Medium, NA, Low, Medium, High, Medium
Variable Type Conversion – Medium, Medium, NA, NA, Low, Medium, Medium
Pointer Scaling / Pointer Mixing – High, Medium, Low, NA, High, High, High
Uninitialized Variable – Medium, Low, Low, Low, Low, Low, Medium
Null Termination – Medium, Low, Low, Medium, Low, Medium, Medium
Result and Discussions: Taxonomy Evaluation
Evaluation 3: Evaluation of the relevance and significance of the classes in the C Overflow Vulnerabilities Exploit Taxonomy – Persistency
(Databases, in order: NIST, OWASP, OSVDB, Symantec, Microsoft, Secunia, Dept. of Homeland Security; NA = not applicable)
Unsafe functions – 2013 for all seven
Array out-of-bound – 2014, 2013, 2012, 2012, 2013, 2013, 2013
Integer range/overflow – 2013, 2012, 2012, 2012, 2013, 2013, 2013
Return-into-LibC – 2013, 2012, NA, NA, 2013, NA, 2010
Memory Function – 2014, 2012, 2012, 2012, 2013, 2013, 2012
Function Pointer / Pointer Aliasing – 2013, 2010, NA, 2010, 2013, 2013, 2010
Variable Type Conversion – 2013, 2010, NA, NA, 2013, 2010, 2010
Pointer Scaling / Pointer Mixing – 2010, 2009, 2009, NA, 2009, 2009, 2009
Uninitialized Variable – 2013, 2012, 2012, 2012, 2013, 2013, 2012
Null Termination – 2009 for all seven
Result and Discussions: Taxonomy Evaluation
Evaluation 3: Evaluation of the relevance and significance of the classes in the C Overflow Vulnerabilities Exploit Taxonomy – Impact
(Databases, in order: NIST, OWASP, OSVDB, Symantec, Microsoft, Secunia, Dept. of Homeland Security; NA = not applicable)
Unsafe functions – High for all seven
Array out-of-bound – High for all seven
Integer range/overflow – High for all seven
Return-into-LibC – High, Medium, NA, NA, High, NA, Medium
Memory Function – High for all seven
Function Pointer / Pointer Aliasing – Medium, High, NA, Medium, Medium, Medium, Medium
Variable Type Conversion – Medium, Medium, NA, NA, Medium, Medium, Medium
Pointer Scaling / Pointer Mixing – Medium, Medium, High, NA, High, High, High
Uninitialized Variable – Medium, High, Medium, Medium, Medium, Medium, Medium
Null Termination – Medium, Medium, Medium, High, Medium, Medium, Medium
Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Significance and Relevance
Operating systems tested: Windows XP 32-bit, Windows 7 32-bit, Windows 7 64-bit, Linux (CentOS 5.5) 32-bit.
All ten classes (Unsafe Functions, Array Out-of-bound, Integer Range/overflow, Return-into-LibC, Memory Function, Function Pointer / Pointer Aliasing, Variable Type Conversion, Pointer Scaling / Pointer Mixing, Uninitialized Variable, Null Termination) were found to be applicable ("Yes") on all four operating systems.
Legend: Vulnerable (is the OS vulnerable?): Yes – vulnerable, No – not vulnerable.
Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Vulnerable and Difficulty
(Operating systems, in order: Windows XP 32-bit, Windows 7 32-bit, Windows 7 64-bit, CentOS 5.5 Linux 32-bit; each entry is Vulnerable/Difficulty)
Unsafe Functions – Yes/L, Yes/L, Yes/M, Yes/L
Array Out-of-bound – Yes/L, Yes/L, Yes/M, Yes/L
Integer Range/overflow – Yes/L, Yes/L, Yes/M, Yes/L
Return-into-LibC – Yes/H, Yes/H, Yes/H, Yes/H
Memory Function – Yes/H, Yes/H, Yes/H, Yes/H
Function Pointer / Pointer Aliasing – Yes/M, Yes/H, Yes/H, Yes/M
Variable Type Conversion – Yes/M, Yes/M, Yes/H, Yes/M
Pointer Scaling / Pointer Mixing – Yes/H, Yes/H, Yes/H, Yes/H
Uninitialized Variable – Yes/L, Yes/L, Yes/L, Yes/L
Null Termination – Yes/L, Yes/L, Yes/M, Yes/L
Legend: Vulnerable (is the OS vulnerable?): Yes – vulnerable, No – not vulnerable. Difficulty (level of difficulty to exploit): L – low, M – medium, H – high (very difficult to exploit), NA – not applicable.
Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Input and Outcome for Windows XP 32-bit
Unsafe Functions – Input: a few characters more than the given size. Outcome: overflow, an endless loop, or a memory exception is thrown.
Array Out-of-bound – Input: a few characters more than the given size. Outcome: overflow occurs but the program continues; it prints empty space, appends a final character ('n') and prints double spacing instead of single.
Integer Range/overflow – Input: a variable other than an integer, larger than n32. Outcome: overflow occurs.
Return-into-LibC – Input: no specific length. Outcome: nothing happens and the program continues.
Memory Function – Input: depends on the type of function. Outcome: memset() triggers an overflow, whereas a double free() leaves the program vulnerable to exploitation.
Function Pointer / Pointer Aliasing – Input: a few characters larger than the given size. Outcome: displays odd characters; the length of the second variable is extended to the same size as the first variable; possibly usable in an exploit to trigger an overflow.
Variable Type Conversion – Input: a large integer into a small character. Outcome: the larger integer forces odd characters to be displayed; if the conversion is from int to char and then back to int, an overflow occurs and the program is vulnerable to exploitation.
Pointer Scaling / Pointer Mixing – Input: no specific length. Outcome: a memory violation (overflow) is shown; for a struct pointer, a memory address is still assigned, which can later be used by attackers for exploitation.
Uninitialized Variable – Input: no length required. Outcome: a value and a memory address are assigned to the variable, which can be used for exploitation.
Null Termination – Input: length equal to the defined length. Outcome: nothing happens on the system and no additional characters are added.
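For context, the Memory Function row above can be read against a sketch like the following; it is a hypothetical reconstruction of that kind of test program (the buffer sizes and the repeated free() call are illustrative assumptions, not the thesis' actual code).

```c
/* Hypothetical sketch of the "Memory Function" class: memset() writes
 * past the end of a heap allocation, and free() is called twice on the
 * same pointer, leaving the heap in a corrupted, potentially exploitable
 * state. Sizes are illustrative only. */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);
    if (buf == NULL)
        return 1;

    memset(buf, 'A', 32);   /* overflow: 32 bytes written into a 16-byte block */

    free(buf);
    free(buf);              /* double free: undefined behaviour, heap corruption */
    return 0;
}
```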
Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Input and Outcome for Windows 7 32-bit
Unsafe Functions – Input: a few characters more than the given size. Outcome: overflow, an endless loop, or a memory exception is thrown.
Array Out-of-bound – Input: a few characters more than the given size. Outcome: overflow occurs but the program continues; it prints empty space, appends a final character ('n') and prints double spacing instead of single.
Integer Range/overflow – Input: a variable other than an integer, larger than n32. Outcome: overflow occurs.
Return-into-LibC – Input: no specific length. Outcome: nothing happens and the program continues.
Memory Function – Input: depends on the type of function. Outcome: memset() triggers an overflow, whereas a double free() leaves the program vulnerable to exploitation.
Function Pointer / Pointer Aliasing – Input: a few characters larger than the given size. Outcome: displays odd characters; the length of the second variable is extended to the same size as the first variable; possibly usable in an exploit to trigger an overflow.
Variable Type Conversion – Input: a large integer into a small character. Outcome: displays odd characters and converts to a negative integer (unsigned value), thus overflowing and becoming vulnerable to exploitation.
Pointer Scaling / Pointer Mixing – Input: no specific length. Outcome: a memory violation (overflow) is shown.
Uninitialized Variable – Input: no length required. Outcome: a value and a memory address are assigned to the variable, which can be used for exploitation.
Null Termination – Input: length equal to the defined length. Outcome: nothing happens on the system and no additional characters are added.
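The Variable Type Conversion outcome above corresponds to C's implicit narrowing conversions; the following hypothetical sketch, with an illustrative input value and bounds check, shows how a large integer narrowed to char and widened back can turn negative and defeat a later check.

```c
/* Hypothetical sketch of the "Variable Type Conversion" class: a value
 * wider than a char is narrowed into a (signed) char and widened back,
 * commonly producing a negative number that defeats a later bounds check.
 * The input value and the check are illustrative only. */
#include <stdio.h>

int main(void)
{
    int  original = 200;            /* larger than the range of a signed 8-bit char */
    char narrowed = (char)original; /* implementation-defined narrowing; often -56 */
    int  widened  = narrowed;       /* sign-extended back to int */

    printf("original=%d narrowed=%d widened=%d\n", original, (int)narrowed, widened);

    if (widened < 16) {
        /* the check uses the converted value, so it passes even though
         * the original value was far larger than the limit */
        printf("bounds check passed for original value %d\n", original);
    }
    return 0;
}
```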
Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Input and Outcome for Windows 7 64-bit
Unsafe Functions – Input: a few characters more than the size of n64. Outcome: overflow occurs when the input to the first variable is larger than the n64-bit size.
Array Out-of-bound – Input: a few characters more than the given size. Outcome: overflow occurs but the program continues.
Integer Range/overflow – Input: a variable other than an integer, larger than n64. Outcome: overflow occurs.
Return-into-LibC – Input: no specific length. Outcome: nothing happens and the program continues.
Memory Function – Input: depends on the type of function. Outcome: memory error violation (overflow) and, after a few processes, the program aborts.
Function Pointer / Pointer Aliasing – Input: a few characters larger than the given size. Outcome: nothing happens.
Variable Type Conversion – Input: a large integer into a small character. Outcome: displays odd characters and converts to a negative integer (unsigned value), thus overflowing and becoming vulnerable to exploitation.
Pointer Scaling / Pointer Mixing – Input: no specific length. Outcome: the system still assigns a value and a memory address, which can later be used by attackers for exploitation.
Uninitialized Variable – Input: no length required. Outcome: a value and a memory address are assigned to the variable, which can be used for exploitation.
Null Termination – Input: length equal to the defined length. Outcome: nothing happens on the system and no additional characters are added.
Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Input and Outcome for Linux (CentOS 5.5) 32-bit
Unsafe Functions – Input: a few characters more than the size of n32. Outcome: overflow only occurs when the input to the first variable is larger than the n32-bit size.
Array Out-of-bound – Input: a few characters more than the given size. Outcome: overflow occurs but the program continues.
Integer Range/overflow – Input: a variable other than an integer, larger than n32. Outcome: overflow occurs.
Return-into-LibC – Input: no specific length. Outcome: nothing happens and the program continues.
Memory Function – Input: depends on the type of function. Outcome: segmentation fault, which is equivalent to a memory overflow; the program aborts.
Function Pointer / Pointer Aliasing – Input: a few characters larger than the given size. Outcome: overflow on the first variable after the second variable is copied with the value from the first variable.
Variable Type Conversion – Input: a large integer into a small character. Outcome: the conversion succeeds if the given integer is within the range of a character; a larger integer forces odd characters to be displayed.
Pointer Scaling / Pointer Mixing – Input: no specific length. Outcome: the system still assigns a value and a memory address, which can later be used by attackers for exploitation.
Uninitialized Variable – Input: no length required. Outcome: a value and a memory address are assigned to the variable, which can be used for exploitation.
Null Termination – Input: length equal to the defined length. Outcome: nothing happens on the system and no additional characters are added.
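The Null Termination row above describes strings copied without a terminating NUL; a minimal, hypothetical sketch with illustrative sizes is shown below.

```c
/* Hypothetical sketch of the "Null Termination" class: strncpy() with a
 * length equal to the destination size leaves no room for the terminating
 * '\0' when the source fills the buffer, so a later read runs past the
 * end of the destination. Sizes are illustrative only. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char src[] = "exactly8";             /* 8 characters, fills dst completely */
    char dst[8];

    strncpy(dst, src, sizeof(dst));      /* copies 8 bytes, none of them '\0' */

    printf("dst = %s\n", dst);           /* reads past dst until a zero byte is found */
    return 0;
}
```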
Result and Discussions: Taxonomy Evaluation
Evaluation 5: Evaluation of static analysis tools' effectiveness in detecting vulnerabilities based on the C Overflow Vulnerabilities Exploit Taxonomy
(Tools: SPLINT, RATS, ITS4, BOON, BLAST; each class was tested with both an intra-procedural and an inter-procedural program, and for every class the intra- and inter-procedural results were identical.)
Unsafe Function – detected by all five tools
Array Out-of-bound – detected by all five tools
Integer Range/Overflow – detected by all five tools
Return-into-LibC – not detected by any tool
Memory Function – detected by all five tools
Function Pointer / Pointer Aliasing – detected only by BLAST
Variable Type Conversion – detected only by BLAST
Pointer Scaling / Pointer Mixing – detected only by RATS and BLAST
Uninitialized Variable – not detected by any tool
Null Termination – not detected by any tool
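The intra-/inter-procedural distinction above refers to whether the tainted data and the overflowing call sit in the same function or are separated by a call boundary; the following hypothetical sketch of the two variants for the Unsafe Functions class uses illustrative names and sizes.

```c
/* Hypothetical sketch of intra- vs inter-procedural test variants for the
 * Unsafe Functions class. Function names and sizes are illustrative only. */
#include <string.h>

/* Intra-procedural: the tainted data and the unsafe call are in one function. */
void intra_procedural(const char *input)
{
    char buf[16];
    strcpy(buf, input);           /* overflow is visible within this function */
}

/* Inter-procedural: the unsafe call is reached through another function, so
 * the analysis must track the data and buffer size across the call boundary. */
static void copy_helper(char *dst, const char *src)
{
    strcpy(dst, src);             /* whether this overflows depends on the caller */
}

void inter_procedural(const char *input)
{
    char buf[16];
    copy_helper(buf, input);      /* the flaw spans two functions */
}

int main(void)
{
    const char *long_input = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"; /* 32 'A' characters */
    intra_procedural(long_input);
    inter_procedural(long_input);
    return 0;
}
```

A checker that reasons only within single functions can flag the first variant but needs inter-procedural tracking for the second, which is why the evaluation distinguishes the two program types.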
Result and Discussions: Taxonomy Evaluation
Evaluation 6: Evaluation of static analysis tools' effectiveness in detecting vulnerabilities in open-source code
(Format per program: analysis time in minutes; number of detections, split into false alarms / true alarms; detection rate = true alarms ÷ detections.)
SPLINT – Apache HTTPD: 280 min, 5 detections (5/0), 0%; Google Chrome: 180 min, 8 (7/1), 12.5%; MySQL CE: 300 min, 9 (7/2), 22.2%; Open VM Tool: 130 min, 8 (5/3), 37.5%; TPM Emulator: 20 min, 4 (4/0), 0%
RATS – Apache HTTPD: 270 min, 6 detections (6/0), 0%; Google Chrome: 200 min, 9 (6/3), 33.3%; MySQL CE: 330 min, 15 (13/2), 13.3%; Open VM Tool: 110 min, 5 (4/1), 20%; TPM Emulator: 35 min, 6 (6/0), 0%
ITS4 – Apache HTTPD: 290 min, 8 detections (7/1), 12.5%; Google Chrome: 230 min, 11 (10/1), 9.0%; MySQL CE: 310 min, 13 (8/5), 38.5%; Open VM Tool: 100 min, 9 (7/2), 22.2%; TPM Emulator: 20 min, 6 (6/0), 0%
BOON – Apache HTTPD: 315 min, 6 detections (6/0), 0%; Google Chrome: 190 min, 5 (4/1), 20%; MySQL CE: 400 min, 9 (6/3), 33.3%; Open VM Tool: 170 min, 13 (12/1), 7.7%; TPM Emulator: 25 min, 13 (11/2), 15.4%
BLAST – Apache HTTPD: 650 min, 24 detections (21/3), 12.5%; Google Chrome: 490 min, 14 (11/3), 21.4%; MySQL CE: 410 min, 29 (19/10), 34.5%; Open VM Tool: 320 min, 23 (8/15), 65.2%; TPM Emulator: 80 min, 14 (4/10), 71.4%
Result and Discussions: Summary
Phase 2 – Taxonomy Construction
Research Question (RQ): RQ 2 – How to identify and improve the understanding of C overflow vulnerabilities and prevent them from occurring again?
Research Objectives (RO): RO 3 – To compile and construct the criteria for a well-defined taxonomy; RO 4 – To construct a taxonomy specifically addressing C overflow vulnerabilities.
Phase Deliverables / Output (RR): RR 3 – Criteria of a well-defined taxonomy; RR 4 – Taxonomy of C overflow vulnerabilities exploit.
Phase 3 – Taxonomy Evaluation
Research Questions (RQ): RQ 4 – Is the taxonomy effective in improving the understanding of C overflow vulnerabilities? RQ 5 – Is the taxonomy comprehensive and useful?
Research Objectives (RO): RO 5 – To evaluate the taxonomy based on the criteria; RO 6 – To evaluate the criticality and relevance of C overflow vulnerabilities based on the taxonomy.
Phase Deliverables / Output (RR): RR 5 – Taxonomy validated; RR 6 – Significant findings of the research.
In summary, this research:
1. Consolidated and constructed a list of criteria for a well-defined taxonomy.
2. Constructed the C Overflow Vulnerabilities Exploit Taxonomy.
3. Performed five evaluations, presented as six evaluation results.
Based on the evaluations, it is concluded that:
1. The taxonomy is well-defined and in line with the criteria.
2. The classes in the C Overflow Vulnerabilities Exploit Taxonomy are significant and relevant.
Phase 1 – Theoretical Studies
Research Question (RQ): RQ 1 – Why do C overflow vulnerabilities still persist although they are common and have been known for more than two decades?
Research Objectives (RO): RO 1 – To identify the strengths and weaknesses of current mechanisms for detecting and preventing C overflow vulnerabilities; RO 2 – To identify the list of C overflow vulnerability classes.
Phase Deliverables / Output (RR): RR 1 – Strengths and weaknesses of current detection and prevention mechanisms; RR 2 – List of C overflow vulnerability classes.
Phase 2 – Taxonomy Construction
Research Question (RQ): RQ 2 – How to identify and improve the understanding of C overflow vulnerabilities and prevent them from occurring again?
Research Objectives (RO): RO 3 – To compile and construct the criteria for a well-defined taxonomy; RO 4 – To construct a taxonomy specifically addressing C overflow vulnerabilities.
Phase Deliverables / Output (RR): RR 3 – Criteria of a well-defined taxonomy; RR 4 – Taxonomy of C overflow vulnerabilities exploit.
Phase 3 – Taxonomy Evaluation
Research Questions (RQ): RQ 4 – Is the taxonomy effective in improving the understanding of C overflow vulnerabilities? RQ 5 – Is the taxonomy comprehensive and useful?
Research Objectives (RO): RO 5 – To evaluate the taxonomy based on the criteria; RO 6 – To evaluate the criticality and relevance of C overflow vulnerabilities based on the taxonomy.
Phase Deliverables / Output (RR): RR 5 – Taxonomy validated; RR 6 – Significant findings of the research.
Conclusion and Recommendation
Conclusion
1. C overflow vulnerabilities are still relevant.
2. There was no well-defined taxonomy focusing specifically on the complete set of C overflow vulnerabilities from a source-code perspective, aimed at improving the understanding and knowledge of C developers and addressing the root cause of the problem. This research therefore contributes such a taxonomy, the "C Overflow Vulnerabilities Exploit" Taxonomy, which the evaluations show to be helpful and useful.
3. Five evaluations were performed, demonstrating the significance and relevance of each class in the constructed taxonomy.
Recommendation
1. Further develop a method that uses the taxonomy as a guideline for improving static analysis tools.
2. Further evaluate the effectiveness of the taxonomy.
C Overflows Vulnerabilities Exploit Taxonomy And Evaluation on Static Analysis Tools - Mock Viva for Msc Studies
Nurul Haszeli Ahmad
Student Id: 2009625912
FSMK, UiTM Shah Alam
masteramuk@yahoo.com
Supervisor: Dr Syed Ahmad Aljunid (FSMK, UiTM Shah Alam)
Co-supervisor: Dr Jamalul-Lail Ab Manan (MIMOS Berhad)
C Overflows Vulnerabilities Exploit Taxonomy And Evaluation on Static Analysis Tools - Mock Viva for Msc Studies

  • 1. Master of Sciences Nurul Haszeli Ahmad Student Id: 2009625912 Supervisor: Dr Syed Ahmad Aljunid (FSMK, UiTM Shah Alam Co-Supervisor: Dr Jamalul-Lail Ab Manan (MIMOS Berhad)
  • 2.  Works on reliable and trustworthy system starts since late 70s  Extensively after Morris Worm  After three decades, reliable and trustworthy system are yet to achieve with software vulnerabilities still being reported and C overflow vulnerabilities is still one of the top ranked vulnerabilities  This thesis focus on understanding vulnerabilities via taxonomy as one of ways to improve system reliability and trustworthy  The thesis ◦ Taxonomy and Criteria of well-defined taxonomy ◦ Method to evaluate taxonomy ◦ Method to evaluate OS reliability ◦ Result of evaluating static analysis tools
  • 3. Introduction Review of Literature Research Methodology Results and Discussion Conclusion and Recommendation Q & A
  • 4.  Background of Study  Problem Statement  RQ & RO  Research Significance  Assumptions  Scope and Limitations
  • 5. Introduction – Background of Study • First vulnerability was discovered unintentionally by Robert Morris in 1988 • However the hype of the vulnerabilities only starts after 1996 after an article written and published by hackers known as Aleph One. • Since then, the vulnerabilities and exploitation moves to different stage. • 87% increase in terms of exploitation on vulnerabilities (CSM, 2009) • the intensity of attack on web application vulnerability (Cenzic, 2009) • 90% of web application is vulnerable with Adobe ranked the top contributor (Cenzic, 2010) • First sophisticated malware exploited Windows vulnerability reported (Symantec Corporation, 2010), (Falliere, et. al., 2011), (Chen, 2010) • The malware evolved
  • 6. Introduction – Background of Study • First vulnerability was discovered unintentionally by Robert Morris in 1988 • However the hype of the vulnerabilities only starts after 1996 after an article written and published by hackers known as Aleph One. • Since then, the vulnerabilities and exploitation moves to different stage. • 87% increase in terms of exploitation on vulnerabilities (CSM, 2009) • the intensity of attack on web application vulnerability (Cenzic, 2009) • 90% of web application is vulnerable with Adobe ranked the top contributor (Cenzic, 2010) • First sophisticated malware exploited Windows vulnerability reported (Symantec Corporation, 2010), (Falliere, et. al., 2011), (Chen, 2010) • The malware evolved 4842 4644 5562 4814 6253 0 1000 2000 3000 4000 5000 6000 7000 2006 2007 2008 2009 2010 No. of Vulnerabilities References: Symantec Corporation, Internet Security Threat Report, Volume 16, 2011
  • 7. Introduction – Background of Study • First vulnerability was discovered unintentionally by Robert Morris in 1988 • However the hype of the vulnerabilities only starts after 1996 after an article written and published by hackers known as Aleph One. • Since then, the vulnerabilities and exploitation moves to different stage. • 87% increase in terms of exploitation on vulnerabilities (CSM, 2009) • the intensity of attack on web application vulnerability (Cenzic, 2009) • 90% of web application is vulnerable with Adobe ranked the top contributor (Cenzic, 2010) • First sophisticated malware exploited Windows vulnerability reported (Symantec Corporation, 2010), (Falliere, et. al., 2011), (Chen, 2010) • The malware evolved #ofvulnerabilities Year References: Stefan Frei (NSS Labs), Analyst Brief: Vulnerability Threat Trends – A decade in view, Transition on the way, Feb 04, 2013
  • 8. Introduction – Background of Study References: Microsoft Corporation, Software Vulnerability Exploitation Trends, Technical Report, 2013. http://www.microsoft.com/en-us/download/details.aspx?id=39680
  • 9. Introduction – Background of Study References: MITRE Corporation, CVE Details – Vulnerabilities by type and Year [Online], Published at http://www.cvedetails.com/vulnerabilities-by-types.php, 2014. Retrieved on April 20th, 2014.
  • 10. Introduction – Background of Study • The number of vulnerabilities in applications is higher than in hardware and networks (SANS Institute, Top Cyber Security Risks - Vulnerability Exploitation Trends, 2009) • 80% are ranked as high or medium severity (Microsoft Corporation, Microsoft Security Intelligence Report, 2011) • 50% are contributed by widely used applications, compared to web-based vulnerabilities (IBM, 2011; HP, 2011) • 3–15% (or more) of vulnerabilities are capable of triggering memory overflow (Cenzic, 2010; Cisco, 2010; HP, 2011; SANS Institute, 2010; NIST, 2011)
  • 11. Introduction – Background of Study References: Stefan Frei (NSS Labs), Analyst Brief: Vulnerability Threat Trends – A decade in view, Transition on the way, Feb 04, 2013
  • 12. Introduction – Background of Study Reference: MITRE Corporation, Current CVSS Score Distribution For All Vulnerabilities, CVE Details [online]. 2014, accessed on April 20, 2014 at http://www.cvedetails.com/
  • 14. Introduction – Background of Study Reference: Microsoft Corporation, Software Vulnerability Exploitation Trends, 2013
  • 15. Introduction – Problem Statement • Vulnerabilities continue to persist • Only 0.1% of total vulnerabilities are needed to cause problems to a computer system (Symantec Corporation, 2013)
  • 17. Introduction – Problem Statement • Vulnerabilities continue to persist • Only 0.1% of total vulnerabilities are needed to cause problems to a computer system (Symantec Corporation, 2013) • Vulnerabilities exist in software, hardware and networks • Software vulnerabilities are due to programming errors, misconfiguration, invalid process flow and the nature of the programs themselves (Beizer, 1990), (Aslam, 1995), (Longstaff et al, 1997), (Howard and Longstaff, 1998), (Krsul, 1998), (Piessens, 2002), (Vipindeep and Jalote, 2005), (Alhazmi et al, 2006), (Moore, 2007) • There are many kinds of programming errors • The impacts of these errors are abnormal software behaviour, integrity errors, or being utilized for exploitation • The most persistent is the classical C overflow vulnerability • How serious is it? • Viega & McGraw dedicated a chapter to it • Howard, LeBlanc and Viega discussed it in two books • MITRE Corporation and SANS released a report • Microsoft and Apple stress it in their development processes • Work to resolve it was started by Wagner and has progressed since then • Yet C overflow vulnerabilities continue to persist.
  • 18. Introduction – Problem Statement C overflow vulnerabilities have persisted for more than three decades despite the many works that have been done (in the areas of analysis tools, policies and knowledge improvement). Therefore, there are still gaps in these areas which require further analysis and proposed solutions, with the objective of solving this prolonged security nightmare.
  • 19. Introduction … continued Research Question Research Objective 1. Why do C overflow vulnerabilities still persist although they are common knowledge and there are numerous methods and tools available to overcome them? 1. To identify the reasons why C overflow vulnerabilities, after more than three decades, still persist although there are various methods and tools available. 2. How can the understanding and knowledge of software developers on C overflow vulnerabilities be improved from the source code perspective? 2. To construct a well-defined C overflow vulnerabilities exploit taxonomy from the source code perspective. 3. How can the well-defined C overflow vulnerabilities taxonomy from the source code perspective be evaluated? 3. To evaluate the constructed taxonomy against the well-defined criteria. 4. How effective are static analysis tools in detecting C overflow vulnerability exploits based on the well-defined taxonomy? 4. To evaluate the effectiveness of static analysis tools in detecting C overflow vulnerabilities based on the classes in the constructed taxonomy. 5. Which Windows-based operating systems are critical and vulnerable to exploitation using C overflow vulnerabilities? 5. To evaluate the criticality of Windows-based operating systems with respect to their capability to avoid C overflow vulnerability exploits.
  • 20. Introduction … continued Research Significance: C overflow vulnerabilities are still relevant The C language is widely used in various mission-critical applications It eases the understanding of software developers in writing secure code and of security analysts in analysing code It directs the focus of security research initiatives onto the most critical and relevant types of C overflow vulnerabilities It determines the identified static analysis tools' capabilities, strengths and weaknesses
  • 21. Introduction … continued Assumptions: Most exploits happen on 32-bit OSes, especially Windows The number of exploits on Linux or Unix vulnerabilities is relatively small The major concern is 32-bit OSes, as 64-bit OSes are claimed to be more secure Other programming languages are considered far less critical
  • 22. Introduction … continued Scope and Limitations: 1. Focus on 32-bit Windows 2. Limited to Windows XP and 7, based on market share 3. Focus on the C programming language 4. Evaluation is limited to five freely available open-source applications 5. Evaluation is limited to five freely available static analysis tools 6. Focus on static analysis
  • 23.  Software Vulnerabilities  C Overflow vulnerabilities  Static Analysis and Methods  Dynamic Analysis: Another Perspective  Structuring the Knowledge of C Overflow Vulnerabilities  Conclusion
  • 24. Review of Literature – Software Vulnerabilities • Software vulnerabilities are security flaws that exist within software due to the use of unsafe functions, insecure coding, insufficient validation, and/or improper exception handling (Stoneburner, Goguen, & Feringa, 2002), (Viega & McGraw, 2002), (Seacord R. , 2005) and (Kaspersky Lab ZAO, 2013) • Vulnerable software can cause harm, or be maliciously used to cause harm, especially to human beings (Kaspersky Lab ZAO, 2013), (OWASP Organization, 2011), as happened in Poland (Baker & Graeme, 2008), in Iran (Chen T. M., 2010), and in the case of the Toyota brake failure (Carty, 2010) • There are many types of software vulnerabilities, depending on the perspective (Beizer, 1990), (Aslam, 1995), (Longstaff, et al., 1997), (Krsul, 1998), (Piessens, 2002), (Vipindeep & Jalote, 2005), (Alhazmi, Woo, & Malaiya, 2006), (Moore H. D., 2007), (Howard, LeBlanc, & Viega, 2010), (SANS Institute, 2010), (OWASP Organization, 2011), (Kaspersky Lab ZAO, 2013), (MITRE Corporation, 2013), (Wikipedia Foundation, Inc, 2011) • The most mentioned and most critical – programming errors (Cenzic Inc., 2010), (Secunia, 2011), (NIST, 2014), (OSVDB, 2014) • Impacts of programming errors – abnormal behaviour, incorrect results, Denial of Service (DoS) and/or memory violations (Chen T. M., 2010), (IBM X-Force, 2011), (HewlettPackard, 2011), (Baker & Graeme, 2008), (Carty, 2010), (Telang & Wattal, 2005) References: (Microsoft, 2009), (Android Developer, 2012), (Stack Exchange Inc, 2012), (Oracle Corporation, 2012), (Apple Inc, 2012), (Symantec Corporation, 2013), (Kaspersky Lab, 2013), (Gong, 2009), (Fritzinger & Mueller, 1996), (Chechik, 2011), (Mandalia, 2011), (Kundu & Bertino, 2011)
  • 25. Review of Literature – Software Vulnerabilities • There are many ways of exploiting programming errors – XSSi, SQLi, Command Injection, Path Traversal and overflow exploitation (Martin B. , Brown, Parker, & Kirby, 2011), (Siddharth & Doshi, 2010), (Shariar & Zulkernine, 2011), (IMPERVA, 2011) • Overflows happen in PHP and Java, and persistently in C (Cenzic Inc, 2009), (MITRE Corporation, 2012), (Mandalia, 2011), (TIOBE Software, 2012), (Tiwebb Ltd., 2007), (DedaSys LLC, 2011) and (Krill, 2011) Comparison Matrix (C vs Java): Exploitation Impact – C: used in mission-critical systems with a proven track record of exploitation; Java: used in various applications but yet to see any serious exploitation impact. Persistence and Dominance – C: since the late 80s; Java: since the early 2000s, with a low number of exploits. Severity Perspective – C: 60% medium to critical (MITRE, IBM, Secunia & NIST); Java: 70% medium to critical (OSVDB). Defence and Prevention – C: insecure functions, limitations in defensive and preventive mechanisms, all preventive tools developed to detect and prevent overflow vulnerabilities; Java: JVM, applets & type safety, frequent patches/fixes, all preventive tools focus on coding standards and policy. References: (Microsoft, 2009), (Android Developer, 2012), (Stack Exchange Inc, 2012), (Oracle Corporation, 2012), (Apple Inc, 2012), (Symantec Corporation, 2013), (Kaspersky Lab, 2013), (Gong, 2009), (Fritzinger & Mueller, 1996), (Chechik, 2011), (Mandalia, 2011), (Kundu & Bertino, 2011)
  • 26. Review of Literature – C Overflow Vulnerabilities • C overflow vulnerabilities are persistent, with a proven track record • The first known C overflow vulnerability exploit technique – the format string attack (One, 1996), (Longstaff et al, 1997) • Well known for more than 3 decades BUT still persist (SANS Institute, 2010), (Martin, Brown, Paller, & Kirby, 2010) and (MITRE Corporation, 2013) • The first identified C overflow vulnerabilities exploit class – unsafe functions (Wagner D. A., 2000), (Zitser, 2003), (Chess & McGraw, 2004), (Kratkiewicz K. J), (Seacord R. , 2005), (Howard, LeBlanc, & Viega, 2010), etc.
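To make the two patterns named on this slide concrete, here is a minimal C sketch (illustrative only; buffer names and sizes are invented and it is not code from the thesis) of an unsafe-function copy and a format-string call:

```c
#include <stdio.h>
#include <string.h>

static void unsafe_copy(const char *input)
{
    char buf[8];
    /* strcpy() performs no bounds check: any input longer than 7
     * characters writes past 'buf' and corrupts adjacent stack memory. */
    strcpy(buf, input);
    printf("copied: %s\n", buf);
}

static void format_call(const char *input)
{
    /* Passing external data as the format string lets input such as
     * "%x %x %n" read or write memory through printf's conversions.
     * The safe form is printf("%s", input). */
    printf(input);
    printf("\n");
}

int main(void)
{
    unsafe_copy("OK");        /* fits in buf, behaves as expected */
    format_call("hello");     /* harmless input, unsafe pattern   */
    /* unsafe_copy("AAAAAAAAAAAAAAAA"); would overflow buf */
    return 0;
}
```

Bounded alternatives such as snprintf, and passing external data only as arguments to a constant format string, avoid both patterns.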
  • 27. Review of Literature – C Overflow Vulnerabilities Coverage of C overflow vulnerability classes by expert (UF = Unsafe Function, AOB = Array Out-of-Bound, RIL = Return-into-LibC, IO = Integer Overflow, MF = Memory Function, FP = Function Pointer, VTC = Variable Type Conversion, UV = Uninitialized Variable, NT = Null Termination): Expert UF AOB RIL IO MF FP VTC UV NT Wagner et. al. (2000) ϴ Viega et.al. (2000) ϴ √ Grenier (2002) ϴ √ √ √ Zitser (2003) ϴ √ Chess and McGraw (2004) √ Tevis and Hamilton (2004) ϴ √ √ √ Zhivich (2005) ϴ √ Kratkiewicz (2005) ϴ √ Sotirov (2005) ϴ √ √ Tsipenyuk et.al. (2005) ϴ √ √ ϴ √ √ √ Alhazmi et. al. (2006) √ Kolmonen (2007) √ √ √ √ Moore (2007) ϴ √ √ Akritidis et.al. (2008) √ √ √ Pozza and Sisto (2008) √ √ √ √ √ Nagy and Mancoridis (2009) √ √ Shahriar and Zulkernine (2011) ϴ √
  • 28. Review of Literature – C Overflow Vulnerabilities • Most of these 9 classes carry at least medium severity and impact.
  • 29. Review of Literature – C Overflow Vulnerabilities Class of C Overflow Vulnerability (Severity / Complexity / Impact): Unsafe Function – Critical / Low / Complete; Array out-of-bound – Critical / Low / Complete; Return-into-libC – Critical / High / Partial; Integer Overflow – Critical / Low / Complete; Memory Functions – Critical / Medium / Partial; Function Pointer – Moderate / Medium / Partial; Variable Type Conversion – Moderate / Medium / Complete; Uninitialized Variable – Moderate / Low / Complete; Null Termination – Moderate / Low / Partial
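As a hedged illustration of why the Integer Overflow class is rated critical yet low in complexity, the sketch below (the function pack_records and its sizes are invented for this example) shows a size calculation that can wrap around and silently undersize a heap allocation:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: packs 'count' records of 'rec_size' bytes each. */
static char *pack_records(const char *src, uint32_t count, uint32_t rec_size)
{
    uint32_t total = count * rec_size;   /* may wrap around 2^32 */
    char *buf = malloc(total);           /* a wrapped total yields a tiny buffer */
    if (buf == NULL)
        return NULL;
    for (uint32_t i = 0; i < count; i++)
        /* Writes count * rec_size bytes overall; with a wrapped 'total'
         * this runs far past the end of 'buf'. A safe version rejects
         * count > UINT32_MAX / rec_size before multiplying. */
        memcpy(buf + (size_t)i * rec_size, src, rec_size);
    return buf;
}

int main(void)
{
    char sample[4] = "abc";
    char *p = pack_records(sample, 4, 4);   /* benign sizes: 16 bytes */
    if (p != NULL) {
        printf("first record: %.3s\n", p);
        free(p);
    }
    return 0;
}
```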
  • 30. Review of Literature – C Overflow Vulnerabilities • An additional class is Pointer Scaling / Pointer Mixing (NIST, 2007), (Seacord R. C., 2008), (Black et al, 2011), (OWASP, 2012)
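A minimal sketch of this additional Pointer Scaling / Pointer Mixing class (the array and offsets are invented for illustration): C scales pointer arithmetic by the pointed-to type, so treating an element offset as a byte offset lands outside the object.

```c
#include <stdio.h>

int main(void)
{
    int table[4] = {10, 20, 30, 40};
    int *p = table;

    /* Correct: pointer arithmetic is already scaled by the element type,
     * so p + 1 moves one int (4 bytes on most platforms) forward. */
    int *next = p + 1;

    /* Bug pattern: thinking in bytes, the programmer writes p + sizeof(int),
     * which C scales again -- it moves sizeof(int) ELEMENTS (16 bytes here),
     * i.e. right past the end of 'table'. Any dereference or further
     * arithmetic from 'wrong' reads or writes out of bounds. */
    int *wrong = p + sizeof(int);

    printf("next points to %d\n", *next);
    printf("wrong is %td elements past the start of a 4-element array\n",
           wrong - p);
    return 0;
}
```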
  • 31. Review of Literature – Static Analysis Tools and Methods – Progress by year: King, 1970 – debugging and understanding programs; pattern-matching lexical analysis (first method); implemented in grep. Cousot, 1977 – implements Abstract Interpretation over the Abstract Syntax Tree; purpose – debug & understand. Andersen, 1994 – introduces Inter-Procedural & Pointer Analysis; purpose – debug & understand. Wagner, 2000 – Integer Range Analysis; array out-of-bound / unsafe functions; BOON. Viega et al, 2000 – Lexical Analysis with brief Data Flow; unsafe functions; ITS4. Larochelle & Evans, 2001 – annotation-based; all overflow classes. Ball & Rajamani, 2002 – AI (utilizing CFG, SDG and PDG); all overflow classes; SLAM. Cousot, 2005 – advanced AI (including complex mathematical models); all overflow classes; ASTREE. Venet, 2005 – Symbolic Analysis + IRA; utilizes only the CFG; all overflow classes. Burgstaller, 2006 – Symbolic Analysis + new rules and algorithms; all overflow classes. Ball, 2006 – Software Model Checking (based on SA); utilizes CFG + SDG and builds a software model; all overflow classes. Akritidis et al, 2008 – AI (points-to analysis); embeds protection for identified potential vulnerabilities at compile time; all overflow classes. Pozza and Sisto, 2008 – DFA + IRA; combines static and dynamic; all overflow classes. Wang et al, 2009 – DFA + AB; combines static and dynamic; all overflow classes. NEC Lab, 2004–2014 – AI, IRA, SMC, Bounded Model Checking (SAT capability); all overflow classes.
  • 32. Review of Literature – Static Analysis Tools and Methods Techniques used per researcher (LA = Lexical Analysis, Inter-PA = Inter-Procedural Analysis, Intra-PA = Intra-Procedural Analysis, AI = Abstract Interpretation, DFA = Data Flow Analysis, SA = Symbolic Analysis, IRA = Integer Range Analysis, AB = Annotation-Based, TM = Type Matching, SMC = Software Model Checking, BMC = Bounded Model Checking): Technique Researcher LA Inter-PA Intra-PA AI DFA SA IRA AB TM SMC BMC Wagner (2000) √ Viega et. al. (2000) √ √ √ Larochelle and Evans (2001) √ Venet (2005) √ √ Ball et. al. (2006) √ √ Akritidis et. al. (2008) √ Pozza and Sisto (2008) √ √ √ √ Wang et. al. (2009) √ √ Nagy and Mancoridis (2009) √ √ √ √ NEC Lab. (2004-2014) √ √ √ √ √
  • 33. Review of Literature – Static Analysis Tools and Methods No. Tool Latest Version Latest Release DateTechnique Language COV Coverage 1 ITS4 (Cigital, 2011) 1.1.1 February 17, 2000 LA C / C++ Unsafe Function Return-into-lib 2 BOON (Wagner D., 2002) 1.0 July 5, 2002 IRA C Array Out-of-bound Unsafe Function 3 ARCHER Not available 2003 SA, DFA, InterPA C Array Out-of-bound Unsafe Function 4 CSSV (Dor, Rodeh, & Sagiv, 2003) Not Available 2003 DFA, InterPA, IRA, AB C Array Out-of-bound, Unsafe Function 5 CQual (Foster, 2004) 0.981 June 24, 2004 DFA, AB, InterPA C Unsafe Function, Function Pointer, 6 MOPS (Chen & Wagner, 2002) 0.9.1 2005 SMC C No specific (rules dependent) 7 PScan (DeKok, 2007) 1.3 January 4, 2007 LA C Unsafe Function (limited to printf) 8 FlawFinder (Wheeler, 2007) 1.27 January 16, 2007 LA C / C++, Java No specific (list dependent) 9 UNO (Bell Labs, 2007) 2.13 October 26, 2007 DFA, SA, SMC C (ANSI C) Uninitialized variables Function pointer Array Out-of-bound Memory Function 10 Saturn (Aiken,et al., 2008) 1.2 2008 InterPA, Intra-PA, DFA, SA C Function Pointer
  • 34. Review of Literature – Static Analysis Tools and Methods No. Tool Latest Version Latest Release DateTechnique Language COV Coverage 11 GCC Security Analyzer (Pozza & Sisto, 2008) Not available 2008 DFA, InterPA, IntraPA, AB, IRA C Unsafe Function, Array Out-of-Bound, Integer Overflow, Variable Type Conversion, Memory Function 12 C Global Surveyor (Brat & Thompson, 2008) Not Available 2008 AI, DFA, Program Slicing C Uninitialized Variable, Function Pointer, Array Out-of-bound 13 BLAST (Henzinger, Beyer, Majumdar, & Jhala, 2008) 2.5 July 11, 2008 DFA, SMC C No specific (model dependent) 14 PC-Lint (Gimpel Software, 2013) 9.0 September, 2008 LA, DFA, AB C / C++ No specific (rule dependent) 15 Splint (National Science Foundation, 2010) 3.1.2 August 5, 2010 AB, InterPA ANSI C No specific (annotation dependent) 16 RATS (HP Fortify, 2011) 2.3 2011 LA C / C++, Perl, PHP, Python Array Out-of-bound Unsafe Function
  • 35. Review of Literature – Static Analysis Tools and Methods No. Tool Latest Version Latest Release Date Technique Language COV Coverage 17 PolySpace (MathWorks, Inc, 2014) V8.2 (R2011b) 2011 AI C Integer Overflow, Array Out-of-bound, Uninitialized Variable, Function Pointer / Pointer Aliasing, and Variable Type Conversion 18 F-Soft (platform) (NEC Laboratories, 2012) Not Available 2011 AI, DFA, BMC, SA, SMC C / C++ Function Pointer, Memory Function, Return-into-lib, Variable type conversion, Null termination, Unsafe function 19 VARVEL (Hashimoto & Nakajima, 2009) Not Available 2011 DFA, SMC C / C++ Function Pointer, Array Out-of-bound, Unsafe Function, Memory Function
  • 36. Review of Literature – Static Analysis Tools and Methods No. Tool Latest Version Latest Release DateTechnique Language COV Coverage 20 CodeSonar (GrammaTech, Inc, 2011) Not Available September 27, 2011 DFA, SMC, InterPA C Function pointer, Integer overflow, Memory function, Array out-of-bound, Unsafe function, Variable Type Conversion, Uninitialized Variable 21 CodeWizard (Parasoft, 2011) 9.2.2.17 December 12, 2011 LA, DFA C / C++ All (No specific) 22 ASTREE (Cousot P. , Cousot, Feret, Miné, & Rival, 2006) 11.12 December 15, 2011 AI, InterPA C / C++ Integer Overflow, Array Out-of-bound, Function Pointer, Uninitialized Variable 23 SLAM (Microsoft Corporation, 2011) Not Available 2011 SMC C/C++ No specific (rule dependent)
  • 37. Review of Literature – Static Analysis Tools and Methods No. Methods Strength Weaknesses 1. LA Basic and simple method Ignore program semantics and pattern/rules dependent 2. InterPA Understand function relationship Dependent on other method 3. IntraPA Understand the internal process Ignore program semantics and dependent on other 4. AI 1. Semantics based 2. Use approximation to reduce analysis time 1. Complex algorithm and process hence ignore some properties during the analysis. 2. Approximation may lead to false alarm. 3. Is not scalable for large programs. 5. DFA 1. Understand the flow 2. Provide a clear direction of vulnerabilities flow path and reduce false alarm 1.Suffer overhead due to extensive in-depth analysis, 2.Ignore some properties or vulnerabilities and hence reduce precision. 6. SA Represent the actual program in symbolic execution form and implements refinement process. 1 Input and model dependent 2. Overhead due to many refinement processes. 7. IRA 1.Straight forward. 2.Strong constraint-based algorithm. 1.Ignores program semantics 2.Limited scope on C overflow vulnerabilities. References: (Aho, Lam, Sethi, & Ullman, 1986), (Andersen, 1994), (Bae, 2003), (Ball, et al., 2006), (Balakrishnan, Sankaranarayanan, Ivančić, & Gupta, 2009), (Bakera, et al., 2009), (Biere, Cimatti, Clarke, & Strichman, 2003), (Burgstaller, Scholz, & Blieberger, 2006), (Chess & McGraw, 2004), (Clarke E. M., 2006), (Clarke E., 2009), (Clarke, Biere, Raimi, & Zhu, 2001), (Cousot & Cousot, 1977), (Cousot P. , et al., 2005), (D’Silva, Kroening, & Weissenbacher, 2008), (Deursen & Kuipers, 1999), (Dor, Rodeh, & Sagiv, 2001), (Erkkinen & Hote, 2006), (Evans, Guttag, Horning, & Tan, 1994), (Ferrara, 2010), (Ferrara, Logozzo, & Fanhdrich, 2008), (Gopan & Reps, 2007), (Haghighat & Polychronopoulos, 1993), (Holzmann G. J., 2002), (Ivančić, et al., 2005), (Jhala & Majumdar, 2009), (Kolmonen, 2007), (Kunst, 1988), (Ku, Hart, Chechik, & Lie, 2007), (Larochelle & Evans, 2001), (Li & Cui, 2010), (Lim, Lal, & Reps, 2009), (Logozzo, 2004), (Parasoft Corporation, 2009), (Pozza & Sisto, 2008), (Qadeer & Rehof, 2005), (Rinard, Cadar, Dumitran, Roy, & Leu, 2004), (Sotirov, 2005), (Tevis & Hamilton, 2004), (Vassev, Hinchey, & Quigley, 2009, (Venet, 2005), (Viega J. , Bloch, Kohno, & McGraw, 2000), (Wang, Guo, & Chen, 2009), (Wang, Zhang, & Zhao, 2008), (Wagner, Foster, Brewer, & Aiken, 2000), (Wenkai, 2002), (Zafar & Ali, 2007), (Zaks, et al., 2006)
  • 38. Review of Literature – Static Analysis Tools and Methods No. Methods Strength Weaknesses 8. AB 1.Utilize the annotation written in the code and hence reduce the complexity to convert and analyse. 1.Dependent on correct syntax of annotation 2.Depends on willingness of developer to annotate. 9. Type Matching Semantically understand the type conversion 1. Depends on valid source code abstraction. 2. Limited to two types of C overflows vulnerabilities that relates to type conversion. 10. SMC Semantically understand the programs. Precise analysis with counterexample algorithm. 1. Suffer state explosion and model dependent. 2. Not scalable for large or complex source code. Limited coverage on C overflow vulnerabilities 11. BMC 1.Semantically understand the programs. 2.Precise analysis with counterexample capability. 3.Less state to analyse compare to software model checking. 4.Reduce state explosion 1.Still has the possibility of state explosion. 2.Model dependent. 3.Not scalable for large and complex source code. 4.Limited coverage on C overflow vulnerabilities References: (Aho, Lam, Sethi, & Ullman, 1986), (Andersen, 1994), (Bae, 2003), (Ball, et al., 2006), (Balakrishnan, Sankaranarayanan, Ivančić, & Gupta, 2009), (Bakera, et al., 2009), (Biere, Cimatti, Clarke, & Strichman, 2003), (Burgstaller, Scholz, & Blieberger, 2006), (Chess & McGraw, 2004), (Clarke E. M., 2006), (Clarke E., 2009), (Clarke, Biere, Raimi, & Zhu, 2001), (Cousot & Cousot, 1977), (Cousot P. , et al., 2005), (D’Silva, Kroening, & Weissenbacher, 2008), (Deursen & Kuipers, 1999), (Dor, Rodeh, & Sagiv, 2001), (Erkkinen & Hote, 2006), (Evans, Guttag, Horning, & Tan, 1994), (Ferrara, 2010), (Ferrara, Logozzo, & Fanhdrich, 2008), (Gopan & Reps, 2007), (Haghighat & Polychronopoulos, 1993), (Holzmann G. J., 2002), (Ivančić, et al., 2005), (Jhala & Majumdar, 2009), (Kolmonen, 2007), (Kunst, 1988), (Ku, Hart, Chechik, & Lie, 2007), (Larochelle & Evans, 2001), (Li & Cui, 2010), (Lim, Lal, & Reps, 2009), (Logozzo, 2004), (Parasoft Corporation, 2009), (Pozza & Sisto, 2008), (Qadeer & Rehof, 2005), (Rinard, Cadar, Dumitran, Roy, & Leu, 2004), (Sotirov, 2005), (Tevis & Hamilton, 2004), (Vassev, Hinchey, & Quigley, 2009, (Venet, 2005), (Viega J. , Bloch, Kohno, & McGraw, 2000), (Wang, Guo, & Chen, 2009), (Wang, Zhang, & Zhao, 2008), (Wagner, Foster, Brewer, & Aiken, 2000), (Wenkai, 2002), (Zafar & Ali, 2007), (Zaks, et al., 2006)
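As a hedged illustration of the lexical analysis weakness listed above (pattern matching that ignores program semantics), both functions in this invented sketch contain the strcpy token, so a purely lexical scanner typically reports both, even though only the first can actually overflow; distinguishing them requires the semantics-aware methods in the table (data flow, range analysis, and so on).

```c
#include <string.h>

/* Flagged and genuinely dangerous: the length of user_input is unknown. */
void risky(char *dst, const char *user_input)
{
    strcpy(dst, user_input);
}

/* Also flagged by token matching, although the copy can never overflow:
 * the source is a 6-byte constant and the destination holds 16 bytes. */
void provably_safe(void)
{
    char dst[16];
    const char fixed[] = "hello";
    strcpy(dst, fixed);
    (void)dst;
}

int main(void)
{
    char out[32];
    risky(out, "sample");   /* safe here only because this caller knows the sizes */
    provably_safe();
    return 0;
}
```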
  • 39. Review of Literature – Static Analysis Tools and Methods No. Tool Strength Limitation COV Coverage 1 FlawFinder Faster and supported with risk level assessment High false alarms and limited type of C overflows vulnerabilities Unsafe functions (database dependent) 2 RATS Faster, supported multiple language, and memory location identification High false alarms and limited type of C overflows vulnerabilities Unsafe functions (database dependent) 3 BOON Efficient in range analysis of variable and does not dependent on database or list for detection. High false alarm due to ignorance of program semantics Unsafe functions and array out-of-bound. 4 ARCHER Symbolically understand the source code and ability to analyze large source code. Still produce false alarm, does not understand C language completely, and limited vulnerabilities Unsafe functions and array out-of-bound 5 MOPS Analyzing model of program which constructed to the closest possible of execution form for precise detection Complexity of modelling No specific C overflows vulnerabilities (model dependent) 6 UNO Efficient in analyzing complex program and low false alarm with counterexample capability State explosion and limited vulnerabilities Uninitialized variables, function pointer, and array out-of-bound 7 F-SOFT Combines many technique that increase detection capability and continuous improvement from NEC Labs. Still suffer state explosion and produce false alarms. Time consuming due to many analysis stage. Unsafe functions, function pointers, memory functions, return-into-libC, variable type conversion, and null termination. References: (Tevis & Hamilton, 2004), (Kolmonen, 2007), (Zafar & Ali, 2007),(Viega J. , Bloch, Kohno, & McGraw, 2002), (Sotirov, 2005), (HP Fortify, 2011), (Chess & McGraw, 2004), (Zitser, Lippmann, & Leek, 2004), (Xie, Chou, & Engler, 2003), (Lim, Lal, & Reps, 2011), (Pozza & Sisto, 2008), (Engler & Musuvathi, 2004), (Li & Cui, 2010), (Holzmann G. J., 2002), (Jhala & Majumdar, 2009), (Balakrishnan, Sankaranarayanan, Ivančić, & Gupta, 2009), (Ganai, et al., 2008), (Balakrishnan, et al., 2010) and (Ivančić, et al., 2005), (Xie & Aiken, 2005), (Wang, Gupta, & Ivančić, 2007), (Ku, Hart, Chechik, & Lie, 2007), (Jhala & McMillan, 2005), (Henzinger T. A., Jhala, Majumdar, & Qadeer, 2003), (Beyer, Henzinger, Jhala, & Majumdar, 2007), (Marchenko & Abrahamsson, 2007), (Plösch, Gruber, Pomberger, Saft, & Schiffer, 2008) and (Anderson J. L., 2005), (Wang, Ding, & Zhong, 2008), (Evans & Larochelle, 2002), (Karthik & Jayakumar, 2005), (Venet, 2005), (Ball & Rajamani, 2002), (Microsoft Corporation, 2014), (Microsoft Corporation, 2011), (NIST, 2012).
  • 40. Review of Literature – Static Analysis Tools and Methods No. Tool Strength Limitation COV Coverage 9 GCC Security Analyzer Reduce time by implementing the analysis in compiler and high detection rate. Compilers overhead and still produce false alarm. Limited type of C overflows vulnerabilities. Unsafe function, array out-of- bound, integer overflow, variable type conversion, and memory function 10 BLAST Combines variety of abstraction method, refinement methodology, and analysis algorithm. Suffer state explosion, produce false alarm, and dependent on defined properties. No specific C overflows vulnerabilities (model dependent) 11 PC-Lint Lots of error message to help user understand the errors found and implementation of data flow analysis and annotation based to reduce false alarms. Faster analysis time. Dependent on rules and standard defined to verify the source code. Limited capability due to inheritance of lexical analysis weaknesses. No specific C overflows vulnerabilities (rule dependent) 12 SPLINT Very fast and low false alarm (with proper implementation of code annotation) Limited vulnerabilities (dependent on rule and annotation). False alarm can be very high when code is incorrectly annotated or limited rule. No specific C overflows vulnerabilities (rule and annotation dependent) 13 ASTREE Precisely detecting four types of C overflows vulnerabilities. Low false alarm. Scalability issues and limited to only four types of C overflows vulnerabilities. Integer overflow, array out-of- bound, function pointer, and uninitialized variable 14 SLAM Very precise in detecting violation of device drivers. Model dependent and limited to small size programs. No specific C overflows vulnerabilities (model dependent) References: (Tevis & Hamilton, 2004), (Kolmonen, 2007), (Zafar & Ali, 2007),(Viega J. , Bloch, Kohno, & McGraw, 2002), (Sotirov, 2005), (HP Fortify, 2011), (Chess & McGraw, 2004), (Zitser, Lippmann, & Leek, 2004), (Xie, Chou, & Engler, 2003), (Lim, Lal, & Reps, 2011), (Pozza & Sisto, 2008), (Engler & Musuvathi, 2004), (Li & Cui, 2010), (Holzmann G. J., 2002), (Jhala & Majumdar, 2009), (Balakrishnan, Sankaranarayanan, Ivančić, & Gupta, 2009), (Ganai, et al., 2008), (Balakrishnan, et al., 2010) and (Ivančić, et al., 2005), (Xie & Aiken, 2005), (Wang, Gupta, & Ivančić, 2007), (Ku, Hart, Chechik, & Lie, 2007), (Jhala & McMillan, 2005), (Henzinger T. A., Jhala, Majumdar, & Qadeer, 2003), (Beyer, Henzinger, Jhala, & Majumdar, 2007), (Marchenko & Abrahamsson, 2007), (Plösch, Gruber, Pomberger, Saft, & Schiffer, 2008) and (Anderson J. L., 2005), (Wang, Ding, & Zhong, 2008), (Evans & Larochelle, 2002), (Karthik & Jayakumar, 2005), (Venet, 2005), (Ball & Rajamani, 2002), (Microsoft Corporation, 2014), (Microsoft Corporation, 2011), (NIST, 2012).
  • 41. Review of Literature – Static Analysis Tools and Methods In Summary 1. There are 11 methods known so far 2. More than 40 static analysis tools have been developed 3. C overflow vulnerabilities continue to persist, because: 1. All static analysis methods and tools focus on solving the symptoms of C overflow vulnerabilities rather than their root cause. 2. Although there are tools that interpret the source code, due to a failure to understand the root cause and the structures of C, the tools have yet to understand the actual problem. 3. This impacts the precision and effectiveness of the methods and tools. Can we solve the problem with Dynamic Analysis?
  • 42. Review of Literature – Dynamic Analysis Dynamic analysis: 1. Analyzes the program during the program's execution (Li and Chiueh, 2007) 2. Because of that, it analyzes only the executed paths (Ernst, 2003) 3. Tools that implement dynamic analysis – SigFree (Wang et al 2008), StackGuard (Lhee & Chapin, 2002), StackShield (Lhee & Chapin, 2002), CCured (Sidiroglou & Keromytis, 2004), etc. Strengths: 1. Source code independent (Ernst, 2003), (Goichi et al, 2005), (Cornell, 2008), (Wang et al, 2008) 2. Because of that, and because only executed paths are analyzed, it analyzes only the important paths (Ernst, 2003). Hence, it reduces time and false positives (Graham, Leroux, & Landry, 2008), (Cornell, 2008) and (Goichi et al, 2005). 3. Input-dependent analysis and therefore not specific to any C overflow vulnerability class (Haugh & Bishop, 2003) and (Liang & Sekar, 2005)
  • 43. Review of Literature – Dynamic Analysis Weaknesses: 1. Frame-pointer dependent; for instance LibSafe (Lhee & Chapin, 2002) and (Sidiroglou & Keromytis, 2004). 2. Shares some of the same issues as static analysis, such as: 1. Annotation dependent (Newsome & Song, 2005) and (Zhivich, Leek & Lippmann, 2005) 2. Limited to certain C overflow vulnerability classes (Sidiroglou & Keromytis, 2004), (Akritidis et al, 2008), (Zhivich et al, 2005), (Wang et al, 2008), (Newsome & Song, 2005), (Lhee & Chapin, 2002), (Liang & Sekar, 2005), (Kratkiewicz & Lippmann, 2005), (Haugh & Bishop, 2003) and (Cornell, 2008). 3. Analysis overhead (Li & Chiueh, 2007), (Wang et al, 2008), (Rinard et al, 2004), (Lhee & Chapin, 2002), (Liang & Sekar, 2005), (Sidiroglou & Keromytis, 2004), (Aggarwal & Jalote, 2006), (Akritidis et al, 2008), (Graham et al, 2008), (Ernst, 2003), (Goichi et al, 2005) and (Zhivich et al, 2005). 4. Contributes to DoS or DDoS attacks (Wang et al, 2008) and (Evans and Larochelle, 2002). 5. Test-case dependent (Aggarwal & Jalote, 2006), (Graham et al, 2008) and (Cornell, 2008) 6. Detection is based on known execution paths (Ernst, 2003), (Goichi et al, 2005)
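The following invented sketch (checked_copy is not an API of StackGuard, CCured or any tool cited above) illustrates the kind of run-time check dynamic approaches rely on, and why several of the weaknesses listed here follow: the check costs time on every call, it only fires on paths that actually execute with bad input, and its fail-stop behaviour can itself be abused for denial of service.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical checked copy -- not an API of any tool cited above. */
static void checked_copy(char *dst, size_t dst_size, const char *src)
{
    size_t needed = strlen(src) + 1;
    if (needed > dst_size) {          /* run-time bounds check, paid on every call */
        fprintf(stderr, "overflow prevented (%zu > %zu bytes)\n",
                needed, dst_size);
        abort();                      /* fail-stop, similar in spirit to a canary trap */
    }
    memcpy(dst, src, needed);
}

int main(void)
{
    char buf[8];
    checked_copy(buf, sizeof buf, "short");   /* executed path: check passes */
    /* checked_copy(buf, sizeof buf, "a much longer input"); would abort,
     * but only if this path is ever exercised by a test case.            */
    printf("%s\n", buf);
    return 0;
}
```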
  • 44. Review of Literature – Static versus Dynamic Analysis No. Static Dynamic Chosen 1. Able to analyze unfinished code or a fraction of a complete system, and to perform unit tests Able to analyze complete compiled code or a finished system Static 2. Detects at an early stage, so code can be modified before completion (Shapiro, 2008) Detects after the system is released and requires a complete cycle of system modification Static 3. Cost effective (Shapiro, 2008), (Verifysoft Technology GmbH, 2013) and (Terry & Levin, 2006) Cost of changing code is 100 times higher (Shapiro, 2008) and (Graham et al, 2008) Static 4. Detection effectiveness is still an issue, especially with numerous weaknesses in the methods Yet to prove its ability, and there is no guarantee of its effectiveness (Goichi et al, 2005) and (Ernst, 2003) Draw • Static analysis is the preferred analysis technique; hence this research focuses on the issues behind static analysis: methods, tools, and understanding/knowledge
  • 45. Review of Literature – Taxonomy and Classifications 1. Vulnerability understanding is the process of educating and building knowledge on vulnerabilities (Krsul, 1998). 2. It is a major step towards the enhancement of tools and their implementation for better defense mechanisms (Krsul, 1998) and (Tsipenyuk, Chess, & McGraw, 2005). 3. Three areas are related to improving understanding and knowledge: 1. Guidelines 2. Books 3. Taxonomy 4. Taxonomy and classification of software vulnerabilities (previously known as bugs) started with the RISOS (Research in Secure OS) project by the National Bureau of Standards (Abbot, et al., 1976) and PA (Program Analysis) by the Information Science Institute, University of California (Bisbey & Hollingworth, 1978). 5. A well-defined taxonomy fulfils the set of well-defined criteria and is able to define the objects and field of study without doubt (Igure & Williams, 2008) and (Axelsson, 2000). A well-defined taxonomy has a significant impact on having good guidelines and books, especially for understanding C overflow vulnerabilities (Igure & Williams, 2008)
  • 46. Review of Literature – Taxonomy and Classifications - Criteria of well-defined taxonomy 1996 Bishop & Bailey 1. Deterministic 2. Specificity 1998 Howard & Longstaff 1. Mutually Exclusive 2. Exhaustive 3. Unambiguous 4. Repeatable 5. Accepted 6. Useful Ivan Krsul 1. Objectivity 2. Determinism 3. Repeatability 4. Specificity. 1999 Bishop 1. Mutually exclusive 2. Exhaustive 3. Unambiguous 4. Repeatable 5. Accepted Useful 2001 Lough 1. Accepted [Howa1997] 2. Appropriateness [Amor1994] 3. Based on the code, environment, or other technical details [Bish1999] 4. Comprehensible [Lind1997] 5. Completeness [Amor1994] 6. Determinism [Krsu1998] 7. Exhaustive [Howa1997, Lind1997] 8. Internal versus external threats [Amor1994] 9. Mutually exclusive [Howa1997, Lind1997] 10. Objectivity [Krsu1998] 11. Primitive [Bish1999] 12. Repeatable [Howa1997, Krsu1998] 13. Similar vulnerabilities classified similarly [Bish1999] 14. Specific [Krsu1998] 15. Terminology complying with established security terminology [Lind1997] 16. Terms well defined [Bish1999] 17. Unambiguous [Howa1997, Lind1997] 18. Useful [Howa1997, Lind1997] 2003 Vijayaraghavan 1. Appropriate 2. Comprehensible 3. Specific 4. Useful 5. Expandable Hannan, Turner & Broucek 1. Specificity 2. Exhaustive 3. Deterministic 4. Accepted 5. Terminology sompliant Hansmann 1. Accepted 2. Comprehensible 3. Completeness 4. Determinism 5. Mutual Exclusive 6. Repeatable 7. Terminology compliant 8. Terms 9. Well-defined 10. Unambiguous 11. Useful 2004 Killourhy, Maxion and Tan 1. Mutually Exclusivity 2. Exhaustivity 3. Replicability (based on Lough) 2005 Polepeddi 1. Mutually exclusive 2. Exhaustive 3. Unambiguous 4. Repeatability 5. Acceptability 6. Utility / Useful Tsipenyuk et al 1. Simplicity 2. Mutually exclusive 3. Intuitive 4. Category name is less generic 5. Specific Berghe et al 1. Mutually exclusive 2. Deterministic 3. Exhaustive 4. Objective 5. Useful 2006 Alhazmi, Woo and Malaiya 1. Mutual Exclusiveness 2. Clear and Unique definition 3. Repeatability 4. Covers all vulnerabilities (Comprehensive) 2008 Igure and Williams 1. Specificity 2. Ambiguous 3. Layered or hierarchy 4. Unique definition on each level
  • 47. Review of Literature – Taxonomy and Classifications - Criteria of well-defined taxonomy No. Researchers Definition of well-defined taxonomy Comments 1 Bishop and Bailey (1996) A taxonomy that is well-structured with well-defined procedures. Too few 2 Krsul (1998) A taxonomy that has characteristics to ensure classifications can be successfully done Lack of completeness 3 Howard and Longstaff ( 1998) Not available There are redundant and arguable criterion 4 Bishop (1999) Not available Criterions conflict each other, hence inconsistent. 5 Lough (2001) Refer to Krsul definition of well-defined taxonomy Collection of previous criteria, which consist of repetitive or irrelevant to certain fields of taxonomy. 6 Vijayaraghavan (2003) A good taxonomy should have a characteristics specific to its purposes. Specific to e-commerce environment and ignore mutually exclusive criterion. 7 Hannan, Turner, and Broucek (2003) A good forensic taxonomy should be able to leverage the strength of multiple disciplines on any forensic issues. Focusing on forensic issue and ignore repeatability and mutually exclusive. 8 Hansmann (2003) Not available. Based on Lough’s criteria. Same issues as Lough 9 Killhourhy et. al. (2004) Should have sensible criteria but consistent with general terms of taxonomy. Lack of objectivity and specificity. 10 Polepeddi (2005) Should reduce difficulties to users to study and manage the objects of studies Lack of deterministic and objectivity. 11 Tsipenyuk et. al. (2005) Well-defined taxonomy must be simple and easy to understand. Lack of deterministic, objectivity and repeatability. 12 Berghe et. al. (2005) Not Available Lack of specificity, repeatability and accepted criterion. 13 Igure and Williams A good taxonomy provides a common language for the study of the Allowed ambiguity which causing non-
  • 48. Review of Literature – Taxonomy and Classifications - Previous works on taxonomy or classifications No Experts Purpose Criterions failed to fulfil Impact C overflows vulnerabilities 1 Lough (Lough, 2001) To ease security experts to design security protocol for network communications. Unambiguous and repeatability Confusing to user and hence resulting in different understanding for the same vulnerabilities. Not available 2 Piessens (2002) As guidelines to software developers, tester and security implementer. Unambiguous, specificity and repeatability Confusing to user and hence resulting in different understanding for the same vulnerabilities. Not available 3 Vijayaraghavan (2003) To educate and help security practitioners in developing a suitable test case Unambiguous, specificity, useful and repeatability, Does not specified web applications vulnerabilities and not practical due to confusing and ambiguity Not available 4 Hannan et al (2003) To leverage multi-disciplines in information security for forensic process. Obviousness and completeness. Not useful and difficult to apply in forensic computing Not applicable. 5 Hansman (2003) As guidelines for security expert Specificity and repeatability Confuse and causing non- repeatable result. Generic 6 Seacord and Householder (2005) As guidelines to classify vulnerabilities based on behaviour for countermeasures Specificity, repeatability, unambiguous and useful. Not repetitive in vulnerability classifications process and does not improve understanding on vulnerabilities. Generic 7 Weber et al (2005) As reference for security developers to develop security analysis tool. Unambiguous, useful and repeatability Difficult to understand and repeating the results. Generic 8 Tsipenyuk et al (2005) As reference for software developers to avoid vulnerabilities in coding Unambiguous, repeatability and specific. Too general and covers too many languages and therefore affecting repeatability. Generic but covers almost all C overflow vulnerabilities classes
  • 49. Review of Literature – Taxonomy and Classifications - Previous works on taxonomy or classifications (continued) No. Experts Purpose Criteria failed to fulfil Impact C overflow vulnerabilities 9 Sotirov (2005) To construct and evaluate static analysis tools. Unambiguous, repeatability and completeness Failed to improve understanding of vulnerabilities and the behaviour that triggers exploitation. Out-of-bound, format string, integer overflow, memory function, variable conversion (signed to unsigned) 10 Berghe et al (2005) To evaluate a constructed methodology for producing vulnerability taxonomies Unambiguous, repeatability and completeness Different users produce different results, affecting the usefulness of the taxonomy. Generic 11 Gegick and Williams (2005) To identify and classify vulnerabilities in reports or advisories Repeatability and useful. Users will have difficulties classifying vulnerabilities as there is no specific class, affecting classification consistency. Generic 12 Alhazmi et. al. (2006) To identify and classify exploitable vulnerabilities for security enhancement Completeness Users tend to classify the studied object wrongly, or fail to classify it at all, which affects their understanding and knowledge. Generic 13 Bazaz and Arthur (2007) To analyze software security Repeatability, completeness, unambiguous and obvious. Different users produce different results, affecting the usefulness of the taxonomy. Generic 14 Shahriar and Zulkernine (2011) To ease monitoring of program vulnerability exploitation Repeatability, completeness, unambiguous and obvious. Different users produce different results, affecting the usefulness of the taxonomy. Generic
  • 50. Review of Literature – Taxonomy and Classifications - Previous works on taxonomy or classifications No Experts Purpose Criteria failed to fulfil Impact C overflow vulnerabilities 1 M. Zitser (2003) To evaluate static analysis tools Unambiguous, completeness and repeatability Confusing to users, resulting in different understandings of the same vulnerability. Array out-of-bound and unsafe functions 2 K. Kratkiewicz (2005) To evaluate static analysis tools Unambiguous, completeness and repeatability Confusing to users, resulting in different understandings of the same vulnerability. Array out-of-bound and unsafe functions 3 M. A. Zhivich (2005) To evaluate dynamic analysis tools Unambiguous, completeness and repeatability Limited vulnerabilities; built for assessing dynamic tools rather than for understanding C overflow vulnerabilities. Array out-of-bound and unsafe functions 4 A. I. Sotirov (2005) To evaluate static analysis tools Repeatability and completeness Lacks classes and has a class for unknown vulnerabilities, which results in inconsistency and does not ease understanding. Array out-of-bound, integer overflow and unsafe functions 5 H. D. Moore (2007) For studying and understanding overflow vulnerabilities Specificity, completeness and repeatability Confusing and causes non-repeatable results. Array out-of-bound and integer overflow
  • 51. Review of Literature – Summary of Review 1. There are gaps in software security areas pertaining to software vulnerability issues, specifically C overflow vulnerabilities: 1. Analysis methods and tools 2. Security implementation and policies 3. Understanding/knowledge of software vulnerabilities 2. As mentioned by Krsul (1998) and Tsipenyuk et al (2005), improving knowledge and understanding of vulnerabilities is a major step towards the enhancement of tools and their implementation for better defence mechanisms. Hence, the focus is on taxonomy and classification. 1. There is still no well-defined taxonomy constructed from the source code perspective which considers the developer's point of view and covers all C overflow vulnerability classes Research Question Research Objective 1. Why do C overflow vulnerabilities still persist although they are common knowledge and there are numerous methods and tools available to overcome them? 1. To identify the reasons why C overflow vulnerabilities, after more than three decades, still persist although there are various methods and tools available.
  • 52. Review of Literature – Summary of Review Research Question Research Objective 1. Why do C overflow vulnerabilities still persist although they are common knowledge and there are numerous methods and tools available to overcome them? 1. To identify the reasons why C overflow vulnerabilities, after more than three decades, still persist although there are various methods and tools available.
  • 53. Review of Literature – Summary of Review
  • 54.  Research Framework and Phases  Research Activities
  • 55. Research Methodology – Research Framework & Phases
  • 56. Research Methodology – Research Framework & Phases Theoretical Studies Taxonomy Construction Taxonomy Evaluation
  • 57. Research Methodology – Research Framework & Phases Phases Section Phase 1 – Theoretical Studies Phase 2 – Taxonomy Construction Phase 3 – Taxonomy Evaluation Research Question (RQ) RQ 1: Why do C overflow vulnerabilities still persist although they are common and have been known for more than two decades? RQ 2: How can C overflow vulnerabilities be identified, understanding of them improved, and their recurrence prevented? RQ 4: Is the taxonomy effective in improving understanding of C overflow vulnerabilities? RQ 5: Is the taxonomy comprehensive and useful? Research Objectives (RO) RO 1: To identify the strengths and weaknesses of current mechanisms in detecting and preventing C overflow vulnerabilities from occurring RO 2: To identify the list of C overflow vulnerability classes RO 3: To compile and construct the criteria for a well-defined taxonomy RO 4: To construct a taxonomy specifically addressing C overflow vulnerabilities RO 5: To evaluate the taxonomy based on the criteria RO 6: To evaluate the criticality and relevance of C overflow vulnerabilities based on the taxonomy Phase Deliverables / Output (RR) RR 1: Strengths and weaknesses of current detection and prevention mechanisms RR 2: List of C overflow vulnerability classes RR 3: Criteria of a well-defined taxonomy RR 4: Taxonomy of C overflow vulnerabilities exploit RR 5: Validated taxonomy RR 6: Significant findings of the research
  • 58. Research Methodology – Research Activities 1. Pre-analysis on vulnerabilities and information security issues 2. In-depth review on software vulnerabilities 3. Critical review: 3.1 C overflow vulnerabilities and exploitation 3.2 C overflow vulnerabilities defensive mechanism 3.3 C overflow vulnerabilities detection and prevention mechanism 4. Critical review: 4.1 Static Analysis techniques and tools 4.2 Dynamic Analysis techniques and tools 5. Critical review: 5.1 Vulnerabilities taxonomy 5.2 Criteria of well-defined taxonomy Start Is the theoretical studies comprehensive? 6. Conclude the theoretical studies phase. End No Yes Phase 1: Theoretical Studies
  • 59. Research Methodology – Research Activities Start 1. Critical review on relevant publications 2. Extracted the criteria for constructing taxonomy 3. Detail analysis on the identified criteria 4. Construct criteria for well-defined taxonomy 5. Review the constructed criteria for well- defined taxonomy Completed review? Criteria satisfied? End No Yes Yes Phase 2: Taxonomy Construction – Construct well- defined criteria
  • 60. Research Methodology – Research Activities Phase 2: Taxonomy Construction – Construct Taxonomy Start 1. Critical review on relevant reports 2. Formation of Classes 3. Detail analysis on related publications 4. Organized and constructed the taxonomy 5. Review the constructed taxonomy with all reports and publications Taxonomy satisfied? End No Yes
  • 61. Start 1. Evaluate the taxonomy against the constructed well-defined criteria 2. Measure the criticality and significance of each identified class. 3. Measure the criticality of the OS and the impact of vulnerability exploitation. 4. Evaluate the static analysis tools' effectiveness in detecting the identified classes 5. Evaluate and verify the static analysis tools' effectiveness and efficiency in analyzing open-source code for comparative analysis against earlier works Evaluation satisfied? End No Yes Research Methodology – Research Activities Phase 3: Taxonomy Evaluation
  • 62. Research Methodology – Research Activities Start 1. Suitable tester is selected 2. Select 100 advisories/reports 3.1.2 Tester matched the vulnerability in advisories with class in taxonomy End First iteration ? 3.1.1 Shared the taxonomy with tester 3. Collected and compiled the result 3. Measured and presented the result Second iteration ? Third iteration ? 3.2.1 Explain and educate the tester on the taxonomy 3.2.2 Tester matched the vulnerability in advisories with class in taxonomy 3.3.1. Select a different set of advisories or reports. 3.3.2 Tester used the same taxonomy and matched the vulnerabilities in the report. No guidance provided. No Yes Yes Yes No No Phase 3: 3.1.1 Evaluation on Taxonomy against the well-defined criteria - Measurement on Effectiveness and Completeness
  • 63. Research Methodology – Research Activities Start 1. Critical review on relevant reports 2. Formation of Classes 3. Detail analysis on related publications 4. Organized and constructed the taxonomy 5. Review the constructed taxonomy with all reports and publications Taxonomy satisfied? End No Yes Phase 3: 3.1.2 Evaluation on Taxonomy against well- defined criteria - with Selected Well-known Online Vulnerability Database and Organization
  • 64. Research Methodology – Research Activities Phase 3: 3.2 Evaluation on the Significances and Relevancies of Classes Defined in the Taxonomy of C Overflow Vulnerabilities Exploit Start 1. Get the class and its characteristics from the taxonomy 2. Searched the selected databases and organization based on the class name and its characteristics. End Found advisories or report? More search synonym? 3. Critical review on the advisory/report. Complete 10 iteration? 4. Conclude and map the result in table. Complete all class? 5. Compiled and analyzed the result. Complete all databases or organization? No Yes No No No No Yes Yes Yes Yes
  • 65. Research Methodology – Research Activities Phase 3: 3.3 Evaluation on Significant and Relevancies of C Overflow Vulnerabilities Classes and Impact to OS Criticality Start 1. Identify the relevant OS. End Complete execute all test cases? 2. Develop vulnerable programs based on the classes in the taxonomy. 4. Execute the activities. 5. Compile result and perform analysis No Yes 3. Construct test activities for OS evaluation.
  • 66. Research Methodology – Research Activities Start 1. Critically reviewed and identified static analysis tools End Complete analyze all programs? 2. Select static analysis tools from the identified list 4. Execute the selected static analysis tool with the selected program as input. 5. Compile result and perform analysis No Yes 3. Select a program to analyze. All static analysis tool used? No Yes Phase 3: 3.4 Evaluate the static analysis tools effectiveness in detecting the identified classes
  • 67. Research Methodology – Research Activities Start 1. Critically reviewed and identified open-source codes/applications. End Complete analyze all programs? 2. Prepare the machine to be used for the testing 5. Execute the selected static analysis tool with the selected program as input. 7. Compile result and perform analysis No Yes 4. Select a program to analyze from identified list. All static analysis tool used? No Yes 3. Select the static analysis tool from identified list 6. Get the result and normalize the machine. Phase 3: 3.5 Evaluate the Static Analysis Tools Efficiencies in Analysing Open- source Code for Comparison with Previous Works
  • 68. Research Methodology – Summary 1. Pre-analysis on vulnerabilities and information security issues/cases. 2. In-depth review on software vulnerabilities 3. Critical review on C overflow vulnerabilities and their current detection and prevention mechanisms 4. Critical review on program analysis techniques and tools, focusing on static analysis 5. Critical review on vulnerability taxonomies 1. A thorough, careful and significant review of vulnerability taxonomies covering: i. the criteria of a well-defined taxonomy, ii. vulnerability taxonomies, and iii. C overflow vulnerability taxonomies 2. Identify and formulate the criteria of a well-defined taxonomy 3. Construct the 'C Overflow Vulnerabilities Exploit' taxonomy from the source code perspective. 1. Evaluate the taxonomy against the well-defined criteria. 2. Measure the criticality and significance of each identified class. 3. Measure the criticality of the OS and the impact of vulnerability exploitation. 4. Evaluate the static analysis tools' effectiveness in detecting the identified classes 5. Evaluate and verify the static analysis tools' effectiveness and efficiency in analyzing open-source code for comparative analysis against earlier works. 6. Analyze the findings Theoretical Studies Taxonomy Construction Taxonomy Evaluation
  • 69.  Taxonomy Construction  Taxonomy Evaluation
  • 70. Result and Discussions: Taxonomy Construction – Criteria for a Well-Defined Taxonomy No Criteria Description Purpose 1. Simplicity Simplified collection of studied objects or subjects into a readable diagram or structure To ease understanding of the studied objects or subject. 2. Organized structure Organized into a viewable, readable, and understandable format. To demonstrate the relationship between objects and ease the process of understanding. 3. Obvious The objective is clear, measurable, and observable without doubt. The process flow is clear and easily followed. The structure or format is easily understood and able to stand on its own. To ease the process of classification. 4. Repeatability The result of classifying a studied object by any independent user can easily be duplicated by others. For consistency purposes and reliability of the result. 5. Specificity / Mutually exclusive / Primitive The value of an object must be specific and explicit. An object of study must belong to only one class. To remove ambiguity, ease the classification process and support repeatable results. 6. Similarity Objects in the same class must have similar behaviour or characteristics. For consistency, repeatability and ease of understanding of the studied objects. 7. Completeness Able to capture all studied objects without any doubt, no matter when it is applied. To ensure users are able to place any new object into the classification whenever required, without any doubt. 8. Knowledge compliant Built using known existing terminology. To ease learning and classifying.
  • 71. Result and Discussions: Taxonomy Construction – C Overflow Vulnerabilities Exploit Taxonomy Taxonomy of C Overflow Vulnerabilities Exploit Unsafe Functions Array Out-of-Bound Integer Range/Overflow Return-into-libc Memory Function Function Pointer / Pointer Aliasing Variable Type Conversion Pointer Scaling / Pointer Mixing Uninitialized Variable Null Termination
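Two of the taxonomy's classes that are easy to overlook in code review, Uninitialized Variable and Null Termination, are sketched below (an invented illustration, not thesis code):

```c
#include <stdio.h>
#include <string.h>

/* Uninitialized Variable: when flag == 0, msg is read without ever being
 * assigned, so the printf uses whatever stale value is on the stack. */
static void uninitialized_variable(int flag)
{
    const char *msg;
    if (flag)
        msg = "granted";
    printf("access: %s\n", msg);
}

/* Null Termination: strncpy() does NOT terminate 'buf' when 'input' has
 * sizeof buf (8) or more characters, so the printf may read past 'buf'
 * until it happens to find a '\0' somewhere in adjacent memory. */
static void missing_null_termination(const char *input)
{
    char buf[8];
    strncpy(buf, input, sizeof buf);
    printf("copied: %s\n", buf);
}

int main(void)
{
    uninitialized_variable(1);            /* safe path                        */
    missing_null_termination("1234567");  /* 7 chars + '\0' fit: terminated   */
    /* missing_null_termination("12345678"); would leave buf unterminated */
    return 0;
}
```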
  • 72. Result and Discussions: Taxonomy Construction – C Overflow Vulnerabilities Exploit Taxonomy No Unsafe Functions Array out-of-bound Return-into-libc 1 Uses unsafe C/C++ functions Uses an array Uses a pointer to a string and does not require unsafe functions or arrays. 2 Uses input, supplied either by the user or by the system, that is passed into the function Involves an index which is supplied by an operation in the program Uses a pointer to point to a string of characters 3 The unsafe function is executed to trigger the overflow To trigger the overflow, the index used is beyond the upper or lower bound of the array The string of characters contains a new system or function call, new memory addressing, or a string large enough to overflow the subsequent reference addresses until it reaches the intended memory address, which contains a function call based on the functions available in libC.
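A minimal sketch of two of the characteristic patterns described in the table above (invented code for illustration): an array out-of-bound write driven by an index, and the unbounded pointer-based string copy that return-into-libC exploits typically abuse.

```c
#include <stdio.h>

/* Array out-of-bound: any index outside 0..3 writes before or after
 * 'slots' -- the trigger condition described in the second column. */
static void array_out_of_bound(int index, int value)
{
    int slots[4] = {0, 0, 0, 0};
    slots[index] = value;
    printf("slots[%d] = %d\n", index, slots[index]);
}

/* Return-into-libC enabling pattern: no unsafe library function and no
 * array index, just a pointer walking over a string with no length limit.
 * A long enough attacker string keeps writing past 'frame_buf' toward the
 * saved return address; pointing that address at an existing libC function
 * (e.g. system) redirects execution without injecting any code. */
static void pointer_copy(const char *attacker_string)
{
    char frame_buf[16];
    char *dst = frame_buf;
    while (*attacker_string != '\0')
        *dst++ = *attacker_string++;
    *dst = '\0';
    printf("copied: %s\n", frame_buf);
}

int main(void)
{
    array_out_of_bound(2, 99);   /* in bounds       */
    pointer_copy("benign");      /* fits the buffer */
    return 0;
}
```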
  • 73. Result and Discussions: Taxonomy Evaluation
Evaluation 1: Result of evaluating the effectiveness and completeness of classifying vulnerabilities using the taxonomy (S / M / F counts per tester, followed by the successful mapping rate)
Method 1 – Five selected testers performed the classification using the taxonomy, without guidance, on 100 vulnerability reports:
Tester 1: 60 / 5 / 35 – 0.60; Tester 2: 51 / 28 / 21 – 0.51; Tester 3: 71 / 14 / 15 – 0.71; Tester 4: 68 / 5 / 27 – 0.68; Tester 5: 80 / 9 / 11 – 0.80. Average successful rate of using the taxonomy to classify vulnerabilities: 0.66.
Method 2 – The same reports and testers, but with guidance and an explanation of the taxonomy:
Tester 1: 92 / 0 / 8 – 0.92; Tester 2: 85 / 10 / 5 – 0.85; Tester 3: 98 / 0 / 2 – 0.98; Tester 4: 93 / 5 / 2 – 0.93; Tester 5: 89 / 3 / 8 – 0.89. Average successful rate: 0.914.
Method 3 – The same testers, but given different sets of vulnerability reports from the previous test:
Tester 1: 90 / 4 / 6 – 0.90; Tester 2: 78 / 7 / 15 – 0.78; Tester 3: 96 / 0 / 4 – 0.96; Tester 4: 85 / 8 / 7 – 0.85; Tester 5: 92 / 1 / 7 – 0.92. Average successful rate: 0.882.
Overall average success rate: 0.819.
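As a worked reading of the table (assuming each rate is the number of successful mappings over the 100 reports classified), Tester 1 in Method 1 gives 60 / (60 + 5 + 35) = 0.60, and the Method 1 average is (0.60 + 0.51 + 0.71 + 0.68 + 0.80) / 5 = 0.66.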
  • 74. Result and Discussions: Taxonomy Evaluation
Evaluation 2: Result of comparing the taxonomy with advisories/reports for completeness measurement (databases in the order NIST, OWASP, OSVDB, Symantec, Microsoft, Secunia, Dept. of Homeland Security)
Unsafe Functions: Yes in all seven databases – 100% matched.
Array Out-of-Bound: Yes in all seven databases – 100% matched.
Integer Range/Overflow: Yes in all seven databases – 100% matched.
Return-into-LibC: Yes, Yes, Not Applicable, Not Applicable, Yes, Not Applicable, Yes – 100% matched.
Memory Function: Yes in all seven databases – 100% matched.
Function Pointer / Pointer Aliasing: Yes, Yes, Not Applicable, Yes, Yes, Yes, Yes – 100% matched.
Variable Type Conversion: Yes, Yes, Not Applicable, Not Applicable, Yes, Yes, Yes – 100% matched.
Pointer Scaling / Pointer Mixing: Yes, Yes, Yes, Not Applicable, Yes, Yes, Yes – 100% matched.
Uninitialized Variable: Yes, Yes, Yes, Yes, Yes, Not Applicable, Yes – 100% matched.
Null Termination: Yes in all seven databases – 100% matched.
Unknown class (unable to match with the taxonomy): No in all seven databases – 0%.
  • 75. Result and Discussions: Taxonomy Evaluation
Evaluation 3: Evaluation of the relevance and significance of the classes in the C Overflow Vulnerabilities Exploit Taxonomy – Severity (ratings in the order NIST, OWASP, OSVDB, Symantec, Microsoft, Secunia, Dept. of Homeland Security)
Unsafe Functions: High in all seven databases.
Array Out-of-Bound: High in all seven databases.
Integer Range/Overflow: High in all seven databases.
Return-into-LibC: High, Medium, NA, NA, Medium, NA, Medium.
Memory Function: High, Medium, Medium, High, Medium, Medium, High.
Function Pointer / Pointer Aliasing: Medium, Medium, NA, Low, Medium, High, Medium.
Variable Type Conversion: Medium, Medium, NA, NA, Low, Medium, Medium.
Pointer Scaling / Pointer Mixing: High, Medium, Low, NA, High, High, High.
Uninitialized Variable: Medium, Low, Low, Low, Low, Low, Medium.
Null Termination: Medium, Low, Low, Medium, Low, Medium, Medium.
  • 76. Result and Discussions: Taxonomy Evaluation
Evaluation 3: Evaluation of the relevance and significance of the classes in the C Overflow Vulnerabilities Exploit Taxonomy – Persistency (years in the order NIST, OWASP, OSVDB, Symantec, Microsoft, Secunia, Dept. of Homeland Security)
Unsafe Functions: 2013 in all seven databases.
Array Out-of-Bound: 2014, 2013, 2012, 2012, 2013, 2013, 2013.
Integer Range/Overflow: 2013, 2012, 2012, 2012, 2013, 2013, 2013.
Return-into-LibC: 2013, 2012, NA, NA, 2013, NA, 2010.
Memory Function: 2014, 2012, 2012, 2012, 2013, 2013, 2012.
Function Pointer / Pointer Aliasing: 2013, 2010, NA, 2010, 2013, 2013, 2010.
Variable Type Conversion: 2013, 2010, NA, NA, 2013, 2010, 2010.
Pointer Scaling / Pointer Mixing: 2010, 2009, 2009, NA, 2009, 2009, 2009.
Uninitialized Variable: 2013, 2012, 2012, 2012, 2013, 2013, 2012.
Null Termination: 2009 in all seven databases.
  • 77. Result and Discussions: Taxonomy Evaluation
Evaluation 3: Evaluation of the relevance and significance of the classes in the C Overflow Vulnerabilities Exploit Taxonomy – Impact (ratings in the order NIST, OWASP, OSVDB, Symantec, Microsoft, Secunia, Dept. of Homeland Security)
Unsafe Functions: High in all seven databases.
Array Out-of-Bound: High in all seven databases.
Integer Range/Overflow: High in all seven databases.
Return-into-LibC: High, Medium, NA, NA, High, NA, Medium.
Memory Function: High in all seven databases.
Function Pointer / Pointer Aliasing: Medium, High, NA, Medium, Medium, Medium, Medium.
Variable Type Conversion: Medium, Medium, NA, NA, Medium, Medium, Medium.
Pointer Scaling / Pointer Mixing: Medium, Medium, High, NA, High, High, High.
Uninitialized Variable: Medium, High, Medium, Medium, Medium, Medium, Medium.
Null Termination: Medium, Medium, Medium, High, Medium, Medium, Medium.
  • 78. Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Significance and Relevance
All ten classes (Unsafe Functions, Array Out-of-Bound, Integer Range/Overflow, Return-into-LibC, Memory Function, Function Pointer / Pointer Aliasing, Variable Type Conversion, Pointer Scaling / Pointer Mixing, Uninitialized Variable, Null Termination) are applicable (Yes) to all four operating systems tested: Windows XP 32-bit, Windows 7 32-bit, Windows 7 64-bit, and Linux (CentOS 5.5) 32-bit.
Legend – Vulnerable (is the OS vulnerable?): Yes – vulnerable, No – not vulnerable. Difficulty (level of difficulty to exploit): L – Low, M – Medium, H – Very difficult to exploit, NA – Not Applicable.
  • 79. Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Vulnerable (V) and Difficulty (D), listed in the order Windows XP 32-bit, Windows 7 32-bit, Windows 7 64-bit, CentOS 5.5 Linux 32-bit
Unsafe Functions: Yes/L, Yes/L, Yes/M, Yes/L
Array Out-of-Bound: Yes/L, Yes/L, Yes/M, Yes/L
Integer Range/Overflow: Yes/L, Yes/L, Yes/M, Yes/L
Return-into-LibC: Yes/H, Yes/H, Yes/H, Yes/H
Memory Function: Yes/H, Yes/H, Yes/H, Yes/H
Function Pointer / Pointer Aliasing: Yes/M, Yes/H, Yes/H, Yes/M
Variable Type Conversion: Yes/M, Yes/M, Yes/H, Yes/M
Pointer Scaling / Pointer Mixing: Yes/H, Yes/H, Yes/H, Yes/H
Uninitialized Variable: Yes/L, Yes/L, Yes/L, Yes/L
Null Termination: Yes/L, Yes/L, Yes/M, Yes/L
Legend – Vulnerable: Yes – vulnerable, No – not vulnerable. Difficulty: L – Low, M – Medium, H – Very difficult to exploit, NA – Not Applicable.
  • 80. Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Input and Outcome for Windows XP 32-bit
Unsafe Functions – Input: a few characters more than the given size. Outcome: overflow, an endless loop, or a memory exception is thrown.
Array Out-of-Bound – Input: a few characters more than the given size. Outcome: overflow occurs but the program continues; it prints empty space, appends a last character ('n') and prints double spacing instead of single.
Integer Range/Overflow – Input: a variable value larger than the n32 (32-bit) range. Outcome: overflow occurs.
Return-into-LibC – Input: no specific length. Outcome: nothing happens and the program continues.
Memory Function – Input: depends on the type of function. Outcome: memset() triggers an overflow, whereas a double free() leaves the program vulnerable to exploitation.
Function Pointer / Pointer Aliasing – Input: a few characters bigger than the given size. Outcome: displays strange characters; the length of the second variable is extended to the size of the first variable; possibly usable to trigger an overflow for exploitation.
Variable Type Conversion – Input: a larger integer into a small character. Outcome: a larger integer is forced to display as a strange character; if the conversion is from int to char and then back to int, an overflow occurs and the program is vulnerable to exploitation.
Pointer Scaling / Pointer Mixing – Input: no specific length. Outcome: a memory violation (overflow) is shown; for a struct pointer, a memory address is still assigned, which attackers can later use for exploitation.
Uninitialized Variable – Input: no particular length required. Outcome: a value and a memory address are assigned to the variable, which can be used for exploitation.
Null Termination – Input: length equal to the defined length. Outcome: nothing happens on the system and no additional characters are added.
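A hedged sketch (values and variable names are illustrative, not from the thesis) of the int-to-char-and-back round trip described in the Variable Type Conversion row above:

    #include <stdio.h>

    int main(void) {
        int large = 200;                /* larger than the range of a signed 8-bit char */
        char narrow = (char)large;      /* truncation: only the low 8 bits are kept */
        int back = (int)narrow;         /* on a typical signed-char platform this becomes -56 */
        printf("%d -> %d -> %d\n", large, (int)narrow, back);
        return 0;
    }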
  • 81. Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Input and Outcome for Windows 7 32-bit
Unsafe Functions – Input: a few characters more than the given size. Outcome: overflow, an endless loop, or a memory exception is thrown.
Array Out-of-Bound – Input: a few characters more than the given size. Outcome: overflow occurs but the program continues; it prints empty space, appends a last character ('n') and prints double spacing instead of single.
Integer Range/Overflow – Input: a variable value larger than the n32 (32-bit) range. Outcome: overflow occurs.
Return-into-LibC – Input: no specific length. Outcome: nothing happens and the program continues.
Memory Function – Input: depends on the type of function. Outcome: memset() triggers an overflow, whereas a double free() leaves the program vulnerable to exploitation.
Function Pointer / Pointer Aliasing – Input: a few characters bigger than the given size. Outcome: displays strange characters; the length of the second variable is extended to the size of the first variable; possibly usable to trigger an overflow for exploitation.
Variable Type Conversion – Input: a larger integer into a small character. Outcome: displays a strange character and converts to a negative integer (unsigned value), thus becoming an overflow and vulnerable to exploitation.
Pointer Scaling / Pointer Mixing – Input: no specific length. Outcome: a memory violation (overflow) is shown.
Uninitialized Variable – Input: no particular length required. Outcome: a value and a memory address are assigned to the variable, which can be used for exploitation.
Null Termination – Input: length equal to the defined length. Outcome: nothing happens on the system and no additional characters are added.
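A hedged sketch (illustrative only, not one of the thesis test programs) of the Memory Function row above, showing both the memset() overflow and the double free() it mentions:

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *p = malloc(16);
        if (p == NULL)
            return 1;
        memset(p, 'A', 32);   /* Memory Function: writing 32 bytes into a 16-byte block overflows the heap allocation */
        free(p);
        free(p);              /* double free: undefined behaviour that corrupts allocator metadata and can be exploited */
        return 0;
    }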
  • 82. Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Input and Outcome for Windows 7 64-bit
Unsafe Functions – Input: a few characters more than the n64 size. Outcome: overflow when the input to the first variable is larger than the n64 (64-bit) size.
Array Out-of-Bound – Input: a few characters more than the given size. Outcome: overflow occurs but the program continues.
Integer Range/Overflow – Input: a variable value larger than the n64 (64-bit) range. Outcome: overflow occurs.
Return-into-LibC – Input: no specific length. Outcome: nothing happens and the program continues.
Memory Function – Input: depends on the type of function. Outcome: memory error violation (overflow); after a few further operations the program aborts.
Function Pointer / Pointer Aliasing – Input: a few characters bigger than the given size. Outcome: nothing happens.
Variable Type Conversion – Input: a larger integer into a small character. Outcome: displays a strange character and converts to a negative integer (unsigned value), thus becoming an overflow and vulnerable to exploitation.
Pointer Scaling / Pointer Mixing – Input: no specific length. Outcome: the system still assigns a value and memory address, which attackers can later use for exploitation.
Uninitialized Variable – Input: no particular length required. Outcome: a value and a memory address are assigned to the variable, which can be used for exploitation.
Null Termination – Input: length equal to the defined length. Outcome: nothing happens on the system and no additional characters are added.
  • 83. Result and Discussions: Taxonomy Evaluation
Evaluation 4: Evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality – Input and Outcome for Linux (CentOS 5.5) 32-bit
Unsafe Functions – Input: a few characters more than the n32 size. Outcome: overflow happens only when the input to the first variable is larger than the n32 (32-bit) size.
Array Out-of-Bound – Input: a few characters more than the given size. Outcome: overflow occurs but the program continues.
Integer Range/Overflow – Input: a variable value larger than the n32 (32-bit) range. Outcome: overflow occurs.
Return-into-LibC – Input: no specific length. Outcome: nothing happens and the program continues.
Memory Function – Input: depends on the type of function. Outcome: segmentation fault, which is equivalent to a memory overflow; the program aborts.
Function Pointer / Pointer Aliasing – Input: a few characters bigger than the given size. Outcome: overflow of the first variable after the second variable is copied with the value from the first variable.
Variable Type Conversion – Input: a larger integer into a small character. Outcome: the conversion succeeds if the given integer is within the range of a character; a larger integer is forced to display as a strange character.
Pointer Scaling / Pointer Mixing – Input: no specific length. Outcome: the system still assigns a value and memory address, which attackers can later use for exploitation.
Uninitialized Variable – Input: no particular length required. Outcome: a value and a memory address are assigned to the variable, which can be used for exploitation.
Null Termination – Input: length equal to the defined length. Outcome: nothing happens on the system and no additional characters are added.
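A hedged sketch (illustrative names and sizes, not from the thesis) of the Pointer Scaling / Pointer Mixing behaviour referred to in the tables above, where pointer arithmetic that is already scaled by the element size ends up pointing past the intended data:

    #include <stdio.h>

    int main(void) {
        int data[4] = {1, 2, 3, 4};
        int *p = data;

        /* The intent is "advance to the next int", but pointer arithmetic is already
         * scaled by sizeof(int), so adding sizeof(int) skips several elements and
         * yields an address past the end of data[]. */
        int *next = p + sizeof(int);   /* with a 4-byte int this is data + 4, one past the end */
        printf("last element at %p, 'next' points to %p\n",
               (void *)(data + 3), (void *)next);
        return 0;
    }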
  • 84. Result and Discussions: Taxonomy Evaluation
Evaluation 5: Evaluation of static analysis tools' effectiveness in detecting vulnerabilities based on the C Overflow Vulnerabilities Exploit Taxonomy (tools: SPLINT, RATS, ITS4, BOON, BLAST; for every class the result was the same for the intra-procedural and inter-procedural test programs)
Unsafe Function: detected by all five tools.
Array Out-of-Bound: detected by all five tools.
Integer Range/Overflow: detected by all five tools.
Return-into-LibC: detected by none of the tools.
Memory Function: detected by all five tools.
Function Pointer / Pointer Aliasing: detected by BLAST only.
Variable Type Conversion: detected by BLAST only.
Pointer Scaling / Pointer Mixing: detected by RATS and BLAST only.
Uninitialized Variable: detected by none of the tools.
Null Termination: detected by none of the tools.
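To read the intra-/inter-procedural distinction used in this evaluation, a hedged sketch (function names and buffer sizes are illustrative, not the thesis test programs): in the first function the overflow is visible locally, while in the second pair the tool must track data across a call to see it:

    #include <string.h>

    /* Intra-procedural case: the unchecked strcpy() and the 8-byte buffer are both
     * visible inside one function, so purely local analysis can flag it. */
    void intra_example(const char *input) {
        char buf[8];
        strcpy(buf, input);
    }

    /* Inter-procedural case: the tool must follow 'input' from inter_example()
     * into copy_into() to see that the same 8-byte buffer can overflow. */
    static void copy_into(char *dst, const char *src) {
        strcpy(dst, src);
    }

    void inter_example(const char *input) {
        char buf[8];
        copy_into(buf, input);
    }

    int main(void) {
        intra_example("hi");   /* short input: safe here, but a longer string would overflow */
        inter_example("hi");
        return 0;
    }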
  • 85. Result and Discussions: Taxonomy Evaluation
Evaluation 6: Evaluation of static analysis tools' effectiveness in detecting vulnerabilities in open-source code (per program: analysis time in minutes, number of detections, false alarms, true alarms, detection rate)
SPLINT – Apache HTTPD: 280 min, 5 detections, 5 false, 0 true, 0%; Google Chrome: 180 min, 8, 7, 1, 12.5%; MySQL CE: 300 min, 9, 7, 2, 22.2%; Open VM Tool: 130 min, 8, 5, 3, 37.5%; TPM Emulator: 20 min, 4, 4, 0, 0%.
RATS – Apache HTTPD: 270 min, 6, 6, 0, 0%; Google Chrome: 200 min, 9, 6, 3, 33.3%; MySQL CE: 330 min, 15, 13, 2, 13.3%; Open VM Tool: 110 min, 5, 4, 1, 20%; TPM Emulator: 35 min, 6, 6, 0, 0%.
ITS4 – Apache HTTPD: 290 min, 8, 7, 1, 12.5%; Google Chrome: 230 min, 11, 10, 1, 9.0%; MySQL CE: 310 min, 13, 8, 5, 38.5%; Open VM Tool: 100 min, 9, 7, 2, 22.2%; TPM Emulator: 20 min, 6, 6, 0, 0%.
BOON – Apache HTTPD: 315 min, 6, 6, 0, 0%; Google Chrome: 190 min, 5, 4, 1, 20%; MySQL CE: 400 min, 9, 6, 3, 33.3%; Open VM Tool: 170 min, 13, 12, 1, 7.7%; TPM Emulator: 25 min, 13, 11, 2, 15.4%.
BLAST – Apache HTTPD: 650 min, 24, 21, 3, 12.5%; Google Chrome: 490 min, 14, 11, 3, 21.4%; MySQL CE: 410 min, 29, 19, 10, 34.5%; Open VM Tool: 320 min, 23, 8, 15, 65.2%; TPM Emulator: 80 min, 14, 4, 10, 71.4%.
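Read as a worked example (assuming the detection rate is true alarms divided by total detections), SPLINT on MySQL CE gives 2 / 9 ≈ 22.2%, and BLAST on Open VM Tool gives 15 / 23 ≈ 65.2%.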
  • 86. Result and Discussions: Summary
Phases: Phase 2 – Taxonomy Construction and Phase 3 – Taxonomy Evaluation
Research Questions (RQ) – RQ 2: How can we identify and improve understanding of C overflow vulnerabilities and prevent them from occurring again? RQ 4: Is the taxonomy effective in improving understanding of C overflow vulnerabilities? RQ 5: Is the taxonomy comprehensive and useful?
Research Objectives (RO) – RO 3: To compile and construct criteria for a well-defined taxonomy. RO 4: To construct a taxonomy specifically addressing C overflow vulnerabilities. RO 5: To evaluate the taxonomy based on the criteria. RO 6: To evaluate the criticality and relevance of C overflow vulnerabilities based on the taxonomy.
Phase Deliverables / Output (RR) – RR 3: Criteria of a well-defined taxonomy. RR 4: Taxonomy of C overflow vulnerabilities exploit. RR 5: Taxonomy validated. RR 6: Significant findings of the research.
1. Consolidated and constructed a list of criteria for a well-defined taxonomy.
2. Constructed the C Overflow Vulnerabilities Exploit Taxonomy.
3. Performed 5 evaluations, presented as 6 evaluation results.
Based on the evaluations, it is concluded that: (1) the taxonomy is well defined and in line with the criteria; and (2) the evaluations show that the classes in the C Overflow Vulnerabilities Exploit Taxonomy are significant and relevant.
  • 88. Summary of Research Phases
Phase 1 – Theoretical Studies: RQ 1: Why do C overflow vulnerabilities still persist, although they have been common and known for more than two decades? RO 1: To identify the strengths and weaknesses of current mechanisms for detecting and preventing C overflow vulnerabilities. RO 2: To identify the list of C overflow vulnerability classes. RR 1: Strengths and weaknesses of current detection and prevention mechanisms. RR 2: List of C overflow vulnerability classes.
Phase 2 – Taxonomy Construction: RQ 2: How can we identify and improve understanding of C overflow vulnerabilities and prevent them from occurring again? RO 3: To compile and construct criteria for a well-defined taxonomy. RO 4: To construct a taxonomy specifically addressing C overflow vulnerabilities. RR 3: Criteria of a well-defined taxonomy. RR 4: Taxonomy of C overflow vulnerabilities exploit.
Phase 3 – Taxonomy Evaluation: RQ 4: Is the taxonomy effective in improving understanding of C overflow vulnerabilities? RQ 5: Is the taxonomy comprehensive and useful? RO 5: To evaluate the taxonomy based on the criteria. RO 6: To evaluate the criticality and relevance of C overflow vulnerabilities based on the taxonomy. RR 5: Taxonomy validated. RR 6: Significant findings of the research.
Conclusion
  • 89. Conclusion
1. C overflow vulnerabilities are still relevant.
2. There was no well-defined taxonomy specifically covering the complete set of C overflow vulnerabilities from a source-code perspective – one that looks into the root cause of the problem – to improve the understanding and knowledge of C developers. This thesis therefore contributes such a taxonomy, the “C Overflow Vulnerabilities Exploit” Taxonomy, which the evaluations show to be helpful and useful.
3. Five evaluations were performed, showing the significance and relevance of each class in the constructed taxonomy.
Recommendations
1. To further develop a method that uses the taxonomy as a guideline for improving static analysis tools.
2. To further evaluate the effectiveness of the taxonomy.
  • 91. Nurul Haszeli Ahmad Student Id: 2009625912 FSMK, UiTM Shah Alam [email protected] Supervisor: Dr Syed Ahmad Aljunid (FSMK, UiTM Shah Alam) Co-supervisor: Dr Jamalul-Lail Ab Manan (MIMOS Berhad)

Editor's Notes

  • #6–#15: Software vulnerabilities have existed since the beginning of IT. They became a focus of software security with the first recorded finding – the Morris Worm (the first ever-recorded unintended exploitation). From the late 90s onwards, they became the primary focus of software security. However, this has not stopped the evolution and trend of vulnerabilities and exploitation. The first sophisticated malware exploiting a Windows vulnerability was reported – code-named W32.Stuxnet – attacking SCADA systems, and it evolved, with a second version released in 2012. 50% – word documents, spreadsheets, PDFs, etc. (only 49% contributed by web-based vulnerabilities such as SQLi, XSS, etc.). http://www.symantec.com/threatreport/topic.jsp?id=vulnerability_trends&aid=total_number_of_vulnerabilities
  • #16–#19: And so does the exploitation. As shown in the previous slide, the number of vulnerabilities released stays consistently around 4,000 per year. It takes just a single vulnerability to cause catastrophic damage to computer systems, such as what happened to 600,000 Apple computers due to a known vulnerability in the Apple OS (reported by Symantec Corporation, 2010). Exploitation does not require up-to-date vulnerabilities: Kaspersky Lab and Symantec Corporation reports show successful exploits based on old vulnerabilities, and the statistics shown on the screen justify this. Howard, LeBlanc and Viega listed 19 in 2005 and later increased it to 25; MITRE Corporation and the SANS Institute released the same number of programming errors in 2011. A few known cases are the attack on a SCADA system, the train disaster in London, and the Toyota brake failure. There are many ways to inject an overflow – exploit unsafe functions, force a flawed arithmetic operation, insert a crafted message with no termination, etc. Many languages have overflow vulnerability issues; however, the scope of this thesis is the C language (described in the literature review). To date, there are 11 methods, more than 40 analysis tools (static and dynamic), and more than 10 classifications or taxonomies.
  • #20–#23: Summarize the previous statements – share the current landscape and our critical findings. Significance: C overflow vulnerabilities are still relevant; thus, by identifying and classifying all types of C overflow vulnerabilities, we improve understanding of overflow vulnerabilities and help secure software from the development to the deployment stage. The C language is widely used in various mission-critical applications, OSes, and (embedded) drivers; hence, the results of this research are valuable for providing protection at an early stage of development. By classifying the vulnerabilities in a structured way, focusing on the root cause from a source-code perspective, we make it easier for software developers to write code that is free from C overflow vulnerabilities. The classification is also useful as a guideline for program analysis – for the analysis methods or the tools themselves. By experimenting on and evaluating the feasibility of each C overflow vulnerability type, we can focus security research initiatives on the most critical and relevant types. By evaluating static analysis tools using our constructed classification and major open-source programs, we determine the tools' capabilities, strengths and weaknesses. Using the same results, we determine the criticality and persistency of all C overflow vulnerability types.
  • #25–#26: Summarize the previous statements – share the current landscape and our critical findings. Beizer (1990) presented software vulnerabilities in 9 classes based on SDLC phases; Piessens shared the same view and produced a taxonomy; Vipindeep and Jalote (2005) classified 8 software vulnerabilities based on insecure coding techniques. Of the many software vulnerabilities mentioned, the most cited – due to their high significance and impact – are programming errors, which are also listed in various databases. There are many types of programming errors: lack of input validation, incorrect value type, insufficient exception handling, and incorrect memory assignment or use. Their consequences can affect the confidentiality, integrity, and availability of an information system, resulting in losses of profit, life, and infrastructure, or mechanical breakdown. Many techniques exploit programming errors; among the top ones are Cross-Site Scripting (XSS) injection, SQL injection, Operating System (OS) command injection, path traversal, and memory violation, known as overflow vulnerabilities (Martin B., Brown, Parker, & Kirby, 2011), (Siddharth & Doshi, 2010), (Shariar & Zulkernine, 2011), (IMPERVA, 2011). However, programming errors are more synonymous with the C and Java languages (Cenzic Inc, 2009), (MITRE Corporation, 2012), (Mandalia, 2011), because both are dominant languages at the OS, server, database, and application levels (TIOBE Software, 2012), (Tiwebb Ltd., 2007), (DedaSys LLC, 2011), (Krill, 2011). Even though overflows do occur in other languages such as PHP (SecurityFocus, 2009), (Esser, 2006), Python (Python Software Foundation, 2010), (SecurityFocus, 2010), (SPI, 2010), Perl (Gentoo Foundation, 2007), (Secunia, 2008), and .NET (SecurityFocus, 2009), (IBM, 2009), overflow is not the dominant vulnerability for those languages. Between Java and C, which is more critical? Critical applications such as Windows OS (Microsoft, 2006), (Microsoft, 2009), Android OS (Android Developer, 2012), Linux OS (Stack Exchange Inc, 2012), databases (Oracle Corporation, 2012), (Oracle FAQ, 2012), and web servers (The Apache Software Foundation, 2011) are written in C. Even the IT giant Apple Inc. still relies on C to develop Apple iOS and Mac OS X (Apple Inc, 2012). A single C overflow vulnerability in such mission-critical software can cause havoc, as published by Symantec, whereby 600,000 computers running Apple iOS were affected by one vulnerability found in the OS (Symantec Corporation, 2013). In addition, there have been successful attacks that exploited old C overflow vulnerabilities (Kaspersky Lab, 2013).
  • #27–#31: Also known as buffer overflow/overrun. 'Unsafe function' is more general in definition than buffer overflow, buffer overrun, or format string attack, because a function can be unsafe by nature (such as printf and scanf), exploiting an unsafe function may not involve a buffer, and it can happen in functions unrelated to string formatting, like strlen. The term 'unsafe function' therefore covers a wide range of C overflow vulnerabilities, as listed by Microsoft. Furthermore, the term is closely related to coding and is thus more understandable to software developers than the other, earlier terms. As identified by Wagner in 2000 (Wagner D. A., 2000), unsafe functions are identified as the first C overflow vulnerability class. The hype of this research started from the year 2000. Although we list 9 classes, only 6 were actually named as classes; the others described the behaviour but not the class name. Among these nine classes, the most critical and most frequently occurring are unsafe functions and array out-of-bound. However, despite being ignored by some experts and occurring less often than those two classes, the other seven classes are significantly important because their impact and severity are ranked at least medium in many vulnerability database websites. Besides those nine classes, there is one more class that was left out by many security experts and researchers, although NIST identified it in 2007 (Black, Kass, & Koo, Source Code Security Analysis Tool Functional Specification Version 1.0, 2007). It was restated by Seacord in 2008 (Seacord R. C., The CERT C Secure Coding Standard, 2008), and NIST listed the same vulnerability again in its 2011 report (Black, Kass, Koo, & Fong, Source Code Security Analysis Tool Functional Specification Version 1.1, 2011). It is also identified and listed by the Computer Emergency Response Team (CERT) of Carnegie Mellon University (Seacord R. C., CERT C Secure Coding Standard, 2011), (Pincar & Haas, 2011), (Seacord & Haas, EXP41-C. Do not add or subtract a scaled integer to a pointer, 2011), OWASP (OWASP Organization, 2009) and NIST (NIST, 2012). The vulnerability is known as pointer scaling or mixing, triggered by incorrect or invalid use of a pointer in an arithmetic operation. It may have been overlooked because function pointer and pointer scaling vulnerabilities are easily confused in name and in their use of pointers; however, from a source-code perspective their behaviour, syntax, and way of exploiting are very different. As published by NIST (NIST, 2012), pointer scaling is ranked between medium and critical severity, with low complexity and complete impact on confidentiality, integrity and availability. Hence, it shall not be ignored and should be included in any discussion of C overflow vulnerabilities. Summarize the discussion.
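A hypothetical sketch (illustrative values, not from the thesis) of the strlen point above – a character array that never received its terminating '\0' lets even a length query read out of bounds:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char name[4] = {'J', 'o', 'h', 'n'};   /* Null Termination: all four bytes are letters, so no '\0' fits */
        printf("%zu\n", strlen(name));          /* strlen() keeps reading past name[] until it happens to hit a zero byte */
        return 0;
    }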
  • #32-33: Program analysis started with King in 1970. Static analysis is a method of analysing a program without executing it. It analyses either the actual source code or the binary form of the program (Ireland, 2009) and extracts the information within for debugging, comprehension or validation purposes (Binkley, 2007).
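As an illustration only, the fragment below (hypothetical names and sizes) contains a defect that is decidable from the source text alone, which is the kind of fact a static analysis extracts without running the program.

```c
/* The out-of-bounds write below is visible from the source alone:
 * the loop bound is a constant that exceeds the array size, so an
 * analyser can report it without ever executing the program. */
void init(void)
{
    int table[8];

    for (int i = 0; i <= 8; i++)   /* i == 8 writes one element past the end */
        table[i] = 0;
}
```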
  • #34-37: In summary, work on program analysis started as early as 1977, but the focus on security only began in 2000. Since then, about 11 methods and approximately 40 tools have been produced, yet C overflow vulnerabilities persist. Although (Secunia, 2011), (MITRE Corporation, 2012), (Cenzic Inc., 2010) and (HewlettPackard, 2011) show a reduction in the number of C overflow vulnerabilities, and SANS has moved it from rank 1 to rank 3, it is still considered highly critical and should not be ignored (Howard, LeBlanc & Viega, 2010), (Sadeghi, 2011), and the number of advisories released is still significantly high (Rusnacko, 2014), (Secunia, 2013), (Trustwave, 2013), (Paul, 2013), (Qualys, 2013), (Department of Homeland Security, 2013). This motivated the next step of the research: reviewing the limitations of these methods and tools.
  • #38-39: Lexical analysis is the oldest method, implemented in compilers (Aho, Lam, Sethi, & Ullman, 1986), (Kunst, 1988) and in the grep utility (Viega J., Bloch, Kohno, & McGraw, 2000), (Deursen & Kuipers, 1999), (Chess & McGraw, 2004). The method is favoured for its simple, straightforward analysis approach: it reads a stream of tokens or characters and matches them against rules or patterns defined in the program or listed in a database (Sotirov, 2005), (Li & Cui, 2010), (Deursen & Kuipers, 1999), (Viega J., Bloch, Kohno, & McGraw, 2000), (Zafar & Ali, 2007), (Kolmonen, 2007), (Chess & McGraw, 2004). It requires no code transformation and no complex mathematical processing, and hence produces results quickly (Zafar & Ali, 2007). This simplicity is also its greatest weakness: the method ignores semantics and relies on the defined patterns or rules (Larochelle & Evans, 2001), so the analysis is limited to those patterns and its result stays within the boundary of a single function (Sotirov, 2005), (Li & Cui, 2010), (Deursen & Kuipers, 1999), (Zafar & Ali, 2007). Inter-procedural and intra-procedural analysis are two common analysis modes, implemented on their own or combined with other methods to improve effectiveness (Qadeer & Rehof, 2005), (Viega J., Bloch, Kohno, & McGraw, 2002), (Andersen, 1994), (Pozza & Sisto, 2008). Inter-procedural analysis considers the relationships and effects of referenced variables between functions; it is slower but gives more accurate results than intra-procedural analysis (Andersen, 1994). Intra-procedural analysis, although faster, is unreliable because it ignores the semantics of the analysed program (Andersen, 1994), (Pozza & Sisto, 2008). Both share a common weakness: they depend on other methods to achieve a sound and complete analysis (Andersen, 1994), (Pozza & Sisto, 2008). A sketch of this single-function limitation follows below.
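A hedged C sketch of the limitation just described: the unsafe relationship only exists across a function boundary, so token or pattern matching within one function cannot decide it. The function names, sizes and the 64-byte length are illustrative assumptions.

```c
#include <string.h>

/* A lexical scan may flag the memcpy() token here generically, but it
 * cannot decide whether this particular call is safe: the destination
 * size lives in the caller, outside this function's boundary. */
static void do_copy(char *dst, const char *src, size_t n)
{
    memcpy(dst, src, n);
}

void handle(const char *packet)
{
    char header[8];

    /* The real defect, an 8-byte buffer receiving a 64-byte copy, is
     * only visible when the sizes are tracked across the call, which
     * requires inter-procedural (semantic) analysis rather than token
     * or pattern matching within a single function. */
    do_copy(header, packet, 64);
}
```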
Abstract interpretation, in the Cousots' early implementation, was intended for program understanding and debugging (Cousot & Cousot, 1977). It began to be applied in software security after its implementation in the static analysis tool ASTREE (Cousot P., et al., 2005). The method takes the semantics of the program into account in its analysis (Erkkinen & Hote, 2006), (Jhala & Majumdar, Software Model Checking, 2009), so it can analyse source code regardless of its complexity. For that to happen, however, it requires a complex formal method involving an abstraction process and mathematical algorithms (D'Silva, Kroening, & Weissenbacher, 2008), (Ferrara, 2010), (Gopan & Reps, 2007), (Ferrara, 2009), (Logozzo, 2004). As a result, abstract interpretation suffers from scalability issues (Ferrara, Logozzo, & Fanhdrich, 2008), (Venet, 2005). To overcome them, many implementations ignore some properties during analysis, which reduces both the analysis scope and its accuracy (Ivančić, et al., 2005), (Zaks, et al., 2006), (Balakrishnan, Sankaranarayanan, Ivančić, & Gupta, 2009), (Burgstaller, Scholz, & Blieberger, 2006). On top of the scalability issue, the method also produces unreliable results because it relies on approximation (Burgstaller, Scholz, & Blieberger, 2006). Data flow analysis (DFA) is a semantically lighter method; the difference from abstract interpretation is that the latter examines the relationships between attributes in a flow graph, whereas DFA analyses each attribute by following the path of the identified attribute. DFA can perform precise analysis and locate the exact path of a vulnerability's flow (Kolmonen, 2007), (Parasoft Corporation, 2009), but its implementations take longer to analyse (Pozza & Sisto, 2008); to reduce analysis time the analysis scope is reduced (Soffa, 2008), which leads to the same issue as abstract interpretation. Symbolic analysis is a method derived to analyse programs for parallel programming architectures (Bae, 2003) such as kernel-level drivers and embedded programs. It semantically converts the program into a form close to its execution form and represents that form symbolically before analysis (Haghighat & Polychronopoulos, 1993), (Ball, et al., 2006), (Bae, 2003), (Lim, Lal, & Reps, 2009), (Burgstaller, Scholz, & Blieberger, 2006), which reduces the flow paths to be analysed, so it takes less time than DFA and abstract interpretation. In addition, the method counter-checks the errors it finds to reduce false alarms (Ball, et al., 2006), (Lim, Lal, & Reps, 2009). Its problem is that it depends on the input data or constructed model used in the analysis (Ball, et al., 2006), (Lim, Lal, & Reps, 2009); incorrect data prediction or an incorrect model results in false alarms (Ball, et al., 2006). Moreover, the refinement process may slow the analysis down or cause overhead in the implementation, and the complexity of the algorithm may force the same scope reduction faced by DFA and abstract interpretation. Integer range analysis is a simple mathematics-based method that ignores program relationships. It can analyse complex algorithms (Venet, 2005) and is excellent at detecting string manipulation vulnerabilities such as unsafe functions and array out-of-bound (Dor, Rodeh, & Sagiv, 2001), (Wagner, Foster, Brewer, & Aiken, 2000). Despite this strength, the technique suffers scalability issues (Venet, 2005) and fails to analyse programs with references, pointer aliasing, or complex inter-procedural functions; it also fails to determine array structures correctly (Venet, 2005). Because it ignores program semantics, it suffers many false alarms (Dor, Rodeh, & Sagiv, 2001), (Wagner, Foster, Brewer, & Aiken, 2000). A small sketch of the bound manipulation it targets, and of the aliasing that defeats it, follows below.
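A minimal, illustrative C sketch (hypothetical names and sizes) of the kind of bound manipulation integer range analysis is described above as targeting, together with the pointer aliasing that the text notes as a limitation.

```c
#define N 10

void demo(int idx)
{
    int table[N];

    /* Integer range analysis tracks the possible values of idx: only
     * the upper bound is checked, so a negative idx still escapes the
     * [0, N-1] range and the write can be flagged as out-of-bound. */
    if (idx < N)
        table[idx] = 1;

    /* Once the same array is reached through an aliasing pointer, an
     * analysis of this kind typically can no longer tie the index
     * range to table's declared size, illustrating the aliasing
     * limitation mentioned in the text. */
    int *p = table;
    p[idx] = 2;
}
```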
Annotation-based analysis is another simple and light yet powerful method that needs no complex algorithm. Its strength lies in the annotations written by developers in the source code (Larochelle & Evans, 2001), which constrain the output according to the annotated rules (Tevis & Hamilton, 2004), (Pozza & Sisto, 2008). The method is highly dependent on correct annotation syntax (Larochelle & Evans, 2001), (Evans, Guttag, Horning, & Tan, 1994), (Evans & Larochelle, 2002): incorrect or misinterpreted annotations produce false alarms, and the reluctance of developers to annotate their code as required (Pozza & Sisto, 2008), (Rinard, Cadar, Dumitran, Roy, & Leu, 2004) also contributes to its failure. Type matching is similar to abstract interpretation but focuses on particular attributes of a program, namely variable types (Ferrara, 2010), (Wang, Guo, & Chen, 2009). The method understands variable types and verifies that the type of an input can be accepted by, or is correctly parsed into, the recipient variable. For it to analyse correctly, the abstraction of the source code must be defined correctly (Ferrara, 2010), (Wang, Guo, & Chen, 2009); incorrect abstraction or source code information results in imprecise analysis. Another problem is that the method is limited to type analysis (Sotirov, 2005), so from a C overflow vulnerabilities perspective it can only detect variable type conversion and function pointer vulnerabilities (a short sketch of the conversion case follows below). Software model checking (SMC) and bounded model checking (BMC) are two analysis methods that use a model, or properties, defined for a program, and they share the same strengths. First, they can analyse all reachable code against the defined model to verify variable properties (Li & Cui, 2010), (Wenkai, 2002), (Biere, Cimatti, Clarke, & Strichman, 2003). Second, both understand the structure of the source code semantically, abstracting it according to the defined model into the form closest to runtime execution (Li & Cui, 2010), (Wenkai, 2002), (Biere, Cimatti, Clarke, & Strichman, 2003), (Jhala & Majumdar, 2009). Third, both implement a distinctive counterexample algorithm to reduce false alarms (Clarke E. M., 2006), (Ku, Hart, Chechik, & Lie, 2007), (Bakera, et al., 2009). Nevertheless, both still have limitations. SMC suffers from state explosion because it exhaustively traverses every possible path, which can lead to infinite traversal, especially in loops (Wenkai, 2002), (Clarke, Biere, Raimi, & Zhu, 2001). To avoid this, BMC implements an algorithm that reduces the possibility of state explosion (Wenkai, 2002); however, it only reduces rather than eliminates it, and this also complicates implementation and makes BMC less sound than SMC (Clarke E. M., 2006), (Clarke, Biere, Raimi, & Zhu, 2001), (Biere, Cimatti, Clarke, & Strichman, 2003). Another shared weakness is that implementing these methods requires an understanding of formal methods and automata theory. Because of these complications, the methods focus only on certain C overflow vulnerabilities, which limits their effectiveness (Holzmann G. J., 2002), (Clarke, Biere, Raimi, & Zhu, 2001), (Qadeer & Rehof, 2005). There are a few more issues related to these 11 methods, and their weaknesses also affect the tools that implement them.
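A short, illustrative C fragment of the variable type conversion case mentioned above; the buffer size, field name and length check are assumptions made for the example.

```c
#include <string.h>

void read_field(char *dst, const char *src, int reported_len)
{
    /* dst is assumed to point at a 64-byte buffer (illustrative).
     * reported_len is signed, but memcpy's size parameter is size_t
     * (unsigned): a negative reported_len passes the <= 64 check and
     * is then converted to a huge unsigned value, so the copy runs
     * far past dst.  This signed-to-unsigned conversion is the kind
     * of mismatch a type-matching analysis is aimed at. */
    if (reported_len <= 64)
        memcpy(dst, src, (size_t)reported_len);
}
```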
  • #40-41: FlawFinder and RATS are two examples of lexical analysis tools; both are continuously improved, with the latest versions released in 2007 (FlawFinder) and 2011 (RATS). FlawFinder produces analyses with high false-alarm rates and is limited to unsafe-function vulnerabilities (Sotirov, 2005). To compensate, FlawFinder highlights the risk level of each identified vulnerability, which indicates its significance and reduces the time needed to evaluate its severity, but the underlying limitation remains (Wheeler, 2007). RATS is another lexical-analysis-based tool (Zafar & Ali, 2007) that inherits the same limitations as other tools implementing the method (Kolmonen, 2007), (Tevis & Hamilton, 2004). The tool distinguishes overflow locations, heap or stack (Viega J., Bloch, Kohno, & McGraw, 2002), and its database is more extensive than those of other lexical analysis tools (HP Fortify, 2011), (Viega J., Bloch, Kohno, & McGraw, 2002). Nevertheless, HP notes that the weaknesses (high false alarms and limited vulnerability coverage) remain (HP Fortify, 2011). BOON uses integer range analysis and is effective and efficient at detecting vulnerabilities that can overflow through manipulation of a lower or upper bound (Chess & McGraw, 2004), (Kolmonen, 2007). BOON does not depend on a list or database, so it is not limited to unsafe functions (Tevis & Hamilton, 2004). However, because of the integer range analysis method, the tool still generates false alarms and is limited to the C overflow vulnerabilities whose bounds can be measured (Zitser, Lippmann, & Leek, 2004), (Xie, Chou, & Engler, 2003), (Sotirov, 2005). ARCHER is claimed by its inventors to be one of the promising tools. In general it uses symbolic analysis and understands the structure of a program, so it can detect vulnerabilities precisely (Lim, Lal, & Reps, 2011), (Pozza & Sisto, 2008) and analyse large source code (Xie, Chou, & Engler, 2003). Its problem is the need to convert programs completely and accurately into a symbolic model, which is still an unresolved issue and reduces its effectiveness in detecting vulnerabilities (Lim, Lal, & Reps, 2011), (Kolmonen, 2007), (Zitser, Lippmann, & Leek, 2004). Moreover, the tool only reports vulnerabilities it can prove to be vulnerable, so vulnerabilities that cannot be proven by the tool may go unreported (Zitser, Lippmann, & Leek, 2004), (Xie, Chou, & Engler, 2003). MOPS is another tool claimed to be promising at detecting vulnerabilities (Chess & McGraw, 2004), (Tevis & Hamilton, 2004), (Kolmonen, 2007). MOPS does not focus on any specific C overflow vulnerability; instead it can find any type of vulnerability or programming error, depending on the model created (Chess & McGraw, 2004), (Tevis & Hamilton, 2004), (Kolmonen, 2007), (Sotirov, 2005). The difficulty of modelling the exact execution path and the complexity of building the model, especially for complex source code, make the tool less effective and efficient (Engler & Musuvathi, 2004). The time taken to build the model also contributes to security analysts' reluctance to use the tool (Engler & Musuvathi, 2004), and incorrect modelling of complex source code can produce false alarms (Engler & Musuvathi, 2004). There are many more tools, and going through each in detail would be time-consuming.
As shown on the slide, we reviewed many tools through a critical review of previous work, as shown in the list of references. The conclusion is that, to date, no tool can successfully detect all known classes of C overflow vulnerabilities. An illustrative sketch of why purely pattern-based tools over-report follows below.
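An illustrative pair of calls (hypothetical function and buffer names) suggesting why the FlawFinder/RATS class of pattern-matching tools tends to over-report: both calls match the same rule, but only one can actually overflow, and telling them apart needs information a token match does not have.

```c
#include <string.h>

void greet(char *out, const char *user_input)
{
    char buf[32];

    /* Matches the unsafe-function pattern, yet cannot overflow here:
     * the source is a 6-byte string literal copied into a 32-byte
     * buffer. A purely lexical scan still reports it. */
    strcpy(buf, "hello");

    /* Matches the very same pattern and is genuinely dangerous: the
     * source length is attacker-controlled and the destination size
     * is unknown at this call site. Distinguishing the two needs
     * data- or range-tracking rather than token matching. */
    strcpy(out, user_input);
}
```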
  • #43-44: Dynamic analysis has gained popularity because of the ineffectiveness and inefficiency of static analysis tools. However, despite much promise, dynamic analysis has yet to prevail. Since the methods used are the same as in static analysis, the review here focuses on tool effectiveness. A problem unique to dynamic analysis is that it strongly affects the performance of the analysed program: it reduces the program's processing speed, making it unsuitable for large-scale programs such as web servers or database servers. This happens when dynamic analysis tools repeatedly terminate and/or restart the affected process after an overflow is detected.
  • #45: Consequently, changing code at the development stage is easier and less troublesome than after the complete system is ready, when the code has become complicated. The cost of changing code after deployment is more than 100 times higher than modifying it during development (Shapiro, 2008), (Graham, Leroux, & Landry, 2008). This is because more time is needed to find the root cause of the overflow, redesign and restructure the code, and finally redo the necessary tests for the fixed code (Parasoft Corporation, 2009). Furthermore, it requires the involvement of many people, such as the software architect, project manager, software developers and testers. Static analysis, in contrast, can be done on the developer's machine and only requires developers to change the code if it is found vulnerable, without troubleshooting, redesigning the software architecture or other laborious processes. Microsoft prefers static analysis over dynamic analysis as a proactive measure: it includes static analysis as part of the development process and introduced the Security Development Lifecycle (SDL). There are three approaches that still require extensive work, as there are still gaps in them.
  • #46: From the studies described in the previous sections, one consistent gap shared by static and dynamic analysis is the coverage of C overflow vulnerability classes: every analysis method focuses on certain classes only. This is due either to limited knowledge of the variety of C overflow vulnerabilities, to software developers' failure to understand or write secure code, or to over-reliance on tools. Therefore, there is a need to further understand and improve knowledge of C overflow vulnerabilities. Vulnerability understanding is the process of educating and building knowledge about vulnerabilities; it helps the security community understand them and then use that information to further improve analysis methods and tools and/or security implementations (Krsul, 1998). One way to do this is through taxonomy or classification. To study taxonomy and classification, we cannot ignore the criteria behind a well-defined taxonomy, so these criteria are also reviewed in this work.
  • #47-48: Although their criteria have gaps, these authors went on to construct taxonomies or classifications using the criteria as guidelines. Our next review provides evidence that they did not construct well-defined taxonomies or classifications that meet their own criteria.
  • #49-50: Our review covers software vulnerability taxonomies and is divided into general software vulnerability taxonomies and taxonomies specific to C overflow vulnerabilities. The first table shown here focuses on general software vulnerability taxonomies.
  • #52-54: Among the many software vulnerabilities, C overflow vulnerabilities are regarded as the most critical, with the highest probability of being exploited, because of their persistence and their CVSS scores from various well-known organizations and well-established online vulnerability database websites. For that reason, a well-defined taxonomy is significant for building good solutions to the issues concerning C overflow vulnerabilities.
  • #56-58: The first phase starts with a detailed literature review of the three key areas identified from a pre-analysis of vulnerabilities as defined by Kaspersky (Kaspersky Lab ZAO, 2013) and MITRE (OWASP Organization, 2011), of various exploitation cases such as those in Poland (Baker & Graeme, 2008) and Iran (Chen T. M., 2010), and of the Toyota brake failure case (Carty, 2010). The second phase starts with a comprehensive review of the taxonomy criteria discussed by many experts, such as Killourhy et al. (Killourhy, Maxion, & Tan, 2004), Hansmann (Hansmann & Hunt, 2005), Krsul (Krsul, 1998), and Howard and Longstaff (Howard & Longstaff, 1998). From this review, the criteria for a well-defined taxonomy are identified and formulated. In addition to these publications, various reports and advisories from Microsoft (Microsoft Corporation, 2011), MITRE (MITRE Corporation, 2012), the Dept of Homeland Security (Dept of Homeland Security, 2011), Kaspersky (Kaspersky Lab ZAO, 2013), IBM (IBM X-Force, 2011) and HP (HewlettPackard, 2011) are used. Finally, the last phase evaluates the taxonomy and applies it to evaluate static analysis tools.
  • #59-61, #64: Theoretical studies. The review of vulnerabilities in general starts by referring to reports from well-known organizations. The following criteria are used to determine an organization's credibility: it is well known to the information security community and referred to by many experts; the amount of evidence collected, or the frequency of report and advisory releases; and an up-to-date vulnerability listing. Applying these criteria, the organizations identified are Symantec, IBM, HP, Microsoft, NIST, MITRE, Homeland Security, Secunia, Oracle, SANS Institute, Cenzic, CSM and Kaspersky. From each organization, at least one report published between 1990 and 2010 is reviewed, with priority given to the latest. Books and publications from experts are reviewed too: Bisbey et al. (Bisbey & Hollingworth, 1978), Binkley (Binkley, 2007), Chess et al. (Chess & McGraw, 2004), (Howard, LeBlanc, & Viega, 24 Deadly Sins of Software Security - Programming Flaws and How to Fix Them, 2010), and Viega et al. (Viega & McGraw, Building Secure Software: How to Avoid Security Problems the Right Way, 2002). The review then continues with software vulnerabilities, using various keywords such as buffer overrun, heap overflow, etc., and focusing on reports between 2000 and 2010; these are further separated into language categories, and their relevance and significance are identified by measuring CVSS. Once C overflow vulnerabilities are identified, they are reviewed in depth: the online vulnerability databases are carefully selected (OWASP, NIST, OSVDB, Secunia, Microsoft), and an average of five reports per year are reviewed from 2010 back to five years earlier. For each report, the root cause, class of overflow, year, criticality, method of exploitation and impact are identified. Defensive and preventive mechanisms, namely static analysis methods and tools, are then reviewed. The review continues with in-depth studies of vulnerability taxonomies, as we agree with Krsul, Tsipenyuk and others that understanding vulnerabilities is crucial for defending against and protecting from their exploitation.
Taxonomy construction. There are two critical activities in this phase: identifying and formulating the criteria for a well-defined taxonomy, and constructing the Taxonomy of C Overflow Vulnerabilities Exploit. For the criteria, the focus is on related reports and publications, especially papers from well-known publishers (IEEE and Springer) that specify, criticize or complement criteria. The shortlisted criteria are then reviewed with a focus on their characteristics and meaning, and the final list is published for expert review. For the construction of the taxonomy, the idea is to construct it from a source-code perspective with a focus on exploit methods. The previously collected evidence (reports, papers, etc.) is studied with a focus on vulnerability behaviour, exploit characteristics, severity and impact; the identified vulnerabilities are then grouped into classes based on shared characteristics and behaviour, using names known to the community. The taxonomy is also published for feedback and comments.
  • #62-63, #65: The objectives of the evaluation are: to measure the effectiveness and completeness of the taxonomy; to measure the severity, persistence and relevance of each class, which indirectly measures the completeness and knowledge compliance of the taxonomy and shows how useful it is; and to measure the effectiveness and efficiency of static analysis tools. A unique aspect of this evaluation is that no method or approach for evaluating a taxonomy, especially against well-defined criteria, has been proposed before. The first evaluation is of the taxonomy against the well-defined criteria, measuring effectiveness and completeness. The criteria used to select the testers are: good understanding and knowledge of information security and familiarity with exploitation and vulnerability terms; at least two years' experience in information security; and knowledge of and experience with the C programming language. Next is the taxonomy evaluation against selected well-known online vulnerability databases and organizations, using the same set of sites and organizations but limited to the seven most relevant. The keyword used is the class name or its synonym; if a report or advisory is found, a critical review is done to validate the class name, behaviour and characteristics. The evaluation of the significance and relevance of the classes defined in the Taxonomy of C Overflow Vulnerabilities Exploit measures each class's significance and relevance to current and future computer system security based on its current severity, impact and persistence, and indirectly indicates the correctness of the class's position in the hierarchy and its applicability in the taxonomy. NIST, OWASP and OSVDB are used, plus Microsoft, Symantec, Secunia and the Dept of Homeland Security; the number of advisories or reports reviewed from each site is capped at 50. The keywords and synonyms used are: unsafe function, integer range, integer overflow, memory overflow, heap overflow, buffer overflow, buffer overrun, memory overrun, pointer mixing, pointer scaling, out-of-bound, array out-of-bound, null termination, uninitialized variable and heap overrun. The evaluation of the significance and relevance of the C overflow vulnerability classes and their impact on OS criticality ensures that the classes are relevant and significant to the security community. Step 1 evaluates the probability of each class appearing in an OS: the operating systems are identified from market-share sites (Awio Web Services LLC, 2014), (StatCounter, 2014), (Net Applications.com, 2014), giving Windows XP 32-bit, Windows 7 32- and 64-bit, and Linux 32-bit (CentOS 5.5); these are installed on computers with an Intel Pentium Core 2 Duo 2.2GHz, 4GB RAM, a 1TB hard disk and a Gigabyte motherboard; simple programs are developed, at least two per class, one inter-procedural and one intra-procedural (a sketch of such a pair is given after this note); each is run three times daily, with a reboot, on each OS. Step 2 is based on the collected reports and advisories: the CVSS scores, focusing on the severity and impact values, with the mean severity and impact calculated for each class. The static analysis tools' effectiveness in detecting the identified classes is evaluated using the same developed programs, limited to Windows 7 64-bit. Tools are selected based on availability, full (non-trial) features and the ability to run on Windows, and limited to five: SPLINT, RATS, ITS4, BOON and BLAST. The time taken, number of detections, false alarms and true alarms are measured. Finally, the static analysis tools' efficiency in analysing open-source code is evaluated for comparison with previous work; the open-source applications are selected on the criteria that the application and its source code are available and used by the community.
The applications identified are Apache HTTPD, Chrome, MySQL CE, Open VM Tool and TPM Emulator. The time taken, number of detections and false alarms are collected.
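A hedged sketch of what one such test-program pair could look like for the unsafe-function class, with an intra-procedural variant (defect contained in one function) and an inter-procedural variant (the same defect spanning a call); the actual programs used in the study are not reproduced here, and all names and sizes are illustrative.

```c
#include <string.h>

/* Intra-procedural variant: source, destination and the unsafe call
 * all live inside one function. */
void intra_variant(const char *input)
{
    char buf[16];
    strcpy(buf, input);              /* overflows if input is longer than 15 chars */
}

/* Inter-procedural variant: the same defect, but the unsafe call and
 * the undersized destination live in different functions, so a tool
 * must follow the call to detect it. */
static void sink(char *dst, const char *src)
{
    strcpy(dst, src);
}

void inter_variant(const char *input)
{
    char buf[16];
    sink(buf, input);
}
```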
  • #63: The objective of the evaluation: To measure the effectiveness and completeness of the taxonomy To measure severity, persistency and relevancies of each classes. Indirectly measure completeness and knowledge compliant of the taxonomy and shows how useful the taxonomy is by measuring To measure the effectiveness and efficiencies of static analysis tools Uniqueness of the evaluation is that there is no methods or approaches to evaluate a taxonomy especially against well-defined criteria done before. The first evaluation is evaluation on taxonomy against the well-defined criteria Measurement on Effectiveness and Completeness The criteria used to select tester are: Good understanding and knowledge in information security and familiar with exploitation and vulnerabilities terms. At least two years experiences in the area of information security. Know and experiences C programming language Taxonomy Evaluation with Selected Well-known Online Vulnerability Database and Organization Using the same set of sites and organization but limit to 7 most relevant The keyword used is the class name or its synonym If a report or advisory found, critical review is done to validate the class name, behaviour and characteristics Evaluation on the Significances and Relevancies of Classes Defined in the Taxonomy of C Overflow Vulnerabilities Exploit To measure on the significances and relevancies of each class defined in the taxonomy to current and future computer system security based on its current severity, impact and persistency. Indirectly indicates the correctness of position the class hierarchy and applicability in the taxonomy Used NIST, OWASP and OSVDB + Microsoft, Symantec, Secunia and Dept of Homeland Security. Numbers of advisories or reports reviewed from each sites are set to 50 maximum. Keyword + synonyms used -> unsafe function, integer range, integer overflow, memory overflow, heap overflow, buffer overflow, buffer overrun, memory overrun, pointer mixing, pointer scaling, out-of-bound, array out-of-bound, null termination, uninitialized variable and heap overrun Evaluation on Significant and Relevancies of C Overflow Vulnerabilities Classes and Impact to OS Criticality To ensure that the classes is relevant and significance to security community Steps 1 – evaluate probability to appear in the OS: Identify the OS based on market shares site (Awio Web Services LLC, 2014), (StatCounter, 2014) and (Net Applications.com, 2014) XP 32 bits, 7 32 and 64 bits and Linux 32 bits (Centos 5.5) Installed on Computers – Intel Pentium Core 2 Duo 2.2Ghz, 4GB RAM, 1TB harddisk, Gigabyte Motherboard Develop simple programs – at least 2 programs for each classes – Inter and Intra-procedural Run 3 times daily and reboot for each OS Steps 2 – Based on collected reports/advisories The CVSS - focus on severity and impact value The mean/average of the severity and impact is calculated for each class. Evaluate the static analysis tools effectiveness in detecting the identified classes Using the same develop programs and limited to windows 7 64-bits Select tools based on availability, full features available (not trial), be able to run on Windows OS Limit to 5 - SPLINT, RATS, ITS4, BOON, and BLAST Measure the time taken, number of detection, false alarm and true alarm. Evaluate the Static Analysis Tools Efficiencies in Analysing Open-source Code for Comparison with Previous Works Selected open source applications -> criteria: application and source code available and used by community. 
Identify – Apache HTTPD, Chrome, MySQL CE, Open VM Tool, TPM Emulator Measure the time, number of detection and false alarm are collected
  • #64: Theoretical studies Review on vulnerabilities in general and starts by referring reports from well-known organization The following criteria are used to determine the organization credibility. Well-known to information security community and referred by many experts. Number of evidence collected or frequent release of reports or advisories. Vulnerability listing is up-to-date. Use the criteria -> the are few organizations identified -> Symantec, IBM, HP, Microsoft, NIST, MITRE, Homeland Security, Secunia, Oracle, SANS Institute, Cenzic, CSM and Kaspersky. From each organization -> at least 1 reports between the year 1990 to 2010 is reviewed (priority is latest) Books and publications from expert is reviewed too -> Bisbey et. al. (Bisbey & Hollingworth, 1978), Binkley (Binkley, 2007), Chess et. al. (Chess & McGraw, 2004), (Howard, LeBlanc, & Viega, 24 Deadly Sins of Software Security - Programming Flaws and How to Fix Them, 2010), and Viega et. al. (Viega & McGraw, Building Secure Software: How to Avoid Security Problems the Right Way, 2002). Continues with software vulnerabilities after identified it earlier Using various keywords such as buffer overrun, heap overflow, etc Focus on reports between 2000 – 2010 Further separated to language categories, identified the relevancies and significances by measuring the CVSS Upon identified C overflow vulnerabilities, review in-depth Carefully selected the online vulnerabilities database -> OWASP, NIST, OSVDB, Secunia, Microsoft Review an average of 5 reports per year from 2010 to five years before Each reports -> identified the root cause, class of overflow, year, criticality, exploitation how to and impact Further review on the defensive and preventive mechanism -> static analysis tools and method The review continues with in-depth studies on vulnerabilities taxonomy as we agreed with Krsul and Tsipenyuk (and others) that understanding on vulnerabilities is crucial for further defending and protecting from exploitation on the vulnerabilities. Taxonomy Construction Two critical activities in this phase -> identified and develop/formulae the criteria for well-defined taxonomy and construction of C overflow vulnerabilities exploit taxonomy Criteria for well-defined taxonomy Focus on related reports and publications especially papers published by well-known publisher (IEEE & Springer) -> covers papers that specifies, criticize or compliments criterions Shortlisted criterions is then reviewed -> focus on characteristics and meaning of the criterions. The final list is then published for expert review Construction of the taxonomy The idea -> construct from source-code perspective and focus on exploits methods Studies the previously collected evidences (reports, papers, etc) -> focus on vulnerability behaviour, characteristics on exploit, severity and impact. The identify vulnerabilities are then classified together based on common shared characteristics/behaviour using name are known to community The taxonomy is also published for feedback and comments
  • #71: There are many criteria listed by previous experts, ranging from 3 (Killourhy, Maxion, & Tan, 2004) to as many as 18 by Lough (Lough, 2001). After a careful review of the meaning of each criterion and the relationships between them, the list was finalized to the eight most relevant and significant criteria, which became the criteria for a well-defined taxonomy.

The first criterion is Simplicity: complex knowledge is presented as simplified, readable and understandable structures (or collections) of the object being studied. It was identified by Tsipenyuk et al. (Tsipenyuk, Chess, & McGraw, 2005) in their studies on taxonomy. This criterion eases the understanding of a structured body of knowledge presented in taxonomy form; failure to meet it makes it difficult for a user to understand the subject or object under study. For example, the taxonomy by Hansmann (Hansmann & Hunt, A Taxonomy of Network and Computer Attacks, 2005), despite being structured, fails to meet this criterion.

The second criterion is Organized Structure, identified by many experts, such as Lough (Lough, 2001) and Vijayaraghavan (Vijayaraghavan & Kaner, 2003), as important for a well-defined taxonomy. It indicates that a taxonomy is well defined when a proper, readable structure of groups or classes of the subject is established. Without a proper structure, even a taxonomy that satisfies many other criteria will not help users understand the classified subject or object.

The third criterion is Obviousness, also known as Objectivity (Krsul, 1998) or Determinism (Krsul, 1998), (Lough, 2001), (Hansmann & Hunt, A Taxonomy of Network and Computer Attacks, 2005), (Alhazmi, Woo, & Malaiya, 2006). It is required so that an element or object in the field being studied can be classified or identified without doubt. This avoids confusion and difficulty for users, which would otherwise result in non-repeatability and non-exclusiveness of the process and classes, and hence in an ineffective taxonomy as a tool for understanding the field.

Repeatability is the fourth criterion required for a well-defined taxonomy. It bridges obviousness and specificity, reinforcing the former so that the taxonomy remains applicable and fit. Repeatability establishes consistency in classification and thus produces reliable results. It is identified as important by many experts, such as Hansmann (Hansmann & Hunt, A Taxonomy of Network and Computer Attacks, 2005), Alhazmi et al. (Alhazmi, Woo, & Malaiya, 2006), Lough (Lough, 2001), Krsul (Krsul, 1998), Howard and Longstaff (Howard & Longstaff, 1998), and Killourhy (Killourhy, Maxion, & Tan, 2004).

The fifth criterion is Specificity, also known as Mutual Exclusivity (Howard & Longstaff, 1998), (Lough, 2001), (Killourhy, Maxion, & Tan, 2004), (Hansmann & Hunt, A Taxonomy of Network and Computer Attacks, 2005), (Alhazmi, Woo, & Malaiya, 2006), Unambiguity (Howard & Longstaff, 1998), (Lough, 2001), (Hansmann & Hunt, A Taxonomy of Network and Computer Attacks, 2005) and Primitivity (Aslam, 1995). It is listed after repeatability and obviousness because an object studied must first be obvious and the classification process must be repeatable by any user within the same domain. The criterion strengthens Obviousness and Repeatability by ensuring consistency and avoiding confusion: an object being studied can be classified into one and only one class. A few taxonomies ignore this criterion, such as those by Zhivich (Zhivich M. A., 2005), Krsul (Krsul, 1998) and Zitser (Zitser, An Evaluation of Static Source Code Analyzers, 2003), which makes reliable and consistent classification difficult.

To strengthen the taxonomy and bind the previous three criteria (Obviousness, Repeatability and Specificity), the sixth criterion is Similarity. It must be used in conjunction with the other three to ensure the taxonomy achieves repeatability, obviousness of a class and of the objects within it, and mutually exclusive classes. This is possible because similarity guarantees that a class has specific characteristics and that the objects classified into it share common behaviours or characteristics.

The seventh criterion is Completeness. It defines the comprehensiveness of a taxonomy: the classes listed are sufficient and broad enough to cover every possible object within the field of study. Without it, an object might be filed into a new class, a wrong class, a class that does not reflect the object's characteristics, or a class containing objects with contrasting characteristics. This leads to inconsistency and non-repeatability when applying the taxonomy, confusing users and making the field harder to understand, as exemplified by the taxonomies of Bazaz and Arthur (Bazaz & Arthur, 2007) and Gegick and Williams (Gegick & Williams, 2005).

Finally, the last criterion that must be met before a taxonomy is considered well defined is Knowledge Compliance. The taxonomy shall be constructed within the known body of knowledge and shall use terminology and descriptions familiar to users in the area being studied, so that it improves their understanding of the subject. Vijayaraghavan and Kaner (Vijayaraghavan & Kaner, 2003) share this view, stating that a good taxonomy should be complete, useful and broad for ease of understanding, especially for those new to or moderately experienced in the area. Although these criteria were developed for software vulnerabilities, we strongly believe they are also applicable to other kinds of classifications and taxonomies.
  • #72: The taxonomy of C overflow vulnerabilities exploits was constructed using the developed criteria for a well-defined taxonomy as a guideline, to ensure it is useful and eases the user's understanding of C overflow vulnerabilities. It is a characteristic-based taxonomy: each class has its own unique characteristics so that there is no ambiguity in the classification process. It was constructed from the source-code perspective and from the exploitation methods used against the vulnerabilities. The taxonomy focuses on overflow vulnerabilities in the C programming language; nevertheless, C++ and Objective-C are also covered, as both are successors of C and therefore inherit C's behaviour and characteristics, including the overflow vulnerabilities (Younan, Joosen, & Piessens, 2004), (Fünfrocken, 2005), (Apple Inc., 2012), (Bitz, et al., 2008).

The first class in the taxonomy is Unsafe Function. The class comprises functions that are unsafe by default, for example scanf(), gets() and strcat() (Microsoft Corp., 2012), (IBM X-Force, 2011), (Secunia, 2011), (Bitz, et al., 2008); these functions do not perform validation when they execute. This differs from functions such as strlcpy() and strlcat(), which are considered safe by default because both require the developer to supply the length 'n' of the string as an input, minimizing the possibility of overflow. When an unsafe function executes on oversized input, it triggers an overflow: the next assigned memory is overwritten, the next function is executed or skipped, or the system behaves abnormally. Classifying a vulnerability under this class is simple, as one only needs to determine two criteria: the use of an unsafe function and non-validated input. Exploiting it is equally simple: attackers only need to supply input greater than the allocated size or longer than the estimated length of the addressed memory. The defence is also straightforward: avoid unsafe functions, or, if they must be used, validate the input before the function call (see the first sketch after this note).

The second class is Array Out-of-Bound. It covers any array in a program or function that is vulnerable to exploitation if it matches two criteria: the use of an array, and no validation of the array and its index before the array is accessed or processed. Attackers exploit this class by supplying an index beyond the defined size of the array, either more than the array's length 'n' or '-1', or by exploiting a program that calculates the index incorrectly.

The third class is Integer Range/Overflow. Any arithmetic operation that can trigger an overflow is classified here; however, overflow due to numerical conversion resulting from arithmetic operations is not included in this class, for reasons described in the discussion below. The class covers all arithmetic operations, such as addition, multiplication and subtraction, and all numerical types, such as int, double and float. Its characteristics are that it is triggered in an arithmetic operation and involves a numerical variable. This class is difficult to detect, since one needs to understand the arithmetic operation and be able to calculate its possible inputs and outputs. It can be prevented by validating the input before the operation and checking the output before it is used by subsequent code (see the second sketch after this note).

The fourth class is Return-into-libC. Its characteristics are similar to Unsafe Function and Array Out-of-Bound: it receives input from the user of the program and triggers an overflow upon execution. However, instead of overflowing the next assigned variable or memory space with the extra data, return-into-libC overflows it with a call to a function in the C library, which is then used to execute a call to a malicious function or program. <click> This shows the comparison of these three overflow vulnerability classes. <click> Prevention is to validate the input before executing any process with it and to ensure that the arguments being passed around are valid, have valid lengths and refer to valid functions.

The fifth class is Memory Function. It classifies any overflow vulnerability that involves functions for managing or processing memory. By default, memory functions such as malloc(), memcpy(), xmalloc() and free() are safe and do not trigger an overflow when executed, even with malicious input as a parameter; this is the key difference between this class and the Unsafe Function class. From the source-code perspective, the vulnerability exists where an input is not validated before use, a memory-related function is then called, and the result is not validated after the function executes. Uninitialized memory and memory left unused after allocation are also included here. The class can be avoided by validating the input before it is used in the memory function, freeing unused memory, ensuring that a second call to the same memory is verified, and/or initializing memory with default values.

The sixth class is Function Pointer, also known as Pointer Aliasing; the two terms are used interchangeably in this thesis. Pointer aliasing relates directly to the use of a pointer as a reference variable that stores a variable or function address, which is later used to obtain the value. This class differs from Return-into-libC in that it does not depend on input, is not limited to string pointers, and is triggered differently. Its characteristics are the use of a pointer to refer to a function or variable, purely as a reference; it becomes vulnerable when it refers to a nullified variable, refers to null, or its reference is overwritten with different values. A nullified variable or reference opens an opportunity for attackers, whereas overwritten values are injected by attackers and work in the same way as return-into-libC. Prevention is to ensure validation is in place before a pointer is used, although this applies only to nullified references or the pointer itself; for overwritten values, a semantic approach in static or dynamic source-code analysis is required.

Variable Type Conversion is the seventh class. It relates to converting a variable from one type to another. Any process, including an arithmetic operation, that results in an improperly handled type conversion, for instance converting an int to a double or float in a division, or converting a char to an int and vice versa, is classified here. Identifying this class is simply a matter of looking at the conversions done within the program and verifying whether validation is performed before the conversion code runs. It can be prevented by validating the variable used in the conversion and ensuring that its value stays within the accepted range after the conversion.

The eighth class is Pointer Scaling, also known as Pointer Mixing. It differs from Pointer Aliasing in that it is triggered in an arithmetic operation involving a pointer, whereas pointer aliasing concerns pointers used as references to variables or functions. In terms of references, the former may end up pointing to an invalid reference, whereas the latter still points to a valid reference but the value it refers to may overflow into the next defined variable due to an incorrect arithmetic operation. The vulnerability exists when a pointer reference can be moved unintentionally by some number of bytes as a result of pointer scaling in an arithmetic operation. From the source-code perspective it is not easy to identify: programmers need to look at the pointers involved, their types, and the result of the arithmetic and scaling operations. It is prevented by ensuring that the scaling operation is mathematically proven safe.

The ninth class is Uninitialized Variable: any variable that is not properly initialized upon or after definition and is then used in any process within the application is vulnerable and exploitable. Its characteristics are that it involves a variable and that the variable is used in a read statement before being initialized. It is prevented by ensuring that all variables are initialized before use.

Finally, the last but equally important class is Null Termination. It is defined as a vulnerability that occurs due to improper string termination: an array without a terminator at its last index, or with no null-byte termination, which results in a potential overflow or exploit. Its characteristics are the use of a variable that accepts a string whose length can reach the maximum defined for the variable (for instance, if the length is defined as 12 characters, the maximum input is also 12 characters), with the possibility of no terminator at the end of the input. From the source-code perspective it can be identified by looking at the read statement where the input string is received and assigned to a variable with no validation of the content before the next operation on it. It can be prevented by validating before the read operation or by limiting the input to length 'n - 1', which leaves room to append a null terminator if required (see the third sketch after this note).

Discussion. Many of these vulnerability classes were identified earlier, for example by Wagner (Wagner, Foster, Brewer, & Aiken, 2000), Lippmann and his students (Kratkiewicz & Lippmann, 2005), (Zhivich, Leek, & Lippmann, 2005), and most recently by Shahriar (Shariar & Zulkernine, 2011). However, these known C overflow vulnerabilities were not classified in a well-defined taxonomy focused on exploit methods and the source-code perspective. Sotirov stated that earlier work on taxonomies and software vulnerability classifications did not focus on the right attributes contributing to the problems of static analysis, which greatly affects the understanding of C overflow vulnerabilities (Sotirov, 2005). There are also taxonomies developed specifically for C overflow vulnerabilities from a source-code perspective, such as by Sotirov (Sotirov, 2005) and Moore (Moore, 2007), but with limited coverage. Besides taxonomies, there are classifications of software and network vulnerabilities that cover most known C overflow vulnerabilities, such as by Shahriar and Zulkernine (Shahriar & Zulkernine, 2011) and Killourhy et al. (Killourhy, Maxion, & Tan, 2004), although their classifications are broad or too general. Therefore, this C overflow vulnerabilities exploit taxonomy was constructed to overcome the shortcomings of previous classification and taxonomy work with regard to C overflow vulnerabilities.

Our taxonomy is constructed based on the eight criteria defined for a well-defined taxonomy. Earlier taxonomies were also constructed using criteria as guidelines, such as by Krsul (Krsul, 1998), Lough (Lough, 2001), Vijayaraghavan (Vijayaraghavan G. V., 2003), Killourhy (Killourhy, Maxion, & Tan, 2004) and Hansmann (Hansmann & Hunt, A Taxonomy of Network and Computer Attacks, 2005). However, the criteria used in previous works are not comprehensive, are unnecessary or are incomplete. Although those taxonomies improved the understanding of vulnerabilities, they are not yet significant in the area of C overflow vulnerabilities.

In terms of perspective, earlier works focused on the computer-memory perspective, viewing the impact on memory processing. That perspective is relevant but not significant to developers or programmers, as it does not help them write secure code; the terminology and structure must be closely related to programming work. In contrast, the C overflow vulnerabilities exploit taxonomy is constructed from the source-code perspective and from exploit methods, with the purpose of educating software developers on secure coding in C. The characteristics and classes were derived by focusing on how each vulnerability appears in source code and on the exploit methods that affect the code's sequence and processing, because we believe that to prevent C overflows the root cause must be cured rather than the symptom. Shahriar and Zulkernine (Shariar & Zulkernine, 2011) and Killourhy et al. (Killourhy, Maxion, & Tan, 2004) constructed their classifications from the same perspective; however, compared to theirs, the classification in our taxonomy focuses specifically on the developer's view and on how vulnerable code looks, whereas their classifications mention source code only generically and do not focus on its appearance. For that reason, the C overflow vulnerabilities exploit taxonomy is closer and friendlier to developers than other C overflow vulnerability taxonomies or classifications.

We agree with previous experts such as Krsul (Krsul, 1998), Lough (Lough, 2001), Vijayaraghavan (Vijayaraghavan G. V., 2003) and Alhazmi et al. (Alhazmi, Woo, & Malaiya, 2006), who stated that a well-defined taxonomy shall be specific and cover all vulnerabilities. Therefore, the taxonomy is constructed specifically for the C programming language, focusing only on overflow vulnerabilities; its scope is C and overflow vulnerabilities in C. Although it is specific to C, since C is also the predecessor of and baseline for C++ and Objective-C, those two languages are indirectly covered. Moore (Moore, 2007), Wilander (Wilander, 2002), Zitser (Zitser, An Evaluation of Static Source Code Analyzers, 2003), Zhivich (Zhivich M. A., 2005), Sotirov (Sotirov, 2005), Wagner (Wagner D. A., 2000) and Kratkiewicz (Kratkiewicz K. J., 2005) also constructed taxonomies or classifications specific to C; however, due to their different perspectives, they cover only limited classes of C overflow vulnerabilities. Other taxonomies, such as those by Aslam (Aslam, 1995), Howard and Longstaff (Howard & Longstaff, 1998), Krsul (Krsul, 1998), Pothamsetty and Akyol (Pothamsetty & Akyol, 2004), Killourhy et al. (Killourhy, Maxion, & Tan, 2004) and Hansmann and Hunt (Hansmann & Hunt, A Taxonomy of Network and Computer Attacks, 2005), despite covering many classes of C overflow vulnerabilities, are not specific to the C language or to C overflow vulnerabilities; their scopes are networks, generic software vulnerabilities or web application vulnerabilities, and hence remain broad and generic. For that reason, the C overflow vulnerabilities exploit taxonomy is more relevant and significant to developers who use C.

The C overflow vulnerabilities exploit taxonomy covers 10 classes of C overflow vulnerabilities and is the first taxonomy to cover them all. The closest work is the classification by Tsipenyuk et al. (Tsipenyuk, Chess, & McGraw, 2005), which listed seven overflow vulnerability classes in 2005. All of the classes in our taxonomy have been identified earlier by many experts and well-established organizations, but no prior work consolidated and organized that information into a taxonomy for ease of understanding and for educating developers. Therefore, we conclude that our taxonomy is comprehensive and up to date compared to previous works, as it contains all relevant and significant classes of C overflow vulnerabilities.
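First sketch: a small, assumed C example (not taken from the thesis) of the Unsafe Function class, contrasting an unvalidated strcat() call with a bounded alternative.

    #include <stdio.h>
    #include <string.h>

    /* Unsafe Function class: strcat() copies with no length check. */
    void greet_unsafe(const char *name)
    {
        char msg[32] = "Hello, ";
        strcat(msg, name);   /* overflows msg once name exceeds 24 chars */
        puts(msg);
    }

    /* Bounded alternative: the remaining space is computed before the copy. */
    void greet_bounded(const char *name)
    {
        char msg[32] = "Hello, ";
        strncat(msg, name, sizeof msg - strlen(msg) - 1);
        puts(msg);
    }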
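Second sketch: an assumed example of the Integer Range/Overflow class, where a size computed from untrusted values can wrap around before it reaches malloc(); checking the operands first prevents the wrap.

    #include <stdlib.h>
    #include <stdint.h>

    /* rows * cols can wrap to a small value, so malloc() would return an
       undersized buffer that later writes overflow. */
    void *alloc_table(size_t rows, size_t cols)
    {
        if (cols != 0 && rows > SIZE_MAX / cols)
            return NULL;             /* the multiplication would overflow */
        return malloc(rows * cols);
    }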
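Third sketch: an assumed example of the Null Termination class and of the 'n - 1' rule mentioned above, where one byte is reserved for the terminator.

    #include <stdio.h>
    #include <string.h>

    /* A 12-byte destination filled with 12 source characters is left without
       a terminator, so the printf() that follows reads past the array.
       Copying at most n-1 bytes and terminating explicitly avoids this. */
    void copy_name(const char *src)
    {
        char name[12];
        /* vulnerable form: strncpy(name, src, sizeof name); */
        strncpy(name, src, sizeof name - 1);
        name[sizeof name - 1] = '\0';
        printf("%s\n", name);
    }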
  • #74: Legend High – Average of the classes is classified with high severity (between 6.0 – 10.0) Medium – Average of the classes is classified with medium severity (between 4.0 – 5.9) Low – Average of the classes is classified with low severity (between 1.0 – 3.9) NA – Not Available. Tester Based on the evaluation, it is found that users experience and knowledge plays an important role in identifying and classifying the C overflow vulnerabilities. The more experience in C programming and software security, the more correctness Albeit of user experiences and knowledge plays important role, the taxonomy do help improves the user understanding based on number of successful mapping increases compared between the first test iteration and the third test iteration. This also indicates that upon user understand the structures of the taxonomy, they are able to recognize and identify the vulnerabilities easily. On top of that, it is found that majority of mismatch or failure to map is due to unclear definition, limited information and confusing statement in the given reports, advisories or publications. Further analysis on the reports or advisories used for this evaluation does not have source code as references resulted in difficulties to do the matching process. As a result, we concluded that the failure of mapping or mismatch is not due to our taxonomy, but rather the source used for mapping process. Hence, based on responses, is is strongly believed that the taxonomy is well-defined and therefore useful for software developers to identify C overflow vulnerabilities in source code. In addition, compared to previous works in taxonomy area, this is the first evaluation done to evaluate a taxonomy against well-defined criteria. It is agreeable and noted that the number of testers is small and hence the result may not be a strong or concrete evidence. Nevertheless, the result can be a baseline for future validation and improvement on the taxonomy to ensure that it is meaningful and useful to software developers and software security community. On top of that, the method used to evaluate this taxonomy is also applicable to validate other taxonomy. Database As shown on the table, the taxonomy is well-defined and comprehensive as it completely covers all existing C overflow vulnerabilities since it was discovered and reported in the seven websites despite there are vulnerabilities was not seen on the site. This is because the scope of the site and numbers of advisories published. The result also shows that all these vulnerabilities are still significant and relevant with many of it still appearing and the CVSS score ranked at least medium. Persistency, Impact, and Severity As shown in the table, first three classes consistently and persistently ranked with high impact and high severity by all organizations. Additionally, these three C overflow vulnerabilities classes are persistent too with the latest advisory published for Array out-of-bound vulnerability class on the year 2014. Having mentioned that, there is also possibility without doubt that the other two high ranked vulnerabilities will appear as well in the year 2014. This is based on finding that there are advisories released at the end of year 2013 and the year 2014 is yet to end. The result also shows that pointer scaling/mixing and null termination does not appear in any of the list since late 2009. This could indicate that the two vulnerabilities are no longer relevant and significant to security community. 
However, further analysis brings us to a few significant findings. First, although these two C overflow vulnerability classes have not appeared in any of the organizations' databases, some of the advisories listed in the last identified year are still being revised and are not yet complete. In addition, as mentioned by Kaspersky, there are successful exploitations that utilize old vulnerabilities (Kaspersky Lab, 2013). Furthermore, the two vulnerabilities may appear on those organizations' lists again, as other organizations such as RedHat and HP published advisories for them after 2009. As the result shows, the first three vulnerability classes, as arranged in the hierarchy of the C overflow vulnerabilities exploit taxonomy, are ranked with high severity; experts and well-known organizations also consistently rank these three classes as highly severe. The result also shows that the last two classes were given scores between low and medium severity, which indicates that these two C overflow vulnerability classes do not pose a high threat to the security community. However, some advisories for the first three classes were ranked with medium or lower severity, and some for the last two classes with high severity. There are also advisories that were given high severity in the first release but were later revised to a different severity level. Many inputs contribute to the severity value: detailed analysis of the advisories shows that severity is linked to impact value, complexity of exploitation, probability of being exploited, and availability of a fix or workaround. In addition, some advisories or reports were later rejected or never completed, and this too affects the severity level of a particular C overflow vulnerability class. Further analysis of the result also found a correlation between severity and persistency: the more severe a vulnerability is, the more persistent it will be. Impact value also plays an important role in severity: the greater the impact of a vulnerability, the more severe it will be.
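As a minimal sketch of how the legend above can be applied mechanically, the helper below averages the advisory scores collected for a class and maps the average onto the High/Medium/Low bands (6.0–10.0, 4.0–5.9, 1.0–3.9). The function names and the example scores are assumptions made for illustration; they are not part of the evaluation tooling described in the thesis.

/* Minimal sketch: classify a vulnerability class by the average of its
 * advisory scores, using the severity bands from the legend above.
 * All names and sample scores here are illustrative only. */
#include <stdio.h>
#include <stddef.h>

static const char *severity_band(double avg)
{
    if (avg >= 6.0) return "High";    /* 6.0 - 10.0 */
    if (avg >= 4.0) return "Medium";  /* 4.0 - 5.9  */
    if (avg >= 1.0) return "Low";     /* 1.0 - 3.9  */
    return "NA";                      /* no usable score */
}

int main(void)
{
    /* Example: CVSS-style scores gathered for one vulnerability class. */
    const double scores[] = { 7.5, 9.3, 6.8, 5.0 };
    const size_t n = sizeof scores / sizeof scores[0];

    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += scores[i];

    const double avg = sum / (double)n;
    printf("average = %.1f -> %s\n", avg, severity_band(avg));
    return 0;
}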
  • #79: Without any doubt, the evaluation has shown that all of the selected OS are still vulnerable. Although Windows XP 32-bit is end-of-life (EOL) and Microsoft support ended in April 2014, the number of users still running it remains significant compared to other OS such as Linux and Windows 8 (StatCounter, 2014), (Awio Web Services LLC, 2014), (Sharwood, 2014), (Net Applications.com, 2014). Hence, the impact of exploiting this OS is still significant and substantially critical. Given this result, it is absolutely critical to provide an effective security mechanism or to replace the OS as soon as possible to avoid unnecessary risk. It is also highly critical for Microsoft to improve the OS security mechanisms and provide effective protection against vulnerable programs and applications. As shown in the result, all C overflow vulnerability classes are still relevant and can be exploited on this operating system without fail. There is no doubt that the complexity of exploitation increases, especially on 64-bit OS, but this cannot be used to justify that the protection provided by the OS is sufficient. The table shows that only a few trial attempts are needed for an exploit to succeed and an overflow to occur. The result also shows that many of the vulnerabilities are exceptionally easy to exploit, because they are easily identified from the source code or from the behaviour of the program through trial and error. Windows 7 64-bit is more difficult to exploit, for several reasons: the operating system was built on a different architecture and foundation compared to its ancestor, implements 64-bit addressing and memory structures, and randomizes memory allocation, which makes it difficult to find a suitable slot for triggering an overflow. Therefore, although the operating system remains critically defenceless against C overflow vulnerabilities, exploiting them is significantly more difficult than on its ancestor, Windows XP, and on its competitor, the Linux-based operating system. From the result achieved in this evaluation, it is also concluded that all C overflow vulnerabilities are still relevant and significant, and that the classes constructed in the C overflow vulnerabilities exploit taxonomy are useful for the security community to evaluate the security of operating systems.
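A hedged sketch of the idea behind these trial counts: the small probe below simply prints the addresses of a stack buffer and a heap allocation so that, when it is run several times, the effect of randomised memory allocation (ASLR) can be observed directly. On systems that randomise the address space, such as Windows 7 64-bit and modern Linux, the addresses change between runs, which is one reason overflow placement needs more attempts. This is an illustration only, not the actual exploit programs used in the evaluation.

/* Illustrative probe, not the thesis test harness: print the addresses
 * of a stack buffer and a heap block so that address-space randomisation
 * can be observed by running the program several times. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char stack_buf[64];
    char *heap_buf = malloc(64);

    printf("stack buffer at %p\n", (void *)stack_buf);
    printf("heap  buffer at %p\n", (void *)heap_buf);

    free(heap_buf);
    return 0;
}

Compiling and running this a few times on Windows XP 32-bit versus Windows 7 64-bit would show the contrast in address predictability described above.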