1 Department of Electrical Engineering, National Taiwan University, Taipei City 106319, Taiwan;
[email protected] (G.-Y.Y.); [email protected] (Y.-Z.G.); [email protected] (Y.-W.T.);
[email protected] (P.-H.H.)
2 CyberLink Corporation, New Taipei City 231023, Taiwan
3 Institute of Artificial Intelligence Innovation, National Yang Ming Chiao Tung University,
Abstract: The rapid proliferation of network applications has led to a significant increase in network attacks. According to the OWASP Top 10 Projects report released in 2021, injection attacks rank among the top three vulnerabilities in software projects. This growing threat landscape has increased the complexity and workload of software testing, necessitating advanced tools to support agile development cycles. This paper introduces a novel test prioritization method for SQL injection vulnerabilities to enhance testing efficiency. By leveraging previous test outcomes, our method adjusts defense strength vectors for subsequent tests, optimizing the testing workflow and tailoring defense mechanisms to specific software needs. This approach aims to improve the effectiveness and efficiency of vulnerability detection and mitigation through a flexible framework that incorporates dynamic adjustments and considers the temporal aspects of vulnerability exposure.
Recent reports, such as the OWASP Top 10 project published in 2021 [4], indicate that
injection attacks rank among the top three common vulnerabilities in software projects.
SQL injection attacks, in particular, exploit inadequate input validation and poor website
management, allowing attackers to inject malicious SQL statements into queries generated
by web applications. This can lead to unauthorized access to backend databases and exposure of sensitive information, such as usernames, passwords, email addresses, phone
numbers, and credit card details. Additionally, attackers can alter database schemas and
content, exacerbating the potential damage.
Various types of SQL injection attacks have been identified [5], and numerous methods for detecting and preventing them have been proposed. However, existing solutions
often have limitations and may not fully address the evolving nature of these attacks. As
new attack vectors emerge, it is crucial to identify and understand current technologies to
develop more effective countermeasures.
1.2. Motivation
Software testing is a crucial and costly industry activity, often accounting for over
50% of the total software development cost [6]. To address this challenge for SQL injection testing in agile software development, we propose a software testing tool for automatic testing, activated upon the tester's submission of the software version and relevant information. This tool customizes defense functions tailored to the specific software under test,
enhancing the testing process’s efficiency. Following the completion of testing, the tool
adjusts the defense strength vector to optimize subsequent tests.
Upon receiving the version information, the tool first retests the cases that failed in the previous round, a choice based on common tester practices. Prioritizing previously failed test cases gives testers prompt, real-time feedback on iterative versions. After testing, the test process is adjusted to facilitate future tests, improving overall testing efficiency.
To demonstrate the impact of test prioritization on time and cost, we conducted a pre-test to measure the time differences between various prioritization methods in identifying the first SQL injection vulnerability. For instance, in target1 (powerhr), the time difference between the TEUQSB and BUSQET prioritization orders was up to 64 s, as shown in Figure 1. Similar differences were observed for target2 (dcstcs) and target3 (healthy), with time differences exceeding 15 s, as illustrated in Figure 1. This substantial variance underscores the importance of optimizing test prioritization strategies. Each letter in sequences such as TEUQSB and BUSQET represents a different type of SQL injection, detailed in Table 1, with further explanations provided in Section 2.2 (SQL Injection).
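For reference, these letters follow SQLMAP's technique-flag convention; a minimal sketch, assuming the standard letter-to-technique mapping is the one used in Table 1:

```python
# SQLMAP technique letters (assumed to match the convention in Table 1).
SQLI_TECHNIQUES = {
    "B": "Boolean-based blind",
    "E": "Error-based",
    "U": "UNION query-based",
    "S": "Stacked queries",
    "T": "Time-based blind",
    "Q": "Inline queries",
}

# A prioritization order is simply a permutation of these letters,
# e.g., the two orders compared in the pre-test:
for order in ("TEUQSB", "BUSQET"):
    print(order, "->", [SQLI_TECHNIQUES[c] for c in order])
```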
1.3. Contribution
In this subsection, we outline the critical contributions of our work:
• We proposed a new algorithm to prioritize SQL Injection vulnerability test cases. This
algorithm is part of a comprehensive framework (TPSQLi) that includes the design
of weight functions, dynamic adjustments, and evaluation methods to ensure efficient and effective prioritization.
• We enhanced the testing process in SQLMAP by prioritizing test cases that failed in
previous rounds, thereby improving the efficiency and effectiveness of the testing
cycle.
• Our framework (TPSQLi) is designed to adapt to immediate feedback and evolving
security threats, ensuring continuous adjustments to testing priorities. This adaptability significantly enhances the efficiency of security testing, particularly for regression testing, ensuring that testing remains relevant and effective in addressing the
ever-changing landscape of web application security.
• Our framework (TPSQLi) outperforms ART4SQLi [9], the current state-of-the-art (SOTA) test prioritization framework, through more efficient and adaptable prioritization mechanisms.
1.4. Organization
The rest of this paper is organized as follows: Section 2 introduces the preliminary
knowledge essential for our study, providing the foundational background. Section 3 offers a comprehensive review of related research, establishing the context and significance
of our research. Section 4 presents the TPSQLi penetration testing framework and the test
prioritization algorithm unit, detailing the conceptual design and implementation processes with insights into practical applications. Section 5 discusses the evaluation results,
presenting a comparative analysis with the state-of-the-art ART4SQLi [9] across 10 test
cases. Finally, Section 6 concludes the paper by summarizing the main findings and their
implications.
2. Preliminaries
This section introduces the fundamental concepts of software testing, test prioritization, and SQL injection, which are essential to understanding our proposed methodology.
3. Related Work
SQL injection (SQLi) remains a significant threat to the security of web applications,
prompting the development of various detection and testing methodologies [21]. In 2019,
Zhang et al. proposed ART4SQLi, an adaptive random testing method based on SQLMAP
that prioritizes test cases to efficiently identify SQLi vulnerabilities, reducing the number
of attempts needed by over 26% [9]. Unlike static-order test prioritization, which calculates the distance between payloads, our tool employs a dynamic adjustment mechanism, enhancing detection efficiency.
Furthermore, most research does not address test prioritization for SQL injection.
Al Wahaibi et al. introduced SQIRL in 2023, which utilizes deep reinforcement learning with grey-box feedback to intelligently fuzz input fields, generating diverse payloads, discovering more vulnerabilities with fewer attempts, and achieving zero false positives [22]. However, this method does not include test prioritization. Similarly, Erdődi et al. (2021) simulated SQLi attacks using Q-learning agents within a reinforcement learning framework, modeling SQLi as a security capture-the-flag challenge and enabling agents to learn generalizable attack strategies [25]. In the same year, Kasim developed an ensemble classification-based method that detects and classifies SQLi attacks as simple, unified, or lateral, utilizing features from the OWASP dataset to achieve high detection and classification accuracy [26]. Additionally, in 2023, Ravindran et al. created a Chrome extension to detect and prevent SQLi and XSS attacks by analyzing incoming data for suspicious patterns, thus enhancing web application security [2]. In 2024, Arasteh et al. presented a machine learning-based detection method using binary Gray-Wolf optimization for feature selection, which enhances detection efficiency by focusing on the most informative features, achieving high accuracy and precision [23].
In test prioritization, Chen et al. investigated various techniques to optimize regression testing time in 2018. Based on test distribution analysis, their predictive test prioritization (PTP) method accurately predicts the optimal prioritization technique, significantly improving fault detection and reducing testing costs [27]. Haghighatkhah et al. studied the combination of diversity-based and history-based test prioritization (DBTP and HBTP) in continuous integration environments, finding that leveraging previous failure knowledge (HBTP) is highly effective, while DBTP is beneficial during early stages or when combined with HBTP [28]. Alptekin et al. introduced a method to prioritize security test executions based on web page similarities, hypothesizing that similar pages have similar vulnerabilities. This approach achieved high accuracy in predicting vulnerabilities, speeding up vulnerability assessments and improving testing efficiency [29]. In 2023, Medeiros et al. proposed a clustering-based approach to categorize and prioritize code units based on security trustworthiness models. This method helps developers improve code security early in development by identifying code units prone to vulnerabilities, reducing potential vulnerabilities and associated costs [30].
These studies collectively advance the fields of SQL injection detection and test prioritization, providing robust methodologies to enhance web application security and optimize testing processes. However, to our knowledge, only ART4SQLi uses test prioritization to boost SQL injection testing.
In this penetration framework, the initial step involves the user entering the URL to be tested. Subsequently, a web crawler is employed to identify additional test targets, with the crawling depth determined by user-defined settings. The next phase involves launching the SQL Injection Attack Detection Model, which encompasses four main components: the Parameter Testing Panel, Test Prioritization Panel, Exploit Panel, and Report Panel. In the Parameter Testing Panel, parameters can be transmitted using the GET or POST method. For GET requests, parameters are extracted directly from the web pages; for POST requests, the HTML code is analyzed to locate "form" tags. The subsequent Test Prioritization Panel calculates an appropriate testing sequence for the Exploit Panel, utilizing the test prioritization algorithm detailed in the next section. Once the parameters are extracted, they are tested based on the prioritization determined in the previous panel. The test results are then fed into the test prioritization algorithm unit to refine future testing sequences. Finally, the Report Panel displays the results of the detection process, provides relevant information, and offers recommended solutions.
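As an illustration of the parameter extraction step, the following sketch (our own, not the framework's actual implementation; the helper names and the use of BeautifulSoup are assumptions) reads GET parameters from a URL's query string and locates "form" tags for POST parameters:

```python
from urllib.parse import urlparse, parse_qs

from bs4 import BeautifulSoup  # assumed parser; any HTML parser would do


def extract_get_params(url: str) -> dict:
    """GET case: parameters are read directly from the URL's query string."""
    return parse_qs(urlparse(url).query)


def extract_post_forms(html: str) -> list:
    """POST case: analyze the HTML to locate "form" tags and their fields."""
    soup = BeautifulSoup(html, "html.parser")
    forms = []
    for form in soup.find_all("form"):
        forms.append({
            "action": form.get("action", ""),
            "method": form.get("method", "get").upper(),
            "fields": [i.get("name") for i in form.find_all("input") if i.get("name")],
        })
    return forms


if __name__ == "__main__":
    print(extract_get_params("http://example.com/item?id=3&cat=7"))
```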
4.4.1. Pseudocode
Algorithm 1 outlines the test prioritization algorithm for SQL injection vulnerability
detection in web applications. The algorithm takes as input an n-dimensional Strength–Weakness (SW) vector, where n is the number of techniques, and a payload for each technique.
The process of our algorithm is detailed below:
1. Initialization (Set Initial Variables):
a. Strength–Weakness Vector (‘SW_Vector’): Initialize a fundamental score for each technique using a function discussed in the subsequent subsection. This vector quantifies each SQL injection technique’s relative strengths or weaknesses based on historical data or predefined metrics.
b. Exploit Time (‘exploit_time’): Initialize this variable to record the fastest exploit time for each technique. For instance, if technique A exploits a vulnerability in 2 s while technique B takes 5 s, this difference would influence the prioritization of these techniques.
c. Is Exploit (‘is_exploit’): Initialize this boolean variable to record the success status of each technique. If a technique fails, it is marked as False, affecting its priority in future tests.
2. Execution Loop:
a. Payload Selection: For each payload that failed in the previous round (higher-risk payloads), determine whether it can be successfully exploited. If not, reduce its risk and select the payload with the highest fundamental score for testing.
b. Execution: Start the execution loop by recording the start time to establish
a baseline duration. Execute the selected payload, performing the specific
task assigned by the system. After the payload execution, record the end
time to calculate the total execution time, which is critical for performance
analysis and optimization.
c. Outcome Evaluation: Determine whether the exploit was successful.
There are two possible outcomes:
Failure Case:
i. If ‘is_exploit’ for the technique is False, add the execution time and
subtract one point from the fundamental score.
ii. If ‘is_exploit’ is True, only subtract one point from the fundamental
score without adding the execution time.
Success Case:
i. Set the payload’s risk to high.
ii. If ‘is_exploit’ for the technique is False, add the execution time and
set ‘is_exploit’ to True.
3. Iteration and Update:
a. Iterate: Continue testing the next payload until all payloads have been
executed.
b. Update: After executing all payloads, update the SW-vector based on the
recorded exploit time and the ‘is_exploit’ status.
Our proposed algorithm ensures that the payloads with higher risks are prioritized
and the execution times are minimized by continuously updating and adapting the
strength–weakness vector based on the performance of each technique. For an example of
the algorithm in action, please refer to Section 4.4.4.
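A minimal Python sketch of this loop is shown below; the names (run_payload, the per-technique dictionaries, and the payload record layout) are our own assumptions, so this mirrors the outcome rules above rather than reproducing the framework's actual code:

```python
import time


def run_round(payloads, sw_vector, exploit_time, is_exploit, run_payload):
    """One round of the prioritization loop (simplified sketch).

    payloads:     list of dicts {"technique": str, "payload": str, "risk": int}
    sw_vector:    technique -> fundamental score
    exploit_time: technique -> accumulated exploit time
    is_exploit:   technique -> whether the technique has ever succeeded
    run_payload:  callable returning True if the payload exploits the target
    """
    # Higher-risk payloads (failures from the previous round) go first;
    # ties are broken by the technique's fundamental score.
    queue = sorted(payloads,
                   key=lambda p: (-p["risk"], -sw_vector[p["technique"]]))
    for p in queue:
        tech = p["technique"]
        start = time.monotonic()
        success = run_payload(p["payload"])
        elapsed = time.monotonic() - start
        if success:
            p["risk"] = 1  # success: set the payload's risk to high
            if not is_exploit[tech]:
                exploit_time[tech] += elapsed
                is_exploit[tech] = True
        else:
            sw_vector[tech] -= 1  # failure: subtract one point
            if not is_exploit[tech]:
                exploit_time[tech] += elapsed
    # After all payloads run, the SW vector is updated from exploit_time
    # and is_exploit via the weight functions of the next subsection.
    return sw_vector, exploit_time, is_exploit
```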
• Rule 3: The score calculation considers the difference in exploit time using the reciprocal ratio of these times.
Moreover, we use the following mathematical formulas to express the model:
• Number of techniques: n.
• Successful exploits: a.
• Failed exploits: n − a.
• Success exploit times: S_1, S_2, …, S_a.
• Fail exploit times: F_1, F_2, …, F_{n−a}.
• Weights (W_{F_i}, W_{S_i}, and W_i are all set to zero in the initial process):
$$W_{S_i} = \frac{1}{S_i} \cdot n \cdot \frac{1}{\sum_{k=1}^{a} \frac{1}{S_k}}, \quad \forall i = 1, 2, \ldots, a \tag{2}$$
For a deeper understanding of our algorithm, please refer to Section 4.4.4 for a simple
example.
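To make Equation (2) concrete, below is a small numerical sketch (our own illustration; the exploit times are hypothetical):

```python
# Success weights per Equation (2): W_Si = (1 / S_i) * n / sum_k (1 / S_k).
n = 6                # total number of techniques
S = [2.0, 5.0, 8.0]  # hypothetical success exploit times (a = 3)

inv_sum = sum(1.0 / s for s in S)
W_S = [(1.0 / s) * n / inv_sum for s in S]
print(W_S)  # ~[3.64, 1.45, 0.91]: faster exploits receive larger weights
```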
Payload risk levels are adjusted dynamically based on success or failure: successful exploits prompt setting the payload's risk to high and updating the technique's status.
For failed payloads, the system subtracts points from their fundamental score and retests the next payload. If a payload is successfully exploited, its execution time is recorded,
and the strength–weakness vector is updated accordingly. This process continues until all
payloads are tested, ensuring comprehensive coverage of potential vulnerabilities.
5. Evaluation
The experimental setup included an ASUS P2520LA machine (manufactured in China) running Windows 10, equipped with 12 GB of RAM and an Intel(R) Core(TM) i5-5200U CPU operating at 2.2 GHz with four cores. The penetration testing framework, as outlined in Section 4, was implemented using Python 3.9.7. Our
study focused on two specific targets, DVWA SQL-blind and DVWA SQL, along with
eight real-world cases, including login pages, blogs, business websites, and eCommerce
sites. Our framework was first evaluated on an open-source software project, with the
specific test targets depicted in Table 2.
We performed five rounds of testing, recording the results and calculating the
weights using Equations (1) and (2). The compiled scores are presented in Table 3. These
scores informed the development of new testing priorities, transitioning from the existing
BEUSTQ to a new order based on the calculated weights. Detailed timing information for
each round is available in Appendix A.
Figure 8. Comparison between the original tool (ART4SQLi) and the optimized tool (TPSQLi) across various cases: (a) DVWA (SQL blind); (b) DVWA (SQL); (c) R1; (d) R2; (e) R3; (f) R4; (g) R5; (h) R6; (i) R7; (j) R8.
The last column in Table 4, labeled |Z|, displays the values obtained from the statistical Z-test, calculated using the mean (T_mean) and standard deviation (T_std) specified in Equation (4). This column determines whether there is a statistically significant difference between the results of the two approaches (ART4SQLi and TPSQLi) across the ten test cases, at a confidence level of 95%, i.e., Z_{0.95} = 1.645.
$$Z = \frac{T_{mean,\,\mathrm{TPSQLi}} - T_{mean,\,\mathrm{ART4SQLi}}}{\sqrt{T_{std,\,\mathrm{TPSQLi}}^{2} + T_{std,\,\mathrm{ART4SQLi}}^{2}}} \tag{4}$$
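As a quick sketch, Equation (4) can be computed directly; the numbers below are hypothetical and not taken from Table 4:

```python
from math import sqrt


def z_statistic(mean_tpsqli, std_tpsqli, mean_art4sqli, std_art4sqli):
    """Z statistic from Equation (4)."""
    return (mean_tpsqli - mean_art4sqli) / sqrt(std_tpsqli**2 + std_art4sqli**2)


# Hypothetical execution-time statistics (seconds) for one test case.
z = z_statistic(42.0, 3.0, 55.0, 4.0)
print(abs(z), abs(z) > 1.645)  # |Z| > 1.645 => significant at the 95% level
```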
As shown in Table 4, TPSQLi outperformed ART4SQLi across all metrics, with the exception of T_mean for R4. Notably, the test prioritization did not alter the order for R4 and DVWA (SQL), resulting in the same order as observed in ART4SQLi. The |Z| values, except for those corresponding to DVWA (SQL) and R4, exceeded the critical value Z_{0.95} = 1.645, as indicated in the rightmost column. This indicates that the Z-test confirms a significant reduction in execution time for TPSQLi relative to ART4SQLi in the remaining eight test cases.
We designed a new metric, the False Positive Measure (FPM), to systematically compare false positives. This metric is defined as:
$$\mathit{FPM} = \frac{\text{Total Executed Payloads}}{\text{False Positives}} \tag{5}$$
A higher FPM indicates a more effective process, implying fewer false positives relative to the number of executed payloads. To further quantify the improvement of TPSQLi over ART4SQLi, we also introduce an Improved Rate (IR), calculated as:
$$\mathit{IR} = \frac{\mathit{FPM}_{\mathrm{TPSQLi}} - \mathit{FPM}_{\mathrm{ART4SQLi}}}{\mathit{FPM}_{\mathrm{ART4SQLi}}} \times 100\% \tag{6}$$
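Both metrics are straightforward to compute; a brief sketch with hypothetical counts (not taken from Table 5):

```python
def fpm(total_executed_payloads: int, false_positives: int) -> float:
    """False Positive Measure, Equation (5); higher is better."""
    return total_executed_payloads / false_positives


def improved_rate(fpm_tpsqli: float, fpm_art4sqli: float) -> float:
    """Improved Rate, Equation (6), as a percentage."""
    return (fpm_tpsqli - fpm_art4sqli) / fpm_art4sqli * 100.0


# Hypothetical counts for one test target: same payload budget,
# TPSQLi yields fewer false positives.
print(improved_rate(fpm(120, 3), fpm(120, 4)))  # ~33.3
```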
The IR expresses the percentage improvement offered by TPSQLi over ART4SQLi. As shown in Table 5, TPSQLi delivers significant enhancements, particularly for test targets R1 and R3, with IR values of 19.95% and 16.62%, respectively. On average, TPSQLi demonstrates a 4.65% overall improvement, indicating consistent efficiency gains across different web applications. Notably, no improvement was observed on test targets DVWA (SQL) and R4, as the original order in both cases was optimal, resulting in identical FPM values for both TPSQLi and ART4SQLi.
This analysis highlights that our test prioritization method effectively reduces false positives and improves the accuracy of SQL injection detection. The increase in FPM values and positive IR percentages confirm that TPSQLi is superior to ART4SQLi, particularly in handling test execution time and precision.
Table 5. Comparison of TPSQLi and ART4SQLi based on False Positive Measure (FPM) and Improved Rate (IR) across various test targets.
6. Conclusions
In this research, we propose a comprehensive framework for regression testing of SQLi, focused on optimizing test prioritization through the design of weight functions and evaluation methods. The proposed module for calculating test prioritization is flexible, accommodating various technologies and targets, and includes dynamic adjustment capabilities. The weight calculation method accounts for the time differential in the exposure of weaknesses and considers the probability of successful exploitation by different technologies. Our findings demonstrate that the time to expose the first weakness is significantly reduced, and the overall exposure time is faster than with the original test order. TPSQLi effectively accelerates the testing process, as evidenced by statistical Z-test calculations confirming significant differences compared with ART4SQLi. We also analyzed false positives to assess the effectiveness of TPSQLi in comparison with ART4SQLi. Future work will explore machine learning methods, such as large language models for feature extraction and reinforcement learning for dynamically adjusting test prioritization, aiming to further improve the efficiency of SQL injection testing.
Author Contributions: Conceptualization, F.W., G.-Y.Y. and Y.-Z.G.; data curation, G.-Y.Y.; formal analysis, G.-Y.Y.; investigation, Y.-Z.G., G.-Y.Y. and F.W.; methodology, G.-Y.Y. and Y.-Z.G.; implementation, G.-Y.Y., Y.-Z.G. and P.-H.H.; experiment, G.-Y.Y. and Y.-Z.G.; visualization, G.-Y.Y.; writing—original draft preparation, G.-Y.Y.; writing—review and editing, G.-Y.Y., Y.-W.T., K.-H.Y., F.W. and W.-L.W.; supervision, F.W., G.-Y.Y. and K.-H.Y.; project administration, F.W. and K.-H.Y. All authors have read and agreed to the published version of the manuscript.
Funding: This research was partly supported by the National Science and Technology Council
(NSTC), Taiwan, ROC, under the projects MOST 110-2221-E-002-069-MY3, NSTC 111-2221-E-A49-
202-MY3, NSTC 112-2634-F-011-002-MBK and NSTC 113-2634-F-011-002-MBK. We also received
partial support from the 2024 CITI Visible Project: Questionnaire-based Technology for App Layout
Evaluation, Academia Sinica, Taiwan, ROC.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in the study are included in the article; further inquiries can be directed to the corresponding author(s).
Acknowledgments: We thank the anonymous reviewers for their valuable comments, which helped improve this manuscript. We also thank Jui-Ning Chen from Academia Sinica, Taiwan, for her invaluable comments. We appreciate the Speech AI Research Center of National Yang Ming Chiao Tung University for providing the necessary computational resources. Additionally, we utilized GPT-4 to assist with wording, formatting, and stylistic improvements throughout this research.
Conflicts of Interest: Author You-Zong Gu is employed by CyberLink Corporation. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Appendix A
All time data are reported in seconds.
References
1. Sharma, P.; Johari, R.; Sarma, S. Integrated approach to prevent SQL injection attack and reflected cross site scripting attack. Int.
J. Syst. Assur. Eng. Manag. 2012, 3, 343–351.
2. Ravindran, R.; Abhishek, S.; Anjali, T.; Shenoi, A. Fortifying Web Applications: Advanced XSS and SQLi Payload Constructor
for Enhanced Security. In Proceedings of the International Conference on Information and Communication Technology for
Competitive Strategies, Jaipur, India, 8–9 December 2023; pp. 421–429.
3. Odion, T.O.; Ebo, I.O.; Imam, R.M.; Ahmed, A.I.; Musa, U.N. VulScan: A Web-Based Vulnerability Multi-Scanner for Web Application. In Proceedings of the 2023 International Conference on Science, Engineering and Business for Sustainable Development Goals (SEB-SDG), Kwara, Nigeria, 5–7 April 2023; pp. 1–7.
4. OWASP Top Ten. Available online: https://owasp.org/www-project-top-ten/ (accessed on 11 July 2024).
5. Shehu, B.; Xhuvani, A. A literature review and comparative analyses on SQL injection: Vulnerabilities, attacks and their prevention and detection techniques. Int. J. Comput. Sci. Issues (IJCSI) 2014, 11, 28.
6. Wang, F.; Wu, J.-H.; Huang, C.-H.; Chang, K.-H. Evolving a test oracle in black-box testing. In Proceedings of the Fundamental
Approaches to Software Engineering: 14th International Conference, FASE 2011, Held as Part of the Joint European Conferences
on Theory and Practice of Software, ETAPS 2011, Saarbrücken, Germany, March 26–April 3 2011; pp. 310–325.
7. OWASP ZAP. Available online: https://www.zaproxy.org/ (accessed on 15 July 2024).
8. Sqlmap. Available online: https://sqlmap.org (accessed on 28 June 2024).
9. Zhang, L.; Zhang, D.; Wang, C.; Zhao, J.; Zhang, Z. ART4SQLi: The ART of SQL injection vulnerability discovery. IEEE Trans.
Reliab. 2019, 68, 1470–1489.
10. Hailpern, B.; Santhanam, P. Software debugging, testing, and verification. IBM Syst. J. 2002, 41, 4–12.
11. Spillner, A.; Linz, T. Software Testing Foundations: A Study Guide for the Certified Tester Exam-Foundation Level-ISTQB® Compliant;
Dpunkt. Verlag: Heidelberg/Wieblingen, Germany, 2021.
12. Yoo, S.; Harman, M. Regression testing minimization, selection and prioritization: A survey. Softw. Test. Verif. Reliab. 2012, 22,
67–120.
13. Huang, G.-D.; Wang, F. Automatic test case generation with region-related coverage annotations for real-time systems. In Proceedings of the International Symposium on Automated Technology for Verification and Analysis, Taipei, Taiwan, 4–7 October 2005; pp. 144–158.
14. Strandberg, P.E.; Sundmark, D.; Afzal, W.; Ostrand, T.J.; Weyuker, E.J. Experience report: Automated system level regression
test prioritization using multiple factors. In Proceedings of the 2016 IEEE 27th International Symposium on Software Reliability
Engineering (ISSRE), Ottawa, ON, Canada, 23–27 October 2016; pp. 12–23.
15. Bajaj, A.; Sangwan, O.P. A systematic literature review of test case prioritization using genetic algorithms. IEEE Access 2019, 7,
126355–126375.
16. Elbaum, S.; Rothermel, G.; Penix, J. Techniques for improving regression testing in continuous integration development environments. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, Hong Kong, China, 16–21 November 2014; pp. 235–245.
17. Strandberg, P.E.; Afzal, W.; Ostrand, T.J.; Weyuker, E.J.; Sundmark, D. Automated system-level regression test prioritization in
a nutshell. IEEE Softw. 2017, 34, 30–37.
18. Wang, F.; Huang, G.-D. Test Plan Generation for Concurrent Real-Time Systems Based on Zone Coverage Analysis. In Proceedings of the International Workshop on Formal Approaches to Software Testing, Tokyo, Japan, 10–13 June 2008; pp. 234–249.
19. Marchand-Melsom, A.; Nguyen Mai, D.B. Automatic repair of OWASP Top 10 security vulnerabilities: A survey. In Proceedings
of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, Seoul, Republic of Korea, 27 June–19
July 2020; pp. 23–30.
20. Bobade, N.D.; Sinha, V.A.; Sherekar, S.S. A diligent survey of SQL injection attacks, detection and evaluation of mitigation
techniques. In Proceedings of the 2024 IEEE International Students' Conference on Electrical, Electronics and Computer Science
(SCEECS), Bhopal, India, 24–25 February 2024; pp. 1–5.
21. Marashdeh, Z.; Suwais, K.; Alia, M. A survey on SQL injection attack: Detection and challenges. In Proceedings of the 2021 International Conference on Information Technology (ICIT), Amman, Jordan, 14–15 July 2021; pp. 957–962.
22. Al Wahaibi, S.; Foley, M.; Maffeis, S. SQIRL: Grey-Box Detection of SQL Injection Vulnerabilities Using Reinforcement Learning. In Proceedings of the 32nd USENIX Security Symposium (USENIX Security 23), Anaheim, CA, USA, 9–11 August 2023; pp. 6097–6114.
23. Arasteh, B.; Aghaei, B.; Farzad, B.; Arasteh, K.; Kiani, F.; Torkamanian-Afshar, M. Detecting SQL injection attacks by binary
gray wolf optimizer and machine learning algorithms. Neural Comput. Appl. 2024, 36, 6771–6792.
24. Nasereddin, M.; ALKhamaiseh, A.; Qasaimeh, M.; Al-Qassas, R. A systematic review of detection and prevention techniques of
SQL injection attacks. Inf. Secur. J. A Glob. Perspect. 2023, 32, 252–265.
25. Erdődi, L.; Sommervoll, Å.Å.; Zennaro, F.M. Simulating SQL injection vulnerability exploitation using Q-learning reinforcement learning agents. J. Inf. Secur. Appl. 2021, 61, 102903.
26. Kasim, Ö. An ensemble classification-based approach to detect attack level of SQL injections. J. Inf. Secur. Appl. 2021, 59, 102852.
27. Chen, J.; Lou, Y.; Zhang, L.; Zhou, J.; Wang, X.; Hao, D.; Zhang, L. Optimizing test prioritization via test distribution analysis. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Lake Buena Vista, FL, USA, 4–9 November 2018; pp. 656–667.
28. Haghighatkhah, A.; Mäntylä, M.; Oivo, M.; Kuvaja, P. Test prioritization in continuous integration environments. J. Syst. Softw.
2018, 146, 80–98.
29. Alptekin, H.; Demir, S.; Şimşek, Ş.; Yilmaz, C. Towards prioritizing vulnerability testing. In Proceedings of the 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C), Macau, China, 11–14 December 2020; pp. 672–673.
30. Medeiros, N.; Ivaki, N.; Costa, P.; Vieira, M. Trustworthiness models to categorize and prioritize code for security improvement.
J. Syst. Softw. 2023, 198, 111621.
31. Khalid, A.; Yousif, M.M. Dynamic analysis tool for detecting SQL injection. Int. J. Comput. Sci. Inf. Secur. (IJCSIS) 2016, 14, 224–232.
32. HITCON ZeroDay. Available online: https://zeroday.hitcon.org/ (accessed on 28 June 2024).
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.