Kill Box Community Research Methods
FEBRUARY 2025
“Hard work beats talent when talent does
not work hard”
TOGETHER WE CAN … IN GOD WE TRUST
TEST 1
Answers to the Questions
• Experiments – Conducting controlled tests to analyze specific variables and their effects.
(a) New Knowledge – This refers to original findings that contribute to a field of study, such as
discovering a new technology or developing a novel scientific method.
(b) Existing Knowledge – Refers to already established facts, theories, and data that are documented in
academic literature, textbooks, or previous research.
(c) Research Gap – An area within a field that has not been sufficiently studied or lacks concrete
conclusions, presenting an opportunity for new research.
(d) Literature Review – A comprehensive analysis of existing research on a particular topic to understand
the progress made, identify gaps, and build on previous work.
• The image depicts a flooded area with damaged houses and infrastructure, indicating a
need to research the causes and impacts of urban flooding.
• Research Problem: The increasing occurrences of urban flooding and their socio-
economic impact on local communities.
Main Objective:
• To investigate the causes, effects, and possible solutions to urban flooding in the
affected region.
Specific Objectives:
(a) How literature review brings clarity to a research problem and improves methodology (2 marks)
• A literature review helps refine the research problem by identifying gaps in existing
studies.
• It also provides insights into various methodologies used by other researchers, guiding
the selection of appropriate methods for the study.
(b) Is it advisable to conduct a literature review after identifying a research problem? (2 marks)
• Yes, it is advisable because a literature review helps validate the research problem,
ensuring it has not already been extensively studied.
• It also provides theoretical and empirical frameworks that help in designing research
methods and interpreting results.
• It ensures that the researcher builds on existing knowledge rather than duplicating past
research.
• It helps in identifying the best methodologies and avoiding common pitfalls encountered
in previous studies.
• Participant Observation: The researcher actively engages in the activities of the study
subjects while observing their behavior (e.g., a researcher living in a flood-prone community to
understand their experiences).
1. Observation of phenomena
2. Literature review
3. Scholarly interactions with colleagues
4. Personal experiences
TEST 2
PART A: Choose the correct answer
a) Case Study
b) Phenomenology
c) Ethnography
d) Grounded Theory
2. How does research contribute to security in software development?
c) By staying updated with the latest security trends and implementing robust security protocols
a) Random sampling
b) Cluster sampling
c) Quota sampling
d) Judgment sampling
a) Secondary data
b) Primary data
c) Quantitative data
d) Qualitative data
a) A citation appears at the end of a document, while a reference occurs within the text
b) A citation is a specific mention of a source within the text, while a reference is the complete
source information listed at the end
c) A reference is used for paraphrased content, while a citation is used for direct quotations
PART B
A researcher wants to determine the sample size for a study. The population size is 10,000 people. The
desired confidence level is 95%, and the margin of error is 5%. Assuming the sample proportion p = 0.5
and the z-score corresponding to the 95% confidence level (Z = 1.96), calculate the required sample size
(6 marks).
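The calculation can be sketched in a few lines of Python, using the standard z-score of 1.96 for a 95% confidence level and applying the finite population correction for N = 10,000 (the function name is illustrative):

```python
import math

def sample_size(N: int, z: float, e: float, p: float = 0.5) -> int:
    """Cochran's sample-size formula with finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / N)              # adjust for finite population N
    return math.ceil(n)

# 95% confidence (z = 1.96), 5% margin of error, p = 0.5, N = 10,000
print(sample_size(10_000, 1.96, 0.05))   # 370
```

Without the correction the estimate is n0 = 384.16 (about 385 respondents); adjusting for the finite population of 10,000 brings the required sample down to roughly 370.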
Formulate a research problem based on any IT/software/computer issue of
your choice, including its main and specific objectives, as well as relevant
research questions. Your problem statement should be approximately one-
quarter (25%) of a page long.
Problem Statement
E-commerce platforms are increasingly targeted by cybercriminals, leading to financial losses, data
breaches, and reputational damage. Traditional cybersecurity measures often fail to detect sophisticated
attacks such as phishing, ransomware, and zero-day exploits. With the rise of AI-driven cyber threats, e-
commerce businesses need advanced security mechanisms that can detect, analyze, and mitigate risks
in real time. This study investigates how AI-powered threat detection can enhance cybersecurity in e-
commerce platforms, ensuring better protection against cyberattacks while maintaining user trust and
business continuity.
Main Objective
To assess the effectiveness of AI-powered threat detection in improving cybersecurity for e-commerce
platforms.
Specific Objectives
Research Questions
1. What are the most prevalent cybersecurity threats faced by e-commerce platforms?
2. How can AI be leveraged to detect and mitigate cybersecurity threats in real time?
3. What are the key components of an AI-driven cybersecurity framework for e-commerce
businesses?
2. Research Problem: Enhancing Security in OTP-Based Authentication Systems
Problem Statement
One-Time Password (OTP) authentication is widely used in modern applications for user verification and
security. However, OTP-based systems are vulnerable to various security threats, including phishing
attacks, SIM swapping, and man-in-the-middle attacks. Many users also experience usability challenges,
such as OTP delivery delays and expiration issues, which hinder a seamless authentication process. To
address these concerns, this research aims to explore and implement more secure and user-friendly OTP
authentication mechanisms, such as multi-factor authentication (MFA) and cryptographic
enhancements. By identifying and mitigating vulnerabilities in OTP verification, this study seeks to
improve both security and user experience in authentication systems.
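As background on how OTPs are generated, the widely used time-based scheme (TOTP, RFC 6238) derives a short code from a shared secret and the current 30-second time window. The sketch below is illustrative background, not the mechanism proposed by this study:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, step: int = 30, digits: int = 6, t=None) -> str:
    """Time-based OTP (RFC 6238): HMAC-SHA1 over the current time counter."""
    if t is None:
        t = time.time()
    counter = int(t) // step                       # 30-second time window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T = 59 seconds
print(totp(b"12345678901234567890", digits=8, t=59))   # 94287082
```

Because the code is valid only within its time window, an intercepted OTP expires quickly; the attacks named above (phishing, SIM swapping, man-in-the-middle) work by capturing and replaying the code inside that window, which is why the study considers MFA and cryptographic enhancements.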
Main Objective
To analyze and enhance the security and usability of OTP-based authentication systems by identifying
vulnerabilities and proposing improved mechanisms.
Specific Objectives
1. To investigate the common security threats associated with OTP authentication systems.
2. To assess the effectiveness of the proposed framework through testing and user
feedback.
Research Questions
1. What are the major security threats affecting OTP authentication systems?
2. How can alternative authentication methods enhance security and usability in OTP
verification?
3. Research Problem: Enhancing Access to Agricultural Financing for Small-Scale Farmers in Rural
Areas
Problem Statement
Small-scale farmers in rural areas face significant challenges in accessing financial resources necessary
for improving agricultural productivity. Limited access to credit, high-interest rates, lack of collateral,
and inadequate financial literacy hinder their ability to invest in modern farming techniques, quality
inputs, and post-harvest management. Despite various financial inclusion initiatives, rural farmers often
struggle to secure funding due to stringent lending conditions and inadequate support from financial
institutions. Addressing this issue is crucial for improving food security, enhancing rural livelihoods, and
promoting economic development. This study aims to explore the barriers to agricultural financing for
small-scale farmers and propose sustainable solutions to improve financial accessibility.
Main Objective
To assess the challenges and opportunities in accessing agricultural financing for small-scale farmers in
rural areas and recommend strategies for improving financial inclusion.
Specific Objectives
1. To identify the key barriers preventing small-scale farmers from accessing financial
resources.
2. To propose effective strategies to enhance access to credit and other financial services
for small-scale farmers.
Research Questions
1. What are the primary barriers hindering small-scale farmers from accessing agricultural
financing?
Introduction
Research plays a crucial role in technological advancements by driving innovation, improving efficiency,
and solving real-world problems. Through continuous research, new technologies emerge, existing ones
evolve, and industries adapt to changing needs. Without research, progress in fields such as artificial
intelligence, cybersecurity, and software development would be slow or nonexistent.
Main Body
Conclusion
Research is the backbone of technological advancements. It drives innovation, improves efficiency, and
enhances the quality of life. Without continuous research, the world would not see rapid progress in
medicine, communication, energy, or artificial intelligence.
Introduction
Research in computer science involves various methodologies to study, analyze, and improve computing
systems. Different approaches help researchers solve problems, validate theories, and develop new
technologies.
Main Body
1. Case Studies
2. Experimental Research
3. Survey Research
4. Qualitative Research
5. Quantitative Research
6. Action Research
7. Simulation-Based Research
8. Comparative Research
9. Theoretical Research
• Developing new concepts and frameworks.
10. Meta-Analysis
Conclusion
Research methodologies in computer science vary based on the problem being studied. Each approach
has its strengths and helps improve computing technologies, making them more efficient and user-
friendly.
Introduction
Ethics in research ensures that studies are conducted responsibly, protecting participants’ rights and
data. In computing research, privacy and data security are key concerns.
Main Body
1. Informed Consent
2. Confidentiality
3. Data Security
4. Avoiding Harm
5. Transparency
6. Accountability
Conclusion
Ethical research ensures privacy, security, and fairness. Following ethical guidelines helps build trust and
promotes responsible technological advancements.
Introduction
SQL query optimization directly impacts database performance. Slow queries increase server load,
whereas optimized queries improve responsiveness. This section analyzes the impact of optimization on
query execution time.
Main Body
1. Reduced Execution Time
• Example: Using indexes reduces CPU load by minimizing full table scans.
• Example: A slow e-commerce website can lose customers due to long page loads.
6. Better Scalability
• Example: A social media app can support more concurrent users with efficient queries.
• Example: Companies spend less on cloud database services when queries run efficiently.
• Example: A well-optimized database avoids the need for frequent hardware upgrades.
Conclusion
Optimizing SQL queries significantly improves execution time, system performance, and user experience.
Businesses benefit from lower costs, faster applications, and efficient data retrieval.
Research in Computing
Introduction
Research in computing explores new technologies, improves software efficiency, and enhances
cybersecurity. It drives innovation in artificial intelligence, cloud computing, and software engineering.
Main Body
1. Improving Cybersecurity
2. Software Optimization
3. Human-Computer Interaction
4. Edge Computing
5. Blockchain Technology
Conclusion
Research in computing drives technological progress, making systems smarter, faster, and more secure.
It benefits industries, businesses, and individuals worldwide.
Introduction
Research in software development drives innovation, improves efficiency, and enhances security.
Without research, software solutions would be outdated and inefficient.
Main Body
4. Cybersecurity Enhancements
Conclusion
Research in software development fuels innovation, enhances security, and improves efficiency. It
ensures software meets modern technological needs.
Study population refers to the specific group of individuals or units that a study is focused on.
This group is selected based on certain characteristics or criteria relevant to the research
question. The study population represents the larger population from which the sample is drawn,
allowing researchers to make inferences or generalizations about the whole population. For
example, if a study is investigating the effects of a new teaching method on primary school
students' performance, the study population might consist of all primary school students within a
particular district or country [1]. The study population ensures that the data collected reflects the
characteristics of the broader group being studied [2].
The selection of a study population is crucial to the validity of the research findings. It must be
carefully chosen to ensure that the results are representative of the larger population. Researchers
may define the study population by factors like age, gender, health status, or location. For
instance, a medical study on heart disease may focus on middle-aged adults who have a family
history of the condition [3]. By carefully defining the study population, researchers ensure that
the data collected will provide meaningful insights relevant to the broader context of the research
problem [2]. Proper selection of the study population allows for better generalization of the
research findings to the wider population [4].
Choosing an accurate sample from the study population is essential for obtaining reliable and
generalizable research results. Several factors contribute to selecting an accurate sample:
Sampling characteristics in the study population are important to understand because they help
ensure that the sample selected accurately represents the larger population while saving time and
resources. Key characteristics of sampling include:
I. Sampling as a Mechanism:
Sampling allows researchers to gather data from a smaller group of individuals rather
than surveying the entire target population. This mechanism helps provide insights into
the larger population without the need to measure every single individual.
II. Study Population and Sample:
The study population consists of the entire group of people relevant to the research. A
sample, on the other hand, is a smaller subset of this group. It is crucial that the sample is
representative of the study population to ensure the findings can be generalized to the
broader group.
III. Reducing Survey Fatigue:
One of the primary advantages of sampling is that it reduces survey fatigue. If researchers
surveyed everyone in the population, it could lead to response bias or fatigue, reducing
the quality of the data. By using a sample, researchers can maintain higher response rates
and more reliable results.
IV. Cost and Time Efficiency:
Surveying an entire population can be costly and time-consuming. Sampling is more
cost-effective and allows researchers to obtain meaningful data more quickly. By
gathering information from a smaller group, researchers can still achieve the same level
of insight with less expense and effort.
V. Response Rate Patterns:
Tracking the response rate patterns of different groups within the sample helps
researchers determine how many respondents to select. For example, if a certain group
tends to respond more frequently, fewer participants from that group might be needed to
obtain accurate results.
VI. Generalizability of Results:
While the study is based on a sample, the findings are meant to be applicable to the entire
target population. The sample serves as a representation, allowing researchers to make
conclusions about the broader group without needing to survey everyone.
A. Sampling techniques: Probability sampling
This method selects sample objects from a population based on probability theory. Every member of the
population has an equal, known chance of being selected, so there is no systematic bias in this type of
sample.
I. Simple Random Sampling: Simple random sampling is the easiest way to select a sample.
Here, each member has an equal chance of being part of the sample. The objects in this
sample are chosen at random, and each member has exactly the same probability of being
selected.
II. Cluster sampling: Cluster sampling is a method in which respondents are grouped into
clusters. These groups can be defined based on age, gender, location, and demographic
parameters.
III. Systematic Sampling: In systematic sampling, individuals are chosen at equal intervals
from the population. A starting point is selected, and then respondents are chosen at
predefined sample intervals.
IV. Stratified Sampling: Stratified random sampling is a process of dividing respondents into
distinct but predefined parameters. In this method, respondents do not overlap but
collectively represent the entire population.
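Three of these techniques can be sketched with Python's standard library (the population of 100 numbered members and the strata are illustrative):

```python
import random

random.seed(7)
population = list(range(1, 101))          # toy population of 100 member IDs

# Simple random sampling: every member has the same chance of selection.
simple = random.sample(population, 10)

# Systematic sampling: pick a random start, then every k-th member.
k = len(population) // 10                 # sampling interval
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: sample equally from predefined, non-overlapping strata.
strata = {"under_50": [m for m in population if m < 50],
          "50_plus":  [m for m in population if m >= 50]}
stratified = [m for group in strata.values()
              for m in random.sample(group, 5)]

print(len(simple), len(systematic), len(stratified))   # 10 10 10
```

Note the systematic sample is fully determined once the starting point is drawn, while the stratified sample guarantees representation from each stratum.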
B. Sampling techniques: Non-probabilistic sampling
In non-probability sampling, respondents are drawn from predefined groups rather than selected at
random, so not every member of the study population has a known chance of inclusion. This approach
comes with its own advantages and disadvantages; some of them are listed below.
Advantages
Disadvantages
I. Insufficient samples
II. Possibility of bias
III. Precision problems (if sampling is poor)
IV. Difficulty obtaining the typical sample
V. Lack of quality sources
VI. Possibility of making mistakes.
Conclusion
In conclusion, the concept of the study population plays a vital role in the design and execution
of research. By defining the specific group of individuals or units that the research will focus on,
researchers can ensure that the findings are relevant and applicable to the larger population. The
study population represents the foundation for drawing inferences and generalizations, making it
essential that it is carefully selected based on characteristics relevant to the research question.
Whether the study population consists of students, adults with a certain health condition, or
specific geographical groups, the accuracy and generalizability of the research results rely
heavily on how well the population is defined.
The selection process of the study population must consider various factors, such as age, gender,
health status, and location, depending on the nature of the research. A well-defined population
ensures that the data collected will provide meaningful insights that can be extended to the
broader context. Researchers must take into account the potential for biases or
misrepresentations in the population to avoid skewing the results. Additionally, ethical
considerations, such as obtaining informed consent from participants, should also be
incorporated when selecting the study population. By carefully considering these factors,
researchers can enhance the validity of their study and contribute to knowledge that can have a
positive impact on the field being studied. Therefore, the study population is not just a group of
individuals but a key determinant in the reliability and applicability of research findings.
1.0 INTRODUCTION
Computing research is a fast-changing and broad area that studies the design, development, and
performance of computational systems in order to solve critical problems in different fields [1].
Spanning theoretical foundations, practical applications, and technological innovation, computing
drives progress in artificial intelligence, cyber security, quantum computing, data science, and many
other fields [2]. By taking into account both historical milestones and contemporary advancements,
research in computing lays the foundation for the future development of technology and society.
Computing research is very broad and spans sub-disciplines such as software engineering,
human-computer interaction, cloud computing, and cryptography [1]. These fields collectively
address problems ranging from the efficient execution of algorithms to the ethical implications of
technology. For instance, the strong roots of modern computing were laid by the outstanding early
work on computational theory of pioneers such as Alan Turing [3], whereas John von Neumann's
stored-program architecture paved the way for fast and reliable machines [3].
Research in computing is fundamentally concerned with exploring the future, with the wide-ranging
methods and applications of computing, and with its impact on society, focusing on the field's
challenges and future directions. A look at the development and evolution of this domain helps us
appreciate how computing research will continue to leverage technological progress and global trends.
Blockchain technology has given the world a new dimension and a paradigm shift in storing and
verifying data without any centralized authority. It is a booming technology that is being
implemented beyond cryptocurrency, in areas such as supply chain management and digital identity [5].
For instance, VeChain uses the blockchain to enhance product authenticity and traceability, which are
better managed through a transparent logistics system.
The Internet of Things (IoT) connects devices and sensors for real-time interconnection and
automation. Research tends to address scalability and security in IoT systems as they are deployed
in smart cities and industries. For example, the Nest thermostat is representative of many smart home
solutions that use IoT technology to save energy by learning users' behavioral patterns.
Research in HCI is aimed at creating seamless interfaces between human beings and computing
machines via virtual reality (VR) and augmented reality (AR) technologies. Such systems are reshaping
educational, health, and collaborative settings [7]. Microsoft HoloLens, for example, provides medical
students with interactive 3D representations of human anatomy as part of their anatomy training.
Cloud computing is now evolving toward a hybrid future, while edge computing brings data processing
much closer to devices for real-time applications. Both of these technologies bring improvements in
scenarios such as autonomous vehicles and industrial IoT. Amazon Web Services (AWS), for example,
develops and delivers both cloud and edge computing solutions, giving programmers flexibility.
Together, these trends form the leading edge of computing research and demonstrate how technological
advancement engages with the most pressing issues of our time. Each of these subjects contributes to a
better understanding of how advances in computing are shaping the world.
In computer science, experimental research typically involves the design and execution of
experiments intended to assess the behavior and performance characteristics of systems under
prescribed conditions. An example is testing machine learning models against benchmark
datasets to assess accuracy and robust performance. Empirical methods, on the other hand,
observe and analyze phenomena in the real world, such as user behaviors in software systems.
Some researchers have noted that theoretical methods aim at providing mathematical models,
algorithms, and proofs to solve fundamental problems in computing. Computational complexity theory
is one such theory: it categorizes problems according to the difficulty of solving them and the
resources required, and it underpins the design of efficient algorithms and cryptosystems.
A case study looks at specific instances of systems or technologies very closely in order to make
more general conclusions. For example, studying the adoption of Blockchain technology in
supply chain management will shed light on some of the challenges and advantages of the
technology. Observational methods are concerned with understanding the way systems are used
in practice, mostly through user interaction analysis.
v. Data-Centric Research
The explosion of big data sees more and more researchers applying data-driven techniques to
detect patterns and insights from more extensive datasets. Data mining and statistical techniques
are typically used on large datasets, as for example in social media sentiment analysis and
genomic studies.
These approaches therefore incorporate many different dimensions of computing, from theory to
application. Their systematic application drives the continuous evolution of computing technologies
and their adoption in everyday contexts.
4.0 APPLICATIONS AND IMPACTS
4.1 APPLICATIONS
Computing research has a broad ripple effect: it pervades various industries and transforms both how
organizations operate and how people interact with technology. Progress in research has produced
advances such as AI-enabled diagnostic tools and telemedicine platforms in health care, improving
patient outcomes and accessibility. Computing research also yields algorithms for fraud detection,
automated trading, and risk assessment, driving applications within the financial services domain that
bring more efficiency and security to everyday financial transactions [8]. Similarly, robotics and IoT
research has transformed the manufacturing industry through automation and predictive maintenance.
4.2 IMPACTS
Computing research also has a strong social dimension: it influences society as a whole, including
education, governance, and global communication. Technologies such as e-learning platforms and
adaptive learning algorithms, as stated before, are making personalized education widely accessible.
Computing research has also driven cyber security innovations that safeguard critical infrastructure
and protect user data in a fast-digitalizing world [9]. Further, the environmental significance of
technology has been studied: research on energy-efficient computing aims to cut carbon footprints,
for example by optimizing data centers and developing future green technologies. Overall, the
continuous evolution of computing research fosters innovation, economic growth, and social
advancement.
6.0 CONCLUSION
Computing research is a form of research without which modern society cannot function, as it provides
innovative solutions to very complex problems in different fields. However, it has also quickly raised
questions about sustainability, inclusivity, and ethical accountability. Advances in AI, quantum
computing, and blockchain are evidence of how revolutionary this research can be, but they also
highlight disparities in access between communities, as well as concerns about potential misuse. The
growing dependence on computational power further accentuates the environmental costs, reinforcing
the need for greener and more energy-efficient advancements.
Example i: A survey that asks students about their daily exercise habits [3].
Example ii: An observational study that counts how many people use bicycles in a city park on a
weekend [4].
Example i: Implementing a new reading program in one school and comparing reading scores to
another school that does not use the program [12].
Example ii: Evaluating the impact of a community health initiative in one neighborhood while
comparing it to a similar neighborhood that did not receive the program [13].
Conclusion
Quantitative research designs are fundamental tools in the research process, providing structured
methods to collect and analyze numerical data. Each design type serves a distinct purpose, from
describing phenomena to uncovering relationships or testing causality. Descriptive, correlational,
experimental, quasi-experimental, longitudinal, and cross-sectional designs collectively empower
researchers to address diverse questions across various disciplines. By carefully selecting the
appropriate design, researchers can ensure their findings are valid, reliable, and meaningful.
Ultimately, quantitative research fosters evidence-based decision-making and contributes
significantly to advancing knowledge in scientific and applied fields.