Research Methods Material
FACULTY OF EDUCATION
RESEARCH METHODS
Course Lecturer: Dr. K. O. Ogunyemi
Course Description: The course examines issues such as selecting a topic, writing a good introduction, stating the problem, the purpose of the study, research questions/hypotheses, literature review, and methodology. It also treats concepts such as research design, population, sampling techniques, research instruments, procedures for data collection and analysis, summary, conclusion, references and the abstract.
Course Content
Validity and reliability of the instrument
Data collection procedure
Data analysis
8. Summary, Conclusion, References, Abstract
Reading List
Akuezuilo, E. O. (2002). Research and Statistics in Education and Social Sciences: Methods and Application. Awka: NuelCenti Publishers & Academic Press Ltd.
Kerlinger, F. N. (1978). Foundations of Behavioural Research. New York: Holt, Rinehart and Winston.
MEANING OF RESEARCH
Research is the systematic process of investigating and studying a subject in depth to discover,
interpret, or revise facts, theories, or applications. It involves the careful collection, organization,
and analysis of data to increase understanding or solve a specific problem. Research can also be
described as an objective, impartial, empirical, and logical analysis and recording of controlled
observations that may lead to the development of generalizations, principles, or theories, resulting, to some extent, in prediction. Simply put, research is a systematic way of finding a solution to a specific problem.
Types of Research:
EDUCATIONAL RESEARCH
Educational research is the systematic study of educational problems and issues. It is a process of
gathering data, analyzing it, and drawing conclusions in order to improve education. Educational
research is conducted by a variety of people, including teachers, professors, researchers,
administrators and policymakers. It is an important part of the field of education and plays a vital
role in improving teaching and learning.
There are many different characteristics of educational research. Some of the most important
characteristics include:
Systematic: Educational research is a systematic process. This means that it follows a set
of steps, such as defining the problem, collecting data, analyzing data, and drawing
conclusions.
Empirical: Educational research is based on evidence. This means that researchers
collect data and use it to support their conclusions.
Objective: Educational research is objective. This means that researchers try to be
impartial and unbiased in their work.
Generalizability: Educational research findings should be applicable to other settings
and populations.
Rigorous: Educational research is rigorous. This means that researchers use sound
methods and procedures to collect and analyze data.
Innovative: Educational research is innovative. This means that researchers are always
looking for new ways to improve education.
Advancement of Knowledge: Through research, the frontiers of knowledge in the discipline of education are extended. Much of human knowledge today is drawn from the conclusions of scientific research accumulated over the years.
CHALLENGES OF EDUCATIONAL RESEARCH IN NIGERIA
Lack of funding: Educational research is often underfunded in Nigeria, which can make it difficult to conduct high-quality research.
Poor communication network.
Poor record keeping culture.
Unattractive working conditions for researchers in Nigeria.
Difficulties in conducting research in schools: Schools in Nigeria can be difficult
places to conduct research. Students and teachers are often busy, and there are many
regulations that must be followed.
Heterogeneity of the population: Students in Nigeria vary widely in their backgrounds,
abilities, and experiences. This can make it difficult to generalize the results of research
to a larger population.
Difficulties in measuring student learning: There is no single, perfect measure of
student learning in Nigeria. This can make it difficult to assess the effectiveness of
educational interventions.
Bias in research: Researchers in Nigeria can be biased in their interpretation of research
results. This can lead to false conclusions about the effectiveness of educational
interventions.
Poor research infrastructure: There is a lack of research facilities and resources in
Nigeria, which can make it difficult to conduct high-quality research.
Lack of qualified researchers: There is a shortage of qualified researchers in Nigeria,
which can make it difficult to conduct high-quality research.
Lack of access to data: There is a lack of access to data on education in Nigeria, which
can make it difficult to conduct high-quality research.
Political interference/Attitude of the government: There is sometimes political
interference in educational research in Nigeria, which can make it difficult to conduct
independent research.
Despite these challenges, there are a number of Nigerian researchers who are working to
improve the quality of educational research in Nigeria. These researchers are working to develop
new research methods, to build research capacity, and to increase access to data. Their work is
essential for improving education in Nigeria.
Here are some of the things that can be done to address the challenges of educational research in
Nigeria:
Increase funding for educational research: This would allow researchers to conduct
larger-scale studies and to collect data over a longer period of time.
Work with schools to make them more research-friendly: This could involve
developing relationships with school administrators, teachers, and students, and working
with them to develop research protocols that are respectful of the needs of the school
community.
Use mixed-methods research: This approach combines quantitative and qualitative data
collection methods, which can help to address the challenges of measuring student
learning and of identifying bias in research.
Increase collaboration between researchers and policymakers: This would allow
researchers to share their findings with policymakers and to work together to develop
policies that are based on evidence.
By addressing these challenges, we can make educational research more effective and more
useful for improving education in Nigeria.
SOURCES OF RESEARCH PROBLEMS
The sources of research problems are diverse and often arise from real-world challenges, gaps in
existing knowledge, or theoretical considerations. Identifying a research problem is a crucial step
in the research process, as it provides direction and focus for the study. Here are the primary
sources of research problems:
3. Practical Issues or Social Problems: These include real-world challenges that require
solutions. Examples include environmental issues like climate change or public health crises like
obesity, internet fraud, drug abuse, etc.
4. Theoretical Frameworks: The need to test, refine, or expand existing theories can also be a catalyst for further research. For example, a researcher may investigate whether a psychological
theory applies in a new cultural context.
6. Policy and Government Initiatives: Policies or regulations can generate research questions regarding their effectiveness or implications. Examples include studying the impact of subsidy removal on the standard of living.
9. Institutional Priorities: Research problems aligned with the goals or focus areas of
institutions, such as universities, research organizations, or funding agencies.
10. Curiosity or Personal Interest: Researchers' intrinsic curiosity about specific phenomena or topics can spur them into conducting research to address such phenomena.
11. Unexpected Findings from Previous Research: Anomalies or surprising results from prior studies can lead to new research questions.
Selecting an educational research problem involves a systematic process to ensure the topic is relevant, feasible, and contributes to the field. The following are the key characteristics of a good educational research problem:
1. Clarity and Precision: The problem should be clearly stated, unambiguous, and easily
understood.
2. Educational in Nature/Relevant to Education: The problem should address a
significant issue in education and have the potential to contribute to theory, practice, or
policy. It should align with current educational priorities or challenges.
3. Feasibility: The problem should be realistic to study within the constraints of time,
resources, and expertise. Ensure you have access to necessary data, participants, and
tools.
4. Researchable: Being researchable means that pertinent data for solving the problem are
available and accessible. The problem should be open to investigation through systematic
inquiry using available methods and techniques. It should not be based on speculation or
questions that cannot be empirically tested.
5. Significance: A good research problem should address gaps in existing knowledge or
provide solutions to practical challenges in education. It should have theoretical or
practical implications that benefit students, educators, or policymakers.
6. Originality: The problem should offer a new perspective, address a unique context, or
explore an under-researched area. However, it may also involve replicating or extending
previous studies in new settings or with different variables.
7. Alignment with Goals: The problem should align with your personal interests, academic
goals, or institutional priorities. This ensures sustained motivation throughout the
research process.
8. Practical Applicability: The results of the research should have practical implications for teaching, learning, or policymaking. For instance, studying the impact of online tools on remote learning during a pandemic can directly inform educators.
RESEARCH VARIABLES
A research variable is any characteristic, number, or quantity that can be measured or observed
in a study. Variables are essential because they allow researchers to describe, compare, and
predict phenomena. They can vary between individuals, groups, situations, or over time.
a) Independent Variables (IV): These are the variables that researchers manipulate or control
to observe their effects on other variables. They are the "cause" in a cause-and-effect
relationship.
b) Dependent Variables (DV): These are the outcomes or effects observed and measured in
response to changes in the independent variable. They are the "effect" in a cause-and-effect
relationship.
c) Extraneous Variables: These are variables not of primary interest but could influence the
dependent variable if not controlled.
d) Controlled Variables: These are variables that are kept constant or controlled throughout the
experiment to ensure that they do not influence the outcome.
e) Moderator Variables: Influence the strength or direction of the relationship between the
independent and dependent variables.
KEY SECTIONS OF CHAPTER ONE
a. Background to the Study
The background to the study is a foundational section in a research paper or proposal that
provides context for the research problem. It explains the broader area of study, introduces the
research topic, identifies gaps in existing knowledge, and sets the stage for why the research is
important. Essentially, it builds a narrative to justify the need for the research.
b. Statement of the Problem
The Statement of the Problem is a critical section in educational research that outlines the
specific issue or challenge the study aims to address. It identifies gaps, inadequacies, or
inefficiencies in the current educational system, policies, or practices that need investigation.
This statement serves as the foundation for the research by clearly articulating what the problem
is, describing previous research efforts aimed at solving the problem, identifying the gaps or the
inadequacies of the previous researches, and how the current research will contribute to the
resolution of the problem.
c. Purpose of the Study
The purpose of the study highlights the main focus of the study. It states clearly what the
research aims to achieve. It usually has two aspects: the main purpose which is often derived
from the title of the research; and the specific purposes, which are often a breakdown of the variables into specific units of investigation.
d. Research Questions
A research question is a specific, clear, and focused question that the researcher aims to answer
through their study. It is a question posed by the researcher, the answer to which would lead to the
solution of the research problem. It defines the purpose of the research and sets the direction for
the entire investigation. A good research question is specific and focused, that is, it targets a
particular aspect of the topic to avoid being too broad or vague. Research questions usually align
with the purpose of the study.
e. Research Hypotheses
A hypothesis is a tentative statement or prediction that can be tested through research. It can also
be described as a conjectural proposition, an informed or intelligent guess about the solution to a
problem. It suggests a possible relationship between variables and provides a foundation for
empirical investigation.
f. Significance of the Study
The significance of the study in educational research refers to the section of a research proposal or report that explains the value and relevance of the research. It highlights why the study is important and how it contributes to the field of education, stakeholders, and society at large, answering questions about who will benefit from the study and how the findings can be applied.
g. Delimitation of the Study
The delimitation of the study in educational research refers to the boundaries or scope that the
researcher intentionally sets as the focus of the study. It specifies what the study will cover and,
equally important, what it will not cover. Delimitations are choices made by the researcher to
narrow the study's focus and make the research feasible, relevant, and manageable within the
available time and resources. It also specifies the geographical location of the study.
h. Operational Definition of Terms
The operational definition of terms refers to the process of defining key concepts and variables
in specific, measurable, and practical terms as they are used within a research study. These
definitions explain how terms will be understood and applied in the context of the study,
ensuring clarity and consistency.
In educational research, operational definitions are crucial because they transform abstract
concepts (e.g., "student achievement," "motivation") into measurable indicators that can be
observed, assessed, or analyzed.
CHAPTER TWO: REVIEW OF RELATED LITERATURE
The review of related literature serves the following purposes:
1. Establish Context: It situates the research within the existing body of knowledge, providing background and context for the study.
2. Deepen Understanding: It affords the researcher the opportunity to gain a deep understanding of the research problem.
3. Identify Gaps: Highlights areas where research is limited, inconsistent, or absent,
justifying the need for the current study.
4. Refine Research Questions: Helps in sharpening or focusing the research questions by
understanding the scope and limitations of prior work.
5. Avoid Duplication: Ensures that the study does not replicate previous work
unnecessarily, unless replication is the goal for validation.
6. Provide Theoretical Framework: Identifies and evaluates relevant theories or models
that underpin the research.
7. Support Methodological Choices: Guides the selection of research methods by
reviewing approaches used in similar studies.
8. Demonstrate Credibility: Shows that the researcher has a strong understanding of the
field, enhancing the reliability of the study.
A review of related literature is typically organized into the following components:
1. Introduction:
o Explains the purpose and scope of the literature review.
o Outlines the organization and structure of the review.
2. Theoretical framework
3. Conceptual Review
Rather than emphasizing empirical evidence or specific study results, a conceptual review explores
the foundational ideas and constructs that underpin the research area. This type of review
provides a clear understanding of the conceptual relationships that shape the study. It is
often used to define terms, and identify gaps or inconsistencies in the way concepts are
understood or applied in the literature.
4. Empirical Review
5. Appraisal of Literature
The appraisal of a literature review refers to the process of critically evaluating and
assessing the quality, relevance, and credibility of the literature reviewed in a research
study. It involves analyzing the strengths and weaknesses of the existing body of
research, identifying biases or limitations as well as existing gaps in the body of
knowledge and determining the need for further studies.
SOURCES OF LITERATURE
Primary Sources: These are original, firsthand sources that provide direct evidence or data
related to the research topic. Examples include empirical studies and research articles presenting
new data, dissertations and theses with original research, original legal texts, and
autobiographies.
Secondary Sources: These are sources that analyze, interpret, or summarize primary data or research. They include literature reviews, biographies, textbooks and book chapters that discuss or critique primary research, and articles that summarize research findings, such as news articles or academic reports.
Tertiary Sources: These are sources that compile, organize, and reference primary and
secondary sources, providing overviews or quick access to information. Examples include
encyclopedias, dictionaries, handbooks, databases and indexes.
Preliminary Sources
A preliminary source is one which provides information leading to the location and retrieval of
major sources of literature. This includes:
a. The catalogue: This provides information leading to the location and retrieval of books in the
library.
b. The index: This gives information leading to the retrieval of articles published in a wide range
of journals
c. The abstract: The abstract consists of a short account of a research work in addition to
information necessary for the retrieval of the work.
1. Academic Journals
2. Books
3. Conference Proceedings
5. Government Reports
6. Research Databases: Online databases that aggregate academic articles, journals, theses, and other scholarly content, e.g. Google Scholar, ResearchGate, PubMed, JSTOR, ERIC (Education Resources Information Center), and ScienceDirect.
METHODOLOGY
RESEARCH DESIGN
Research design refers to a plan or blueprint which specifies how data relating to a given population should be collected and analysed. Research designs include the following:
Historical Research: Data for historical research are obtained from two main sources: primary and secondary sources. In evaluating the data collected, two forms of criticism are employed. These are external and internal criticism. External criticism seeks to establish the authenticity of the data; the researcher tries to ascertain whether the material was actually written by the author and whether the author was competent to handle the topics discussed in the material. Internal criticism, on the other hand, is concerned with the accuracy of the content of the source. Here the researcher tries to determine whether the statements in the material are accurate representations of historical facts or mere fabrications by the author.
3. Case Studies: The case study is an intensive study geared towards a thorough understanding
of a given social unit. The social unit may be an individual, a community or an institution. It
should be noted that case studies are of limited generalizability.
4. Causal-Comparative Research (Ex Post Facto Research Design): This type of study
seeks to establish cause-effect relationships. The researcher has no control over the variables of
interest and therefore cannot manipulate them. The researcher only attempts to link some already
existing situations to some variables as causative agents. Examples include “The influence of
gender on students’ performance in SSCE” or “The effects of school location on students’
attitude towards Mathematics”.
POPULATION
Population in research refers to all members or elements of a well-defined group. It is the entire set of individuals or entities that meet the criteria specified by the researcher for investigation in a particular study. There are two types of population: finite and infinite. A finite population has a fixed and countable number of individuals or elements; its size is limited and known. An infinite population is so large that it might be impossible to count or identify every individual.
SAMPLE
A sample is a smaller group of elements drawn through a definite procedure from a specified
population. It is a subset of the population selected for study. It represents the larger population
and is used to make inferences about it.
SAMPLING ERROR
This refers to the failure of any sample to represent the population from which it was drawn. A
sampling error reflects the difference between the characteristics of a sample and those of the
population from which it was drawn.
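The idea of sampling error can be made concrete with a small numerical sketch. The Python snippet below uses a purely hypothetical set of test scores: it draws a random sample from the "population" and reports the gap between the sample mean and the population mean, which is the sampling error for that particular sample.

```python
import random
import statistics

# Hypothetical population of 20 students' test scores (illustrative only).
population_scores = [45, 52, 60, 48, 75, 66, 58, 71, 49, 63,
                     55, 68, 80, 42, 59, 74, 61, 53, 67, 70]
population_mean = statistics.mean(population_scores)

# Draw a simple random sample of 5 scores.
random.seed(1)  # fixed seed so the sketch is reproducible
sample_scores = random.sample(population_scores, k=5)
sample_mean = statistics.mean(sample_scores)

# Sampling error: the difference between the sample statistic and the
# population parameter it is meant to estimate.
sampling_error = sample_mean - population_mean
print(f"Population mean: {population_mean:.2f}")
print(f"Sample mean:     {sample_mean:.2f}")
print(f"Sampling error:  {sampling_error:+.2f}")
```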
SAMPLING TECHNIQUES
Sampling technique is a plan specifying how elements will be drawn from the population.
Sampling techniques can also be referred to as the methods used to select a subset (sample) from
a larger population for analysis. These techniques are broadly categorized into probability
sampling and non-probability sampling.
Probability sampling techniques are techniques that give all members of the population equal
chances of being selected as part of the sample. This helps in reducing bias and ensuring
representativeness. The following are the sub-categories of the probability sampling technique:
a. Simple Random Sampling Technique: Simple random sampling (SRS) is a probability
sampling technique where each member of a population has an equal and independent chance
of being selected. It ensures that the sample represents the entire population without bias. It
can be carried out through a toss of coin, use of slips of paper (balloting), or the use of
computer to generate random numbers.
b. Proportionate Stratified Random Sampling Technique: Here, the population is first stratified in terms of one or more variables of interest to the researcher. Elements are then drawn randomly from each stratum in such a way that the relative proportions of the strata in the resultant sample are the same as those in the parent population (see the sketch after this list for an illustration of simple random and proportionate stratified selection).
c. Disproportionate Stratified Random Sampling Technique: This type of sampling is essentially the same as proportionate stratified random sampling except that, in disproportionate sampling, the relative proportions of the strata in the sample do not correspond to their relative proportions in the population. This is because equal numbers of elements are selected from each stratum, which means that some strata might be over-represented while others might be under-represented.
d. Cluster Sampling or Area Sampling: Here, the population is divided into units or sections with
distinct boundaries. A specified number of these units will then be drawn randomly. All
elements in the unit or section drawn now constitute the sample.
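As referenced above, the following Python sketch illustrates simple random sampling and proportionate stratified allocation. The class names, stratum sizes, and sample size of 50 are hypothetical assumptions used only for illustration.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# --- Simple random sampling: every element has an equal, independent chance ---
population = list(range(1, 501))           # 500 hypothetical student ID numbers
srs_sample = random.sample(population, k=50)

# --- Proportionate stratified random sampling ---
# Hypothetical strata (e.g. class levels) and their sizes in the population.
strata = {"SS1": 200, "SS2": 180, "SS3": 120}
total = sum(strata.values())
sample_size = 50

# Each stratum contributes in proportion to its share of the population.
allocation = {name: round(sample_size * size / total) for name, size in strata.items()}
print("Proportionate allocation:", allocation)  # {'SS1': 20, 'SS2': 18, 'SS3': 12}

# Elements are then drawn randomly within each stratum.
stratified_sample = {
    name: random.sample(range(1, size + 1), k=allocation[name])
    for name, size in strata.items()
}
```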
Non-probability Sampling Techniques are those which do not specify the chance or probability
which an element has of being included in a given sample. This implies that all members of the
population do not have equal chances of being selected as part of the sample. The following are
the sub-categories of the non-probability sampling technique:
a. Systematic Sampling Technique (nth Sampling): Here, the elements are drawn at specific intervals from a list containing all elements in the population (see the sketch after this list).
b. Purposive or Judgemental Sampling: Here, specific elements or members of the population
which satisfy some predetermined criteria/conditions are selected
c. Quota Sampling: Quota sampling is a non-probability sampling technique where the
researcher selects participants based on specific characteristics or quotas to ensure
representation of different groups in the population. Unlike stratified sampling (which is
random), quota sampling relies on non-random selection, meaning participants are chosen
based on convenience or judgment after quotas are set.
d. Accidental Sampling: Here, only elements which the researcher can reach are included. The
only determining factors are the researcher’s convenience and economy in terms of time and
money.
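A minimal Python sketch of the nth-selection idea in (a) above follows. The list of 200 hypothetical names and the desired sample size of 20 are assumptions for illustration only.

```python
import random

# Hypothetical sampling frame: a numbered list of 200 elements.
frame = [f"Student_{i:03d}" for i in range(1, 201)]
desired_sample_size = 20

# Sampling interval k = N / n, so every k-th element is taken.
k = len(frame) // desired_sample_size      # k = 10
start = random.randrange(k)                # random starting point within the first interval
systematic_sample = frame[start::k]        # every k-th element from the starting point

print(f"Interval k = {k}, starting index = {start}")
print(systematic_sample[:5], "...")
```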
TECHNIQUES AND INSTRUMENTS FOR DATA COLLECTION
A research instrument is any tool, device, or method used by researchers to collect, measure,
and analyze data in a study. It ensures systematic data collection and helps maintain reliability
and validity in research.
A. Observation
Advantages
It enables the researcher to watch and describe behaviour the way it occurs in its natural setting.
First-hand information is obtained from observations.
Disadvantages
The presence of the observer may cause those being observed to fake or alter their behaviour (the Hawthorne effect).
Guidelines for Observation
The observer should try as much as possible not to interfere with the setting in which the observation is taking place.
A list of relevant aspects of the situation to be observed should be made. This could be in the form of a checklist or a rating scale.
To overcome faking (the Hawthorne effect), the observer may ignore the first two or three observations. This is because, with time, the group returns to its normal way of behaving, having overcome the influence of the presence of the observer.
B. The Questionnaire
i. Structured/Close-ended Questionnaire: Here, questions are asked and a number of response options are supplied. From these, the respondent is expected to pick the option that best suits their choice.
ii. Unstructured/Open ended Questionnaire: This form of questionnaire does not provide any
response options for the respondents. Only questions pertinent to the problem are asked and
the respondents are free to supply their responses in their own words and in any manner they
deem fit.
C. Interview
This involves eliciting information from the respondent through verbal interaction. In
conducting an interview, the following guidelines must be followed
a. Rapport: Establish a good rapport with the interviewee before the commencement of the
interview
b. Avoidance of technical jargon: Technical jargon should be avoided as much as possible. Where it is used, its meaning should be explained.
c. Ask probing questions: It is usually advisable to probe further the responses given by the interviewee for more details.
d. Use non-leading questions: It is important to avoid the use of leading questions. A leading question is one which tends to suggest a particular form of response.
VALIDITY AND RELIABILITY OF THE RESEARCH INSTRUMENT
In research, the accuracy and consistency of measurement instruments are crucial to obtaining
credible results. Two key concepts that determine the effectiveness of a research instrument are
validity and reliability. These attributes ensure that the instrument measures what it is intended
to measure (validity) and does so consistently over time and across different conditions
(reliability).
Validity refers to the degree to which a research instrument measures what it is supposed to
measure. It determines the accuracy and appropriateness of the instrument in capturing the
intended concept or variable.
Types of Validity
Construct validity: This refers to the extent to which a research instrument (like a test, survey,
or questionnaire) truly measures the theoretical construct it is intended to measure. A construct
is an abstract concept or trait that cannot be directly observed, such as intelligence, motivation,
or anxiety. Construct validity ensures that the instrument accurately captures this concept, not
something else. In other words, construct validity answers the question: Is the instrument
measuring the concept it's supposed to measure, and is it doing so in the right way?
Content validity: This refers to the extent to which a research instrument (like a test or
questionnaire) adequately covers all the relevant dimensions or aspects of the concept it is
intended to measure. In other words, it assesses whether the instrument represents the full
content of the variable or construct being studied. For example, if you're developing a test to
measure mathematical ability, content validity ensures that the test includes questions from all
relevant areas of mathematics (e.g., algebra, geometry, calculus) rather than just focusing on one
narrow area. Content validity is often evaluated by subject matter experts (SMEs) who are
knowledgeable about the concept being measured. These experts review the instrument to ensure
that it comprehensively represents the construct.
Face validity: This refers to the extent to which a research instrument appears to measure what it
is intended to measure, based on a superficial or subjective assessment. It is a type of "surface-level" validity and is concerned with whether the test or tool looks appropriate to non-experts,
such as participants or users of the instrument. In simpler terms, face validity answers the
question: Does this test look like it’s measuring what it’s supposed to measure?
Criterion-related validity: This refers to the extent to which a research instrument (such as a
test, questionnaire, or survey) is able to predict or correlate with an external criterion—another
measure or outcome that is theoretically related to the construct being measured. Essentially, it
assesses how well the instrument performs in comparison to some "gold standard" or established
measure that is known to accurately measure the same or a related concept. Criterion-related
validity can be broken down into two main subtypes:
i. Predictive validity which refers to how well a measure can predict future outcomes or
behaviours that are theoretically related to the construct being measured. For example,
the university entrance exam (UTME) is designed to predict future academic success
(CGPA). If the exam score is a strong predictor of a student’s CGPA in the university,
then it demonstrates high predictive validity (a brief sketch after this list illustrates the idea).
ii. Concurrent Validity which refers to how well a measure correlates with an established
criterion that is measured at the same time. It involves comparing the new instrument
with another measure of the same construct that has already been validated, to see if they
produce similar results.
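As flagged under predictive validity above, the sketch below illustrates the logic of criterion-related validity in Python: the predictor (UTME score) is correlated with the criterion (later CGPA). The paired scores are invented purely for illustration, and statistics.correlation and statistics.linear_regression require Python 3.10 or later.

```python
import statistics

# Hypothetical paired data for ten students (illustrative only).
utme_scores = [210, 250, 305, 190, 275, 230, 320, 260, 240, 285]
cgpas       = [2.8, 3.2, 4.4, 2.5, 3.9, 3.0, 4.6, 3.5, 3.1, 4.0]

# Pearson's r between the predictor (UTME) and the criterion (CGPA).
r = statistics.correlation(utme_scores, cgpas)
print(f"Predictive validity coefficient (Pearson's r) = {r:.2f}")

# A simple least-squares line can also be fitted to predict CGPA from UTME.
slope, intercept = statistics.linear_regression(utme_scores, cgpas)
print(f"Predicted CGPA for a UTME score of 300: {slope * 300 + intercept:.2f}")
```

A strong positive correlation (r close to +1) would be taken as evidence that the instrument has high predictive validity; for concurrent validity, the same computation would be run with a criterion measured at the same time as the new instrument.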
Reliability refers to the degree of consistency with which a research instrument measures a given construct across repeated administrations to the same respondents. A research instrument is considered reliable if it produces consistent results when
repeated under similar conditions. In simple terms, reliability means that if the same test or
survey were given multiple times, it would yield the same or very similar results. The following
are some of the different types of reliability test:
i. Test-retest reliability: This is a measure of the stability of a research instrument over time. It
assesses whether an instrument yields consistent results when administered to the same group of
individuals at two different points in time, under similar conditions. If the instrument is reliable,
the scores from the two administrations should be highly correlated. In other words, test-retest
reliability checks how stable the instrument’s results are over time. If a test measures a stable
trait (like intelligence, personality traits, or attitudes), the scores should remain similar if the
same people take the test again after some period.
If the test-retest reliability is high, the correlation should be strong, indicating that the instrument
consistently measures the same construct over time.
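A minimal sketch of how a test-retest reliability coefficient might be computed is shown below. The two sets of scores are hypothetical and stand for the same ten respondents tested on two occasions; statistics.correlation requires Python 3.10 or later.

```python
import statistics

# Hypothetical scores for the same ten respondents at two points in time.
first_administration  = [34, 41, 28, 45, 38, 50, 30, 44, 36, 47]
second_administration = [36, 40, 30, 44, 37, 52, 29, 45, 35, 48]

# Test-retest reliability is usually reported as the Pearson correlation
# between the two administrations.
reliability = statistics.correlation(first_administration, second_administration)
print(f"Test-retest reliability coefficient = {reliability:.2f}")
```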
ii. Split-half reliability: This is a method used to assess the internal consistency of a research
instrument, specifically how well the items within the instrument correlate with one another. It
tests whether the instrument consistently measures the same construct across different parts of
the instrument. In simple terms, split-half reliability involves dividing a test into two halves and
comparing the scores from each half to see if they are consistent with one another. If both halves
give similar results, the test is considered reliable in terms of internal consistency.
1. Administer the test: The full instrument (test, survey, questionnaire, etc.) is
administered to a group of participants.
2. Divide the test into two halves: After administration, the test is split into two parts.
There are several ways to divide the test e.g.
o Odd-even split: Odd-numbered items are placed in one half, and even-numbered
items are placed in the other half.
o First-half, second-half split: The first half of the items is in one group, and the
second half in another.
3. Calculate the correlation: The scores from each half of the test are correlated to assess
how closely they align with each other. A high positive correlation indicates high split-
half reliability.
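The three steps above can be sketched in Python as follows. The item scores are hypothetical and the odd-even split is used; the Spearman-Brown correction at the end is a common extension, not discussed in the text, that estimates full-test reliability from the half-test correlation. statistics.correlation requires Python 3.10 or later.

```python
import statistics

# Hypothetical scores of six respondents on a 10-item test:
# each inner list holds one respondent's item scores (item 1 .. item 10).
responses = [
    [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 1, 0, 0, 1, 0, 0, 1],
    [1, 0, 1, 1, 1, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 0, 1, 0, 0, 1, 0],
]

# Step 2: odd-even split (items 1, 3, 5, ... versus items 2, 4, 6, ...).
odd_half_totals  = [sum(person[0::2]) for person in responses]
even_half_totals = [sum(person[1::2]) for person in responses]

# Step 3: correlate the two half-scores.
r_half = statistics.correlation(odd_half_totals, even_half_totals)

# Optional Spearman-Brown correction for the full-length test (an addition
# beyond the text, included because it is commonly reported with split-half).
r_full = (2 * r_half) / (1 + r_half)
print(f"Half-test correlation: {r_half:.2f}, Spearman-Brown estimate: {r_full:.2f}")
```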
iii. Inter-rater reliability (also known as inter-observer reliability): This refers to the degree
of agreement or consistency between different raters or observers when they assess the same
phenomenon using the same instrument or measurement tool. Essentially, it evaluates whether
different people who observe or score the same thing will give similar ratings or scores. In other
words, inter-rater reliability measures how consistently different raters apply the same criteria or
scoring rules when evaluating a subject, event, or behaviour.
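One simple way to quantify inter-rater agreement is the proportion of cases on which two raters give the same rating, as sketched below with hypothetical ratings. More refined indices such as Cohen's kappa, which correct this figure for chance agreement, are not covered in this text.

```python
# Hypothetical ratings of the same ten essays by two independent raters,
# each using the same 1-5 scoring rubric.
rater_a = [4, 3, 5, 2, 4, 3, 5, 1, 4, 2]
rater_b = [4, 3, 4, 2, 4, 3, 5, 2, 4, 2]

# Percentage agreement: the share of cases where both raters gave the same score.
matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
agreement = matches / len(rater_a)
print(f"Inter-rater agreement = {agreement:.0%}")  # 80% in this illustration
```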
REFERENCING OR CITATION IN RESEARCH
Referencing (or citation) in research refers to the practice of acknowledging the sources of
information, ideas, theories, or data that are used in a research paper, thesis, article, or any other
academic work. The purpose of referencing is to give credit to the original authors whose work
has contributed to your research, and to guide readers to the source material if they wish to
explore it further.
1. Acknowledges Original Authors: By citing sources, you give proper credit to the
original authors, preventing plagiarism and ensuring academic integrity.
2. Supports Your Research: Citations provide evidence to back up your arguments,
claims, or hypotheses, making your research more credible.
3. Enables Verification: References allow readers to verify the sources and information
you used in your research, ensuring transparency and trustworthiness.
4. Shows the Breadth of Your Research: Proper referencing demonstrates that you have
engaged with relevant literature and background studies in your field, reflecting the depth
and scope of your research.
5. Enables Further Research: By citing sources, you make it easier for others to locate
those sources, facilitating the development of further research.
Types of Citations
There are different types of citations, which depend on how you refer to the sources in your
research work. These can include:
1. In-Text Citations: These are brief references made within the body of your research to
acknowledge the work of others. In-text citations typically contain the author's surname
and the year of publication. Example: (Smith, 2020) or Smith (2020) found that...
2. Full Citations (Reference List or Bibliography): A complete citation appears in the
reference list or bibliography at the end of the research paper, providing detailed
information about each source you cited. (N.B. Kindly read further on the APA
referencing style which is currently adopted by the Faculty of Education, Adekunle
Ajasin University).
ABSTRACT WRITING IN RESEARCH
An abstract is a brief, concise summary of a research paper or study that provides readers with a
quick overview of the essential components of the research. It is typically the first part of a
research paper, thesis, dissertation, or article that a reader encounters, so it plays a crucial role in
conveying the purpose, methods, results, and conclusions of the study in a clear and succinct
manner. In research, the abstract serves as a standalone summary that allows readers to quickly
determine whether the full document is relevant to their interests or needs. A well-written
abstract can encourage further reading, while a poorly written one may cause potential readers to
skip the document.
1. Purpose/Objective (What is the research about?): The abstract should start by briefly stating
the research problem or the objective of the study. It should clarify the issue the research
addresses.
2. Methods (How was the research conducted?): A brief description of the research design,
methodology, and data collection techniques used should be included. This part does not need to
go into detail, but it should provide enough information to understand how the research was
carried out.
3. Results (What did the study find?): The abstract should briefly present the key findings or
results of the study. This includes the main outcomes or data trends; specific numbers or statistical results may be included if they are essential or highly significant.