week 6-7 research
Introduction
QUANTITATIVE RESEARCH
If the researcher views quantitative design as a continuum, one end of the range
represents designs in which the variables are not controlled at all but only observed, and
connections among variables are merely described. At the other end of the spectrum are designs
with very tight control of variables, in which relationships among those variables are
clearly established. In the middle, as experimental design moves from one type to the other, lies
a range that blends these two extremes.
Non-Experimental Research Design
1. Survey Research
Survey research uses interviews, questionnaires, and sampling polls to measure
behavior with precision. It allows researchers to assess behavior and then present
the findings accurately, usually expressed as a percentage. Survey research
can be conducted on one group specifically or used to compare several groups. When
conducting survey research, it is important that the people questioned are sampled at
random; this allows for more accurate findings across a greater spectrum of respondents.
Remember!
It is very important when conducting survey research that you work with
statisticians and field service agents who are reputable. Since there is a high level
of personal interaction in survey scenarios as well as a greater chance for
unexpected circumstances to occur, it is possible for the data to be affected. This
can heavily influence the outcome of the survey.
There are several ways to conduct survey research: in person, over the phone, or
through mail or email. In the last case, the survey can be self-administered. When
conducted on a single group, survey research is its own category.
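To illustrate the random sampling discussed above, here is a minimal Python sketch. The respondent pool and sample size are made up purely for illustration:

```python
import random

# Hypothetical respondent pool: 500 student IDs (illustrative data only).
population = [f"student_{i:03d}" for i in range(500)]

# Draw a simple random sample of 50 respondents, so that every member
# of the population has an equal chance of being selected.
random.seed(42)  # fixed seed so the draw is repeatable
sample = random.sample(population, k=50)

print(len(sample))        # 50
print(len(set(sample)))   # 50 — sampled without replacement, so no duplicates
```

Because `random.sample` draws without replacement, no respondent can appear twice, which matches how a survey sample is normally drawn.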
2. Correlational Research
Correlational research tests for the relationships between two variables. Correlational
research is performed to establish what the effect of one variable on the other might be and how
that affects the relationship.
Remember!
Correlation does not always mean causation. For example, just because two data
series move together doesn't mean that there is a direct cause-and-effect relationship.
Typically, you should not make assumptions from correlational research alone.
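The caution above can be shown with a short, self-contained Python sketch. The two series are invented purely to demonstrate a high correlation with no causal link between the variables:

```python
# Two made-up yearly series that happen to rise together (illustrative only):
ice_cream_sales = [100, 120, 140, 160, 180]   # units sold
drowning_cases  = [10, 12, 14, 16, 18]        # incidents

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, drowning_cases)
print(round(r, 2))  # 1.0 — strongly correlated, yet neither causes the other
```

A correlation of 1.0 here reflects only a shared pattern (e.g., both rise in summer); it says nothing about one variable causing the other.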
3. Descriptive
As stated by Good and Scates, as cited by Sevilla (1998), the descriptive method is
oftentimes described as a survey or a normative approach to studying prevailing conditions.
4. Comparative
Comparative research examines patterns of similarities and differences across a moderate
number of cases, which may extend to fifty or more cases. The number of cases is limited because
one of the concerns of comparative research is to establish familiarity with each case included in a
study. (Ragin 2015)
5. Ex Post Facto
In ex post facto research, there is a predictor variable or group of subjects that cannot be
manipulated by the experimenter.
Remember!
A true experiment and ex post facto research both attempt to show that an independent variable is
causing changes in a dependent variable. This is the basis of any experiment: one variable is
hypothesized to be influencing another. This is tested by having an experimental group and a
control group. So if you're testing a new type of medication, the experimental group gets the
new medication, while the control group gets the old medication. This allows you to test the
efficacy of the new medication. (Kowalczyk 2015)
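The experimental-versus-control comparison described above can be sketched in a few lines of Python. The symptom scores below are invented for illustration only:

```python
from statistics import mean

# Hypothetical trial: symptom scores after treatment (made-up numbers;
# lower scores mean fewer symptoms).
experimental = [3, 2, 4, 3, 2]  # group that received the new medication
control      = [5, 6, 5, 4, 6]  # group that received the old medication

# Compare the average outcome between the two groups.
diff = mean(experimental) - mean(control)
print(f"mean difference = {diff:.1f}")  # -2.4 → fewer symptoms with the new drug
```

In a real study this raw difference would be followed by a significance test (e.g., a t-test) before drawing any conclusion about efficacy.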
Experimental Research
Though questions may be posed in the other forms of research, experimental research is
guided specifically by a hypothesis; sometimes it can have several hypotheses. A hypothesis is a
statement to be proven or disproved. Once that statement is made, experiments begin to find out
whether the statement is true or not. This type of research is the bedrock of most sciences, in
particular the natural sciences. Quantitative research can be exciting and highly informative, and
it can be used to help explain all sorts of phenomena. The best quantitative research gathers
precise empirical data and can be applied to gain a better understanding of several fields of
study. (Williams 2015)
Types of Experimental research
1. Quasi-experimental Research
Quasi-experimental design involves selecting groups upon which a variable is tested, without
any random pre-selection process. For example, to perform an educational experiment, a class
might be arbitrarily divided by alphabetical order or by seating arrangement. This division is
often convenient, especially in educational settings, because it causes as little disruption as
possible.
INSTRUMENT DEVELOPMENT
Before collecting any data from the respondents, the young researchers will need to design
new research instruments or adapt existing ones from other studies (the tools they will use to
collect the data).
If the researchers are planning to carry out interviews or focus groups, they will need to plan
an interview schedule or topic guide: a list of questions or topic areas that all the interviewers
will use. Asking everyone the same questions means that the data you collect will be much more
focused and easier to analyze.
If the group wants to carry out a survey, the young researchers will need to design a questionnaire.
This could be on paper or online (using free software such as Survey Monkey). Both approaches
have advantages and disadvantages.
If the group is collecting data from more than one 'type' of person (such as young people and
teachers, for example), it may well need to design more than one interview schedule or
questionnaire. This should not be too difficult, as the young researchers can adapt additional
schedules or questionnaires from the original.
When designing the research instruments, keep the following in mind:
REMEMBER!
Questionnaires may ask people for relevant information about themselves, such as their
gender or age, if relevant. However, if you have said that the survey will be anonymous,
don't ask for so much detail that it would be possible to identify individuals.
The Instrument
Instrument is the generic term that researchers use for a measurement device (survey, test,
questionnaire, etc.). To help distinguish between instrument and instrumentation, consider that
the instrument is the device and instrumentation is the course of action (the process of developing,
testing, and using the device).
Usability
Usability refers to the ease with which an instrument can be administered, interpreted by the
participant, and scored/interpreted by the researcher. Example usability problems include:
Students are asked to rate a lesson immediately after class, but there are only a few minutes before
the next class begins (problem with administration).
Students are asked to keep self-checklists of their after school activities, but the directions are
complicated and the item descriptions confusing (problem with interpretation).
Teachers are asked about their attitudes regarding school policy, but some questions are poorly
worded, which results in low completion rates (problem with scoring/interpretation).
Validity and reliability concerns (discussed below) will help alleviate usability issues. For now, we
can identify five usability considerations:
How long will it take to administer?
Are the directions clear?
How easy is it to score?
Do equivalent forms exist?
Have any problems been reported by others who used it?
Validity
Validity is the extent to which an instrument measures what it is supposed to measure and performs
as it is designed to perform. It is rare, if not impossible, for an instrument to be 100% valid, so
validity is generally measured in degrees. As a process, validation involves collecting and analyzing
data to assess the accuracy of an instrument. There are numerous statistical tests and measures to
assess the validity of quantitative instruments, which generally involves pilot testing. The remainder
of this discussion focuses on external validity and content validity.
External validity is the extent to which the results of a study can be generalized from a sample to a
population. Establishing external validity for an instrument, then, follows directly from sampling.
Recall that a sample should be an accurate representation of a population, because the total
population may not be available. An instrument that is externally valid helps obtain population
generalizability, or the degree to which a sample represents the population.
Content validity refers to the appropriateness of the content of an instrument. In other words, do the
measures (questions, observation logs, etc.) accurately assess what you want to know? This is
particularly important with achievement tests. Consider that a test developer wants to maximize the
validity of a unit test for 7th grade mathematics. This would involve taking representative questions
from each of the sections of the unit and evaluating them against the desired outcomes.
Reliability
Reliability can be thought of as consistency. Does the instrument consistently measure what it is
intended to measure? It is not possible to calculate reliability; however, there are four general
estimators that you may encounter in reading research:
Test-Retest Reliability: The consistency of a measure evaluated over time.
Parallel-Forms Reliability: The reliability of two tests constructed the same way, from the same
content.
Internal Consistency Reliability: The consistency of results across items, often measured with
Cronbach's Alpha.
Inter-Rater Reliability: The degree of agreement between different observers scoring or rating
the same instrument.
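As an illustration of internal consistency, here is a minimal Python sketch of Cronbach's alpha. The item scores are made up, and a real analysis would normally use a statistics package:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists.

    `items` holds one inner list per questionnaire item, each of the same
    length (one score per respondent).
    """
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]

    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item scale answered by 5 respondents (made-up data):
scores = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 5, 2, 4, 4],
]
print(round(cronbach_alpha(scores), 2))  # 0.81
```

An alpha around 0.8, as here, is conventionally read as acceptable internal consistency: the three items tend to move together across respondents.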
GUIDELINES IN WRITING RESEARCH
METHODOLOGY
The methodology section is one of the parts of a research paper. This part is the core of your
paper, as it is proof that you used the scientific method, and it is through this section that your
study's validity is judged. So, it is very important. Your methodology answers two main
questions: How did you collect or generate the data? How did you analyze the data?
While writing this section, be direct and precise, and write it in the past tense. Include enough
information so that others could repeat the experiment and evaluate whether the results are
reproducible, and so that the audience can judge whether the results and conclusions are valid.
The explanation of the collection and the analysis of your data is very important because:
Readers need to know the reasons why you chose a particular method or procedure instead
of others.
Readers need to know that the collection or the generation of the data is valid in the field of
study.
Discuss the anticipated problems in the process of the data collection and the steps you took
to prevent them.
Present the rationale for why you chose specific experimental procedures.
Provide sufficient information of the whole process so that others could replicate your study.
You can do this by giving a completely accurate description of the data collection equipment
and techniques, and by explaining how you collected and analyzed the data.
Specifically;
Present the basic demographic profile of the sample population like age, gender, and the
racial composition of the sample. When animals are the subjects of a study, you list their
species, weight, strain, sex, and age.
Explain how you gathered the samples/ subjects by answering these questions:
- Did you use any randomization techniques?
- How did you prepare the samples?
Explain how you made the measurements by answering this question:
- What calculations did you make?
Describe the materials and equipment that you used in the research.
Describe the statistical techniques that you used upon the data.
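As a small illustration of reporting a sample's basic demographic profile, as described above, here is a Python sketch using made-up respondent records:

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical sample of respondents (made-up demographic records):
respondents = [
    {"age": 16, "gender": "F"}, {"age": 17, "gender": "M"},
    {"age": 16, "gender": "F"}, {"age": 18, "gender": "M"},
    {"age": 17, "gender": "F"},
]

# Summarize age with mean and standard deviation, and tally gender.
ages = [r["age"] for r in respondents]
print(f"n = {len(respondents)}")
print(f"mean age = {mean(ages):.1f} (SD = {stdev(ages):.2f})")
print("gender counts:", Counter(r["gender"] for r in respondents))
```

Reporting n, means with standard deviations, and category counts in this way gives readers the demographic context they need to judge how representative the sample is.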
Name: Score:
Strand/Section/Grade: Date:
DIRECTIONS: Read the question carefully. Write your answer on the space provided.
1. there is a predictor variable or group of subjects that cannot be
manipulated by the experimenter.
2. the research focuses on verifiable observation as opposed to
theory or logic.
3. uses interviews, questionnaires, and sampling polls to get a
sense of behavior with intense precision.
4. tests for the relationships between two variables. Performing
correlational research is done to establish what the effect of
one on the other might be and how that affects the
relationship.
5. It is conducted in order to explain a noticed occurrence. In
correlational research the survey is conducted on a minimum
of two groups.
6. This research method involves the description, recognition,
analysis, and interpretation of conditions that currently exist.
7. This research examines patterns of similarities and differences
across a moderate number of cases.
8. Though questions may be posed in the other forms of
research, experimental research is guided specifically by a
hypothesis. Sometimes experimental research can have
several hypotheses.
9. It is a statement to be proven or disproved. Once that
statement is made experiments are begun to find out whether
the statement is true or not.
10. This research can be exciting and highly informative.
11. This research design that can establish cause and effect
relationships.
12. the extent to which an instrument measures what it is
supposed to measure and performs as it is designed to
perform.
13. refers to the appropriateness of the content of an instrument
ACTIVITY
DIRECTIONS: Write a reflection relating reliability and validity in at least 250 words. (25 points)
Reliability is directly related to the validity of the measure. There are several important
principles. First, a test can be considered reliable, but not valid. Consider the SAT, used as a
predictor of success in college. It is a reliable test (high scores relate to high GPA), though only a
moderately valid indicator of success (due to the lack of structured environment – class attendance,
parent-regulated study, and sleeping habits – each holistically related to success).
Second, validity is more important than reliability. Using the above example, college
admissions may consider the SAT a reliable test, but not necessarily a valid measure of other
quantities colleges seek, such as leadership capability, altruism, and civic involvement. The
combination of these aspects, alongside the SAT, is a more valid measure of the applicant's
potential for graduation, later social involvement, and generosity (alumni giving) toward the alma
mater.
Finally, the most useful instrument is both valid and reliable. Proponents of the SAT argue that it is
both. It is a moderately reliable predictor of future success and a moderately valid measure of a
student's knowledge in Mathematics, Critical Reading, and Writing.
Compiled by: