PRACTICAL RESEARCH 2 Weeks 6-7
UNDERSTANDING DATA AND WAYS TO SYSTEMATICALLY COLLECT DATA

Introduction

This material is a compilation of resources gathered from an extensive literature review; much of the information is taken verbatim from the various source websites. The objective is to familiarize readers with data collection tools, methodology, and sampling. It is important to note that while quantitative and qualitative data collection methods differ (in cost, time, sample size, etc.), each has value. Quantitative research most often uses deductive logic, in which researchers start with hypotheses and then collect data to determine whether empirical evidence supporting those hypotheses exists.

QUANTITATIVE RESEARCH

If the researcher views quantitative design as a continuum, one end of the range represents a design where the variables are not controlled at all but only observed, and connections amongst variables are merely described. At the other end of the spectrum are designs that involve very close control of variables, where relationships amongst those variables are clearly established. In the middle, as experimental design moves from one type to the other, is a range that blends the two extremes.

TYPES OF QUANTITATIVE RESEARCH

Quantitative research is a type of empirical investigation, meaning that the research focuses on verifiable observation as opposed to theory or logic. Most often this type of research is expressed in numbers. Researchers represent and manipulate certain observations that they are studying, attempt to explain what they are seeing and what effect it has on the subject, and determine what the changes may reflect. The overall goal is to convey numerically what is being seen in the research and to arrive at specific and observable conclusions. (Klazema 2014)

Non-Experimental Research Design

Non-experimental research means there is a predictor variable or group of subjects that


cannot be manipulated by the experimenter. Typically, this means that other routes must be used to
draw conclusions, such as correlation, survey, or case study (Kowalczyk 2015).

Types of Non-Experimental Research

1. Survey Research

Survey research uses interviews, questionnaires, and sampling polls to get a sense
of behavior with intense precision. It allows researchers to judge behavior and then present
the findings in an accurate way. This is usually expressed in a percentage. Survey research
can be conducted around one group specifically or used to compare several groups. When
conducting survey research, it is important that the people questioned are sampled at
random. This allows for more accurate findings across a greater spectrum of respondents.

Remember!

 It is very important when conducting survey research that you work with
statisticians and field service agents who are reputable. Since there is a high level
of personal interaction in survey scenarios as well as a greater chance for
unexpected circumstances to occur, it is possible for the data to be affected. This
can heavily influence the outcome of the survey.

 There are several ways to conduct survey research. They can be done in person,
over the phone, or through mail or email. In the last instance they can be self-
administered. When conducted on a single group, survey research is its own
category.
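To make the random-sampling point above concrete, here is a minimal sketch (not part of the original module) in Python. The sampling frame of 200 respondents and the sample size of five are hypothetical; the code simply draws a simple random sample without replacement using the standard library.

    import random

    # Hypothetical sampling frame: every person eligible to be surveyed.
    sampling_frame = [f"Respondent {i:03d}" for i in range(1, 201)]

    # Simple random sampling without replacement gives every person
    # an equal chance of being selected.
    random.seed(42)  # fixed seed only so the example is repeatable
    sample = random.sample(sampling_frame, k=5)
    print("Selected for the survey:", sample)

The same idea scales to larger or stratified designs, but the essential requirement described above is that selection is left to chance rather than to the convenience of the researcher.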

2. Correlational Research

Correlational research tests for relationships between two variables. It is performed to establish what effect one variable might have on the other and how that affects the relationship between them.

Remember!

 Correlational research is conducted in order to explain a noticed occurrence. In


correlational research the survey is conducted on a minimum of two groups. In
most correlational research there is a level of manipulation involved with the
specific variables being researched. Once the information is compiled it is then
analyzed mathematically to draw conclusions about the effect that one has on the other.

 Correlation does not always mean causation. For example, just because two data
points sync doesn't mean that there is a direct cause and effect relationship.
Typically, you should not make assumptions from correlational research alone.
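To illustrate what "testing for a relationship between two variables" can look like in practice, here is a minimal sketch (an illustration, not part of the original module) that computes a Pearson correlation coefficient with NumPy. The hours-studied and test-score values are hypothetical, and, as the caution above notes, a strong coefficient alone does not establish causation.

    import numpy as np

    # Hypothetical paired observations for two variables.
    hours_studied = np.array([1, 2, 3, 4, 5, 6, 7, 8])
    test_scores = np.array([52, 55, 61, 60, 68, 71, 75, 80])

    # np.corrcoef returns a 2x2 correlation matrix; the off-diagonal
    # entry is the Pearson correlation between the two variables.
    r = np.corrcoef(hours_studied, test_scores)[0, 1]
    print(f"Pearson r = {r:.2f}")  # values near +1 suggest a strong positive association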

3. Descriptive

As stated by Good and Scates, as cited by Sevilla (1998), the descriptive method is oftentimes referred to as a survey or a normative approach to the study of prevailing conditions.

Remember!

 The descriptive method involves the description, recognition, analysis, and interpretation of conditions that currently exist. Moreover, according to Gay (2007), descriptive research design involves the collection of data in order to test hypotheses or to answer questions concerning the current status of the subject of the study. It determines and reports the way things are.

4. Comparative

Comparative researchers examine patterns of similarities and differences across a


moderate number of cases. The typical comparative study has anywhere from a handful to fifty or more cases. The number of cases is limited because one of the concerns of comparative research is to establish familiarity with each case included in a study. (Ragin 2015)

 Like qualitative researchers, comparative researchers consider how the different


parts of each case - those aspects that are relevant to the investigation - fit together;
they try to make sense of each case. Thus, knowledge of cases is considered an
important goal of comparative research, independent of any other goal.

5. Ex Post Facto

According to Devin Kowalczyk, ex post facto design is a quasi-experimental study examining how an independent variable, present prior to the study, affects a dependent variable.

Remember!

 A true experiment and ex post facto both are attempting to say: this independent variable is
causing changes in a dependent variable. This is the basis of any experiment - one variable is
hypothesized to be influencing another. This is done by having an experimental group and a
control group. So if you're testing a new type of medication, the experimental group gets the
new medication, while the control group gets the old medication. This allows you to test the
efficacy of the new medication. (Kowalczyk 2015)

Experimental Research

Though questions may be posed in the other forms of research, experimental research is
guided specifically by a hypothesis. Sometimes experimental research can have several
hypotheses. A hypothesis is a statement to be proven or disproved. Once that statement is made, experiments begin to find out whether the statement is true or not. This type of research is the
bedrock of most sciences, in particular the natural sciences. Quantitative research can be exciting
and highly informative. It can be used to help explain all sorts of phenomena. The best
quantitative research gathers precise empirical data and can be applied to gain a better
understanding of several fields of study. (Williams 2015)
Types of Experimental research

1. Quasi-experimental Research

Quasi-experimental design involves selecting groups upon which a variable is tested, without any random pre-selection process. For example, to perform an educational experiment, a class might be arbitrarily divided by alphabetical selection or by seating arrangement. This kind of division is convenient, especially in educational situations, because it causes as little disruption as possible.

2. True Experimental Design

According to Yolanda Williams (2015), a true experiment is a type of experimental design and is thought to be the most accurate type of experimental research.
This is because a true experiment supports or refutes a hypothesis using statistical analysis.
A true experiment is also thought to be the only experimental design that can establish cause
and effect relationships. So, what makes a true experiment?

There are three criteria that must be met in a true experiment:

1. Control group and experimental group
2. Researcher-manipulated variable
3. Random assignment (illustrated in the sketch below)
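As a concrete illustration of criterion 3, the sketch below (hypothetical names, not part of the original module) randomly assigns participants to an experimental group and a control group; it is this chance-based assignment, rather than an arbitrary split by alphabet or seating, that marks a true experiment.

    import random

    # Hypothetical participant pool.
    participants = ["Ana", "Ben", "Carla", "Diego", "Ella", "Felix", "Gina", "Hugo"]

    # Random assignment: shuffle the pool, then split it in half.
    random.seed(7)  # fixed seed only so the example is repeatable
    random.shuffle(participants)
    midpoint = len(participants) // 2

    experimental_group = participants[:midpoint]  # receives the treatment
    control_group = participants[midpoint:]       # receives the comparison condition

    print("Experimental group:", experimental_group)
    print("Control group:     ", control_group)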

INSTRUMENT DEVELOPMENT

Developing a research instrument

Before collecting any data from the respondents, the young researchers will need to design or devise new research instruments (the tools they will use to collect the data), or they may adapt instruments from other studies.

If the researchers are planning to carry out interviews or focus groups, they will
need to plan an interview schedule or topic guide. This is a list of questions or topic areas that all
the interviewers will use. Asking everyone the same questions means that the data you collect will
be much more focused and easier to analyze.
If the group wants to carry out a survey, the young researchers will need to design a questionnaire.
This could be on paper or online (using free software such as Survey Monkey). Both approaches
have advantages and disadvantages.

If the group is collecting data from more than one 'type' of person (such as young people and teachers, for example), it may well need to design more than one interview schedule or questionnaire. This should not be too difficult, as the young researchers can adapt additional schedules or questionnaires from the original.
When designing the research instruments, ensure that:

 they start with a statement about:
- the focus and aims of the research project
- how the person's data will be used (for example, to feed into a report)
- confidentiality
- how long the interview or survey will take to complete
 appropriate language is used
 every question is brief and concise
 any questionnaires use appropriate scales; for young people, 'smiley face' scales can work well

REMEMBER!

Questionnaires may ask people for relevant information about themselves, such as their gender or age. Don't ask for so much detail that it would be possible to identify individuals, though, if you have said that the survey will be anonymous.

The Instrument

Instrument is the generic term that researchers use for a measurement device (survey, test,
questionnaire, etc.). To help distinguish between instrument and instrumentation, consider that
the instrument is the device and instrumentation is the course of action (the process of developing,
testing, and using the device).

Instruments fall into two broad categories, researcher-completed and subject-completed,


distinguished by those instruments that researchers administer versus those that are completed by participants. Researchers choose which type of instrument, or instruments, to use based on the research question. Examples are listed below:

Researcher-completed Instruments: rating scales, interview schedules/guides, tally sheets, flowcharts, performance checklists, time-and-motion logs, observation forms.

Subject-completed Instruments: questionnaires, self-checklists, attitude scales, personality inventories, achievement/aptitude tests, projective devices, sociometric devices.

Usability

Usability refers to the ease with which an instrument can be administered, interpreted by the
participant, and scored/interpreted by the researcher. Example usability problems include:

Students are asked to rate a lesson immediately after class, but there are only a few minutes before
the next class begins (problem with administration).

Students are asked to keep self-checklists of their after school activities, but the directions are
complicated and the item descriptions confusing (problem with interpretation).

Teachers are asked about their attitudes regarding school policy, but some questions are worded
poorly which results in low completion rates (problem with scoring/interpretation).

Validity and reliability concerns (discussed below) will help alleviate usability issues. For now, we
can identify five usability considerations:
 How long will it take to administer?
 Are the directions clear?
 How easy is it to score?
 Do equivalent forms exist?
 Have any problems been reported by others who used it?

Validity

Validity is the extent to which an instrument measures what it is supposed to measure and performs
as it is designed to perform. It is rare, if not impossible, for an instrument to be 100% valid, so
validity is generally measured in degrees. As a process, validation involves collecting and analyzing
data to assess the accuracy of an instrument. There are numerous statistical tests and measures to
assess the validity of quantitative instruments, which generally involves pilot testing. The remainder
of this discussion focuses on external validity and content validity.

External validity is the extent to which the results of a study can be generalized from a sample to a
population. Establishing external validity for an instrument, then, follows directly from sampling.
Recall that a sample should be an accurate representation of a population, because the total
population may not be available. An instrument that is externally valid helps obtain population
generalizability, or the degree to which a sample represents the population.

Content validity refers to the appropriateness of the content of an instrument. In other words, do the
measures (questions, observation logs, etc.) accurately assess what you want to know? This is
particularly important with achievement tests. Consider that a test developer wants to maximize the
validity of a unit test for 7th grade mathematics. This would involve taking representative questions
from each of the sections of the unit and evaluating them against the desired outcomes.
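One simple way to check the blueprinting idea above is to tally how many test questions are drawn from each section of the unit and to flag sections with no questions at all. The sketch below is only an illustration; the section names and the question-to-section mapping are hypothetical.

    from collections import Counter

    # Hypothetical sections of a 7th grade mathematics unit.
    unit_sections = ["integers", "fractions", "ratios", "equations"]

    # Hypothetical mapping of each test question to the section it samples.
    question_sections = [
        "integers", "integers", "fractions", "ratios",
        "ratios", "equations", "fractions", "integers",
    ]

    coverage = Counter(question_sections)
    for section in unit_sections:
        count = coverage.get(section, 0)
        note = "" if count else "  <-- not covered; content validity at risk"
        print(f"{section:>10}: {count} question(s){note}")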

Reliability

Reliability can be thought of as consistency. Does the instrument consistently measure what it is
intended to measure? It is not possible to calculate reliability; however, there are four general
estimators that you may encounter in reading research:

Inter-Rater/Observer Reliability: The degree to which different raters/observers give consistent


answers or estimates.

Test-Retest Reliability: The consistency of a measure evaluated over time.

Parallel-Forms Reliability: The reliability of two tests constructed the same way, from the same
content.

Internal Consistency Reliability: The consistency of results across items, often measured with Cronbach's Alpha (a short computational sketch follows).
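As an illustration of the internal-consistency estimate mentioned above, here is a minimal NumPy sketch of Cronbach's Alpha. The respondent-by-item matrix of Likert ratings is hypothetical, and in practice a statistics package would usually be used; the code simply applies the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

    import numpy as np

    def cronbach_alpha(item_scores):
        """Estimate internal consistency for a respondents-by-items score matrix."""
        scores = np.asarray(item_scores, dtype=float)
        n_items = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1)      # variance of each item
        total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scores
        return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical ratings: 5 respondents answering 4 Likert items (illustrative only).
    responses = np.array([
        [4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 5, 5, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
    ])
    print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

Values closer to 1 indicate that the items vary together, which is what "consistency of results across items" means operationally.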

GUIDELINES IN WRITING RESEARCH METHODOLOGY

Methodology is the systematic, theoretical analysis of the methods applied to a field of


study. It comprises the theoretical analysis of the body of methods and principles associated with a
branch of knowledge.

The methodology section is one of the parts of a research paper. This part is the core of your paper, as it is proof that you used the scientific method. Through this section, your study's validity is judged, so it is very important. Your methodology answers two main questions:

Guide questions to start writing a research methodology:

 How did you collect or generate the data?


 How did you analyze the data?

While writing this section, be direct and precise. Write it in the past tense. Include enough information so that others could repeat the experiment and evaluate whether the results are reproducible, and so that the audience can judge whether the results and conclusions are valid.

The explanation of the collection and the analysis of your data is very important because:

 Readers need to know the reasons why you chose a particular method or procedure instead
of others.
 Readers need to know that the collection or the generation of the data is valid in the field of
study.
 Discuss the anticipated problems in the process of the data collection and the steps you took
to prevent them.
 Present the rationale for why you chose specific experimental procedures.
 Provide sufficient information of the whole process so that others could replicate your study.

You can do this by giving a completely accurate description of the data collection equipment and techniques, and by explaining how you collected and analyzed the data.

Specifically:

 Present the basic demographic profile of the sample population like age, gender, and the
racial composition of the sample. When animals are the subjects of a study, you list their
species, weight, strain, sex, and age.
 Explain how you gathered the samples/ subjects by answering these questions:
- Did you use any randomization techniques?
- How did you prepare the samples?
 Explain how you made the measurements by answering this question:
- What calculations did you make?
 Describe the materials and equipment that you used in the research.
 Describe the statistical techniques that you used upon the data.

The order of the methods section:


1. Describing the samples/ participants.
2. Describing the materials you used in the study
3. Explaining how you prepared the materials
4. Describing the research design
5. Explaining how you made measurements and what calculations you performed
6. Stating which statistical tests you did to analyze the data.

Name: Score:
Strand/Section/Grade: Date:

CHECK YOUR KNOWLEDGE (Short Answer Question) (2 POINTS EACH)

DIRECTIONS: Read each question carefully. Write your answer in the space provided.
1. there is a predictor variable or group of subjects that cannot be
manipulated by the experimenter.
2. the research focuses on verifiable observation as opposed to
theory or logic.
3. uses interviews, questionnaires, and sampling polls to get a
sense of behavior with intense precision.
4. tests for the relationships between two variables. Performing
correlational research is done to establish what the effect of
one on the other might be and how that affects the
relationship.
5. It is conducted in order to explain a noticed occurrence. In
correlational research the survey is conducted on a minimum
of two groups.
6. This research method involves the description, recognition, analysis and interpretation of conditions that currently exist.
7. This research examines patterns of similarities and differences
across a moderate number of cases
8. Though questions may be posed in the other forms of
research, experimental research is guided specifically by a
hypothesis. Sometimes experimental research can have
several hypotheses.
9. It is a statement to be proven or disproved. Once that
statement is made experiments are begun to find out whether
the statement is true or not.
10. This research can be exciting and highly informative.
11. This research design that can establish cause and effect
relationships.
12. the extent to which an instrument measures what it is
supposed to measure and performs as it is designed to
perform.
13. refers to the appropriateness of the content of an instrument

ACTIVITY

DIRECTIONS: Write a reflection relating reliability and validity in at least 250 words. (25 points)

Relating Reliability and Validity

Reliability is directly related to the validity of the measure. There are several important
principles. First, a test can be considered reliable, but not valid. Consider the SAT, used as a
predictor of success in college. It is a reliable test (high scores relate to high GPA), though only a
moderately valid indicator of success (due to the lack of structured environment – class attendance,
parent-regulated study, and sleeping habits – each holistically related to success).

Second, validity is more important than reliability. Using the above example, college
admissions may consider the SAT a reliable test, but not necessarily a valid measure of other
quantities colleges seek, such as leadership capability, altruism, and civic involvement. The
combination of these aspects, alongside the SAT, is a more valid measure of the applicant's
potential for graduation, later social involvement, and generosity (alumni giving) toward the alma
mater.

Finally, the most useful instrument is both valid and reliable. Proponents of the SAT argue that it is
both. It is a moderately reliable predictor of future success and a moderately valid measure of a
student's knowledge in Mathematics, Critical Reading, and Writing.

Compiled by:

SHIAHARI I. CORTEZ, R.N., M.Ed.
