FINALS Instruments in Quantitative Research An Overview

This document discusses instruments used in quantitative research. It describes common instruments like tests, questionnaires, interviews and observations. It also discusses developing your own instrument by adopting an existing one, modifying one, or creating a new one. The document emphasizes the importance of ensuring an instrument's validity and reliability. It defines different types of validity like face validity, content validity, construct validity, concurrent validity and predictive validity. It also defines four types of reliability: test-retest reliability, equivalent forms reliability, internal consistency reliability, and inter-rater reliability.

Instruments in Quantitative

Research: An Overview
A Lesson in Practical Research 2
Prepared by Ms. Arlene A. Bondad
Instruments
• Are tools used to gather data for a particular research topic. Some common instruments used in quantitative research are tests (performance-based or paper-and-pencil), questionnaires, interviews, and observations.
• The last two instruments are used more often in qualitative research. However, they can also be employed in quantitative studies as long as the required responses or analyzed data are numerical in nature.
Three ways of developing an instrument for quantitative research
1. Adopting an instrument - this means that you will utilize an instrument that has already been used in well-known institutions or in reputable studies and publications.
• Some popular sources of instruments include professional journals and websites, such as Tests in Print and the IRIS Digital Repository.
Sometimes, however, the available tests do not generate the exact data that you want to obtain. In this case, you may either
2. Modify an existing instrument, or
3. Create your own instrument.
As you develop your own
instrument…
• Be guided by the instruments used in studies similar to yours. Make sure that the items contained in your instrument are aligned with your research questions or objectives.
• Remember that inadequacies in your research instrument will yield inaccurate data, thereby making the results of your study questionable.
Instrument Validity
Whether your instrument is adopted, modified, or self-created, it is necessary to ensure its validity and reliability.

Validity refers to the degree to which an instrument measures what it is supposed to measure.
Types of Validity
• Face validity
• Content validity
• Construct validity
• Concurrent validity
• Predictive validity
Face Validity
• An instrument has face validity when
it appears to measure the variables
being studied. Hence, checking for
face validity is a subjective process. It
does not ensure that the instrument
has actual validity.
Content Validity
• It refers to the degree to which an instrument
covers a representative sample (or specific
elements) of the variable to be measured.
Similar to face validity, assessing content
validity is a subjective process which is done
with the help of a list of specifications.

• The list of specifications is provided by experts in your field of study.
Construct Validity
• It is the degree to which an instrument measures the variable being studied as a whole. Thus, a valid instrument is able to detect what should exist theoretically. A construct is often an intangible or abstract variable, such as personality, intelligence, or mood. If your instrument cannot detect this intangible construct, it is considered invalid.
Criterion Validity
• This refers to the degree to which an instrument predicts the characteristics of a variable in a certain way. This means that the instrument produces results similar to those of another instrument measuring the same variable.
• Therefore, a correlation between the results obtained through this instrument and those obtained through another is expected. Hence, criterion validity is evaluated through statistical methods.
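The statistical evaluation mentioned above is usually a correlation coefficient. Below is a minimal sketch in Python; all scores are hypothetical, with `new_test` standing in for the instrument being validated and `criterion` for an already-validated instrument measuring the same variable.

```python
# Evaluating criterion validity with a Pearson correlation.
# All scores below are hypothetical illustration data.
from statistics import mean, stdev

new_test = [78, 85, 62, 90, 73, 88, 67, 81]   # instrument being validated
criterion = [75, 88, 60, 93, 70, 85, 65, 84]  # already-validated instrument

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(f"criterion validity coefficient r = {pearson_r(new_test, criterion):.2f}")
```

A coefficient close to +1 indicates that the new instrument ranks participants much like the validated one, which is the statistical evidence criterion validity relies on.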
Two types of Criterion Validity
1. Concurrent validity - when the instrument is able to produce results similar to those of a test already validated in the past.
• It is assessed by administering the two instruments at about the same time.
• An example of testing concurrent validity is
whether an admission test produces results
similar to those of the National Achievement
Test.
Two types of Criterion Validity
2. Predictive validity - when the instrument produces results similar to those of another instrument that will be employed in the future.

• An example of testing predictive validity is employing a college admission test in mathematics to predict the students' future performance in mathematics.
Instrument Reliability

• Refers to the consistency of the measures yielded by an instrument.
• It is an aspect of the accuracy of measurement: a reliable instrument produces consistent results across repeated measurements.
Four types of reliability

1. Test-retest reliability
2. Equivalent forms reliability
3. Internal consistency reliability
4. Inter-rater reliability
Test-retest reliability
• Is achieved by administering an instrument twice to the same group of participants and then computing the consistency of the scores. The retest is often conducted after only a short interval (e.g., two weeks), so that the trait being measured has little chance to change and the two sets of scores remain comparable.
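Test-retest reliability is usually reported as the correlation between the two administrations. The sketch below uses hypothetical scores from the same ten participants tested twice, two weeks apart.

```python
# Test-retest reliability as the correlation between two
# administrations of the same instrument (hypothetical scores).
from statistics import mean, stdev

first  = [14, 18, 11, 20, 16, 13, 19, 15, 17, 12]  # first administration
second = [15, 17, 12, 19, 16, 14, 20, 14, 18, 11]  # retest, two weeks later

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(f"test-retest reliability r = {pearson_r(first, second):.2f}")
```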
Equivalent forms reliability
• Is measured by administering two tests that are identical in all aspects except the actual wording of the items. In short, the two tests have the same coverage, difficulty level, test type, and format. An example of a procedure involving equivalent forms reliability is administering a pre-test and a post-test.
Internal Consistency Reliability

• Is a measure of how well the items within a single instrument measure the same construct.
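Internal consistency is commonly estimated with Cronbach's alpha, a statistic not named in the slides but standard for this purpose. The sketch below uses hypothetical responses: each row is one participant, each column one item answered on a 5-point scale.

```python
# Cronbach's alpha for internal consistency (hypothetical data).
from statistics import variance  # sample variance

scores = [  # rows: participants, columns: items
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])                                  # number of items
    item_vars = [variance(col) for col in zip(*rows)] # variance of each item
    total_var = variance([sum(row) for row in rows])  # variance of total scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency.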
End
