Reliability and Validity

This document outlines a conceptual framework for research, focusing on methods of data collection and on the validity and reliability of research instruments. It discusses various types of validity, including content, construct, and criterion validity, as well as reliability measures such as test-retest and internal consistency, and provides references for further reading on research methodologies and validity issues.

[Diagram: a conceptual framework for research design linking Goals, Conceptual Framework, Research Questions, Methods, and Validity]
Examples of qualitative data:
• The extremely short woman has curly hair and brilliant blue eyes.
• A bright white light pierced the small dark space.
• A manager gives an employee constructive criticism on their skills: "Your efforts are solid and you understand the product knowledge well; just have patience."
• A sales associate collects feedback from customers: "The customer said the check-out button did not work."

Examples to classify by level of measurement (Nominal, Ordinal, Interval, Ratio):
• What country do you work in?
• What is your most recent job title?
• The age, weight, and height of a group of body types, used to determine clothing size charts.
• The origin, gender, and location recorded for a census reading.



Reliability: the extent to which the same answers can be obtained using the same instrument more than one time.

Reliability "is a concern every time a single observer is the source of data, because we have no certain guard against the impact of that observer's subjectivity" (Babbie, 2010).

According to Wilson (2010), reliability issues are most often closely associated with subjectivity.



Three common measures of reliability:
• Test-retest reliability
• Internal consistency
• Inter-rater reliability
Test-retest reliability
• Refers to the consistency of outcomes over time.
• What counts as a "short" interval between administrations is relative to the construct being measured.
• A test-retest correlation is used to compare the consistency of your results.
• Requires using the measure on a group of people at one time, using it again on the same group of people at a later time, and then looking at the test-retest correlation between the two sets of scores.
• A measure that produces highly inconsistent scores over time cannot be a very good measure of a construct that is supposed to be consistent.
• Typically assessed by graphing the data in a scatterplot and computing Pearson's r.
• Example: the Rosenberg Self-Esteem Scale.
Internal consistency
• Also known as internal reliability.
• The consistency of results across the various items measured on the same scale.
• All the items on such measures are supposed to reflect the same underlying construct, so people's scores on those items should be correlated with each other.
• Example: the Rosenberg Self-Esteem Scale
  https://wwnorton.com/college/psych/psychsci/media/rosenberg.htm
• Can only be assessed by collecting and analyzing data.
• Split-half correlation: the items used to measure the underlying construct are divided into two halves, each respondent's two half-scores are computed, and the halves are plotted against each other in a scatterplot and correlated.
• Cronbach's α: the mean of all possible split-half correlations for a set of items.
  • A value of +.80 or greater is generally taken to indicate good internal consistency; a value of .70 is often accepted as adequate.
Inter-rater reliability
• The extent to which different observers are consistent in their judgments.
• Helps guard against the personal bias of any single observer.
• Helps judge outcomes from the different perspectives of multiple observers.
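One standard statistic for inter-rater reliability with categorical judgments is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A rough sketch with two hypothetical observers and made-up labels:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: probability both raters independently pick the same label.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical observers coding the same 10 behaviours (invented labels).
a = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos", "pos"]
b = ["pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "pos", "pos"]
kappa = cohens_kappa(a, b)
print(round(kappa, 2))  # ≈ 0.58: moderate agreement beyond chance
```

Here the raters agree on 8 of 10 codes, but because chance alone would produce substantial agreement, kappa is noticeably lower than the raw 80% agreement rate.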
Type of validity — Description
• Content validity — the extent to which a research instrument accurately measures all aspects of a construct.
• Construct validity — the extent to which a research instrument (or tool) measures the intended construct.
• Criterion validity — the extent to which a research instrument is related to other instruments that measure the same variables.
Content validity
• Measures whether the test covers all the content it needs to produce the outcome you're expecting.
• The extent to which a measure "covers" the construct of interest.
• Refers to determining validity by evaluating what is being measured.
• By omitting any critical factor, you risk significantly reducing the validity of your research, because you won't be covering everything necessary to make an accurate deduction.
Face validity (a subset of content validity)
• The extent to which a measurement method appears "on its face" to measure the construct of interest.
• A very weak kind of evidence that a measurement method is measuring what it is supposed to.
• Example: Minnesota Multiphasic Personality Inventory-2 (MMPI-2) items such as "I enjoy detective or mystery stories" and "The sight of blood doesn't frighten me or make me sick."
• Quantifying face validity can be difficult: if the method used for measurement doesn't appear to test what it claims to measure, its face validity is low.
Criterion validity
• The extent to which people's scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with.
• A criterion can be any variable that one has reason to think should be correlated with the construct being measured, and there will usually be many of them.
• When the criterion is measured at the same time as the construct, criterion validity is referred to as concurrent validity.
• When the criterion is measured at some point in the future (after the construct has been measured), it is referred to as predictive validity.
Convergent validity
• Criteria can also include other measures of the same construct; scores should correlate strongly with them.

Discriminant validity
• The extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct.



Methods commonly used to assess each type of validity:
• Face validity: expert review; literature review; pre-testing/pilot studies
• Content validity: triangulation; member checking; pilot testing
• Construct validity: triangulation; comparison with similar constructs
• Criterion validity: concurrent validity; predictive validity; correlation analysis
• Discriminant validity: divergent validation; correlation analysis; factor analysis
• Convergent validity: expert review; participant feedback; literature comparison
References
• Babbie, E. R. (2010). The Practice of Social Research. Cengage Learning.
• Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. Routledge.
• Oliver, V. (2010). 301 Smart Answers to Tough Business Etiquette Questions. Skyhorse Publishing.
• Wilson, J. (2010). Essentials of Business Research: A Guide to Doing Your Research Project. SAGE Publications.
• Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42, 116–131.
• https://research-methodology.net/research-methodology/reliability-validity-and-repeatability/
• https://www.formpl.us/blog/research-reliability-validity
• https://www.fullstory.com/qualitative-data/
• https://dtpsychology.wordpress.com/2013/03/21/the-four-levels-of-measurement-noir-understanding-the-differences-between-types-of-data/
