
Chapter 3 notes

The document outlines key concepts in hypothesis testing, including the importance of defining observable variables and conditions for observation. It discusses different types of variables (continuous, discrete, quantitative, qualitative) and measurement scales (nominal, ordinal, interval, ratio), as well as reliability and validity in research. Additionally, it addresses participant reactivity, experimenter bias, and strategies to minimize these biases, ensuring accurate and reliable research outcomes.


In order to test a hypothesis, you must determine:
 What you will observe
 How you will observe it
 Under what conditions it will be observed

Variables
 Any value or characteristic that can change or vary from one person to
another or from one situation to another
o Must be observable
 Can be directly or indirectly measured
o Must be replicable
 One that can be consistently measured

Construct/Hypothetical Construct
 What we use to describe what we actually observed
o A conceptual variable that cannot be directly observed

External factors of the construct
 An observable behavior or event that is presumed to reflect the construct
itself
o The operational definition

Continuous or discrete variables


 Continuous
o Measured along a continuum, at any place beyond the decimal point;
it can be measured in whole units or fractional units
 Discrete
o Measured in whole units or categories that are not distributed along a
continuum

Quantitative or qualitative
 Quantitative can be continuous or discrete
 Qualitative can only be discrete
 Quantitative varies by the amount
o Numeric value and often collected by measuring or counting

 Qualitative varies by class
o Often a category or label for the behaviors and events researchers
observe, and so describes nonnumeric aspects of phenomena

Scales of Measurement
 Rules for how the properties of numbers can change with different uses
 4 levels
o Nominal, ordinal, interval and ratio

 All measured by three properties
o Order
 Does a larger number indicate a greater value than a smaller
number?
o Differences
 Does subtracting one set of numbers represent some meaningful
value?
o Ratio
 Does dividing, or taking the ratio of, one set of numbers
represent some meaningful value?
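The three properties above jointly determine the scale. A minimal sketch in Python (the mapping and function names are my own illustration, not from the notes):

```python
# Each scale of measurement is defined by which of the three
# properties (order, differences, ratio) its numbers carry.
SCALE_PROPERTIES = {
    "nominal":  {"order": False, "differences": False, "ratio": False},
    "ordinal":  {"order": True,  "differences": False, "ratio": False},
    "interval": {"order": True,  "differences": True,  "ratio": False},
    "ratio":    {"order": True,  "differences": True,  "ratio": True},
}

def classify_scale(order, differences, ratio):
    """Return the scale of measurement matching the given properties."""
    target = {"order": order, "differences": differences, "ratio": ratio}
    for scale, props in SCALE_PROPERTIES.items():
        if props == target:
            return scale
    return None

# Temperature in Celsius has order and meaningful differences but no true zero:
print(classify_scale(order=True, differences=True, ratio=False))  # interval
```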

Nominal
 Numbers identify something or someone; they do not provide additional
information
o Zip code, license plate numbers, credit card numbers, ID numbers,
telephone numbers, SS numbers
 Identify locations, vehicles or individuals
 Measurements in which a number is assigned to represent something or
someone; often a coded value

Coding
 The procedure of converting a categorical variable to numeric values
o Race, sex, nationality, sexual orientation, hair/eye color, season of
birth, marital status (demographic/personal information)
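A hedged sketch of coding in Python: assigning arbitrary numeric codes to a categorical variable (the category labels and the order-of-appearance rule are my own illustration):

```python
# Coding: convert a categorical variable to numeric values.
# Codes are assigned in order of first appearance; the numbers are
# nominal-scale labels and carry no quantitative meaning.
def code_variable(values):
    """Map each distinct category to a numeric code; return the map and coded list."""
    codes = {}
    for v in values:
        if v not in codes:
            codes[v] = len(codes) + 1
    return codes, [codes[v] for v in values]

seasons = ["winter", "summer", "winter", "fall"]
codes, coded = code_variable(seasons)
print(codes)   # {'winter': 1, 'summer': 2, 'fall': 3}
print(coded)   # [1, 2, 1, 3]
```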

Ordinal
 One that conveys order alone
o Finishing order in a competition, education level, ranking
 Indicate only that one value is greater than or less than another, so
differences between ranks do not have meaning
o Cannot say there is an equal difference in points

Interval
 Can be understood by two defining principles
o Equidistant scales

 Distributed in equal units


o No true zero
 A true zero is when the value 0 truly indicates nothing on a
scale of measurement; interval scales lack one
 Rating scales
o Think of the rating you complete at the end of a survey: rate your
experience from 1 to 5

Ratio
 Measurements that have a true zero and are equidistant
o Any ratio scale value includes a value equal to 0 that indicates the
absence of the phenomenon being observed

Reliability
 A replicable variable is one that has a reliable measurement
o Consistent, stable or repeatable across measures or across
observations
 3 types
o Test-retest reliability
o Internal consistency
o Interrater reliability

Test-Retest reliability
 Extent to which measurements or observations are consistent across time
o When the measure/observation is taken at "Time 1" and again at
"Time 2"
 Stable measure
o Consistent over time

 Key advantage:
o Determine the extent to which items or measures are replicable or
consistent over time
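Test-retest reliability is commonly indexed by the correlation between the two sets of scores. A self-contained sketch (the scores are invented illustration data, not from the notes):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation: covariation of x and y over the product of their spreads."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Same five participants measured at Time 1 and again at Time 2.
time1 = [10, 12, 9, 15, 11]
time2 = [11, 12, 10, 14, 11]
# A correlation near 1 indicates a stable (consistent over time) measure.
print(round(pearson_r(time1, time2), 2))
```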

Internal Consistency
 Measure of reliability used to determine the extent to which multiple items
used to measure the same variable are related
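One common index of internal consistency is Cronbach's alpha. A minimal sketch, with invented item scores (the formula is the standard one; the data are illustration only):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from a list of per-item score lists (same participants in each)."""
    k = len(items)
    sum_item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each participant's total
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Three items intended to measure the same variable, five participants each.
item_scores = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 3],
    [5, 5, 2, 5, 4],
]
# Values closer to 1 indicate the items are more strongly related.
print(round(cronbach_alpha(item_scores), 2))
```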

Interrater reliability (Interobserver reliability)
 Measure for the extent to which two or more raters of the same behavior or
event are in agreement with what they observed
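The simplest interrater index is percent agreement: the proportion of observations on which the raters match. A sketch with invented ratings (more sophisticated indices, such as Cohen's kappa, also correct for chance agreement):

```python
def percent_agreement(rater1, rater2):
    """Proportion of observations on which two raters gave the same rating."""
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return matches / len(rater1)

# Two raters coding the same five observed behaviors.
rater1 = ["aggressive", "passive", "passive", "aggressive", "passive"]
rater2 = ["aggressive", "passive", "aggressive", "aggressive", "passive"]
print(percent_agreement(rater1, rater2))  # 0.8
```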

Validity
 A valid variable is one that is correct or accurately observed
o Valid inasmuch as we measure what we intended to measure
 i.e., if we are measuring attraction, the measure is valid if we
are indeed measuring attraction
 Three types of validity
o Construct validity
o Criterion-related validity
o Content validity

Construct Validity
 The extent to which an operational definition for a variable or construct is
actually measuring that variable or construct

Criterion-related Validity
 The extent to which scores obtained on some measure can be used to infer or
predict a criterion or expected outcome
 Has many subtypes
o If one subtype is met, there is enough evidence to demonstrate validity

Content Validity
 Determines whether the contents of a measure are adequate to capture or
represent the construct
o Items/content of a measure adequately reflect all the features of the
construct being measured

Face Validity
 The extent to which a measure for a variable or construct appears to measure
what it is supposed to measure

Intervention Fidelity
 Intervention is consistently delivered (reliability) and as it was intended to be
delivered (validity)
o How much of an intervention is delivered as it was intended
o A higher percentage means the intervention is delivered how it was
intended
Participant Reactivity
 Reaction or response participants have when they know they are being
observed or measured
 3 types
o Participant expectancy
 Overly cooperative; behaving to please, or consistent with how
the researcher wants them to behave
o Evaluation apprehension
 Overly apprehensive; withholding information the researcher is
trying to study
o Participant reluctance
 Overly antagonistic; behaving in ways that contradict how the
researcher wants them to behave

Strategies to minimize participant reactivity:
 Reassure confidentiality
o Informed consent forms include statement of confidentiality or
anonymity; reinforce their responses will not be revealed
 Helps with evaluation apprehension
 Use deception when ethical
o Expectancy and reluctance happen when participants think they know
the research hypothesis; it is justified to use deception (when ethical)
when the true intent of the study would otherwise be obvious
 Minimize demand characteristics
o The setting itself can give participants clues about how to react,
typically in ways that promote research hypotheses
 Demand Characteristics
 Any feature or characteristic of a research setting that
may reveal the hypothesis being tested or give the
participant a clue regarding how he or she is expected to
behave

Experimenter bias
 The extent to which the behavior of a researcher or experimenter
intentionally or unintentionally influences the results of a study
o Can happen anytime a researcher behaves or sets up a study in a way
that facilitates results in the direction that is predicted

Expectancy effects
 Preconceived ideas or expectations regarding how participants should behave
or what participants are capable of doing
o Can often lead to experimenter bias

Strategies to minimize experimenter bias:
 Get a second opinion
o Ask for feedback to ensure there is no bias in the plan before
conducting the study
 Standardize the research procedures
o Ensure that all participants are treated the same
 Conduct a double-blind study
o Blind study
 One in which the researcher or participants are unaware of the
condition to which participants are assigned
o Double-blind study
 BOTH the researcher and participants are unaware of the
conditions to which participants are assigned
 When the researcher is blind to the predicted outcome,
the potential for experimenter bias is minimal

Sensitivity of a measure
 The extent to which a measure can change in the presence of a manipulation
o i.e., measuring attention in increments of 5 minutes; the measure would
not be "sensitive" to smaller differences in attention span between 1
and 4 minutes
 Range effect
o Typically occurs when scores for a measure are clustered at one
extreme
 Ceiling effect
o Scores that are clustered very high
 Occurs when a measure is too easy or obvious
 Floor effect
o Scores that are clustered very low
 Occurs when a measure is too difficult or confusing

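A rough sketch of how one might flag a range effect in collected scores. The 50% clustering threshold is an arbitrary choice of mine, not a standard from the notes:

```python
def range_effect(scores, scale_min, scale_max, threshold=0.5):
    """Flag a ceiling or floor effect when scores cluster at one extreme of the scale."""
    at_top = sum(s == scale_max for s in scores) / len(scores)
    at_bottom = sum(s == scale_min for s in scores) / len(scores)
    if at_top >= threshold:
        return "ceiling effect"    # measure may be too easy or obvious
    if at_bottom >= threshold:
        return "floor effect"      # measure may be too difficult or confusing
    return "no range effect"

# Most scores at the scale maximum of 10 suggest a ceiling effect.
print(range_effect([10, 10, 9, 10, 10, 8], scale_min=0, scale_max=10))
```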
Strategies to maximize the sensitivity of a measure and minimize range effects:
 Perform a thorough literature review
o Review whether a measure for the construct has already been used;
review what has worked and apply it to your own study
 Conduct a pilot study
o If a new measure, start small using a pilot study

 Can evaluate whether the measure is sensitive to detecting
changes in the presence of the manipulation before spending
time/money on a full-scale study
 Include manipulation checks
o Consider that the manipulation, not the measure, may be the reason
you are not detecting differences between groups
 Use multiple measures
o Using two or more measures for the same variable can be more
informative and make it more likely that at least one measure will
detect differences between groups.

Pilot Study
 Small preliminary study used to determine the extent to which a
manipulation or measure will show an effect of interest

Manipulation check
 A procedure used to check or confirm that a manipulation in a study had the
effect that was intended

Observed score = true score + error
 Error
o Any influence in the response of a participant that can cause variability
in his or her response
 Potential errors are not well understood and therefore difficult to
anticipate
o When a result is not replicated, always consider potential sources of
error in measurement that can explain why a result was not replicated
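The equation above can be illustrated by simulation: with random error that averages to zero, repeated observations scatter around the true score, and their spread reflects the measurement error. The particular numbers (true score 50, error SD 5) are invented for illustration:

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is reproducible

true_score = 50
# Observed score = true score + error, with normally distributed error (SD = 5).
observed = [true_score + random.gauss(0, 5) for _ in range(1000)]

print(round(statistics.mean(observed)))   # close to the true score of 50
print(round(statistics.stdev(observed)))  # close to the error SD of 5
```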
