Causal Research Design CH11
Causality: Causality applies when the occurrence of X increases the probability of the occurrence of Y.
Marketing effects are caused by multiple variables and the relationship between cause and effect tends to be
probabilistic. Moreover, we can never prove causality (i.e. demonstrate it conclusively); we can only infer a cause-and-effect relationship.
Before making causal inferences, or assuming causality, three conditions must be satisfied:
1. Concomitant variation is the extent to which a cause, X, and an effect, Y, occur together or vary together in
the way predicted by the hypothesis under consideration.
2. The time order of occurrence condition states that the causing event must occur either before or
simultaneously with the effect; it cannot occur afterwards.
3. The absence of other possible causal factors means that the factor or variable being investigated should be
the only possible causal explanation.
• Test units: individuals, organizations, or other entities whose response to the independent variables or treatments
is being examined. Test units may include consumers, stores, or geographical areas. In the advertising creativity
example, the test units were consumers.
• Dependent variables: variables that measure the effect of the independent variables on the test units. These
variables may include sales, profits, and market shares. In the advertising creativity example, the dependent variable
was a measure of favorable attitudes towards the brand being advertised.
• Extraneous variables: Extraneous variables are all variables other than the independent variables that affect the
response of the test units. These variables can confound the dependent variable measures in a way that weakens or
invalidates the results of the experiment. In the advertising creativity example, product categories, brands, visuals
and number of words were extraneous variables that had to be controlled.
• Experiment: An experiment is formed when the researcher manipulates one or more independent variables and
measures their effect on one or more dependent variables, while controlling for the effect of extraneous variables.
• Experimental design: An experimental design is a set of procedures specifying: (1) the test units and how these
units are to be divided into homogeneous subsamples; (2) what independent variables or treatments are to be
manipulated; (3) what dependent variables are to be measured; and (4) how the extraneous variables are to be
controlled.
Definition of symbols:
X = the exposure of a group to an independent variable, treatment or event, the effects of which are to be determined
O = the process of observation or measurement of the dependent variable on the test units or group of units
R = the random assignment of participants or groups to separate treatments.
• Horizontal alignment of symbols implies that all those symbols refer to a specific treatment group.
• Vertical alignment of symbols implies that those symbols refer to activities or events that occur simultaneously.
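As an illustration of this notation, the classic pretest-posttest control group design can be written as:

```
Experimental group:   R   O1   X   O2
Control group:        R   O3       O4
```

Each row is one treatment group; the vertically aligned observations (O1 with O3, O2 with O4) occur simultaneously, and R indicates that participants were randomly assigned to the two groups.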
Validity in experimentation:
When conducting an experiment, a researcher has two goals: (1) to draw valid conclusions about the effects of
independent variables on the study group; and (2) to make valid generalizations to a larger population of interest.
Internal validity: A measure of the accuracy of an experiment. It assesses whether the manipulation of the
independent variables, or treatments, actually caused the effects on the dependent variable(s), or whether
the observed effects on the test units could instead have been caused by variables other than the
treatment. If the observed effects are influenced or confounded by extraneous variables, it is difficult to
draw valid inferences about the causal relationship.
Internal validity is the basic minimum that must be present in an experiment before any conclusion about
treatment effects can be made. Without internal validity, the experimental results are confounded. Control of
extraneous variables is a necessary condition for establishing internal validity.
External validity: A determination of whether the cause-and-effect relationships found in the experiment can
be generalized: to what populations, settings, times, independent variables and dependent variables can the
results be projected?
It is desirable to have an experimental design that has both internal and external validity, but in applied
marketing research we often have to trade one type of validity for another. To control for extraneous
variables, a researcher may conduct an experiment in an artificial environment. This enhances internal
validity, but it may limit the generalizability of the results, thereby reducing external validity.
Factors that threaten internal validity may also threaten external validity, the most serious of these being
extraneous variables.
Extraneous Variables:
We classify extraneous variables in the following categories: history, maturation, testing effects, instrumentation,
statistical regression, selection bias and mortality.
1. History: Specific events that are external to the experiment but that occur at the same time as the
experiment.
2. Maturation: changes in the test units themselves that occur with the passage of time.
3. Testing effects: effects caused by the process of experimentation itself. There are two kinds:
• Main testing effect: a prior observation affects a later observation.
• Interactive testing effect: a prior measurement affects the test unit's response to the independent variable.
4. Instrumentation: changes in the measuring instrument, in the observers, or in the scores themselves.
5. Statistical regression: participants with extreme scores move closer to the average score during the
experiment.
6. Selection bias: improper assignment of participants to treatment conditions.
7. Mortality: loss of participants while the experiment is in progress.
Confounding variables: Synonymous with extraneous variables, the term confounding variables is used to
emphasize that extraneous variables can confound the results by influencing the dependent variable.
There are four ways of controlling extraneous variables: randomization, matching, statistical control, and design
control.
1. Randomization refers to the random assignment of participants to experimental groups by using random
numbers. Treatment conditions are also randomly assigned to experimental groups.
2. Matching: involves matching participants on a set of key background variables before assigning them to the
treatment conditions.
3. Statistical control involves measuring the extraneous variables and adjusting for their effects through
statistical analysis.
4. Design control involves the use of experiments designed to control specific extraneous variables.
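The first of these methods, randomization, can be sketched in a few lines of Python. This is a minimal illustration with hypothetical participant IDs, not a full experimental procedure:

```python
import random

# Hypothetical pool of twenty participants to be split across two treatments.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.seed(42)              # fixed seed so the assignment is reproducible
random.shuffle(participants)

# Random assignment: first half to treatment A, second half to treatment B,
# so extraneous participant characteristics are spread across groups by chance.
group_a = participants[:10]
group_b = participants[10:]

print("Treatment A:", group_a)
print("Treatment B:", group_b)
```

In practice the treatment conditions themselves would also be randomly assigned to the groups, as the text notes.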
Demand artefacts: Responses given because participants attempt to guess the purpose of the experiment and
respond accordingly.
These limitations have given rise to the use of grounded theory approaches, especially in developing an
understanding of consumer behavior that may be impossible to encapsulate through experiments.