Causal Research Design CH11

The document discusses causal research design and experimentation. It defines key concepts like independent and dependent variables. It also explains how to establish internal validity and control for extraneous variables through randomization, matching, and design control. Experimental designs are classified into pre-experimental, true experimental, quasi-experimental, and statistical categories.

Causal research design: Experimentation

Causality: Causality applies when the occurrence of X increases the probability of the occurrence of Y.

Marketing effects are caused by multiple variables and the relationship between cause and effect tends to be
probabilistic. Moreover, we can never prove causality (i.e. demonstrate it conclusively); we can only infer a cause-
and-effect relationship.

Conditions for causality:

Before making causal inferences, or assuming causality, three conditions must be satisfied:

1. Concomitant variation is the extent to which a cause, X, and an effect, Y, occur together or vary together in
the way predicted by the hypothesis under consideration.
2. The time order of occurrence condition states that the causing event must occur either before or
simultaneously with the effect; it cannot occur afterwards.
3. The absence of other possible causal factors means that the factor or variable being investigated should be
the only possible causal explanation.
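The first condition, concomitant variation, can be checked empirically by seeing whether X and Y vary together, for example via a correlation coefficient. A minimal sketch, using invented monthly advertising-spend (X) and sales (Y) figures purely for illustration:

```python
# Checking concomitant variation between a hypothetical advertising-spend
# variable X and a sales variable Y via their Pearson correlation.
# All data values below are invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical monthly data: ad spend (X) and sales (Y)
ad_spend = [10, 12, 15, 18, 20, 25]
sales = [100, 110, 130, 150, 160, 190]

r = pearson_r(ad_spend, sales)  # close to 1: strong concomitant variation
```

A high correlation satisfies only the first condition; it says nothing about time order or about other possible causal factors, which is why correlation alone never establishes causality.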

Definitions and concepts:


• Independent variables: variables or alternatives that are manipulated (i.e. the levels of these variables are changed
by the researcher) and whose effects are measured and compared. These variables, also known as treatments, may
include price levels, package designs and advertising themes. In the advertising creativity example, the independent
variable was the text used to communicate a message (i.e. ‘more’ or ‘less’ creative).

• Test units: individuals, organizations, or other entities whose response to the independent variables or treatments
is being examined. Test units may include consumers, stores, or geographical areas. In the advertising creativity
example, the test units were consumers.

• Dependent variables: variables that measure the effect of the independent variables on the test units. These
variables may include sales, profits, and market shares. In the advertising creativity example, the dependent variable
was a measure of favorable attitudes towards the brand being advertised.

• Extraneous variables: Extraneous variables are all variables other than the independent variables that affect the
response of the test units. These variables can confound the dependent variable measures in a way that weakens or
invalidates the results of the experiment. In the advertising creativity example, product categories, brands, visuals
and number of words were extraneous variables that had to be controlled.

• Experiment: An experiment is formed when the researcher manipulates one or more independent variables and
measures their effect on one or more dependent variables, while controlling for the effect of extraneous variables.

• Experimental design: An experimental design is a set of procedures specifying: (1) the test units and how these
units are to be divided into homogeneous subsamples; (2) what independent variables or treatments are to be
manipulated; (3) what dependent variables are to be measured; and (4) how the extraneous variables are to be
controlled.

Definition of symbols:
X = the exposure of a group to an independent variable, treatment or event, the effects of which are to be determined
O = the process of observation or measurement of the dependent variable on the test units or group of units
R = the random assignment of participants or groups to separate treatments.

In addition, the following conventions are adopted:

• Movement from left to right indicates movement through time.

• Horizontal alignment of symbols implies that all those symbols refer to a specific treatment group.

• Vertical alignment of symbols implies that those symbols refer to activities or events that occur simultaneously.
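As an illustration of these conventions, the pre-test–post-test control group design discussed later can be diagrammed as follows (the subscripts on O simply number the observations):

```
EG:  R   O1   X   O2
CG:  R   O3       O4
```

Reading left to right, both groups are randomly assigned (R) and pre-tested (O1, O3); only the experimental group then receives the treatment (X); finally both groups are post-tested (O2, O4) at the same time, as shown by the vertical alignment.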

Validity in experimentation:
When conducting an experiment, a researcher has two goals: (1) to draw valid conclusions about the effects of
independent variables on the study group; and (2) to make valid generalizations to a larger population of interest.

 Internal validity: a measure of the accuracy of an experiment. It assesses whether the manipulation of the
independent variables, or treatments, actually caused the effects observed on the dependent variable(s), or
whether those effects could instead have been caused by variables other than the treatment. If the observed
effects are influenced or confounded by extraneous variables, it is difficult to draw valid inferences about
the causal relationship.

Internal validity is the basic minimum that must be present in an experiment before any conclusion about
treatment effects can be made. Without internal validity, the experimental results are confounded. Control of
extraneous variables is a necessary condition for establishing internal validity.

 External validity: a determination of whether the cause-and-effect relationships found in the experiment can
be generalized: to what populations, settings, times, independent variables and dependent variables can the
results be projected?

It is desirable to have an experimental design that has both internal and external validity, but in applied
marketing research we often have to trade one type of validity for another. To control for extraneous
variables, a researcher may conduct an experiment in an artificial environment. This enhances internal
validity, but it may limit the generalizability of the results, thereby reducing external validity.

Factors that threaten internal validity may also threaten external validity, the most serious of these being
extraneous variables.

Extraneous Variables:
We classify extraneous variables into the following categories: history, maturation, testing effects, instrumentation,
statistical regression, selection bias and mortality.
1. History: specific events that are external to the experiment but that occur at the same time as the
experiment.
2. Maturation: changes in the test units themselves that occur with the passage of time.
3. Testing effects: effects caused by the process of experimentation itself. Two kinds are distinguished:
• Main testing effect: a prior observation affects a later observation.
• Interactive testing effect: a prior measurement affects the test unit's response to the independent variable.
4. Instrumentation: changes in the measuring instrument, in the observers or in the scores themselves.
5. Statistical regression: participants with extreme scores move closer to the average score during the
experiment.
6. Selection bias: improper assignment of participants to treatment conditions.
7. Mortality: loss of participants while the experiment is in progress.

Controlling extraneous variables:

Confounding variables: Synonymous with extraneous variables, confounding variables are used to illustrate that
extraneous variables can confound the results by influencing the dependent variable.

There are four ways of controlling extraneous variables: randomization, matching, statistical control, and design
control.

1. Randomization refers to the random assignment of participants to experimental groups by using random
numbers. Treatment conditions are also randomly assigned to experimental groups.
2. Matching: involves matching participants on a set of key background variables before assigning them to the
treatment conditions.
3. Statistical control involves measuring the extraneous variables and adjusting for their effects through
statistical analysis.
4. Design control involves the use of experiments designed to control specific extraneous variables.
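The first of these, randomization, can be sketched in a few lines: shuffle the pool of participants and split it into treatment and control groups, so that extraneous participant characteristics are, on average, equalized across groups. The participant IDs below are hypothetical.

```python
# A minimal sketch of randomization: participants are randomly assigned
# to experimental and control groups. IDs are hypothetical.
import random

def randomize(participants, seed=None):
    """Shuffle participants and split them into two equal-sized groups."""
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (experimental group, control group)

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical test units
eg, cg = randomize(participants, seed=42)
```

Matching works differently: instead of relying on chance to balance the groups, participants are first paired on key background variables (e.g. age, usage rate) and the members of each pair are then assigned to different conditions.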

A classification of experimental designs:


 Pre-experimental designs: designs that do not control for extraneous factors by randomization.
 True experimental designs: distinguished by the fact that the researcher can randomly assign participants to experimental groups and also randomly assign treatments to experimental groups.
 Quasi-experimental designs: apply part of the procedures of true experimentation but lack full experimental control.
 Statistical designs: allow for the statistical control and analysis of external variables.
Pre-experimental designs:
1. One-shot case study: a single group of participants is exposed to a treatment X, and then a single measurement of the dependent variable is taken (also known as the after-only design).
2. One-group pre-test–post-test design: a group of participants is measured twice, before and after exposure to the treatment.
3. Static group design: one group, called the experimental group (EG), is exposed to the treatment, and the other, called the control group (CG), is not. Measurements on both groups are made only after the treatment, and participants are not assigned at random.

True experimental designs:
1. Pre-test–post-test control group design: the experimental group is exposed to the treatment, but the control group is not. Pre-test and post-test measures are taken on both groups.
2. Post-test-only control group design: the experimental group is exposed to the treatment, but the control group is not, and no pre-test measure is taken.
3. Solomon four-group design: should be considered when information about the changes in the attitudes of individual participants is desired. It overcomes the limitations of the pre-test–post-test control group and post-test-only control group designs in that it explicitly controls for the interactive testing effect, in addition to controlling for all the other extraneous variables (but it is very expensive and time consuming).

Quasi-experimental designs:
1. Time series design: involves a series of periodic measurements on the dependent variable for a group of participants. The treatment is then administered by the researcher or occurs naturally. After the treatment, periodic measurements are continued to determine the treatment effect.
2. Multiple time series design: a time series design that includes another group of participants to serve as a control group.

Statistical designs:
1. Randomized block design: useful when there is only one major external variable that might influence the dependent variable. The participants are blocked, or grouped, on the basis of the external variable. The researcher must be able to identify and measure the blocking variable. By blocking, the researcher ensures that the various experimental and control groups are matched closely on the external variable.
2. Latin square design: allows the researcher to control statistically two non-interacting external variables as well as to manipulate the independent variable.
3. Factorial design: a statistical experimental design used to measure the effects of two or more independent variables at various levels and to allow for interactions between variables. The main disadvantage of a factorial design is that the number of treatment combinations increases multiplicatively with the number of variables or levels.
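The multiplicative growth of treatment combinations in a factorial design can be made concrete with a small sketch. The factor names and levels below are illustrative assumptions (e.g. 3 price levels × 2 package designs × 2 advertising themes = 12 cells):

```python
# A minimal sketch of how treatment combinations multiply in a factorial
# design. Factors and levels are hypothetical: 3 x 2 x 2 = 12 cells.
from itertools import product

factors = {
    "price": ["low", "medium", "high"],
    "package": ["A", "B"],
    "ad_theme": ["rational", "emotional"],
}

# Every cell of the design is one combination of factor levels.
cells = list(product(*factors.values()))
n_cells = len(cells)  # 12 treatment combinations
```

Adding a fourth factor with three levels would triple the number of cells to 36, which is why full factorial designs quickly become expensive to run.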

Laboratory vs Field experiments:


Laboratory environment: An artificial setting for experimentation in which the researcher constructs the desired
conditions.

Field environment: An experimental location set in actual market conditions.

Demand artefacts: Responses given because participants attempt to guess the purpose of the experiment and
respond accordingly.

Experimental vs non-experimental designs:


Experimentation has limitations of time (particularly if the researcher is interested in measuring the long-term
effects of the treatment), cost and administration (particularly in a field environment). These limitations
mean that experimental techniques have a relatively low penetration into marketing research practice.

 These limitations have given rise to the use of grounded theory approaches, especially in developing an
understanding of consumer behavior that may be impossible to encapsulate through experiments.

Test Marketing (Market testing):


An application of a controlled experiment done in limited, but carefully selected, test markets. It involves a replication
of the planned national marketing program for a product in test markets.
Often, the marketing mix variables (independent variables) are varied in test marketing and the sales (dependent
variable) are monitored so that an appropriate national marketing strategy can be identified.

 The two major objectives of test marketing are:


(1) to determine market acceptance of the product.
(2) to test alternative levels of marketing mix variables.

 Test-marketing procedures may be classified as:


(1) standard test markets: the product is sold through regular distribution channels.
(2) controlled and mini-market tests: conducted by an outside research company in field experimentation. The
research company guarantees distribution of the product in retail outlets that represent a predetermined
percentage of the market.
(3) simulated test marketing: A quasi-test market in which participants are preselected; they are then
interviewed and observed on their purchases and attitudes towards the product.
