Lesson 13

Experimental Psychology

Why we need
Statistics
Denny Vell M. Devaras, RPm
Chapter Objectives

● Learn how hypotheses are tested in experiments
● Understand the meaning of significance levels
● Learn how to summarize data with descriptive statistics
STATISTICS
Statistics are quantitative measurements of samples.
Statistics provide researchers with objective and consensual
techniques for describing their results.
We work with statistics because they allow us to objectively
evaluate the data we worked so hard to collect.
TYPES OF STATISTICS

Descriptive Statistics – describe sample tendency and variability

Inferential Statistics – allow us to draw conclusions about a parent
population from a sample
STATISTICAL INFERENCE
Statistical Inference – the process of drawing conclusions or making
predictions about a population based on sample data

Population – all people, animals, or objects that have at least one
characteristic in common
Sample – a group that represents the larger population
Randomly selected samples allow generalization to the population
Outlier – an observation or data point that significantly differs from other
observations in a dataset
Example
Suppose two groups of students in a class are asked to report their
weights.
To draw a statistical inference, we first compute the mean (average)
weight of each group.
Sample: 12 students in each group
Mean weight: 152 pounds (Group 1); 147 pounds (Group 2)
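The computation above can be sketched in Python. The individual weights below are invented so that the group means come out to the slide's 152 and 147 pounds:

```python
from statistics import mean

# Hypothetical weights in pounds for two groups of 12 students each.
# The individual values are made up; only the group means (152 and 147)
# come from the slide.
group1 = [150, 155, 148, 160, 152, 149, 151, 154, 150, 153, 151, 151]
group2 = [145, 150, 143, 155, 147, 144, 146, 149, 145, 148, 146, 146]

print(mean(group1))  # 152
print(mean(group2))  # 147
```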
Variability
Variability is the amount of change or fluctuation we see in something. It is
one of the most important concepts you need to understand to analyze the
results of experiments
TESTING THE NULL HYPOTHESIS
In an experiment, we assume the independent variable has no effect until
the statistics show otherwise
The null hypothesis (H0) says that any differences we see between
treatments amount to nothing
We hold to the assumption that the null hypothesis is correct until the
evidence shows the assumption can be rejected.
When the experiment is over, we would like to be able to reject the null
hypothesis by showing that the effect produced by the independent
variable led to real differences in the responses of the groups
When we reject the null hypothesis, the results are statistically significant
TESTING THE NULL HYPOTHESIS
There is no way to directly test the alternative hypothesis (H1)
Therefore, we can never really prove that our research hypothesis is
correct
We can only show that the null hypothesis is probably wrong.
We reject the null hypothesis when the difference between treatments is
so large that chance variations cannot explain it
The Process of Statistical Inference
Researchers use the following process to apply statistical inference
1. Consider the population to be sampled: Because of variability, individual
scores on the dependent variable will differ
2. Consider different random samples within the population: Their scores on
the dependent variable will also differ because of normal variability.
Assume the null hypothesis is correct
3. Apply the treatment conditions to randomly selected, randomly assigned
samples
4. After the treatment, if the samples appear to belong to different
populations, reject the null hypothesis.
Note:
Whether or not we will reject the null
hypothesis depends largely on variability
The more variability, the harder it will be to
reject the null hypothesis
Applying Statistical Inference: An Example
Sample Hypothesis: Time passes quickly when you are having fun
This is a directional hypothesis; it predicts the way the difference
between groups will go
Two-Group Between-Subjects Design
Operational definition of “having fun”: looking at a collection of cartoons
Experimental group will be given the cartoons with punchlines (captions)
Control group will be given the cartoons minus the captions
Both groups are given exactly 10 minutes (but they are not told this)
After the subjects examine the cartoons, they are asked to estimate the
amount of time that has passed since they started.
Applying Statistical Inference: An Example

H0 = The time estimates of the two groups will look similar
H1 = The experimental group will make shorter time estimates than the
control group will
Applying Statistical Inference: An Example

Mean time estimates: Experimental Group: 9.6 mins; Control Group: 14.2 mins

● To the experimental group, it seemed like they had been at their task for
less than the actual clock time
● The control group estimated the elapsed time to be longer than it really
was
● Can we conclude that having fun makes the time pass more quickly? No.
Not yet.
Applying Statistical Inference: An Example
First we have to consider the variability in the data
• What is the population? How did we draw the sample?
• Sample: College sophomores
• There could be differences in their abilities to estimate elapsed time
(some accurate, some inaccurate)
• If we could somehow test all college sophomores and ask them to
estimate the length of a 10-minute interval, we might get a frequency
distribution with a normal curve (most statistical tools used by
psychologists include this kind of assumption)
• Test statistics have known distributions that we can use to make
inferences about our data
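As a sketch of how a test statistic summarizes such data, here is a Welch-style t statistic computed in plain Python. The individual time estimates are invented so the group means match the slide's 9.6 and 14.2 minutes; this is an illustration, not the original analysis:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical time estimates in minutes; the individual values are
# invented so the group means match the slide (9.6 and 14.2).
experimental = [8, 10, 9, 11, 9, 10.6]
control = [13, 15, 14, 16, 13, 14.2]

def welch_t(a, b):
    """Welch's t statistic: the mean difference scaled by its standard error."""
    return (mean(a) - mean(b)) / sqrt(
        stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)
    )

t = welch_t(experimental, control)
print(round(t, 2))  # about -6.93; a t this far from 0 is unlikely under H0
```

The statistic is negative because the experimental group's estimates are shorter, the direction H1 predicts; comparing it against the known t distribution gives the p-value.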
Applying Statistical Inference: An
Example
We test the null hypothesis (H0) for two reasons
• First, it is the most likely explanation of what has occurred
• Second, there is no way we can directly verify the
alternative to the null hypothesis
Choosing a significance level
Significance Level - a criterion for deciding whether to reject the null
hypothesis or not
• We generally reject the null hypothesis if the probability of obtaining this
pattern of data by chance alone is 5% (p<.05)
• If we want a stricter criterion, we could choose a significance level of p<.01
• An even stricter criterion would be p<.001 (1 in 1000)
• To make a valid test of a hypothesis, we must think ahead and decide
what the significance level will be before running the experiment
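The decision rule described above can be sketched as a tiny function (the names here are illustrative, not standard API):

```python
# Minimal sketch of the decision rule: fix the significance level
# *before* the experiment, then compare the obtained p-value against it.
ALPHA = 0.05  # significance level chosen in advance

def decide(p_value, alpha=ALPHA):
    # Reject H0 only when data this extreme would arise by chance
    # less than alpha of the time under the null hypothesis.
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.03))  # reject H0
print(decide(0.05))  # fail to reject H0 (5% "or more" is not enough)
```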
Applying Statistical Inference: An Example
• In the time-estimation experiment, we are testing the notion that time
passes quickly when a person is having fun
• Assume we had chosen p<.05 as the significance level for this
experiment.
• If the results could have occurred by chance 5% of the time or more, we
would not reject the null hypothesis
• If the results could have occurred by chance less than 5% of the time,
however, we are able to reject the null hypothesis:
• This is just another way of saying that the independent variable
apparently had an effect: It altered the behavior of the treatment groups.
• Our groups started out the same, but after the experiment their scores on
time estimation now look as if they came from two different populations.
Experimental Errors
Experimental errors are variations in subjects’ scores produced by
uncontrolled extraneous variables in the experimental procedure,
experimenter bias, or other influences on subjects not related to effects of the
independent variable
Experimental Errors

Type 1 Error: We reject the null hypothesis when it is really true
• The odds of making a Type 1 error are equal to the value we choose as
the significance level for rejecting the null hypothesis

Type 2 Error: We fail to reject the null hypothesis even though it is really
false
• The probability of making a Type 2 error is affected by the amount of
overlap between the populations being sampled.
Experimental Errors
• The probability of making a Type 2 error is represented by the Greek letter
β (beta)
• But say we knew that the odds of making a Type 2 error were equal to
exactly .75 in our experiment. Then the odds of making a correct decision
would be equal to 1 – .75, or .25. If we are likely to be wrong 3 out of 4
times (.75), then we should be right 1 out of 4 times (.25)
• The odds of correctly rejecting the null hypothesis when it is false are
always equal to 1 – β
• This 1-β is also referred to as the power of the statistical test
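The arithmetic in the β example above is simple enough to check directly:

```python
# Power = 1 - beta: the probability of correctly rejecting a false H0.
beta = 0.75        # assumed Type 2 error rate from the example above
power = 1 - beta
print(power)  # 0.25: right about 1 time in 4
```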
Experimental Errors
How can we reduce the value of β?
• Even though we cannot measure the precise value of β, we can reduce it
by increasing our sample size
• We can also reduce β by reducing the variability in our sample data
• We can also reduce β by using more powerful statistical tests, called
parametric tests
• We can also reduce β if we accept a less extreme significance level.
Experimental Errors
• The probability of making a Type 1 error is represented by the Greek letter
α (alpha)
• There is some chance (α) that we will reject the null hypothesis when we
should have conditionally retained it
• There is a 1 – α chance that we will fail to reject the null hypothesis when
that is the correct decision
The Odds of Finding Significance
The importance of variability
Variability – the amount of change or fluctuation we see in something
• As the amount of variability in the distribution goes up, the critical regions
fall farther from the center of the distribution
• When there is more variability, larger differences between means of
samples are required to reject the null hypothesis
One-tailed and Two-tailed Tests
One-tailed Test – A statistical procedure used when a directional prediction
has been made
• The critical region of the distribution of the test statistic is measured in just
one tail of the distribution

Two-tailed Test – A statistical procedure used when a non-directional
prediction has been made
• The critical region of the distribution of the test statistic is divided over both
tails of the distribution
One-tailed and Two-tailed Tests

Directional Hypothesis – A statement that predicts the exact pattern of
results that will be observed

Non-directional Hypothesis – A statement that predicts a difference
between treatment groups without predicting the exact pattern of results
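The difference between the two procedures shows up in where the critical value falls. A sketch using Python's standard normal distribution, assuming a z-type test at α = .05:

```python
from statistics import NormalDist

alpha = 0.05
z = NormalDist()  # standard normal sampling distribution (an assumption)

# One-tailed: the whole alpha sits in one tail of the distribution.
one_tailed_cut = z.inv_cdf(1 - alpha)
# Two-tailed: alpha is split, half in each tail, so the cutoff moves out.
two_tailed_cut = z.inv_cdf(1 - alpha / 2)

print(round(one_tailed_cut, 2))  # 1.64
print(round(two_tailed_cut, 2))  # 1.96
```

Because the two-tailed cutoff is farther from the center, a directional (one-tailed) test can reject H0 with a smaller observed difference, at the price of ignoring effects in the unpredicted direction.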
Inferential Statistics

Inferential Statistics – statistics that can be used as indicators of what is
going on in the population
• They are also called test statistics because they can be used to evaluate
results
• A test statistic is a numerical summary of what is going on in our data
• The larger the value of the test statistic, the more likely it is that the
independent variable produced a change in subjects’ responses
Organizing and summarizing data

Organizing Data
• Statistical work will go more quickly and be more accurate if you begin by
organizing the data and labeling them in a clear and orderly way.

Summarizing Data
• Raw Data – The data we record as we run the experiment
• Summary Data – Descriptive statistics computed from the raw data of an
experiment, including the measures of central tendency and variability
Measures of Central Tendency
Measures of Central Tendency – summary statistics that describe what is
typical of a distribution of scores
• Mode – the score that occurs most often
• Median – the score that divides the distribution in half, so that half of
the scores in the distribution fall above the median and half below
• Mean – the arithmetic average (add all the scores and divide by the total
number of scores)
• In a skewed distribution, the mean, median, and mode will be different,
and each can lead to a different impression about the data.
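Python's statistics module computes all three. With an illustrative, positively skewed set of scores, the three measures come apart just as described:

```python
from statistics import mean, median, mode

scores = [2, 3, 3, 4, 5, 7, 11]  # illustrative, positively skewed scores

print(mode(scores))    # 3  (most frequent score)
print(median(scores))  # 4  (middle score of the sorted list)
print(mean(scores))    # 5  (pulled upward by the high score, 11)
```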
Measures of Variability
Range – the difference between the largest and smallest scores in a set of
data
• The range is often a useful measure; it can be computed quickly, and it
gives a straightforward indication of the spread between the high and low
scores of the distribution
• The problem with using the range is that it does not reflect the precise
amount of variability in all the scores
Measures of Variability
Variance – the average squared deviation of scores from their mean
• The variance tells us something about how much scores are spread out,
or dispersed, around the mean of the data.

Standard Deviation (s) – the square root of the variance
• Reflects the average deviation of scores about the mean
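The three measures can be sketched with Python's statistics module. The scores below are illustrative; pvariance and pstdev treat the data as a whole population rather than a sample:

```python
from statistics import pvariance, pstdev

scores = [4, 8, 6, 5, 3, 2, 8, 9, 2, 5]  # illustrative scores, mean = 5.2

data_range = max(scores) - min(scores)   # reflects only the two extremes
print(data_range)                        # 7

print(pvariance(scores))                 # 5.76, average squared deviation
print(round(pstdev(scores), 2))          # 2.4, square root of the variance
```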
Thanks!
Questions?
