BESC1526 Pre-lecture reading: Lecture 4
4.1. What is the difference between inductive and
deductive thinking?
Inductive reasoning
Inductive reasoning (as opposed to deductive reasoning) is reasoning in which the premises
(a set of observations) are viewed as supplying strong evidence for the truth of the conclusion.
However, the conclusion of an inductive argument is only [highly] probable, based upon the
evidence given.
Simple example:
1. All swans that have ever been observed are black
2. Therefore, all swans are black
Statement 1 indicates that people have observed many swans and all have been the same colour –
strong evidence that there is no exception.
Statement 2 concludes that all swans in existence are the same colour [even though not all
swans have been observed] – because of this, the statement has some probability of being
incorrect.
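The swan example can be sketched in a few lines of Python (the observations below are made up purely for illustration) to show why an inductive conclusion is only probable: a single new observation can overturn it.

```python
# Illustrative sketch of inductive generalization (hypothetical observations):
# every swan observed so far is black, so we generalize to "all swans are black".
observed_swans = ["black", "black", "black", "black"]  # specific observations

all_black = all(colour == "black" for colour in observed_swans)
print("Generalization 'all swans are black' supported so far:", all_black)  # True

# The conclusion is only probable: one new observation can overturn it.
observed_swans.append("white")  # a white swan is later observed elsewhere
all_black = all(colour == "black" for colour in observed_swans)
print("Generalization still holds after the new observation:", all_black)  # False
```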
Inductive reasoning is the form of argument associated with a posteriori (empirical) thinking,
philosophical arguments, and the typical strategy used in everyday thinking to come to
conclusions. It is a reasoning strategy that uses specific observations to derive general
principles. Consider when you attempt to predict how your best friend will react to some
surprising news: you will base your conclusion on instances of how he/she has reacted to
similar news in the past.
Psychologists (and other scientists) use inductive reasoning to develop theories to test or
validate. Through careful observation of human behaviour, psychologists notice surprising but
consistent instances of human behaviour [for example, the specific observation that people are
reluctant to get involved in emergency situations involving others]. The process of inductive
reasoning is to move from particular/individual instances or observations to a broader
generalization (that all people are apathetic when it comes to helping victims of emergency
situations). From this general theory, psychologists would then use deductive reasoning to
design scientific research to determine whether the effect is real and what factors influence
it – for example, are there gender, cultural, or age differences in helping behaviour?
Deductive reasoning
Deductive reasoning (also known as logical deduction) is the process of reasoning from one
or more statements (also known as premises) to reach a logically certain conclusion. It differs
from inductive reasoning in that it relies on a priori reasoning from premises, and the
conclusion is certain (not just highly probable).
Deductive reasoning links premises with conclusions. If all premises are true, the terms are
clear, and the rules of deductive logic are followed, then the conclusion reached is necessarily
true.
Deductive reasoning (top-down logic) contrasts with inductive reasoning (bottom-up logic) in
the following way: in deductive reasoning, a conclusion is reached reductively by applying
general rules that hold over the entirety of a closed domain of discourse, narrowing the range
under consideration until only the conclusion(s) remains. In inductive reasoning, the conclusion
is reached by generalizing or extrapolating from specific observations, so there is epistemic
uncertainty. However, the inductive reasoning discussed here is not the same as the induction
used in mathematical proofs – mathematical induction is actually a form of deductive reasoning.
Simple example:
If statement 1 (or premise 1) is true – that is, all humans are mortal
and statement 2 (or premise 2) is true – that is, Socrates is a member of the human race
then statement 3 (the conclusion) has to be true – Socrates must be mortal, because mortality
is an attribute of being human.
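As an illustration only (the predicates below are a hypothetical toy encoding, not a standard library), the same syllogism can be expressed as rules: if the premises hold, the conclusion cannot fail.

```python
# Illustrative sketch of the syllogism above (hypothetical predicates).
def is_human(x):
    # Premise 2: Socrates is a member of the human race.
    return x in {"Socrates"}

def is_mortal(x):
    # Premise 1: all humans are mortal, so being human entails being mortal.
    return is_human(x)

# The conclusion follows necessarily from the premises:
print("Socrates is mortal:", is_mortal("Socrates"))  # True
```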
Consider watching the following video [run time: 6.43 minutes].
4.2. What is Karl Popper’s notion of falsification?
Sir Karl Popper (28 July 1902 – 17 September 1994) was an Austrian-British philosopher
and professor. He is generally regarded as one of the greatest philosophers of science of the
20th century.
Popper is known for his rejection of the classical inductivist view of the scientific method,
in favour of empirical deductivist strategies (notably falsification). Popper argued that a
theory in the empirical sciences can never be proven [that is, demonstrated to be 100% true in
all possible instances]. Researchers can complete research to support a theory, but some
exception may always arise in the future.
Popper argued that it is more logical to try to disprove a theory – that is, if a researcher
finds one instance where a theory fails to predict an event, then the whole theory is wrong and
can be rejected. Popper's recommendation was that researchers should attempt to falsify theories
by completing decisive [critical] experiments.
Popper used the black swan fallacy to explain falsification. Researchers could spend all their
lives checking the colour of swans [and while they looked in Australia, they would continue
to only find black swans]. However, Popper suggested they should focus on trying to find a
different coloured swan. A critical experimental strategy would be not to continue to look in
Australia, but to search other regions around the world. In doing this, researchers would
quickly find a white swan [native to Europe]. Hence, one critical study that discovers the
existence of white swans falsifies the premise that all swans are black. Falsification is a
much more efficient strategy in evaluating a theory than continually trying to prove theories
to be correct/true.
A further recommendation by Popper was that researchers should refrain from patching a theory
with ad hoc exceptions to make it less falsifiable – that is, once a theory is shown to be
false, the whole theory should be rejected and replaced by another. Too often, Popper argued,
theories are shown to be incorrect but continue to be used. An example he used was Newtonian
mechanics – it had known limitations but was accepted for more than two centuries, until
Einstein proposed his radically new theory of relativity [which resolved the anomalies that
Newton's theory could not explain].
Consider watching the following video [run time: 8.56 minutes].
4.3. What is the hypothetico-deductive approach in
modern day scientific research?
In the hypothetico-deductive approach, a researcher starts with a general theory, deduces a
specific, testable hypothesis (a prediction) from it, and then collects data to see whether the
prediction holds.
Note:
1. Confirming the hypothesis by this method can never absolutely verify [prove the truth of]
the theory
2. However, disconfirming or rejecting the hypothesis falsifies the hypothesis, and thus
the theory in general
Einstein said, "No amount of experimentation can ever prove the theory of relativity
right; but a single experiment can prove me wrong."
Modern psychological research is still dominated by studies that try to demonstrate a theory
to be true. However, the best research involves completing hypothetico-deductive research,
with the primary focus on attempting to disprove or falsify well-constructed theories.
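As a rough sketch of the hypothetico-deductive cycle described above (all of the function names and data below are hypothetical, chosen only to illustrate the logic), a theory yields a deduced prediction, the prediction is tested against observations, and the theory is retained or rejected.

```python
# Rough sketch of the hypothetico-deductive cycle (all names are hypothetical).
def deduce_prediction(theory):
    # From the general theory, deduce a specific, testable prediction.
    return theory["prediction"]

def run_experiment(prediction, data):
    # A decisive test: does every observation agree with the prediction?
    return all(prediction(observation) for observation in data)

theory = {"name": "All swans are black",
          "prediction": lambda swan: swan == "black"}

observations = ["black", "black", "white"]  # hypothetical field data

if run_experiment(deduce_prediction(theory), observations):
    print("Prediction survived this test; the theory is retained (not proven).")
else:
    print("Prediction failed; the theory is falsified and should be revised.")
```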
Consider watching the following short video [run time: 1.10 minutes].
4.4. What is the Null hypothesis and the resultant
type 1 and type 2 errors?
In the modern scientific world, researchers should begin by assuming there are no differences
between groups, or that a new treatment is no better than current treatments. From this base,
researchers can then develop research methodologies that can demonstrate differences –
developing hypothetico-deductive explanations and trying to falsify these statements.
In research methodology, the term "null hypothesis" refers to a general statement or default
position that there is no relationship between two measured phenomena, or no difference
among groups. Rejecting or disproving the null hypothesis – and thus concluding that there
are grounds for believing that there is a relationship between two phenomena (e.g., that there
is a difference between two groups, or that a potential treatment has a measurable positive
effect) – is a central task in the modern practice of science, and provides a scientific basis
for the claim being investigated.
Thus, the null hypothesis is assumed to be true until evidence indicates otherwise. In
statistics, the null hypothesis is often denoted H0 (read “H-naught”, "H-null", or "H-zero").
While there is statistical debate about how the null hypothesis should be evaluated
[significance testing versus hypothesis testing – this is not relevant at this stage of your
learning], the key role of the null hypothesis is to set the basis for statistically testing
whether a theory is correct or not.
However, even testing a hypothesis may lead to incorrect conclusions – these incorrect
conclusions may take one of two forms: a type 1 error or a type 2 error.
Hit – in reality there is a group difference, and the research detects that difference –
therefore the null hypothesis is correctly rejected
Correct rejection – in reality there is no group difference, and the research finds no
difference – hence the null hypothesis is correctly retained
False alarm [type 1 error] – in reality there is no group difference, but the researcher
erroneously concludes there is a difference and incorrectly rejects the null
hypothesis
Miss [type 2 error] – in reality there is a group difference, but the researcher finds no
difference; the researcher then accepts the status quo when in reality the null
hypothesis should have been rejected
Definition
In statistics, a null hypothesis is a statement that one seeks to disprove by the use of research
evidence. Most commonly it is a statement that the phenomenon being studied produces no
effect or makes no difference. An example of a null hypothesis is the statement "This diet has
no effect on people's weight." Usually an experimenter frames a null hypothesis with the
intent of rejecting it: that is, intending to run an experiment which produces data that shows
that the phenomenon under study does make a difference – in this example, that the diet is
effective and does reduce people’s weight.
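As a concrete sketch of how the diet example might be tested (the weight-change figures below are simulated, and the 0.05 significance level is a conventional assumption rather than something specified in this reading), a researcher could compare a diet group against a control group and test H0 that the diet makes no difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated weight change in kg (hypothetical data): negative = weight lost.
control_change = rng.normal(loc=0.0, scale=2.0, size=40)   # no diet
diet_change    = rng.normal(loc=-1.5, scale=2.0, size=40)  # diet genuinely helps here

# Two-sample t-test of H0: "This diet has no effect on people's weight."
t_stat, p_value = stats.ttest_ind(diet_change, control_change)

alpha = 0.05  # conventional significance level (an assumption of this sketch)
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0 – the diet appears to affect weight")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject H0 – no evidence of an effect")
```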
A type 1 error (or error of the first kind) is the incorrect rejection of a true null hypothesis.
Usually a type 1 error leads one to conclude that a supposed effect or relationship exists
when in fact it doesn't. Examples of type 1 errors include a test that shows a patient to
have a disease when in fact the patient does not have the disease.
A type 2 error (or error of the second kind) is the failure to reject a false null hypothesis.
Examples of type 2 errors would be a blood test failing to detect the disease it was
designed to detect, in a patient who really has the disease.
In summary, when comparing two means, concluding the means were different when in
reality they were not different would be a Type 1 error; concluding the means were not
different when in reality they were different would be a Type 2 error.
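To make the two error types concrete, the simulation below (an illustrative sketch with assumed sample sizes, effect size, and significance level) estimates how often a t-test produces a false alarm when the null hypothesis is really true, and a miss when it is really false.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, trials = 0.05, 30, 2000  # assumed settings for this simulation

type1 = type2 = 0
for _ in range(trials):
    # H0 true: both groups drawn from the same population.
    a = rng.normal(50, 10, n)
    b = rng.normal(50, 10, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type1 += 1  # false alarm: rejected a true null hypothesis

    # H0 false: the second group really does have a higher mean.
    c = rng.normal(50, 10, n)
    d = rng.normal(55, 10, n)
    if stats.ttest_ind(c, d).pvalue >= alpha:
        type2 += 1  # miss: failed to reject a false null hypothesis

print(f"Estimated type 1 error rate: {type1 / trials:.3f} (should be near {alpha})")
print(f"Estimated type 2 error rate: {type2 / trials:.3f}")
```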
Consider watching the following long video [run time: 11.23 minutes].
4.5. What is Signal Detection Theory?
Signal detection theory describes how an observer distinguishes a meaningful signal from
background noise when deciding whether a stimulus is present. According to the theory, there
are a number of determinants of how well a human goes about detecting a signal.
1. Signal-to-noise ratio – a low-intensity [dim] light is easier to detect in a dark
environment than the same light in a bright [daytime] setting. Similarly, an audio
recording [or someone talking on a mobile phone] can be heard at a much lower
volume in a very quiet setting than in a noisy room.
2. Ability of the human operator – a person who has a hearing problem will
need a louder volume to hear a message than someone with normal hearing
abilities.
3. Motivation of the human operator – a sentry in wartime will be more likely to detect
fainter stimuli [enemy craft] than the same sentry in peacetime, due to a lower
criterion; however, the sentry might also be more likely to treat innocent stimuli
[friendly craft] as a threat [and thus make more false alarms].
For a human being, experience, expectations, physiological state (e.g., fatigue or drug use)
and other factors can affect the ability to detect a signal.
Much of the early work in signal detection theory was done studying radar operators who
were challenged by trying to detect enemy craft in the face of lots of other noise [friendly
craft, larger animals or birds, poor weather, etc].
In Psychology
Signal detection theory (SDT) is used when psychologists want to measure the way humans
make decisions under sub-optimal conditions [conditions where the level of uncertainty
increases].
Absolute threshold – researchers can apply signal detection theory to situations where a
stimulus is either present or absent and the observer has to categorize each trial as having
the stimulus present or absent. The trials are then sorted into the same four categories used
above when describing type 1 and type 2 errors [hits, misses, false alarms, and correct
rejections]. Based on the proportions of these types of trials, numerical estimates of
sensitivity can be obtained.
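For illustration, one common numerical estimate of sensitivity is d' (d-prime), calculated from the proportions of hits and false alarms; the sketch below uses made-up proportions purely as an example.

```python
from scipy.stats import norm

# Hypothetical proportions from a detection experiment.
hit_rate         = 0.82  # proportion of signal-present trials answered "present"
false_alarm_rate = 0.25  # proportion of signal-absent trials answered "present"

# Sensitivity (d') and response criterion (c) from standard SDT formulas.
z_hit = norm.ppf(hit_rate)
z_fa  = norm.ppf(false_alarm_rate)

d_prime   = z_hit - z_fa            # larger d' = better discrimination of signal from noise
criterion = -0.5 * (z_hit + z_fa)   # positive c = more conservative responding

print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```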
Difference thresholds – researchers can also study sensitivity and decision making in relation
to changes in the environment. A human operator may be required to detect when an object
appears brighter [or louder] than before. Signal detection theory can also be applied to
memory experiments and a wide range of other decision-making tasks [e.g., is the person
[criminal] applying for parole going to be law-abiding in the general community, or is there a
high risk that the person will re-offend and harm innocent people in the community?].