Unit 2 Notes
Hemansh, PSY/23/22 (with Vriti’s & Astha’s help)
Unit 2:
Sensing and Perceiving: Sensation to Representation, approaches to perception,
perception of object and forms, perception of constancies and deficits of perception,
Attention: nature & theories, when attention fails us, Automatic and Controlled Processes in
Attention
Stimulation refers to sensory inputs from one's environment that excite neural and mental
activity.
Stimulation occurs externally on sensory organs, surfaces, tissues or cells
Stimulation precedes sensation. The stimulus triggers the cascade of neural signalling
that produces a sensation
Stimulation does not require conscious awareness or perception. You may not
experience or feel the stimulation.
Stimulation to Sensation
In all the sense organs, such as the eyes and ears, it is the job of the sensory receptors to convert incoming stimulus information into electrochemical signals—neural activity—the only language the brain understands.
Psychologists use the term transduction for the sensory process that converts the
information carried by a physical stimulus, such as light or sound waves, into the form of
neural messages.
The process of transduction, which is the conversion of physical stimulation into neural
messages, can be broken down into the following steps:
1. Arrival of the Stimulus: A physical stimulus, such as light or sound waves, arrives at a sense organ.
2. Activation of Receptors: When the appropriate stimulus reaches a sense organ, it
activates specialized neurons known as receptors.
3. Conversion to Nerve Signal: The receptors respond by converting their excitation
into a nerve signal. Thus, a sensation is generated.
Please keep in mind, however, that the stimulus itself terminates in the receptor: the only thing that flows into the nervous system is information carried by the neural impulse.
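To make the point concrete, here is a toy sketch (my own construction, not from the text): a stimulus intensity goes in, and only a firing rate, a neural code, comes out. The transfer function and its numbers are invented for illustration.

```python
# Toy illustration of transduction (invented numbers, not from the text):
# the stimulus itself ends at the receptor; what flows onward is only a
# neural code, represented here as a firing rate.
def photoreceptor(light_intensity_lux):
    """Hypothetical receptor: stronger stimulation yields a higher firing
    rate, levelling off at the receptor's maximum rate."""
    max_rate_hz = 100.0
    half_saturation = 50.0  # intensity that produces half the maximum rate
    return max_rate_hz * light_intensity_lux / (light_intensity_lux + half_saturation)

print(photoreceptor(10.0))   # dim light    -> ~16.7 Hz
print(photoreceptor(500.0))  # bright light -> ~90.9 Hz
```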
Sensation is defined simply as the process by which a stimulated receptor (such as the eyes
or ears) creates a pattern of neural messages that represent the stimulus in the brain, giving
rise to our initial experience of the stimulus.
Psychologists who study sensation do so primarily from a biological perspective.
Sensation involves converting stimulation into a form the brain can understand (neural signals).
Sensation occurs internally within the sensory systems and pathways of the body and
brain.
Sensation involves the conscious awareness, detection or feeling of a stimulus. It is a
mental experience.
Our sensory impressions of the world involve neural representations of stimuli—not the
actual stimuli themselves.
The brain senses the world indirectly because the sense organs convert stimulation into
the language of the nervous system: neural messages.
Different sensations occur because different areas of the brain receive the messages.
Absolute Threshold: the minimum amount of physical energy needed to produce a
sensory experience. The operational definition, as in a lab, is the intensity at which the
stimulus is detected accurately half of the time over many trials. It varies continually
with our mental alertness and physical condition.
 Difference Threshold (also called the just noticeable difference or JND) is the smallest physical difference between two stimuli that a person can reliably detect 50% of the time. Weber’s law: the size of the JND is proportional to the intensity of the stimulus (see the worked example below). Accordingly, the JND is large when the stimulus intensity is high and small when the stimulus intensity is low.
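In symbols, Weber’s law can be written as follows; the constant k in the worked example is an assumed value for illustration, not one given in the notes:

```latex
% Weber's law: the JND (\Delta I) is a constant proportion k of intensity I.
\[
  \frac{\Delta I}{I} = k
  \quad\Longrightarrow\quad
  \Delta I = k\,I
\]
% Worked example with an assumed k = 0.02 for lifted weights:
% at I = 100 g the JND is about 2 g, while at I = 500 g it is about 10 g.
```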
With regard to the two thresholds just described, note that Sensation DOES NOT EQUAL Perception.
Sensory Adaptation is the diminishing responsiveness of sensory systems to
prolonged stimulation
Perception is the set of processes by which we recognize, organize, and make sense of the
sensations we receive from environmental stimuli. Perception does not consist of just seeing
what is being projected onto your retina (or any other stimuli); the process is much more
complex. Your brain processes the visual stimuli, giving the stimuli meaning and interpreting
them.
Sensation to Perception
Perception occurs when the informational medium carries information about a distal object to
a person. When the person’s sense receptors notice the information, proximal stimulation
occurs and the person perceives an object.
Proximal & Distal Stimulations, & Perceptual Object - James Gibson (1966, 1979)
Proximal and distal stimulations are terms used to describe the relationship between the
external world and our perception of it. Proximal stimulation is the stimulation that occurs
on our sensory receptors (such as the retina) when we perceive something. Distal
stimulation is the stimulation that is caused by the external object (such as a falling tree) in
the environment. Proximal and distal stimulations are related, but not identical, because our
perception is influenced by factors such as attention, expectations, and prior knowledge.
Therefore, we do not always perceive the distal object exactly as it is, but rather construct a
perceptual object (or a percept—a mental representation of a stimulus that is perceived)
based on the proximal stimulation and other sources of information. Perception occurs when a
perceptual object (i.e., what you see) is created in you that reflects the properties of the
external world.
Sensory Adaptation: How do we achieve perceptual stability when there is so
much instability at the level of sensory receptors?
Variation is necessary for perception! In sensory adaptation, receptor cells adapt to constant
stimulation by not firing until there is a change in stimulation. Through sensory adaptation,
we may stop detecting the presence of a stimulus.
The Ganzfeld Effect: The Ganzfeld effect (from German for "complete field"), or
perceptual deprivation, is a phenomenon of perception caused by exposure to an
unstructured, uniform stimulation field. When your eyes are exposed to a uniform field
of stimulation, you will stop perceiving that stimulus after a few minutes and see just a
grey field instead. This is because your eyes have adapted to the stimulus. Such a
uniform visual field is called Ganzfeld. The effect is the result of the brain amplifying
neural noise in order to look for the missing visual signals. The noise is interpreted in
the higher visual cortex, and gives rise to hallucinations. The Ganzfeld effect happens
when your brain is starved of visual stimulation and fills in the blanks on its own.
Our minds must take the available sensory information and manipulate that information to
create mental representations of objects, properties, and spatial relationships within our
environments. The way we represent these objects will depend in part on our viewpoint in
perceiving the objects.
Visual System
The precondition for vision is the existence of light. Light is electromagnetic radiation that can
be described in terms of wavelength. Humans can perceive only a small range of the
wavelengths that exist; the visible wavelengths are from 380 to 750 nanometres.
From the retina, visual information travels along the optic nerve and through the thalamus to the visual cortex, where a percept begins to form. These areas in the brain process and interpret the incoming visual signals to create a coherent perception of the environment.
A pathway in general is the path the visual information takes from its entering the human
perceptual system through the eyes to its being completely processed. The information from
the primary visual cortex in the occipital lobe is forwarded through two fasciculi (fibre
bundles):
The Dorsal or the ‘Where’ Pathway: Here, information ascends toward the parietal
lobe along the dorsal pathway. It is responsible for processing location and motion
information. It determines ‘where’ an object is located in relation to your body. (Is it in
front of you? Are you about to step on it?)
The Ventral or the ‘What’ Pathway: Here, information descends to the temporal
lobe along the ventral pathway. It is mainly responsible for processing the colour,
shape, and identity of visual stimuli. It determines ‘what’ an object is and ‘what’ the
context is (Is it a chair in the kitchen or a toilet in the bathroom?).
This is the what–where hypothesis. Most of the research in this area has been carried
out with monkeys. In particular, a group of monkeys with lesions in the temporal lobe were
able to indicate where things were but seemed unable to recognize what they were. In
contrast, monkeys with lesions in the parietal lobe were able to recognize what things
were but not where they were.
The what–how hypothesis suggests the two pathways refer not to what things are and
to where they are, but rather to what they are and to how they function. According to this,
spatial information about where something is located in space is always present in visual
information processing. What differs between the two pathways is whether the emphasis is
on identifying what an object is (what-ventral stream) or, instead, on how we can situate
ourselves to grasp the object (how-dorsal stream).
Approaches to Perception
Bottom-up Theories
Bottom-up theories describe approaches in which perception starts with the stimuli whose
appearance you take in through your eye. Therefore, they are data-driven (i.e., stimulus-
driven) theories. These theories explain perception as starting with small bits of information from the environment that the perceiver then combines, in various ways, to form a percept.
1. Direct Perception
This theory by James Gibson (1966) states that the information in our sensory receptors
along with sensory context is all we need to perceive anything. Thus, it eliminates the
consideration of needing any higher-level intelligent processes in perception. This theory also
states that we do not need said higher-level processes to mediate sensory experiences and
actual perception since, in the real world, there is usually sufficient contextual information to make perceptual judgements. Thus, we effectively use contextual information directly, without the need of higher-level processes (Gibson, 1979). Since this contextual information is readily available in real-world settings, rather than in lab experiments, Gibson's
model is also sometimes referred to as the Ecological Model. Furthermore, direct perception
may also play a role in interpersonal situations when we try to make sense of others’
emotions and intentions (Gallagher, 2008).
Neuroscience and Direct Perception: The text discusses the role of neuroscience in
perception. It mentions that mirror neurons, which activate when a person performs or
observes an action, start firing 30 to 100 milliseconds after a visual stimulus. This suggests
that we may understand expressions, emotions, and movements of others before we form
hypotheses about what we perceive. The text also mentions that separate neural pathways
process form, colour, and texture in objects. When judging the length of an object, people
cannot ignore its width, but they can judge colour, form, and texture independently of other
qualities.
2. Template Theories
These theories suggest that our minds store a variety of templates, i.e. highly detailed models for patterns we might recognize. According to these theories, we recognize a pattern when an incoming pattern is compared with all of the stored templates and identified by the template that best matches it. These theories also suggest that expertise is attained by
acquiring chunks of knowledge in long-term memory that can be later accessed for fast
recognition.
Neuroscience and Template Theories: This excerpt discusses how our brains recognize
letters versus digits. Research suggests a distinction in brain activation when processing
letters versus digits, with a specific area near the left fusiform gyrus being more active when
presented with letters. This "letter area" might specialize in processing letters, though its
involvement in digit recognition remains uncertain. This concept aligns with the idea of
specialized brain regions for different stimuli, such as facial recognition. Later, the text hints
at a discussion on the brain structures responsible for recognizing faces.
CAPTCHAs (Completely Automated Public Turing Test to Tell Computers and Humans Apart) exploit this kind of pattern recognition: distorted characters that humans can still read easily are difficult for programs that rely on rigid stored templates.
3. Feature-Matching Theories
The Pandemonium Model, Oliver Selfridge (1959): One such feature-matching model
has been called Pandemonium (“pandemonium” refers to a noisy, chaotic place and hell). In
this model, metaphorical “demons” with specific duties receive and analyse the features of a
stimulus. There are four kinds of demons: image demons, feature demons, cognitive demons,
and decision demons. The “demons” function as feature detectors. Demons at the bottom
(first) level of processing scan the input, and demons at higher levels scan the output from
lower-level demons.
First, demons can scream more loudly or softly, depending on the clarity and quality of
the input. This allows for the fact that real-life stimuli are often degraded or incomplete,
yet objects and patterns can still be recognized.
Second, feature demons can be linked to letter demons in such a way that features that
are more important carry greater weight. This takes into account that some features matter more than others in pattern recognition.
 Last, the weights of the various features can be changed over time, allowing for learning (see the sketch below).
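A minimal sketch of how the demon metaphor might be implemented; the features, letters, and weights below are invented for illustration and are not values from Selfridge’s model:

```python
# A minimal sketch of a Pandemonium-style recognizer (illustrative values).

# Feature demons report how strongly each feature is present in the input
# (0 to 1). A degraded stimulus simply produces weaker feature reports,
# i.e. "softer screams".
input_features = {"vertical_line": 1.0, "horizontal_line": 0.9, "curve": 0.1}

# Cognitive (letter) demons: each letter is a set of weighted features;
# larger weights mean the feature matters more for recognizing that letter.
letter_demons = {
    "T": {"vertical_line": 0.6, "horizontal_line": 0.4, "curve": 0.0},
    "O": {"vertical_line": 0.0, "horizontal_line": 0.0, "curve": 1.0},
    "L": {"vertical_line": 0.5, "horizontal_line": 0.5, "curve": 0.0},
}

def shout(weights, features):
    """How loudly a cognitive demon screams: the weighted sum of feature reports."""
    return sum(w * features.get(f, 0.0) for f, w in weights.items())

# Decision demon: choose the loudest cognitive demon.
scores = {letter: shout(w, input_features) for letter, w in letter_demons.items()}
print(max(scores, key=scores.get))  # prints 'T'
```

Adjusting the weights from trial to trial is the model’s hook for learning, as the last point above notes.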
Most of the other feature models focus on distinguishing different kinds of features, such as global versus local features. Local features constitute the small-scale or detailed aspects of a given pattern, whereas global features are what give a pattern its overall shape.
Studies using these models show that in patterns where constituent items are close together at a local level, the perceiver has trouble identifying the local stimuli if they are not concordant with the global stimuli. This is called the global precedence effect, wherein the global features are dominant.
In contrast, when letters are more widely spaced, the effect is reversed. Then a local
precedence effect appears. Participants more quickly identify the local features of the
individual letters than the global ones, and the local features interfere with the global
recognition in cases of contradictory stimuli.
4. Recognition-by-Components (RBC) Theory – Irving Biederman
Biederman suggested that we recognize 3-D objects by manipulating simple geometric shapes called geons (for geometrical ions). According to his theory, we quickly recognize objects by observing their edges and then decomposing the objects into geons. The geons also can be recomposed into alternative arrangements. The geons are simple and function regardless of viewpoint—they are viewpoint-invariant. The objects constructed from geons thus are recognized easily from many perspectives, despite visual noise. Recognition occurs even when the stimulus object is degraded in some way, because you can still infer the presence of the other geons.
Limitations:
His theory explains how we may recognize general instances of chairs, lamps, and
faces, but it does not adequately explain how we recognize particular chairs or
particular faces. So RBC theory cannot explain how we can distinguish one face from
the next.
 Another problem with his approach, and with the bottom-up approach in general, is how to account for the effects of prior expectations and environmental context on some phenomena of pattern perception.
Top-Down Theories
According to constructivists, during perception, we quickly form and test various hypotheses
regarding percepts. The percepts are based on the following:
1. Context effect
Context effects are the influences of the surrounding environment on perception; i.e., we take prior expectations into account. For example, you would recognize a friend from far away because you had planned to meet.
You probably read the two words as “they bake,” perceiving the character in question
unambiguously as an ‘h’ the first time and then, milliseconds later, as an ‘a’. The context
surrounding the character ‘t’ and ‘ey’ the first time and ‘b’ and ‘ke’ the second time,
obviously influenced what you perceived.
Experiment - For example, participants might see a scene of a kitchen followed by stimuli
such as a loaf of bread, a mailbox, and a drum. Objects that were appropriate to the
established context, such as the loaf of bread in this example, were recognized more rapidly
than were objects that were inappropriate to the established context. The strength of the
context also plays a role in object recognition.
Experiment: Three of the stimuli are shaped like triangles, and one is not. In each case, the stimulus is a
diagonal line [Figure 3.17(a)] plus other lines [Figure 3.17(b)]. Thus, the stimuli in this second
condition are more complex variations of the stimuli in the first condition. Participants,
however, can more quickly spot which of the three-sided, more complicated figures is
different from the others than they can spot which of the lines is different from the others.
In the Object-Superiority Effect, a target line that forms a part of a drawing of a 3-D object
is identified more accurately than a target that forms a part of a disconnected 2-D pattern.
The Word-Superiority Effect indicates that when people are presented with strings of
letters, it is easier for them to identify a single letter if the string makes sense and forms a
word instead of being just a nonsense sequence of letters. For example, it is easier to
recognize the letter ‘o’ in the word ‘house’ than in the word ‘huseo’.
Experiment: Sometimes a single letter was presented. At other times, the letter appeared in
the context of a word (such as WORD or WORK; notice that either D or K forms a common
English word in combination with the same three letters). At still other times, the letter was
presented with three other letters in a combination that did not form a word (OWRD or OWRK,
for instance). In each case, the stimuli were then masked, and the participant was asked
merely to say which letter, D or K, had been presented. Surprisingly, participants could much
more accurately identify letters presented in the context of words than in nonsense letter strings.
Gregory’s theory of top-down processing proposes that perception relies heavily on the brain’s stored knowledge and expectations to actively construct interpretations of sensory information, rather than just passively receiving inputs.
According to Gregory, we are forced to generate hypotheses because the sensory data are
incomplete. If we had perfect and comprehensive sensory data, we would have no need of
hypotheses, as we would ‘know’ what we perceived. Prior stored knowledge is important
in perceptual hypotheses generation as it fills the gaps in sensory data.
 There are some stimuli with which we are so familiar (such as faces) that there can be a strong bias towards accepting a particular perceptual hypothesis, resulting in a ‘false’ perception. This, as Gregory suggests, represents a tendency to go with the most likely hypothesis, i.e. the hypothesis that the face is emerging outwards (as in the hollow-mask illusion, where even the concave side of a mask is seen as a normal, protruding face).
A primary example Gregory uses to illustrate his theory, with reference to perceptual
hypotheses, is the illusory contours (or subjective contours) phenomenon. This refers to
cases where viewers report seeing an outline or shape that is not actually present, based on
certain arranged cues. For instance, when positioned properly, the edges of partial circles can
give the illusion of complete circular shapes.
Perception of Objects and Forms
2. Gestalt Approach
The Gestalt approach to form perception, developed in Germany in the early twentieth century, is particularly useful for understanding how we perceive groups of objects or even parts of objects to form integral wholes. The approach was founded by Kurt Koffka (1886–1941), Wolfgang Köhler (1887–1968), and Max Wertheimer (1880–1943). It was based on the
notion that the whole differs from the sum of its individual parts. The Gestalt approach
is overarched by the Law of Prägnanz.
Law of Prägnanz: We tend to perceive any given visual array in a way that most simply
organizes the different elements into a stable and coherent form. Thus, we do not merely
experience a jumble of unintelligible, disorganized sensations.
 Figure-ground segregation, the division of the whole display into objects (also called the figure) and the background (also called the ground), is an important process known to cognitive psychologists as form perception.
Experiments show that people rely on Gestalt principles automatically, even when
assessing never-before-seen shapes, suggesting these visual organizational habits operate at
a fundamental perceptual level. For example, study participants were quicker at recognizing a
triangle shape as part of a novel image because its visual proximity and closure fit Gestalt
expectations.
Evidence for Separate Systems
Prosopagnosia
o Inability to recognize faces after brain damage
o Ability to recognize objects is intact
Associative agnosia
o Difficulty with recognizing objects
o Can recognize faces
Perceptual Constancies
 Perceptual constancy occurs when our perception of an object remains the same even when our proximal sensation of the distal object changes. For example, a door looks rectangular (shape constancy) even when, viewed from an angle, its image on the retina is a trapezoid.
Depth Perception
Depth is the distance from a surface, usually using your own body as a reference surface
when speaking in terms of depth perception. Depth cues are visual cues that provide
information about the 3D structure of a scene and the relative distances of objects within it.
Monocular depth cues can be represented in just two dimensions and observed with
just one eye.
 These include:
o Texture gradients (the grain of an item appears finer with distance)
o Relative size (bigger appears closer)
o Object overlap/interposition (closer objects block our view of objects behind them)
o Linear perspective (parallel lines appear to converge in the distance)
o Atmospheric attenuation/aerial perspective (images seem blurrier farther away)
o Motion parallax (as the observer moves, nearer objects appear to move faster than distant ones)
o Shadowing and shading
Binocular depth cues are based on the receipt of sensory information in three
dimensions from both eyes.
 These are:
o Binocular convergence (based on the convergence angles of the optical axes of the two eyes)
o Binocular disparity (retinal disparity; see the geometry sketched below)
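A standard geometric statement of binocular disparity, added here for illustration; the symbols b, z, and Δz are my notation, not the notes’:

```latex
% With interocular separation b, a point at distance z subtends a vergence
% angle of roughly b/z radians. Two points separated in depth by \Delta z
% therefore produce a relative disparity of approximately
\[
  \eta \;\approx\; \frac{b\,\Delta z}{z^{2}},
\]
% which is why disparity is a powerful depth cue near the viewer (small z)
% but weak for distant objects.
```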
Deficits in Perception
 Agnosia: People who suffer from an agnosia have trouble perceiving sensory information. Agnosias often are caused by damage to the border of the temporal and occipital lobes or by restricted oxygen flow to areas of the brain, sometimes because of traumatic brain injury. Patients can perceive the colours and shapes of objects and persons, but they cannot recognize what the objects are. They have trouble with the ‘what’ pathway. E.g., one agnosic patient, on seeing a pair of eyeglasses, noted first that there was a circle, then that there was another circle, then that there was a crossbar, and finally guessed that he was looking at a bicycle.
Prosopagnosia: Prosopagnosia results in a severely impaired ability to recognize
human faces. A person with prosopagnosia might not recognize her or his own face in
the mirror. This fascinating disorder has spawned much research on face identification,
a popular topic in visual perception. The functioning of the right-hemisphere fusiform
gyrus is strongly implicated in prosopagnosia.
Optic ataxia: It is an impaired ability to use the visual system to guide movement.
People with this deficit have trouble reaching for things. Ataxia results from a
processing failure in the posterior parietal cortex, where sensorimotor information is
processed. All of us have had the experience of coming home at night and trying to find
the keyhole in the front door. It is too dark to see, and we have to grope with our key
for the keyhole, often taking a while to find it. Someone with optic ataxia has this
problem even with a fully lit visual field. The ‘how’ pathway is impaired.
Colour-blindness is more common in men due to sex-linked genetic factors. It can also result from brain lesions. Most colour-blind people retain some colour vision, despite the term “colour blindness”; the condition is more accurately called a colour vision deficiency.
Attention
[Attention] is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thoughts. . . . It implies withdrawal from some things in order to deal effectively with others.
(William James, 1890)
Attention is the means by which we actively select and process a limited amount of
information from all of the information captured by our senses, our stored memories, and our
other cognitive processes.
At one time, psychologists believed that attention was the same thing as
consciousness. Now they acknowledge that we attend to and process some sensory
information and memories without our conscious awareness.
Consciousness includes both the feeling of awareness and the content of awareness,
some of which may be under the focus of attention. Attention and consciousness,
therefore, form two partially overlapping sets.
Conscious attention plays a causal role in cognition, and the following are the four main functions of attention:
1. Signal detection and vigilance
Signal Detection:
Signal detection involves the ability to discern specific important stimuli (signals) from
a myriad of distractions or background noise.
It emphasizes the immediate identification and discrimination of relevant signals from
irrelevant information.
 It’s often associated with quick, precise responses to critical events or stimuli, requiring acute discrimination abilities to identify, for instance, a distress signal on a crowded beach.
Signal detection focuses on the instantaneous identification and differentiation of
pertinent information from the surrounding background.
 Signal-detection theory (SDT) is a framework to explain how people pick out the important stimuli embedded in a wealth of irrelevant, distracting stimuli. SDT often is used to measure sensitivity to a target’s presence. When we try to detect a target stimulus (signal), there are four possible outcomes: a hit, a miss, a false alarm, or a correct rejection (see below).
Signal-detection theory was one of the first theories to suggest an interaction between
the physical sensation of a stimulus and cognitive processes such as decision-making.
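The standard formulation behind these outcomes (textbook SDT, not spelled out in the notes):

```latex
% The four outcomes of a detection trial:
%   signal present, respond "yes" -> hit;          respond "no" -> miss
%   signal absent,  respond "yes" -> false alarm;  respond "no" -> correct rejection
% Sensitivity d' compares the hit rate H with the false-alarm rate F,
% where z(.) is the inverse of the standard normal distribution function:
\[
  d' = z(H) - z(F)
\]
```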
Vigilance (also known as sustained attention): vigilance refers to a person’s ability to attend to a field of stimulation over a prolonged period, during which the person seeks to detect the appearance of a particular target stimulus of interest.
2. Search
Search refers to a scan of the environment for particular features—actively looking for
something when you are not sure where it will appear.
Distracters: Non-target stimuli that divert our attention away from the target stimulus
Types of Search
o Feature Search: We look for just one feature (e.g., colour, shape, or size) that
makes our search object different from all others. Therefore, the number of
distracters does not really play a role in slowing us down.
o Conjunction Search: We have to combine two or more features to find the
stimulus we are looking for. Because in conjunction searches we look for a
combination of features, these searches are more difficult than feature searches
that look for just one feature. The number of targets and distracters affects the
difficulty of conjunction searches (see the sketch below).
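The following toy simulation (my own construction; the colours, shapes, and step counts are invented) makes the contrast concrete:

```python
import random

# Why added distracters slow a conjunction search but barely affect a
# feature search. The target is a red circle.
def steps_to_find(n_distracters, conjunction):
    target = ("red", "circle")
    if not conjunction:
        # Feature search: the target's unique colour "pops out", so detection
        # takes roughly constant time regardless of display size.
        return 1
    # Conjunction search: every distracter shares one feature with the target,
    # so items must be inspected serially until the target turns up.
    display = [("red", "square"), ("green", "circle")] * (n_distracters // 2)
    display.append(target)
    random.shuffle(display)
    return next(i for i, item in enumerate(display, 1) if item == target)

for n in (4, 16, 64):
    print(n, steps_to_find(n, conjunction=False), steps_to_find(n, conjunction=True))
```

Feature search returns a constant “pop-out” cost, while the expected number of serial checks in the conjunction condition grows with display size.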
The work just reviewed on visual search tasks often involves “pop out” phenomena in which
certain stimuli seem to jump off the page or screen at the viewer, demanding attention.
Experimental psychologists have called this phenomenon Attentional Capture. By this, they
mean to imply that certain stimuli “cause an involuntary shift of attention.” Many have
described this phenomenon as a bottom-up process, driven almost entirely by properties of a
stimulus, rather than by the perceiver’s goals or objectives. Hence, the term attentional
capture, which implies that the stimulus somehow automatically attracts the perceiver’s
attention.
1. Feature-Integration Theory
According to Anne Treisman’s feature-integration theory, individual features (such as colour or orientation) are registered early, automatically, and in parallel across the visual field, whereas identifying conjunctions of features requires focused attention applied serially, one location at a time. This is why feature searches are fast regardless of display size, while conjunction searches slow down as distracters are added.
2. Similarity Theory
According to similarity theory, the more similar target and distracters are, the
more difficult it is to find the target.
The difficulty of search tasks depends on how different distracters are from each
other. But it does not depend on the number of features to be integrated.
For instance, it is easier to read long strings of text written in lowercase letters
than text written in capital letters because capital letters tend to be more similar
to one another in appearance. Lowercase letters, in contrast, have more
distinguishing features.
3. Selective Attention
Selective Attention involves being able to choose and selectively attend to certain
stimuli in the environment while at the same time tuning other things out.
 It refers to the fact that we usually focus our attention on one or a few tasks or events
rather than on many. To say we mentally focus our resources implies that we shut out
(or at least process less information from) other, competing tasks.
 Colin Cherry (1953) described the Cocktail party problem (in Galotti, it is attributed to Moray, 1959). He devised a task known as shadowing, in which you listen to two different messages and repeat the attended one aloud. Cherry presented a separate message to each ear, a procedure known as dichotic presentation.
o The participants were also able to notice physical, sensory changes in the
unattended message—for example, when the message was changed to a tone or
the voice changed from a male to a female speaker. However, they did not
notice semantic changes in the unattended message. When their name is
presented in the unattended channel, they will switch their attention to their
name. Some researchers have noted that those who hear their name in the
unattended message tend to have limited working-memory capacity. As a result,
they are easily distracted.
Theories of Attention explaining Selective Attention
The models differ in two ways: (1) whether or not they have a distinct filter for incoming information, and (2) whether the filter operates early or late in the processing of information.
1. Filter Model – Broadbent (1958)
 Broadbent proposed early selection: stimuli are filtered right after the sensory level on the basis of their physical characteristics, and only the attended channel passes on for further perceptual processing.
2. Selective Filter Model – Moray (1959)
He found that even when participants ignore most other high-level (e.g.,
semantic) aspects of an unattended message, they frequently still recognize
their names in an unattended ear.
Moray suggested that the reason for this effect is that messages that are of high
importance to a person may break through the filter of selective attention.
Some personally important messages are so powerful that they burst through
the filtering mechanism.
3. Attenuation Model
 Anne Treisman proposed an attenuator in place of an all-or-none filter: unattended signals are weakened rather than blocked entirely. If the unattended message carries information that is important to us (e.g., our name), it will be picked up, even though the signal has been weakened by the attenuator.
Because the unattended messages are weaker, only parts will be picked up that
are of significance, whereas the rest will go unnoticed.
Only a few words have permanently lowered thresholds. However, the context of
a word in a message can temporarily lower its threshold. If a person hears “The
dog chased the . . . ,” the word cat is primed—that is, especially ready to be
recognized. Even if the word cat were to occur in the unattended channel, little
effort would be needed to hear and process it (see the sketch below).
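A minimal sketch of the attenuation-plus-thresholds idea, with invented signal strengths and thresholds (Treisman’s model specifies no such numbers):

```python
# Unattended input is weakened, not blocked; a word reaches awareness only
# if its (possibly attenuated) signal exceeds that word's threshold.
# All numeric values below are invented for illustration.
thresholds = {"cat": 0.9, "name": 0.2, "chair": 0.9}  # lower = more ready

def registers(word, signal, attended, primed=()):
    if not attended:
        signal *= 0.3          # the attenuator weakens the unattended channel
    threshold = thresholds[word]
    if word in primed:
        threshold *= 0.3       # context temporarily lowers a word's threshold
    return signal > threshold

print(registers("chair", 1.0, attended=False))                # False: weak signal, high threshold
print(registers("name", 1.0, attended=False))                 # True: permanently low threshold
print(registers("cat", 1.0, attended=False, primed={"cat"}))  # True: primed by "The dog chased the..."
```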
ERPs emerged as a tool in the 1970s. Hillyard et al. (1973) conducted a notable study using
different tones in each ear, revealing that the N1 component of the ERP was more pronounced
when the target was in the attended ear, suggesting heightened processing of the target
while suppressing other stimuli. Subsequent research (Woldorff & Hillyard, 1991) identified an
earlier positive wave in the auditory cortex post-target onset. Modern studies continue to
leverage ERPs, exploring diverse topics such as how a mother's socioeconomic status affects
a child's selective attention, revealing that lower maternal education correlates with reduced
selective attention effects on neural processing in children. Similarly, in visual attention, the
occipital P1 wave is larger when the target appears in an attended visual field region
compared to an unattended one.
4. Divided Attention
In early research, investigators used a dual-task paradigm to study divided attention during
the simultaneous performance of two activities and found that improvements in performance
eventually would have occurred because of practice. They also hypothesized that the
performance of multiple tasks was based on skill resulting from practice. They believed it not
to be based on special cognitive mechanisms.
In later experiments, participants could perform both tasks at the same time without a loss in
performance. Spelke suggested that these findings showed that controlled tasks could be
automatized so that they consume fewer attentional resources. Furthermore, two discrete
controlled tasks may be automatized to function together as a unit. However, the tasks do not
become fully automatic. For one thing, they continue to be intentional and conscious. For
another, they involve relatively high levels of cognitive processing.
The psychological refractory period (PRP) effect explores how our ability to perform two
speedy tasks simultaneously is impacted. When faced with overlapping speeded tasks, the
responses for one or both tasks tend to slow down, especially if the second task starts shortly
after the first. This delay in performance is termed the PRP effect. Studies
suggest that while people can efficiently process basic physical aspects of incoming
information when engaged in multiple speeded tasks, more complex processing like decision-
making or retrieving information from memory causes a decline in speed, leading to the PRP
effect.
To understand our ability to divide our attention, researchers have developed capacity models
of attention. Two perspectives were discussed, which differ in terms of what they take the source of attentional capacity to be:
1. One model suggests that one single pool of attentional resources can be divided freely.
Daniel Kahneman (1973)
2. Another model suggests multiple sources of attention are available, one for each
modality (e.g., verbal or visual). Navon & Gopher (1979)
Attention, Capacity, and Mental Effort
Kahneman (1973) proposed that attention is a single pool of mental effort or capacity: the total amount available varies with arousal, and it is allocated among concurrent activities according to enduring dispositions, momentary intentions, and an ongoing evaluation of task demands.
Critiques & Limitations of Common Pool of Attentional Resources
Single-pool models likely oversimplify what is going on: People are much
better at dividing their attention when competing tasks are in different modalities.
At least some attentional resources may be specific to the modality (e.g., verbal or
visual) in which a task is presented. For example, most people can easily listen to
music and concentrate on writing simultaneously. But it is harder to listen to the
news station and concentrate on writing at the same time. The reason is that both
are verbal tasks. The words from the news interfere with the words you are thinking
about. Similarly, two visual tasks are more likely to interfere with each other than
are a visual task coupled with an auditory one.
Overly broad and vague in explaining all aspects of attention: These models
regarding allocation of attentional resources have been criticized severely as overly
broad and vague. Indeed, they may not stand alone in explaining all aspects of
attention, but they complement filter theories quite well.
Insufficient Explanation of Search-Related Phenomena: Additionally, for
explaining search-related phenomena, theories specific to visual search, models
proposing guided search or similarity seem to have stronger explanatory power than
filter or resource theories.
Divided attention is crucial in situations like driving, where missing a critical threat due to
divided focus can lead to accidents. Studies show talking on a cell phone significantly impairs
driving compared to listening to the radio, increasing the likelihood of misses and slower
reactions to crucial signals. Real-world data aligns with this, attributing accidents to cell
phone use, distractions within the vehicle, and external factors. Talking on a cell phone while
driving can be as dangerous as driving while intoxicated, leading to increased anger and
aggression, linked to more accidents. Interestingly, having a passenger appears safer than
talking on a cell phone, as passengers adjust their conversation based on traffic, unlike
distant callers. Texting while driving significantly decreases reaction time and increases
accident risk, despite most people being aware of its dangers.
 Orienting: It is defined as the selection of stimuli to attend to. This kind of attention is needed when we perform a visual search. The orienting network develops during the first year of life. Brain structures involved in the orienting network include:
o The superior parietal lobe
o The temporal parietal junction
o The frontal eye fields
o The superior colliculus.
o The modulating neurotransmitter for orienting is acetylcholine.
o Dysfunction within this system can be associated with autism.
People with ADHD have difficulties in focusing their attention in ways that enable them to
adapt in optimal ways to their environment.
o Typically first displays itself during the preschool or early school years.
o The disorder does not typically end in adulthood, although it may vary in its severity,
becoming either more or less severe.
o An estimated 3% to 5% of the general school-age population has some form of ADHD.
o Approximately three times more common in boys than in girls.
As per Posner and Raichle’s disengage and move operations, ADHD clients suffer not so
much from an inability to be alert or to devote mental resources to a task, as from an inability
to sustain vigilance on dull, boring, repetitive tasks, such as “independent schoolwork,
homework, or chore performance”.
Suspected Causes of ADHD (a partial list): ADHD has been investigated widely, but no one knows its cause for sure.
o Brain injury
o Food additives—in particular, sugar and certain dyes
o Differences in the frontal-subcortical cerebellar catecholaminergic circuits and in
dopamine regulation
The three primary symptoms of ADHD:
o Inattention (i.e., difficulty staying focused and paying attention, easily distracted, struggles to follow instructions)
o Hyperactivity (i.e., levels of activity that exceed what is normally shown by children of a given age)
o Impulsiveness (i.e., acting without thinking, difficulty waiting for their turn, interrupting or intruding on others, blurting things out)
The three main types of ADHD, depending on which symptoms are predominant:
o Hyperactive-impulsive
o Inattentive
o A combination of hyperactive-impulsive and inattentive behaviour
Children with the inattentive type of ADHD show several distinctive symptoms.
ADHD (Attention-Deficit/Hyperactivity Disorder) and Autism are two distinct conditions that share some similarities but also have key differences.
Treatment of ADHD: Most often treated with a combination of psychotherapy and drugs.
Drugs
o Ritalin (methylphenidate)
o Metadate (methylphenidate)
o Strattera (atomoxetine)
This last drug differs from other drugs used to treat ADHD in that it is not a stimulant. Rather,
it affects the neurotransmitter norepinephrine. The stimulants, in contrast, affect the
neurotransmitter dopamine.
A number of studies have noted that, although medication is a useful tool in the treatment of
ADHD, it is best used in combination with behavioural interventions.
Psychotherapy
Gardner's theory proposes that intelligence is not a single, general ability but rather involves
multiple distinct intelligences or cognitive skills. The theory identifies eight types of
intelligence: linguistic, logical-mathematical, spatial, bodily-kinaesthetic, musical,
interpersonal, intrapersonal, and naturalist.
The text explains that this theory has been helpful for treating and supporting children with
ADHD. Rather than viewing these children as deficient in some general intelligence, the
multiple intelligences perspective recognizes that they may have strengths in some areas and
challenges in others.
For example, a child with ADHD may excel in bodily-kinaesthetic or musical intelligence but
struggle with linguistic or logical-mathematical intelligence. Educational interventions can
focus on utilizing and developing the child's areas of strength to boost their achievements.
Interventions tailored to a child's predominant intelligences can improve outcomes for
children with ADHD.
Change Blindness is the inability to notice large changes to scenes when the scene is
somehow disrupted. Change blindness has been linked to another phenomenon known as
inattentional blindness.
The earliest discovery was by the father of cognitive psychology, Ulric Neisser, and his fellow researchers during the 1970s.
Mental Workload - When our minds are overloaded with a demanding cognitive task,
it consumes most of our attention resources. This high mental workload leaves less
attention available to notice other things around us. The heavy cognitive load
contributes to inattentional blindness.
Working Memory - This system actively holds information in mind and manipulates it.
Working memory has limited capacity. When working memory is taxed by a difficult
task, fewer resources remain to process other information. This can lead to failures to
notice unexpected stimuli due to inattentional blindness. The limited capacity of
working memory reduces available attention.
Only 44% of participants ever reported seeing a gorilla, although this number was much
greater for the subjects watching the black team, who presumably shared more visual
features with the gorilla (dark colour) than did the white team.
Simons and Chabris (1999) concluded that unexpected events could be overlooked.
Presumably, we only perceive those events to which we attend, especially if the unexpected
event is dissimilar to the focus of our attention, and if our attention is tightly focused
somewhere else.
Spatial Neglect
It is an attentional dysfunction in which patients ignore the half of their visual field that is
contralateral to (on the opposite side of) the hemisphere of the brain that has a lesion.
Interestingly, when patients are presented with stimuli only to their right or their left side,
they often can perceive the stimuli, no matter which side they are on. This means that they
have no major visual-field defects. Thus, hemi-neglect is attentional, rather than sensory.
When stimuli are present in both sides of the visual field, people with hemi-neglect suddenly
ignore the stimuli that are contralateral to their lesion (i.e., if the lesion is in the right
hemisphere, they neglect stimuli in the left visual field).
This phenomenon is called extinction. The reason for extinction may be that patients are not
able to disengage their attention from the stimulus in the ipsilateral field (the part of the
visual field where the lesion is) in order then to shift their attention to the contralateral visual
field. Their attention is “stuck” on the ipsilateral object so that they cannot shift attention to
stimuli that appear on the contralateral side.
Automatic and Controlled Processes in Attention
As per Schneider and Shiffrin (1977):
Automatic processing is used for easy tasks and with familiar items, and makes lesser demands on concentration or attention. Automatic processes develop when the mapping between stimuli and responses is consistent. In the consistent-mapping condition of Schneider and Shiffrin’s experiments, the target memory set consisted of numbers and the frame consisted of letters, or vice versa; stimuli that were targets in one trial were never distractors in other trials. The task in this condition was expected to require less capacity.
Controlled processing is used for difficult tasks and ones that involve unfamiliar processes. It occurs when the mapping is varied. In the varied-mapping condition, the set of target letters or numbers, called the memory set, consisted of one or more letters or numbers, and the stimuli in each frame were also letters or numbers; targets in one trial could become distractors in subsequent trials. In this condition, the task was expected to be hard and to require concentration and effort.
Practice: During the course of practice, implementation of the various steps becomes
more efficient. A person gradually combines individual effortful steps into integrated
components that are further integrated until the whole process is one single operation
that requires few or no cognitive resources.
Instance Theory – Logan (1988): He suggested that automatization occurs because
we gradually accumulate knowledge about specific responses to specific stimuli.
Essentially, one does not necessarily get better at computing the answer; instead, one automatically retrieves the appropriate, specific response to a situation from an accumulated wealth of specific experiences (sketched below).
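A rough sketch of this retrieval race, with invented timing values:

```python
import random

# Logan's (1988) automatization-as-race idea, illustrated with made-up
# numbers. Each encounter with a stimulus stores an instance; the response
# comes from whichever finishes first, computing the answer or retrieving
# some stored instance.
memory = []  # accumulated stimulus instances

def response_time(stimulus):
    compute_time = 1.0  # the slow, effortful algorithm takes a fixed time
    # Each stored instance races independently; with more instances, the
    # fastest retrieval tends to get faster, which is the practice effect.
    retrievals = [random.uniform(0.2, 2.0) for inst in memory if inst == stimulus]
    retrieval_time = min(retrievals, default=float("inf"))
    memory.append(stimulus)  # every trial lays down another instance
    return min(compute_time, retrieval_time)

for trial in range(1, 9):
    print(trial, round(response_time("3 + 4"), 2))
```

As instances accumulate, the fastest retrieval beats the fixed computation time more and more often, so average response times fall with practice without the algorithm itself getting any faster.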