UNIT-5 ATTENTION, PERCEPTION, LEARNING, MEMORY AND FORGETTING
Attention can generally be defined as the ability to produce, select, manage and
maintain sufficient stimulation for a specific amount of time in order to process
information. It takes place on the cognitive level and has different types.
Attention is the key to achieving optimum functionality in our lives. The way to do this is
to parse the factors or stimuli we encounter as relevant or irrelevant. In effect, this is
where we make the most basic choices about what we are and are not interested in.
Visual Attention
Generally speaking, visual attention is thought to operate as a two-stage process. In
the first stage, attention is distributed uniformly over the external visual scene and
information is processed in parallel. In the second stage, attention is concentrated on a
specific area of the visual scene; it is focused on a specific stimulus. There are two major
models for understanding how visual attention operates, both of which are loose
metaphors for the actual neural processes occurring.
Spotlight Model
The term “spotlight” was inspired by the work of William James, who described
attention as having a focus, a margin, and a fringe. The focus is the central area that
extracts “high-resolution” information from the visual scene where attention is directed.
Surrounding the focus is the fringe of attention, which extracts information in a much
more crude fashion. This fringe extends out to a specified area, and the cutoff is called
the margin.
Zoom-Lens Model
First introduced in 1986, this model inherits all the properties of the spotlight model, but
it has the added property of changing in size. This size-change mechanism was
inspired by the zoom lens one might find on a camera, and any change in size can be
described by a trade-off in the efficiency of processing. The zoom-lens of attention can
be described in terms of an inverse trade-off between the size of focus and the
efficiency of processing.
Because attentional resources are assumed to be fixed, the larger the focus is, the
slower processing will be of that region of the visual scene, since this fixed resource
will be distributed over a larger area.
Cognitive Load
Think of a computer with limited memory storage: you can only give it so many tasks
before it is unable to process more. Brains work on a similar principle, called the
cognitive load theory. “Cognitive load” refers to the total amount of mental effort being
used in working memory. Attention requires working memory; therefore devoting
attention to something increases cognitive load.
1- Internal Factors
These determinants are personal because they depend on the individual's own
cognitive resources and brain functions. Some of them can be listed as follows:
Mental condition
Needs
Emotions
Mindset
Interests
Motivation
Physical state
2- External Factors
These determinants are usually based on the characteristics of the stimuli or come
from our surroundings. Some of them can be listed as follows:
Intensity
Uniqueness
Size
Color
Emotional Burden
Contrast
Selective Attention
When we focus our attention on anything, we actually choose to ignore many things.
For example, imagine going to a bookstore. There is a specific book you want to buy
and you are walking between the bookshelves to find that book.
Perhaps you pass hundreds of books without actually noticing any of
them. Your eyes nevertheless see all of them, and your mind may even record them at
some level, but you don't consciously realise it. This is a good example of selective
attention.
Now that you have an understanding of the concept, let’s go over the selective
attention definition.
The capacity of our brains to attend to everything around us is very
limited, so it is impossible for us to pay attention to every sensory experience.
Therefore, while our brain focuses our attention on some important elements of our
environment, it pushes all other stimuli into the background.
This model was defined by Donald Broadbent in 1958. He used a filtering metaphor
of information processing to describe attention. Broadbent suggested that our filtering
of information occurs early in the perceptual process: physical characteristics of stimuli,
such as colour, loudness, or direction, are processed first and are then used to select
or reject a stimulus for later processing.
Perhaps we are exposed to millions of ads every day on the way to work or on the
road, and we don't even realize that we've seen many of them. Some, on the other
hand, manage to attract our attention, especially if they address our current needs or
tastes. This shows that these advertisements have reached us as stimuli by passing
through our selective perception.
Divided Attention
We use divided attention when simultaneously paying attention to two or more tasks.
This ability is also called multitasking. Divided attention spreads mental focus across a
very wide scope, so the brain cannot fully focus on any single task. Therefore, this type
of attention does not last for long.
Alternating Attention
Alternating attention is the ability to change the focus of your attention and switch
between different tasks. In this type of attention, mental flexibility is required so that
one task does not limit the performance of others.
Sustained Attention
We often use sustained attention for tasks that take a long time or require intense
focus. This type of attention allows one to consistently perform a certain mental activity.
For example, when children study for an exam, they need to read and acquire the
information in a textbook for several hours.
Perception
Sensation is the process by which our sensory receptors detect stimuli from the
environment, while perception is the organization and interpretation of that sensory
information through further cognitive processing.
One way to think of this concept is that sensation is a physical process, whereas
perception is psychological. For example, upon walking into a kitchen and smelling the
scent of baking cinnamon rolls, the sensation is the scent receptors detecting the odor
of cinnamon, but the perception may be “Mmm, this smells like the bread Grandma
used to bake when the family gathered for holidays.”
Although our perceptions are built from sensations, not all sensations result in
perception. In fact, we often don’t perceive stimuli that remain relatively constant over
prolonged periods of time. This is known as sensory adaptation. Imagine entering a
classroom with an old analog clock. Upon first entering the room, you can hear the
ticking of the clock; as you begin to engage in conversation with classmates or listen to
your professor greet the class, you are no longer aware of the ticking. The clock is still
ticking, and that information is still affecting sensory receptors of the auditory system.
The fact that you no longer perceive the sound demonstrates sensory adaptation and
shows that while closely associated, sensation and perception are different.
At the beginning of the twentieth century, two well-developed areas of psychology,
namely perception and personality, came closer to each other. Numerous studies
were carried out to examine the relation between perception and personality. Perceptual
characteristics bear some relation to an individual's personality organization.
Various journals have published findings on the perception-personality relationship. In
going through this literature, one finds that the relation between perception and
personality has been studied from different angles. In other words, there are various
approaches to studying the relation between these two important fields, which have
developed systematically, scientifically, and experimentally over the last fifty years.
Topological Approach
As early as 1944, Thurstone described an extensive factorial exploration of various
perceptual tasks in order to isolate underlying variables which could be used to
account for individual differences. Since that time, there have been many attempts to
relate various personality variables to differences in performance on perceptual tasks.
Brunswik soon discovered that many of her subjects were less able to tolerate 'emotional
ambiguities' than others. She became interested in whether or not this intolerance
extended also to the more traditional field of perception. As a result of some of her
explorations, she was able to offer rich evidence on the basis of interviews, clinical
evaluations, and so on.
The 'new look' is a phrase borrowed from the publicity releases of the Maison Dior in
Paris, which described some startling changes in fashion. The great discovery of the
'new look' was that the perceiver also counts. In some ways, the introduction of the
perceiver into the process of perception can be likened to the introduction of the
observer into the measurement of velocity in the theory of relativity.
Einstein's great contribution emerged when he introduced the velocity of the observer,
or his frame of reference, into the measurement of the velocity of an object. In the same
way, the new look hoped to revolutionize perception by introducing the
characteristics of the perceiver, that is, his personality (drives, needs, etc.).
Unfortunately, the revolution in psychology did not go off as successfully as the
revolution in physics, but fizzled more like the revolution in fashion.
They noted that McGinnies' taboo words were much less familiar than the neutral
words. They demonstrated that the more familiar a word was, the briefer was its
recognition threshold, which could account for the elevated thresholds (perceptual
defence) found for the taboo words, although McGinnies defended his original
interpretation. In general, the concept of perceptual defence began to lose status, even
among directive-state workers themselves. McGinnies' defence consisted of noting that
increased recognition thresholds for neutral words were found when they followed
immediately after taboo words, constituting evidence for 'generalization' of the
avoidance (defensive) reaction. Furthermore, the analysis of pre-recognition responses
suggested that for neutral words there was a greater resemblance to the stimulus words
than there was for the taboo words.
Postman, Bronson and Gropper strongly contested these explanations, suggesting
that uncontrolled variations in the familiarity of words could account for most of the
perceptual defence effect. Solomon and Postman had already reported a study
which showed that recognition thresholds varied inversely with frequency of past
usage.
Perceiving is, of course, related to the approach of various set theories. Bruner and
Postman were the original founders of the hypothesis, or expectancy, theory.
Perception involves the input of information from the environment. Input is not specified
in terms of stimulus energy, but rather in terms of its signal value, as cue or clue. The
next process involves the checking or confirmation of the organism's hypothesis. If
there is confirmation, the hypothesis is strengthened and its arousal will be 'easier' in
future when similar 'information' from the environment is received. If the hypothesis is
not confirmed, the organism will introduce a new hypothesis, until one of them is
confirmed.
Gestalt psychology
A school of psychology founded in the 20th century that provided the foundation for the
modern study of perception. Gestalt theory emphasizes that the whole of anything is
greater than its parts. That is, the attributes of the whole are not deducible from
analysis of the parts in isolation.
The word Gestalt is used in modern German to mean the way a thing has been
“placed,” or “put together.” There is no exact equivalent in English. “Form” and “shape”
are the usual translations; in psychology the word is often interpreted as “pattern” or
“configuration.”
Max Wertheimer noted that rapid sequences of perceptual events, such as rows of
flashing lights, create the illusion of motion even when there is none. This is known as
the phi phenomenon. Motion pictures are based on this principle, with a series of still
images appearing in rapid succession to form a seamless visual experience.
According to Gestalt psychology, the whole is different from the sum of its parts. Based
upon this belief, Gestalt psychologists developed a set of principles to explain
perceptual organization, or how smaller objects are grouped to form larger ones.
These principles are often referred to as the "laws of perceptual organization."
However, it is important to note that while Gestalt psychologists call these phenomena
"laws," a more accurate term would be "principles of perceptual organization." These
principles are much like heuristics, which are mental shortcuts for solving problems.
Gestalt theory [1] has provided perceptual science with a conceptual framework
relating to brain mechanisms that determine the way we see the visual world. This is
referred to as "Perceptual Organization" and has inspired researchers in Psychology,
Neuroscience and Computational Design ever since.
The major Gestalt principles, such as the principle of Prägnanz, and more importantly
the Gestalt laws of perceptual organization, have been critically important to our
understanding of visual information processing, how the brain detects order in what we
see, and derives likely perceptual representations from statistically significant structural
regularities. The perceptual integration of contrast information across collinear space
for the organization of objects in the 2D image plane into figure and ground conveys
the most elementary basis of our understanding of the visual world. Gestalt theory
continues to generate powerful concepts and insights for perceptual science even
today, where it is to be placed in the context of image-based decision making by human
minds and machines.
However, in complex images, some visible stimulus fragments appear clearly aligned,
others do not. Specific phenomenal conditions need to be satisfied to enable collinear
interpolation in static 2D scenes, and the process of interpolation constrains the
spreading of surfaces across unspecified regions in the image [45,46].
Perceptual neuroscience has provided us with a diversified account for the many ways
in which visual sensitivity to ordered structure and regularities expresses itself in
behavior on the basis of cortical mechanisms. Multiple stages of neural processing
transform fragmented signals into key visual representations of 3D scenes that can be
used to control effective behavior.
Since our survival depends on our ability to pick up order in the physical world, and
since we conceive the physical world as an ordered one, our brain must be sensitive
to structural regularities in the physical world. Neural interactions “beyond the classic
receptive field” drive the visual processing of texture dissimilarities, boundary
completion, surface filling-in, and figure-ground segregation in the brain genesis of
“perceptual order”.
During the 1920s, a number of German psychologists including Max Wertheimer and
Wolfgang Kohler began studying different principles of perception that govern how
people make sense of an often disorderly world. Their work led to what is known as
the Gestalt laws of perceptual organization.
The Gestalt theory of perception proposes that people make sense of the world around
them by taking separate and distinct elements and combining them into a unified
whole.
For example, if you look at shapes drawn on a piece of paper, your mind will likely
group the shapes in terms of things such as similarity or proximity. Objects that are
similar to one another tend to be grouped together. Objects that are near each other
also tend to be grouped together.
Examples
The "faces or vases" illustration is one of the most frequent demonstrations of figure-
ground. What you see when you look at the faces or vases illusion depends on
whether you see the white as the figure or the black as the figure.
If you see the white as the figure, then you perceive a vase. If you see the black as the
figure, then you see two faces in profile.
Perceptual Constancy: Size, Shape, and Color; Illusions
Perceptual constancy is perceiving objects as having constant shape, size, and color
regardless of changes in perspective, distance, and lighting.
KEY POINTS
Perceptual constancy refers to perceiving familiar objects as having
standard shape, size, color, and location regardless of changes in the
angle of perspective, distance, and lighting.
Size constancy is when people's perception of a particular object's size
does not change regardless of changes in distance from the object, even
though distance affects the size of the object as it is projected onto
the retina.
Shape constancy is when people's perception of the shape of an object
does not change regardless of changes to the object's orientation.
Distance constancy refers to the relationship between apparent distance
and physical distance: it can cause us to perceive things as closer or
farther away than they actually are.
Color constancy is a feature of the human color perception system that
ensures that the color of an object is perceived as similar even under
varying conditions.
Auditory constancy is a phenomenon in music, allowing us to perceive the
same instrument over differing pitches, volumes, and timbres, as well as in
speech perception, when we perceive the same words regardless of who is
speaking them.
Within a certain range, people's perception of a particular object's size will not change,
regardless of changes in distance or in the size of the image on the retina. Perception
is still based upon the actual size of the object rather than its retinal projection. The
visual perception of size constancy has given rise to many optical illusions.
Shape Constancy
Regardless of changes to an object's orientation, the shape of the object as it is
perceived is constant. Or, perhaps more accurately, the actual shape of the object is
sensed by the eye as changing but then perceived by the brain as the same. This
happens when we watch a door open: the actual image on our retinas is different each
time the door swings in either direction, but we perceive it as being the same door
made of the same shapes.
Shape constancy
This form of perceptual constancy allows us to perceive that the door is made of the
same shapes despite different images being delivered to our retinae.
Distance Constancy
This refers to the relationship between apparent distance and physical distance. An
example of this illusion in daily life is the moon. When it is near the horizon, it is
perceived as closer to Earth than when it is directly overhead.
Color Constancy
This is a feature of the human color perception system that ensures that the color of an
object remains similar under varying conditions. Consider the shade illusion: our
perception of how colors are affected by bright light versus shade causes us to
perceive the two squares as different colors. In fact, they are the same exact shade of
gray.
Perceptual illusion
While the problem of perceptual illusion has not aroused quite the same degree of
empirical or theoretical interest among neurophysiologists as among experimental
psychologists, there has nevertheless been a continuing concern with the neural
correlates of illusory phenomena.
That this is so was made clear in the classic experiments of Holway and Boring, in
which distance stimuli were progressively reduced. Since then, data from three
separate experiments have shown that when distance stimuli are entirely eliminated,
an object's apparent size decreases as a linear function of observer-object distance,
i.e., apparent size follows the "law of the visual angle." "Cues" or stimuli for distance
fall into five classes:
(1) retinal disparity (or binocular parallax);
(2) muscular stimuli (convergence and accommodation);
(3) monocular parallax;
(4) atmospheric stimuli (aerial perspective and the Tyndall effect); and
(5) projected stimuli (perspective, texture, overlay, elevation in field, element and
interspace size, and element and interspace frequency). Normally all or most of these
stimuli for distance are present and visual size constancy is perfect. However, although
size constancy falls off as distance stimuli are systematically reduced (e.g., when
binocular parallax is eliminated by using one eye and monocular parallax by holding
the head stationary), some degree of constancy obtains, i.e., apparent size does not
follow the law of the visual angle as observer-object distance increases.
The trees on the near side of a lake are bigger and fewer per unit visual angle than the
smaller and more frequent trees on the far side, as indeed are the interspaces between
them. Many geometric optical illusions involve such distance stimuli.
Of two objects one is usually located in the context of larger and less frequent
elements or spaces consonant with nearness and the other in the context of smaller
and more frequent elements corresponding to greater distance. The former object is
judged smaller than the latter.
The Oppel-Kundt, Delboeuf, and Müller-Lyer illusions are examples of size illusions in
which the retinal image of an object, usually a line or simple figure, is invariant but
projected stimuli for distance are varied. It should be noted, however, that, as has been
pointed out elsewhere, the Müller-Lyer illusion as it is classically shown represents
two separate effects.
The "short" version with "inboard" elements is probably a different illusion than the
"long" version with "outboard" elements. Evidence for this difference has been adduced
by Erlebacher and Sekuler.
Visual Orientation Constancy and Illusion
When the observer's head is tilted laterally as posture is changed, the retinal orientation
of the object's image relative to the normally vertical meridian of the eye changes.
When the observer is recumbent, this change is nearly 90 degrees. However, under
conditions of normal illumination, and even when only the object itself is visible in a
dark room, its apparent tilt is relatively stable, a phenomenon called visual orientation
constancy. In normal illumination the viewed object (e.g., a bar) remains perceptually
invariant even for large lateral body tilts.
Form Perception
The Gestalt Psychologists studied extensively form perception, or the perception of
objects, shapes and patterns. Gestalt principles may be broken down into two
categories: perceptual organization (grouping) and depth perception.
Gestalt Principles of Perceptual Organization
How objects are grouped together
o Continuity
We tend to perceive figures or objects as belonging together if they
appear to form a continuous pattern
o Closure (Connectedness)
We perceive figures with gaps in them to be complete
o Similarity
We perceive figures which look alike as being grouped together
o Proximity
We perceive things close together as being in sets
o Prägnanz
We perceive reality in the simplest way rather than inferring
complexity
Monocular depth cues
Depth cues that can be perceived by only one eye
o Interposition
When one object partly blocks your view of another, you perceive the
partially blocked object as farther away
o Linear perspective
Parallel lines that are known to be the same distance apart appear to
grow closer together, or converge, as they recede into the distance
o Relative size
Larger objects are perceived as being closer to the viewer, and
smaller objects as being farther away
o Texture gradient
Near objects appear to have sharply defined textures, while similar
objects appear progressively smoother and fuzzier as they recede
into the distance
o Atmospheric perspective
Objects in the distance have a bluish tint and appear more blurred
than objects closer to the viewer
Gestalt principles help explain how we perceive distance, depth, organization, and
harmony on a two-dimensional canvas.
For example, in the Ebbinghaus Illusion both of the orange (middle) circles are exactly
the same size. This is one of many visual illusions that explore the effect of context on
perception. Our visual system compares the circles with their surroundings and
exaggerates the differences between them.
Along with information on motion, shape, and color, our brains receive input that
indicates both depth, the perception that different objects are different distances from
us, and the related concept of stereopsis, the solidity of objects. Studies show that
people have two ways of judging depth or distance: using monocular (one-eyed)
information, and using binocular (two-eyed) data. Monocular cues operate at distances
of around 100 feet or greater, where the retinal images seen by both eyes are almost
identical. These cues include:
1. Previous familiarity: If we know the range of sizes of people, cats, or trees, we
can judge how far away they are.
2. Occlusion: If one object partly hides another, we know that the object in front is
closer.
3. Perspective: Parallel lines such as the edges of a road, the intersections of walls
and ceilings, and railroad tracks, appear to converge at a distance. The relative
distances between objects in a scene with parallel lines are estimated by their
positions along the converging lines.
4. Motion parallax: As we move our heads or bodies, nearby objects appear to
move more quickly than distant objects; for example, telephone poles beside the
road appear to pass by much more quickly when viewed from a moving car than
do buildings or trees hundreds of feet back from the road.
5. Shadows and light: Patterns of light and dark can give an impression of depth,
and bright colors tend to seem closer than dull colors.
Even though these monocular cues provide some depth vision, so that the world does
not look "flat" to us when we use just one eye, viewing a scene with two eyes (binocular
vision) gives most people a more vivid sense of depth and of stereopsis.
One illustration of how we can miss differences in scenes when we first rapidly scan
them is to make a copy of a photograph of a well-known person and to carefully alter
one or two aspects of the picture: make the eyebrows heavier, change the mouth, add
bags under the eyes. When the original and the altered copies are viewed side by side
and upside down, the changes can be very difficult to identify. If these two pictures are
viewed upright, the differences are immediately apparent. Another way to illustrate
visual attention is to make a pattern of identical marks or simple objects, such as a
sheet of paper with row upon row of X's, with one row containing one or two Z's. See
how long it takes to identify the odd letters or objects. Students can devise tests like
these in the experimental part of this unit.
The survey included 120 postgraduate students of the Faculty of Arts at the University
of Ljubljana. In order to measure their motivation, we employed several scales of the
Motivated Strategies for Learning Questionnaire (Pintrich et al., 1991). For the purpose
of this research, we created a new questionnaire for their evaluation of the learning
environment. The results revealed high correlations among intrinsic goal orientation,
self-efficacy, and control beliefs.
The most important factors of the learning environment that are connected with the
formation of intrinsic goal-orientation and the enjoyment of education are the
perception of the usefulness of the studied topics, a feeling of autonomy, and teacher
support.
To an extent, these findings are supported by the findings of those authors who
recommend using those methods of teaching that are in compliance with the student-
centred understanding of teaching and learning.
In the previous two decades, the research conducted on achievement goals and
achievement goal orientations has become highly prominent in the field of education
(e.g. Ames, 1992; Dweck, 1986; Nicholls, 1984; Urdan, 2004). Moreover, certain meta-
analyses have shown that this field has become predominant in the research of
motivation (Austin & Vancouver, 1996). In psychology, goals are understood as the
subject, activity or phenomenon at which our action is directed and with which we
satisfy our needs (Locke & Latham, 1990).
Previous research has shown that, in order to understand the students’ approach to
studying, it is crucial to know the reasons for their dealing with a particular task and the
goals they set for themselves in the process. In this context, the authors predominantly
differentiate between mastery goals (i.e. intrinsic goals for which the emphasis is
placed on the development of competence) and performance goals (i.e. extrinsic goals
that place an emphasis on achievements and comparisons with others).
place (Cleveland & Fisher, 2014). For the most part, research has focused on the
different elements of classroom context. Bronfenbrenner (1979) defines the classroom
context as a microsystem, “a pattern of activities, roles and interpersonal relations
experienced by the developing person in a given setting with particular physical and
material characteristics” (Bronfenbrenner, 1979, p. 22), i.e. it contains elements that
contribute to the understanding of the happenings in the classroom.
The belief that students and teachers should be researched as a whole prevailed, but
researchers have shown a tendency to isolate individual variables instead of
attempting to understand the complex integration of thinking, motivation, and feelings.
The authors found that teaching never directly affects learning; on the contrary, it
operates through intermediary factors that include perceptions of teaching, evaluation,
the climate in the classroom, the content of the school subject, structure and similar.
The first and most important reason is that goal orientation directly influences many
important aspects of student motivation. For example, it is more likely that students
with intrinsic goal orientation will have higher self-efficacy, use more complex cognitive
learning strategies, be meta-cognitively more active, and achieve better learning
outcomes. Previous research shows that goals direct, or at least mediate, the entire
process of self-regulation of learning, wherein the use of strategies is only one of the
aspects.
Method
Participants and procedure The survey was conducted between November and
December 2014, and included students who were enrolled in the first year of master
studies at the Faculty of Arts at the University of Ljubljana. The sample consisted of
120 students (102 female, 17 male, 1 who did not reveal their gender) who study in
different programs but also participate in a common teaching module. This
means that more than 80% of all the students in this module were included in the research.
Measures
Characteristics of motivation
In order to establish the connection between motivation and perception of the learning
environment, we employed motivational scales from the Motivated Strategies for
Learning Questionnaire (MSLQ) (Pintrich, Smith, Garcia, & McKeachie, 1991), which is
based on a social-cognitive approach to motivation and learning characterized by
stressing the interconnection of the cognitive and emotional components of learning.
In the first part of the questionnaire, 20 items were used from the MSLQ, specifically
from the “Intrinsic goal-orientation”, “Extrinsic goal-orientation”, “Self-efficacy”, and
“Control beliefs” scales. The respondents replied to the five-point Likert scale
questionnaire with the following answer possibilities:
1 – Definitely not true of me,
2 – Mostly not true of me,
3 – Sometimes true and sometimes not true of me,
4 – Mostly true of me,
5 – Definitely true of me. A five-point scale instead of the original seven-point scale
was used to unify scales across the questionnaire.
In this part of the questionnaire, 42 items were formed, representing the main
dimensions of the learning environment: teacher support, student interaction, authentic
learning, autonomy, and personal relevance. The respondents assessed their
perceptions of learning environment on the course level by using the five-point Likert
scale, which represented the frequency of individual “events” in lectures. The following
answers were possible: 1 – Never, 2 – Seldom, 3 – Sometimes, 4 – Often, 5 – Always.
The number of components was first evaluated with the principal component analysis,
and the results of this analysis showed six appropriate dimensions.
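To make this analysis step concrete, here is a minimal sketch (in Python with scikit-learn) of how the number of components in a Likert-scale questionnaire could be evaluated with principal component analysis. The response matrix below is randomly generated placeholder data with the same shape as in the study (120 respondents, 42 five-point items); it is not the actual dataset, and the study's own procedure may have differed in detail.

# Minimal PCA sketch with simulated placeholder data (not the study's dataset).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(120, 42))  # 120 students x 42 five-point items

# Standardize the items so each contributes equally to the components.
scaled = StandardScaler().fit_transform(responses)

pca = PCA().fit(scaled)

# Inspect the explained-variance ratios to judge how many dimensions to retain
# (the text above reports six interpretable dimensions).
for i, ratio in enumerate(pca.explained_variance_ratio_[:10], start=1):
    print(f"Component {i}: {ratio:.2%} of variance")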
The importance of collaborative learning and teacher support is also underscored. The
results of the regression analysis reflect the findings from the correlation analysis and
give even more significance to the connection of the studied topics with real-life
problems and to support in developing autonomy.
The importance of the perceived authenticity of learning has also been shown in the
correlation analysis. In this study, the interconnectedness of theoretical knowledge and
practical application appears to be among the most important determinants of students’
motivation for studying in higher education.
The general premise of SDT is that decisions are made against a background of
uncertainty, and the goal of the decision-maker is to tease out the decision signal from
background noise. SDT can be applied to any binary decision-making situation where
the response of the decision maker can be compared to the actual presence or
absence of the target. The advantage of SDT as a measure of decision-making is that
it provides a unitless measure of sensitivity, regardless of subject bias, that can be
compared to other sensitivities over widely different situations.
SDT is a model for a theory of how organisms make fine discriminations and it
specifies model-based methods of data collection and analysis. Notably, through its
analytical technique called the receiver operating characteristic (ROC), it separates
sensory and decision factors and provides independent measures of them. SDT's
approach is now used in many areas in which discrimination is studied in psychology,
including cognitive as well as sensory processes.
SDT has been applied within a broad range of topics, including memory research
(e.g., Banks, 1970), accuracy in radiology diagnostics (e.g., Obuchowski, 2003), and
sustained attention in individuals with ADHD (e.g., Huang-Pollock et al., 2012). Further
testament to the utility of SDT comes from the fact that SDT is often discussed in
introductory courses and textbooks (e.g., Wade et al., 2013; Lilienfeld et al., 2015).
Alternatively, a miss represents the probability that the subject reports the signal
absent when it is present and a correct rejection represents the probability that the
subject reports the signal absent when it is absent. All response probabilities are
reflected as a part of the area underneath a normal curve. If the probability of each
response type is therefore known, both the signal and the noise distributions can be
estimated based on simple statistical principles.
Signal detection theory (SDT) is a framework for interpreting data from experiments in
which accuracy is measured. In such experiments, two or more stimulus classes
(signal and noise in a detection experiment, old and new items in a memory task) are
sampled repeatedly, and an observer must select a response corresponding to the
class actually presented. According to SDT, performance in such tasks is limited by
observer sensitivity, which depends on the degree of overlap between the distributions
of a decision variable produced by the stimulus classes.
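Since SDT summarizes performance with a sensitivity index that is separate from response bias, a commonly computed measure is d', the standardized distance between the signal and noise distributions. The following is a minimal sketch assuming the standard equal-variance Gaussian model; the hit and false-alarm counts are invented for illustration and are not taken from any study cited here.

# Equal-variance Gaussian SDT sketch: sensitivity (d') and criterion (c)
# from hit and false-alarm rates. The counts below are invented examples.
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    # Convert counts to rates; the +0.5/+1 correction avoids z(0) and z(1).
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)             # separation of the distributions
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

d, c = sdt_indices(hits=40, misses=10, false_alarms=12, correct_rejections=38)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")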
The meaning of the term subliminal perception has changed over the years, and some
prefer to use perception without awareness as an alternative that avoids the
sometimes contentious issue of limen (threshold). Generally speaking, “Subliminal
perception occurs whenever stimuli presented below the threshold or limen for
awareness are found to influence thoughts, feelings, or actions” (Merikle, 2000).
The term subliminal is derived from the terms sub (below) and limen (threshold), and it
refers to perception so subtle it cannot reach conscious awareness. Most of the
research on subliminal perception is done on visual subliminal perception. For
instance, one can flash words or pictures so quickly on a computer screen (generally
faster than 10-15 milliseconds) that perceivers have the feeling they do not see
anything at all.
While Cheesman and Merikle (1984) have addressed the issue by classifying types of
self reports, this controversy is far from resolved. Reingold and Toth (1996) describe
one of the fundamental issues: . . . factors unrelated to awareness, such as demand
characteristics and preconceived biases, may lead subjects to adopt a conservative
response criterion and report null perceptual awareness even under conditions in
which conscious perceptual information is available.
Response bias represents a threat not only to the validity of the subjective report
measure of awareness, but also to its reliability. In particular, variability in response
criteria makes it difficult to compare reports of null subjective confidence across
subjects, or within subjects across conditions.
In the second phase, subjects had to make comparisons between two octagons under
normal viewing conditions. One octagon had been subliminally presented to the
subject during the exposure phase, and the other octagon was novel. Subjects were
asked both to indicate which octagon they had seen before (recognition) and which
octagon they liked better.
However, subjects were instructed to not use the priming word in order to complete
the partial word. If subjects were aware of the priming word, then they should avoid
using it in the word completion task. However,
if the subjects were not aware of the priming word but had nevertheless perceived it
(subliminal perception), then the priming word might influence their word completion
task.
Debner and Jacoby (1994) found that indeed, priming words presented for very brief
durations – the subliminal stimuli – were much more likely to be used in completing
words than the priming words shown for longer durations. This provides an interesting
new experimental format in contrast to the usual dissociation, since the exclusion
paradigm shows distinct results for consciously versus unconsciously perceived stimuli.
Information Processing
Basic Assumptions
The information processing approach is based on a number of assumptions, including:
(1) information made available by the environment is processed by a series of
processing systems (e.g. attention, perception, short-term memory);
(2) these processing systems transform or alter the information in systematic ways;
(3) the aim of research is to specify the processes and structures that underlie
cognitive performance;
(4) information processing in humans resembles that in computers.
The development of the computer in the 1950s and 1960s had an important influence
on psychology and was, in part, responsible for the cognitive approach becoming the
dominant approach in modern psychology (taking over from Behaviorism).
Critical Evaluation
A number of models of attention within the Information Processing framework have
been proposed including:
Broadbent's Filter Model (1958), Treisman's Attenuation Model (1964), and Deutsch and
Deutsch's Late Selection Model (1963).
However, there are a number of evaluative points to bear in mind when studying these
models, and the information processing approach in general. These include:
2. The analogy between human cognition and computer functioning adopted by the
information processing approach is limited.
Computers can be regarded as information processing systems insofar as they:
(i) combine information presented with stored information to provide solutions to a
variety of problems, and
(ii) have a central processor of limited capacity, and it is usually assumed that similar
capacity limitations affect the human attentional system.
This demonstrates that the rural and urban groups sense the same event differently as
a result of their diverse cultural learnings. The term field dependence refers to the
degree to which perception of an object is influenced by the background or
environment in which it appears.
Some people are less likely than others to separate an object from its surrounding
environment. When adults in Japan and the United States are shown an animated
underwater scene in which one large fish swims among small fish and other marine
life, the Japanese describe the scene as a whole and comment more about the
relationships among the objects in the scene than the Americans do.
PERCEIVING
Culture also has a great effect on the perception process (Tajfel, 1969; Triandis,
1964). Human perception is usually thought of as a three-step process of selection,
organization, and interpretation. Each of these steps is affected by culture.
Selection
The first step in the perception process is selection. Within your physiological
limitations, you are exposed to more stimuli than you could possibly manage. To use
sight as an example, you may feel that you are aware of all stimuli on your retinas, but
most of the data from the retinas are handled on a subconscious level by a variety of
specialized systems. Parts of our brains produce output from the retinas that we
cannot “see.” No amount of introspection can make us aware of those processes.
Perceptual Styles
Perceptual Style is the way you take in information through your five senses and make
that information meaningful to you.
Your Perceptual Style acts as a filter between sensation and understanding. It is at the
core of who you are, and it impacts your values, your beliefs, your feelings, and your
psychology.
Each of us has one of six unique Perceptual Styles that is innate. Our
individual Perceptual Style is literally hard wired and has grown with us as we’ve aged
and developed. The decisions you make, the actions you take, and the directions you
choose, are all influenced by your Perceptual Style. This is because our Perceptual
Style defines our reality.
We often assume that we all perceive and respond to the same objective reality and
that there is one absolute “right.” Research implies that not only is that untrue, but that
perception is actually a filter applied to objective reality, resulting in natural differences
between people.
• Activity: People with the Activity Perceptual Style jump into life with both feet. They
fully engage with the confidence that the details will sort themselves out. Direction,
ideas, and pursuits emerge as the result of constant action and involvement with
others and their surroundings. They engage until some new possibility or interest
emerges to capture their attention. They cultivate extensive networks of friends and
associates.
• Adjustments: People with the Adjustments Perceptual Style see the world as an
objective reality that can be known if they take the time to gather complete information
about its intricacies and complexities. They pursue the acquisition and application of
knowledge as the basis for their life experience. They enjoy sharing their knowledge
with others and gathering new information from research or conversation. They have a
strong sense of diplomacy and project a calm certainty.
• Flow: People with the Flow Perceptual Style are instinctive advocates for the natural
rhythms of life. They see the complex connectivity among seemingly unrelated people,
environments, and situations. They intuitively integrate and harmonize their actions
within a broadly defined community that provides them and others with a sense of
belonging. They honor the continuity between past, present, and future.
• Goals: People with the Goals Perceptual Style stride through life focused on the
accomplishment of specific results and well-defined objectives. They experience a
sense of urgency and clarity of purpose. They believe achievement is primary and
method or process secondary – the end justifies the means. They evaluate all activities
based on possible contribution towards the achievement of the results they expect.
They thrive on competition and believe that life is a constant competition with winners
and losers.
• Methods: People with the Methods Perceptual Style approach life in a practical,
matter-of-fact manner. They focus on how things need to be done. They believe that
ordered processes, properly followed, will produce the desired results. They will
discern the best process or technique to apply to any specific situation in order to
produce reliable, repeatable outcomes. They impose order and they believe that
everyone prefers to use well known and proven methods.
• Vision: People with the Vision Perceptual Style approach life as a singular
experience, a journey toward the future. They face the realities of a situation with
serious intent, an optimistic perspective that a solution will be found, and confidence
that if one is not, there are always other alternatives to explore. They intuitively see
new directions, and actions are taken or dropped opportunistically based on a sense of
future possibilities and potential.
Activity people resist details and analysis. They get easily bored with repetitive and
routine tasks. They are always ready to jump into something new, and they lose
interest in activities that do not deliver attention-grabbing results. They quickly abandon
anything they find boring and will wander off in search of other groups or activities that
need energizing.
Activity people function best in settings that require interaction and allow them to get
involved, share insight, tell stories, provide help, and communicate their perspective to
others.
Adjustments
The actions of Adjustments people reflect their skills in collating, analyzing, and
sharing what they know in useful ways. They create intricate systems for the storage
and retrieval of their knowledge.
The greatest satisfaction for Adjustments people comes from being an information
resource for others rather than only applying the information themselves. As such, they
are good at explaining and describing complex, detailed, or technical information. Their
thoroughness, patience with repetitive tasks, and desire for perfection allow them to
spot where information is missing or fuzzy. They edit the written work of others
effectively. They actively polish and hone their knowledge, their systems, and their
processes to increase elegance and accuracy. They are at their best when given the
time to do things carefully and systematically.
Flow
They create and sustain powerful but subtle relationships that form the glue of a
community (family, friends, workgroup, social group, etc). They move smoothly and
easily between daily events as their awareness emerges and recedes. They attend, in
proper proportion, to events and people that require their attention, trusting that what
needs to be done will be done.
Flow people facilitate the development of an environment that is comfortable, one that
fosters and encourages people. When their environment shifts away from people
centered community, they quietly influence its realignment, putting their personal
needs aside if necessary to bring it back into harmony.
Flow people welcome new events that support their traditions and values. They use
relational communities to gather and transmit informal information, after they have
decided what to pass on and what to withhold. Their information sharing is so subtle
that others experience the contribution Flow people make to create connection within
the community, but are often unaware of its source.
Flow people provide aid and assistance to the members of a community by serving as
a listening post, encouraging development and growth, and empathizing with those
who are struggling.
Goals
Goals people distrust complexity, subtlety, and solutions that evolve slowly over time.
They believe that if a problem needs a solution, there is no time like the present to
solve it.
Goals people approach the world with intense energy and have a high level of
endurance that allows them to push themselves well after others have given up. They
take action with personal intensity and urgency, and they are always anxious to get on
to the next task even before the current one is complete. What needs to be done next
is obvious to them, so they do not understand why others around them do not see and
act on it.
Goals people are very outcome oriented and as such prefer to focus on the
accomplishment of goals on which they can see immediate progress. They have no
loyalty to current processes or methods and will abandon them quickly if progress
towards a solution is slowing down and stagnating. They make high achievement
demands on others but never more than they demand from themselves.
Methods
Methods people follow an ordered set of steps that, when performed in a repeatable,
logical sequence, inevitably ends with the achievement of their objectives. They believe
there is a correct method by which each problem, undertaking, or objective can best be
handled. Discovering and applying this method is what drives them.
Once the desired result has been determined, Methods people do not question it.
Instead, they seek to find the steps that will produce the desired outcome with the most
efficient use of time, money, and energy. They believe that failure of a solution to work
is due to human error in the application of a correctly designed course of action.
Methods people analyze, manipulate, and apply facts. They use a rational application
of facts to make decisions and solve problems, and they are confident that through this
method they will arrive at the correct conclusion.
Vision
Vision people face the realities of a situation with serious intent, an optimistic
perspective that a solution will be found, and confidence that if one is not, there are
always other alternatives to explore.
Vision people intuitively see new directions that others do not and make the most of
this advantage by moving decisively. This ability to intuit new, useful directions and to
take swift advantage of opportunities as they arise, gives them a strategic edge over
others.
Vision people are unafraid of taking risks and accept that the possibility of high rewards
carries with it an equal possibility of failure. However, they view failure as only a
temporary setback. They love to play with, explore, and develop new ideas, and they
examine all aspects, possible outcomes, and consequences without preconception or
judgment.
Pattern Recognition
Pattern recognition is the process of recognizing patterns by using machine learning
algorithms. Pattern recognition can be defined as the classification of data based on
knowledge already gained or on statistical information extracted from patterns and/or
their representation. One of the important aspects of pattern recognition is its
application potential.
Example: consider a face; the eyes, ears, nose, etc. are features of the face.
A set of features taken together forms a feature vector.
Example: In the face example above, if all the features (eyes, ears, nose, etc.) are taken
together, the resulting sequence is the feature vector [eyes, ears, nose]. A feature vector
is a sequence of features represented as a d-dimensional column vector. In the case of
speech, MFCCs (Mel-Frequency Cepstral Coefficients) are the spectral features of the
speech signal, and the sequence of the first 13 coefficients forms a feature vector.
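As a small illustration of the idea (the measurements below are invented, not standard facial features used by any particular system), a d-dimensional feature vector can be written as a column vector, for example with NumPy:

# Illustrative only: three invented "face" measurements stacked into a
# d-dimensional (here 3-dimensional) column feature vector.
import numpy as np

eye_spacing_cm = 6.2   # invented value
ear_height_cm = 5.8    # invented value
nose_length_cm = 4.9   # invented value

feature_vector = np.array([[eye_spacing_cm],
                           [ear_height_cm],
                           [nose_length_cm]])

print(feature_vector.shape)  # (3, 1): a column vector of d = 3 features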
Patterns are everywhere in the digital world. A pattern can either be seen
physically or observed mathematically by applying algorithms.
Examples: the colours of clothes, speech patterns, etc. In computer science, a
pattern is represented using feature vector values.
Pattern recognition possesses the following features:
A pattern recognition system should recognise familiar patterns quickly and accurately
Recognise and classify unfamiliar objects
Accurately recognise shapes and objects from different angles
Identify patterns and objects even when partly hidden
Recognise patterns quickly, with ease, and with automaticity
Training
The training set is used to build a model. It consists of the set of images which are used to
train the system. The training rules and algorithms used give relevant information on how
to associate input data with output decisions. The system is trained by applying these
algorithms to the dataset; all the relevant information is extracted from the data and
results are obtained. Generally, 80% of the data in the dataset is used as training
data.
Testing
Testing data is used to test the system. It is the set of data which is used to verify
whether the system produces the correct output after being trained.
Generally, 20% of the data in the dataset is used for testing. Testing data is used to
measure the accuracy of the system. Example: if a system which identifies which
category a particular flower belongs to classifies seven flowers out of ten
correctly and the rest incorrectly, then its accuracy is 70%.
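As a rough sketch of the 80/20 split and the accuracy measurement described above, the example below uses Python with scikit-learn and its bundled iris flower dataset (a stand-in, not a dataset mentioned in this text); the classifier choice is arbitrary.

# Sketch of an 80/20 train/test split and accuracy measurement for a simple
# flower-classification task, using scikit-learn's bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# 80% of the data trains the model; the remaining 20% tests it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
predictions = model.predict(X_test)

# Accuracy = proportion of test items classified correctly
# (e.g., 7 correct out of 10 would be 70%).
print(f"Accuracy: {accuracy_score(y_test, predictions):.0%}")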
Advantages:
Pattern recognition solves classification problems
Pattern recognition solves the problem of fake biometric detection.
It is useful for clothing pattern recognition for visually impaired people.
It helps in speaker diarization.
Particular objects can be recognised from different angles.
Disadvantages:
The syntactic pattern recognition approach is complex to implement and is a very
slow process.
Sometimes, to get better accuracy, a larger dataset is required.
Applications:
Image processing, segmentation and analysis
Pattern recognition is used to give human recognition intelligence to machines,
which is required in image processing.
Computer vision
Speech recognition
The greatest success in speech recognition has been obtained using pattern
recognition paradigms. It is used in various speech recognition algorithms
which try to avoid the problems of using a phoneme-level description and
treat larger units, such as words, as patterns.
Fingerprint identification
None of these tests, as it turned out, were able to predict how well a student pilot
would perform. The traditional theory of depth perception was not working; it failed to
apply where it should have. Gibson puzzled over this and came to realize that the
traditional theory of depth perception was wrong.
Helmholtz (1866, in F. H. Allport, 1955) had struggled with the fact that visual
perception of three dimensions was based upon a two-dimensional structure, the
retina (the retina is flat and visual sensations are without depth; Gibson, 1966). It was
not possible, given that barrier, to perceive the three dimensions immediately.
Helmholtz proposed that cues, which were signs of distance, provided the basis for
making unconscious inferences regarding size and distance (Hilgard, 1987).
Based upon his research, Gibson (1979/1986) began to suspect that the traditional list
of depth cues was simply not sufficient. Pondering the situation, he theorized that light
itself provided information, and that the changes taking place in the available light as
the observer moves were themselves informative.
The relations within this system are reciprocal, with the reciprocity including a species
evolving in an environment to which it becomes adapted, and an individual acting in its
own niche, developing and learning. (Gibson and Pick, 2000, p. 14) In this reciprocal
interaction the environment makes available resources, opportunities and information
for action.
Actions themselves result in feedback (more information) that can lead to alterations in
action. When chasing down prey, for instance, if it begins to pull away from one, speed
can either be increased to compensate and overtake, or the chase broken off if that is
not possible. In progressing toward some end, whatever that may be, one can
continuously monitor one’s progress and make adjustments as required.
This was not a position, however, that Gibson arrived at easily or without a great deal
of thought and experimentation. In order to get to that point Gibson, who had originally
been aligned with the constructionists, had to realize that problems existed for that
perspective.
Gibson’s discontent was not simply with the stimulus materials and experimental
methods of the constructionists. With Helmholtz the place of commencement for the
study of vision was at the retina, with sense impressions and receptor reactions. Up to
1950 this had been Gibson’s focus, i.e., the retinal image as the stimulus for the eye
(Gibson, 1966). With a change in perspective, he proposed that Newton had misled us
when he suggested that light rays painted a picture of the visible object on the back of
the eye.
The retinal image, contrary to this, is not a picture. That is misleading since it suggests
something looked at. The retinal image is a scintillation—a flash or a trace—because
the retina jerks about (saccadic movement) and it has a gap called the blind spot
where the optic nerve leaves the eye. It was a further misconception,
argued Gibson, to think that a retinal sensory pattern can be impressed on the brain
neural tissue since the neural pattern never existed in the retinal mosaic. A further
reason for discounting the retina as the basis for visual perception was what was found
through cross-species comparisons.
The visual organs of octopi, rabbits, bees, spiders, flies, and humans differ widely but
all suggest visual perception of those conditions in the environment that are essential
to surviving.
Attention
Paying attention is the first step in learning anything. It is easy for most of us to pay
attention to things that are interesting or exciting to us. It is difficult for most of us to
pay attention to things that are not. When something is not interesting to us, it is easier
to become distracted, to move to a more stimulating topic or activity, or to tune out.
The teacher’s job is to construct lessons that connect to the learner. Relating what is to
be taught to the students’ lives can accomplish this. Relate Romeo and Juliet, for
example, to the realities in our communities of prejudice, unfounded hatred and gang
wars. Or relate today’s discrimination to The Diary of Anne Frank, and hold class
discussions of discrimination that students have personally experienced or witnessed.
Physical movement can help to “wake up” a mind. When a student shows signs of
inattentiveness and/or restlessness, teachers can provide the student with
opportunities to move around. Many students with attention challenges actually need to
move in order to remain alert. It is wise to find acceptable, non-destructive ways for
these students to be active. Responsibilities such as erasing the board, taking a
message to the office, and collecting papers can offer appropriate outlets for activity.
Memory
Memory is the complex process that uses three systems to help a person receive, use,
store, and retrieve information. The three memory systems are (1) short-term memory
(e.g., remembering a phone number you got from information just long enough to dial
it), (2) working memory (e.g., keeping the necessary information “files” out on the
mind’s “desktop” while performing a task such as writing a paragraph or working a long
division problem), and (3) long-term memory (a mind’s ever expanding file cabinet for
important information we want to retrieve over time).
Students who have difficulty with both short-term and working memory may need
directions repeated to them. Giving directions both orally and in written form, and giving
examples of what is expected will help all students. All students will benefit from self-
testing. Students should be asked to identify the important information, formulate test
questions and then answer them. This tactic is also effective in cooperative learning
groups and has been shown by evidence-based research to increase reading
comprehension (NICHD, 2000).
Language
Language is the primary means by which we give and receive information in school.
The two language processing systems are expressive and receptive. We use
expressive language when we speak and write, and we use receptive language when
we read and listen. Students with good language processing skills usually do well in
school. Problems with language, on the other hand, can affect a student’s ability to
communicate effectively, understand and store verbal and written information,
understand what others say, and maintain relationships with others.
All students will benefit from systematic, cumulative, and explicit teaching of reading
and writing.
Students who have receptive language challenges such as a slower processing speed
must use a lot of mental energy to listen, and, therefore, may tire easily. Consequently,
short, highly structured lectures or group discussion times should be balanced with
frequent breaks or quiet periods. Oral instructions may also need to be repeated and/or
provided in written form.
Broadening the way we communicate information in the classroom can connect all
students more to the topic at hand, and especially students with language challenges.
Using visual communication such as pictures and videos to reinforce verbal
communication is helpful to all students, and especially to students with receptive
language challenges. Challenge students to invent ways to communicate with pictures
and other visuals, drama, sculpture, dance and music, and watch memory of key
concepts increase and classrooms come alive.
Organization
We process and organize information in two main ways: simultaneous (spatial)
and successive (sequential). Simultaneous processing is the process we use to order
or organize information in space. Having a good sense of direction and being able to
“see” how puzzle pieces fit together are two examples of simultaneous processing.
Successive processing, by contrast, is the process we use to order or organize
information in sequence. Students who are strong in successive processing rarely have
trouble with time management and usually find it easy to organize an essay in a
sequence that is logical.
Graphomotor
The writing process requires neural, visual, and muscular coordination to produce
written work. It is not an act of will but rather an act of coordination among those
functions. Often the student who seems unmotivated to complete written work is the
student whose writing coordination is klutzy. We have long accepted that students may
fall on a continuum from very athletic to clumsy when it comes to sports, but we have
not known until recently that some students are writing “athletes” while others writing
klutzes. Just as practice, practice, practice will not make a football all-star out of an
absolute klutz, practice and acts of will will not make a writing all-star out of someone
whose neurological wiring does not allow her to be a high performing graphomotor
athlete.
Students with handwriting difficulties may benefit from the opportunity to provide oral
answers to exercises, quizzes, and tests. Having computers in place for all children
helps level the playing field for the graphomotor klutz. Parents and teachers should be
aware, however, that many children with graphomotor challenges may also have
difficulty with the quick muscular coordination required by the keyboard.
Higher Order Thinking
Higher order thinking (HOT) is more than memorizing facts or relating information in
exactly the same words as the teacher or book expresses it. Higher order thinking
requires that we do something with the facts. We must understand and manipulate the
information.
HOT includes concept formation; concept connection; problem solving; grasping the
“big picture”; visualizing; creativity; questioning; inferring; creative, analytical and
practical thinking; and metacognition. Metacognition is thinking about thinking, knowing
about knowing, and knowing how you think, process information, and learn.
A person with metacognition also monitors and regulates how he learns. He can take a
task and decide how best to accomplish it by using his strategies and skills effectively.
He knows how he would best learn a new math procedure and which strategies he
would use to understand and remember a science concept. He understands the best
way for him to organize an essay – whether he would be more successful by using an
outline, a graphic organizer or a mind map. He has mental self-management.
Psychologist Robert Sternberg lists six components of mental self-management:
Teaching students about the six components of the learning process – attention,
memory, language, processing and organizing, graphomotor (writing) and higher order
thinking, then, demystifies learning and provides an opportunity to increase their
metacognition. It also enhances their sense of self-worth. A student who understands
that she may need to use a particular strategy to help her working memory function
better or that taking frequent breaks will help her stay more focused on her homework
assignments is much better off than thinking that she is stupid or lazy.
Emotions
Emotions control the on-off switch to learning. When we are relaxed and calm, our
learning processes have a green light. When we are uptight, anxious, or afraid, our
learning processes have a red light. In the classroom, tension slams the steel door of
the mind shut. Creating a non-threatening classroom environment or climate where
mistakes are welcomed as learning opportunities reduces tension, opens the mind and
increases the opportunity for learning.
The more teachers know about how learning takes place – how information is
processed, manipulated and created, the more we will know about what it looks like
when it’s working and what it looks like when it starts to break down. Then, rather than
thinking a student isn’t motivated, teachers will look to see if it is attention, memory,
language, organizing, graphomotor or higher order thinking that needs an intervention.
Motivation
It is every teacher’s job to motivate every student. Learning more about the brain and
the development of the mind, studying new information on learning, making learning
meaningful and learning about learning, watching the learning process, monitoring
closely for breakdowns, and celebrating the successes of every student – these are
our challenges as we create schools that honor diversity – the schools all children
deserve.
Edward Thorndike put forward a “Law of effect” which stated that any behavior that is
followed by pleasant consequences is likely to be repeated, and any behavior followed
by unpleasant consequences is likely to be stopped.
Critical Evaluation
Thorndike (1905) introduced the concept of reinforcement and was the first to apply
psychological principles to the area of learning.
His research led to many theories and laws of learning, such as operant conditioning.
Skinner (1938), like Thorndike, put animals in boxes and observed them to see what
they were able to learn.
The learning theories of Thorndike and Pavlov were later synthesized by Hull (1935).
Thorndike's research drove comparative psychology for fifty years, influenced countless
psychologists over that period, and still does today.
The theory suggests that transfer of learning depends upon the presence of identical
elements in the original and new learning situations; i.e., transfer is always specific,
never general. In later versions of the theory, the concept of “belongingness” was
introduced; connections are more readily established if the person perceives that
stimuli or responses go together (cf. Gestalt principles). Another concept introduced
was “polarity” which specifies that connections occur more easily in the direction in
which they were originally formed than the opposite. Thorndike also introduced the
“spread of effect” idea, i.e., rewards affect not only the connection that produced them
but temporally adjacent connections as well.
Application
Connectionism was meant to be a general theory of learning for animals and humans.
Thorndike was especially interested in the application of his theory to education
including mathematics (Thorndike, 1922), spelling and reading (Thorndike, 1921),
measurement of intelligence (Thorndike et al., 1927) and adult learning (Thorndike et
al., 1928).
Example
The classic example of Thorndike’s S-R theory was a cat learning to escape from a
“puzzle box” by pressing a lever inside the box. After much trial and error behavior, the
cat learns to associate pressing the lever (S) with opening the door (R). This S-R
connection is established because it results in a satisfying state of affairs (escape from
the box). The law of exercise specifies that the connection was established because
the S-R pairing occurred many times (the law of exercise) and was rewarded (law of
effect), as well as forming a single sequence (law of readiness).
The association, however, will only occur if the stimulus and response occur soon
enough one after another (the contiguity law). The association is established on
the first experienced instance of the stimulus (one-trial learning). Repetitions
or reinforcements in terms of reward or punishment do not influence the strength of
this connection. Still, every stimulus is a bit different, which results in many trials being
needed to form a general response. This was, according to Guthrie, the only type of
learning, identifying him not as a reinforcement theorist but as a contiguity theorist.
More complex behaviors are composed of a series of movements (habits), where each
movement is a small stimulus-response combination. These movements, rather than
whole behaviors, are what is actually being learned in each instance of one-trial
learning. Learning a number of movements forms an act (incremental learning).
Unsuccessful acts do not remain learned because they are replaced by later,
successfully learned acts.
Other researchers like John Watson studied whole acts just because it was easier, but
movements are, according to Guthrie, what should actually be studied.
Forgetting occurs not due to the passage of time, but due to interference. As time
passes, a stimulus can become associated with new responses. Three different methods
can help in forgetting an undesirable old habit and replacing it:
Fatigue method - using numerous repetitions, an animal becomes so fatigued
that it is unable to reproduce the old response, and introduces a new response
(or simply doesn't react).
Threshold method - first, a very mild version of the stimulus below the threshold
level is introduced. Its intensity is then slowly increased until the full stimulus can
be tolerated without causing the undesirable response.
Incompatible stimuli method - the response is “unlearned” by placing the
animal in a situation where it cannot exhibit the undesirable response.
Although it was intended to be a general theory of learning, Guthrie's theory was tested
mostly on animals.
What is the practical meaning of contiguity theory and one trial learning?
In Guthrie's own words, “we learn only what we ourselves do”.
Learning must be active, but as such must involve both teacher's and students'
activity in order to relate stimulus with a response within a time limit. Guthrie also
applied his ideas to treatment of personality disorders.
Criticisms
Guthrie's theory was at first preferred for its simplicity, but it was later criticized for the
same reason: its simplicity came to be seen as incompleteness. It was also based on
too little experimental data and was criticized for being unable to explain why people
often behave differently in the same situation.
Application
Contiguity theory is intended to be a general theory of learning, although most of the
research supporting the theory was done with animals. Guthrie did apply his framework
to personality disorders (e.g. Guthrie, 1938).
Example
The classic experimental paradigm for Contiguity theory is cats learning to escape from
a puzzle box (Guthrie & Horton, 1946). Guthrie used a glass paneled box that allowed
him to photograph the exact movements of cats. These photographs showed that cats
learned to repeat the same sequence of movements associated with the preceding
escape from the box. Improvement comes about because irrelevant movements are
unlearned or not included in successive associations.
Principles
1. In order for conditioning to occur, the organism must actively respond (i.e., do
things).
2. Since learning involves the conditioning of specific movements, instruction must
present very specific tasks.
3. Exposure to many variations in stimulus patterns is desirable in order to produce
a generalized response.
4. The last response in a learning situation should be correct since it is the one that
will be associated.
DEFINITION
Gardner Murphy (1968)
Hull viewed the drive as a stimulus, arising from a tissue need, which in turn stimulates
behavior. The strength of the drive is determined by the length of deprivation, or by
the intensity/strength of the resulting behavior. He believed the drive to be non-
specific, which means that the drive does not direct behavior; rather, it functions to
energize it. In addition, this drive reduction is the reinforcement.
Hull's learning theory focuses mainly on the principle of reinforcement; when an S-R
relationship is followed by a reduction of the need, the probability increases that in
future similar situations the same stimulus will create the same prior response.
Reinforcement can be defined in terms of reduction of a primary need. Just as Hull
believed that there were secondary drives, he also felt that there were secondary
reinforcements - “If the intensity of the stimulus is reduced as the result of a secondary
or learned drive, it will act as a secondary reinforcement" (Schultz & Schultz, 1987, p.
241). The way to strengthen the S-R response, that is, habit strength, is to increase
the number of reinforcements.
REASONS
1. Drive stimuli for thirst include dryness in the mouth and parched lips. Water
almost immediately reduces such stimulation; thus Hull had the mechanism he needed
for explaining learning.
2. It was provided by Sheffield and Roby (1950), who found that hungry rats were
reinforced by non-nutritive saccharine, which could not possibly have reduced the
hunger drive.
following a change in reinforcement size is referred to as the Crespi effect, after the
man who first observed it.
Stimulus-Intensity Dynamism
According to Hull, Stimulus-Intensity Dynamism (V) is an intervening variable that
varies along with the intensity of the external stimulus (S). Stated simply, Stimulus-
Intensity Dynamism indicates that the greater the intensity of a stimulus, the greater
the probability that a learned response will be elicited. Thus we must revise Hull's
earlier formula as follows:
sEr = (sHr x D x K x V) - (sIr + Ir) - sOr
It is interesting to note that because sHr, D, K, and V are multiplied together, if any
one had a value of zero, reaction potential would be zero. For example, there could
have been many pairings between S and R (sHr), but if drive is zero, reinforcement is
zero, or the organism cannot detect the stimulus, a learned response will not occur.
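A minimal sketch of this multiplicative structure, assuming arbitrary illustrative values rather than anything from Hull's own quantitative work:

```python
def reaction_potential(habit_strength, drive, incentive, stimulus_intensity,
                       conditioned_inhibition=0.0, reactive_inhibition=0.0,
                       oscillation=0.0):
    """Sketch of sEr = (sHr x D x K x V) - (sIr + Ir) - sOr with made-up values."""
    excitatory = habit_strength * drive * incentive * stimulus_intensity
    return excitatory - (conditioned_inhibition + reactive_inhibition) - oscillation

# Many S-R pairings (high sHr) but zero drive: reaction potential collapses to zero.
print(round(reaction_potential(habit_strength=0.9, drive=0.0, incentive=0.5, stimulus_intensity=0.8), 3))  # 0.0
print(round(reaction_potential(habit_strength=0.9, drive=0.6, incentive=0.5, stimulus_intensity=0.8), 3))  # 0.216
```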
Hull’s final system summarized
There are three kinds of variables in Hull's theory:
1. Independent variables – stimulus events systematically manipulated by
the experimenter.
2. Intervening variables – processes thought to be taking place within the
organism but not directly observable.
3. Dependent variables – some aspect of behavior that is measured
by the experimenter in order to determine whether the independent variables had any
effect.
EDUCATIONAL IMPLICATION
The development of curriculum
In this connection, Hull emphasized the importance of needs in the learning process;
accordingly, the needs of all categories of children should be incorporated into the
curriculum. Learning becomes meaningful only when it satisfies the needs of children.
Knowing the actual needs of the students by teachers and parents
Hull felt that teachers and parents should also share the responsibility of identifying
the actual needs of the student through various means; proper guidance is a must,
suited to their attitudes and aptitudes.
From this line of reasoning, it follows that encouraging some anxiety in students that
could subsequently be reduced by success is a necessary condition for classroom
learning. Too little anxiety results in no learning (because there is no drive to be
reduced), and too much anxiety is disruptive. Therefore, students who are mildly
anxious are in the best position to learn and are therefore easiest to teach.
Hull’s system of learning advocated the following chain sequence for improved results
in the teaching-learning process:
Application
Hull’s theory is meant to be a general theory of learning. Most of the research
underlying the theory was done with animals, except for Hull et al. (1940) which
focused on verbal learning. Miller & Dollard (1941) represents an attempt to apply the
theory to a broader range of learning phenomena. As an interesting aside, Hull began
his career researching hypnosis – an area that landed him in some controversy at Yale
(Hull, 1933).
Example
Here is an example described by Miller & Dollard (1941): a six-year-old girl who is
hungry and wants candy is told that there is candy hidden under one of the books in a
bookcase. The girl begins to pull out books in a random manner until she finally finds
the correct book (210 seconds). She is sent out of the room and a new piece of candy
is hidden under the same book. In her next search, she is much more directed and
finds the candy in 86 seconds. By the ninth repetition of this experiment, the girl finds
the candy immediately (2 seconds). The girl exhibited a drive for the candy and looking
under books represented her responses to reduce this drive. When she eventually
found the correct book, this particular response was rewarded, forming a habit. On
subsequent trials, the strength of this habit was increased until it became a single
stimulus-response connection in this setting.
Principles
1. Drive is essential in order for responses to occur (i.e., the student must want to
learn).
2. Stimuli and responses must be detected by the organism in order for conditioning
to occur (i.e., the student must be attentive).
3. A response must be made in order for conditioning to occur (i.e., the student must
be active).
4. Conditioning only occurs if the reinforcement satisfies a need (i.e., the learning
must satisfy the learner's wants).
This stage also involves another stimulus which has no effect on a person and is called
the neutral stimulus (NS). The NS could be a person, object, place, etc.
The neutral stimulus in classical conditioning does not produce a response until it is
paired with the unconditioned stimulus.
During Conditioning:
During this stage, a stimulus which produces no response (i.e., neutral) is associated
with the unconditioned stimulus at which point it now becomes known as the
conditioned stimulus (CS).
For example, a stomach virus (UCS) might be associated with eating a certain food
such as chocolate (CS). Also, perfume (UCS) might be associated with a specific
person (CS).
For classical conditioning to be effective, the conditioned stimulus should occur before
the unconditioned stimulus, rather than after it, or during the same time. Thus, the
conditioned stimulus acts as a type of signal or cue for the unconditioned stimulus.
After Conditioning:
Now the conditioned stimulus (CS) has been associated with the unconditioned
stimulus (UCS) to create a new conditioned response (CR).
For example, a person (CS) who has been associated with nice perfume (UCS) is now
found attractive (CR). Also, chocolate (CS) which was eaten before a person was sick
with a virus (UCS) now produces a response of nausea (CR).
For example, if a student is bullied at school they may learn to associate the school
with fear. It could also explain why some students show a particular dislike of certain
subjects that continue throughout their academic career. This could happen if a student
is humiliated or punished in class by a teacher.
Critical Evaluation
Pavlov noticed that his dogs began to salivate whenever he entered the room—even if
he had no food. The dogs were associating his entrance
into the room with being fed. This led Pavlov to design a series of experiments in which
he used various sound objects, such as a buzzer, to condition the salivation response
in dogs.
He started by sounding a buzzer each time food was given to the dogs and found that
the dogs would start salivating immediately after hearing the buzzer—
even before seeing the food. After a period of time, Pavlov began sounding the buzzer
without giving any food at all and found that the dogs continued to salivate at the sound
of the buzzer even in the absence of food. They had learned to associate the sound of
the buzzer with being fed.
If we look at Pavlov’s experiment, we can identify the four factors of classical
conditioning at work:
The unconditioned response was the dogs’ natural salivation in response to
seeing or smelling their food.
The unconditioned stimulus was the sight or smell of the food itself.
The conditioned stimulus was the ringing of the bell, which previously had no
association with food.
The conditioned response, therefore, was the salivation of the dogs in response
to the ringing of the bell, even when no food was present.
Pavlov had successfully associated an unconditioned response (natural salivation in
response to food) with a conditioned stimulus (a buzzer), eventually creating
a conditioned response (salivation in response to a buzzer). With these results, Pavlov
established his theory of classical conditioning.
However, because these pathways are being activated at the same time as the other
neural pathways, there are weak synapse reactions that occur between the auditory
stimulus and the behavioral response. Over time, these synapses are strengthened so
that it only takes the sound of a buzzer (or a bell) to activate the pathway leading to
salivation.
Issues
Classical conditioning is one of those introductory psychology terms that gets thrown
around. Many people have a general idea that it is one of the most basic forms of
associative learning, and people often know that Ivan Pavlov's 1927 experiment with
dogs has something to do with it, but that is often where it ends.
Also, it means that the response you hope to elicit must occur below the level of
conscious awareness - for example, salivation, nausea, increased or decreased
heartrate, pupil dilation or constriction, or even a reflexive motor response (such as
recoiling from a painful stimulus). In other words, these sorts of responses
are involuntary.
The basic classical conditioning procedure goes like this: a neutral stimulus is paired
with an unconditional stimulus (UCS). The neutral stimulus can be anything, as long as
it does not provoke any sort of response in the organism. On the other hand, the
unconditional stimulus is something that reliably results in a natural response. For
example, if you shine a light into a human eye, the pupil will automatically constrict
(you can actually see this happen if you watch your eyes in a mirror as you turn on and
off a light). Pavlov called this the "unconditional response" (UCR).
To make this a bit more concrete, we'll use Pavlov's dogs as an example. Before
learning took place, the dogs would reliably salivate (UCR) when given meat powder
(UCS), but they gave no response to the ringing of a bell (neutral). Then Pavlov would
always ring a bell just before he would present the dogs with some meat powder.
Pretty soon, the dogs began to associate the sound of the bell with the impending
presence of meat powder. As a result, they would begin to salivate (CR) as soon as
they heard the bell (CS), even if it was not immediately followed by the meat powder
(UCS). In other words, they learned that the bell was a reliable predictor of meat
powder. In this way, Pavlov was able to elicit an involuntary, automatic, reflexive
response to a previously neutral stimulus.
Classical conditioning can help us understand how some forms of addiction, or drug
dependence, work. For example, the repeated use of a drug could cause the body to
compensate for it, in an effort to counterbalance the effects of the drug. This causes
the user to require more of the substance in order to get the equivalent effect (this is
called tolerance).
However, the development of tolerance also takes into account other environmental
variables (the conditional variables) - this is called the situational specificity of
tolerance. For example, alcohol tends to taste a certain way, and when alcohol is
consumed in the usual way, the body responds in an effort to counteract the effect.
But, if the alcohol is delivered in a novel way (such as in Four Loko), the individual
could overdose. This effect has also been observed among those who have become
tolerant to otherwise lethal amounts of opiates: they may experience an overdose if
they take their typical dose in an atypical setting. These results have been found in
species ranging from rats and mice to humans.
In these examples, it's the environmental context (conditional stimuli) that prompts the
body to prepare for the drug (the conditional response). But if the conditional stimuli
are absent, the body is not able to adequately prepare itself for the drug, and bad
things could happen.
Another example of classical conditioning is known as the appetizer effect. If there are
otherwise neutral stimuli that consistently predict a meal, they could cause people to
become hungry, because those stimuli induce involuntary changes in the body, as a
preparation for digestion. There's a reason it's called the "dinner bell," after all.
Classical conditioning is also being used in wildlife conservation efforts! At
Extinction Countdown, John Platt pointed out that taste aversion, which is a
form of classical conditioning, is being used to keep lions from preying on cattle. This
should, in turn, prevent farmers from killing the lions.
They appear to differ in the sense that classical conditioning generally involves the
presence of reflex actions, whereas instrumental conditioning generally involves
modifications of voluntary behavior contingent on presence
of reinforcers or punishers. Whether that is a sufficient reason to distinguish them is
arguable, as we will see later. My sense of the field today is that most theorists would
like to see similar theories explain the results in both. Thus, it will not surprise you, for
example, that a modified version of the Rescorla-Wagner model has also been
proposed for instrumental conditioning.
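For readers who want to see what such a model looks like, here is a minimal sketch of the classical-conditioning form of the Rescorla-Wagner update rule for a single CS (the learning-rate and asymptote values are arbitrary, chosen only for illustration):

```python
def rescorla_wagner(trials, alpha=0.3, lam_present=1.0, lam_absent=0.0):
    """Track associative strength V of a single CS across trials.

    trials: sequence of booleans, True if the US follows the CS on that trial.
    alpha: learning-rate parameter (stands in for CS salience and US strength).
    """
    v = 0.0
    history = []
    for us_present in trials:
        lam = lam_present if us_present else lam_absent
        v += alpha * (lam - v)   # update is proportional to the prediction error
        history.append(round(v, 3))
    return history

# 10 acquisition trials (CS paired with US) followed by 10 extinction trials (CS alone):
print(rescorla_wagner([True] * 10 + [False] * 10))
```

The associative strength rises toward the asymptote during acquisition and decays back toward zero during extinction, which mirrors the pairing and discontinuation logic described above.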
For now, we can talk about instrumental conditioning as the type of learning involved in
navigating a maze, choosing the correct one of several doors to run to, or even
performing some response that will be successful in avoiding a future shock. In
instrumental conditioning, new responses may be taught that differ from any reflexive
response already in the animal's behavioral repertoire.
Basic Paradigms
We have already introduced two general paradigms
involving acquisition and extinction. In acquisition, an outcome is typically paired with
making a response in the presence of a stimulus; in extinction, that pairing typically
ceases. Within this broad framework (particularly with respect to acquisition), we may
distinguish several additional paradigms.
In appetitive or approach learning, the animal makes a response that results in
a desired reward. This is the type of learning involving reinforcement that we have
implicitly and explicitly discussed so far. But it is not the only paradigm based on
reinforcement. Another that deserves particular note is omission training, in which an
animal has to suppress or withhold a response in order to get its reward. Sheffield, for
example, trained dogs to salivate in the presence of a tone associated with food, and
then shifted them to omission training. In this latter phase, the dogs had to avoid
salivating to the tone for several seconds to get the food. Omission training is typically
difficult at first and displays a relatively slow learning curve. However, there are
several studies suggesting that in the long run, it will be as effective as extinction in
decreasing the frequency of a response. Omission training is sometimes referred to
as negative punishment to indicate that making the response is associated with
removal of a reinforcer (which thus acts as a punishment).
Such a procedure will obviously be inefficient. In some cases (such as a pig rolling a
coin), the wait may be very long indeed! Hence, a technology has developed that
involves increasing the probability of having the animal emit that response so that we
can then train it further through reinforcement. This technology, called shaping,
requires reinforcing successive approximations to the desired response.
We await some response yet closer to what we want to train (such as being near the
bar), and when that occurs we reintroduce the reinforcer. And then, of course, we cycle
the process through again in order to obtain yet a closer approximation (such as
touching the bar). Shaping is a very powerful technique, not only because of its ability
to 'coax' low frequency responses out of an animal, but also -- and especially --
because of its ability to mold a response that is not normally part of the animal's
repertoire! Thus, by combining shaping and chaining, instrumental conditioning allows
us to train totally new responses, rather than just transfer stimulus control of an old
response to a new stimulus.
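As a toy illustration of reinforcing successive approximations (all numbers are invented; this is not a model of any particular experiment), the criterion for reinforcement can be tightened step by step until only the target response earns the reward:

```python
import random

random.seed(1)

# Toy shaping loop: the "response" is the animal's distance from the bar (0 = touching it).
# Any response closer than the current criterion is reinforced, then the criterion tightens,
# so only successively closer approximations to the target keep earning the reward.
criterion = 10.0          # start by reinforcing anything vaguely in the right direction
typical_distance = 10.0   # reinforced behaviour becomes the animal's new typical behaviour

for step in range(6):
    response = abs(random.gauss(typical_distance, 2.0))
    if response <= criterion:
        print(f"step {step}: distance {response:.1f} <= criterion {criterion:.1f} -> reinforce")
        typical_distance = response            # the reinforced approximation recurs more often
        criterion = max(1.0, criterion * 0.7)  # demand a closer approximation next time
    else:
        print(f"step {step}: distance {response:.1f} > criterion {criterion:.1f} -> no reward")
```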
He called this phenomenon “reinforcement” that occurs due to the animal’s learning
that there is a “contingent” relationship between doing that behavior (e.g., pecking or
pushing a lever in the cage) and receiving a desired outcome (the reinforcer—for
example, a pellet of food). Behavior that might previously have been rare or sporadic
can become regular and frequent if the contingency of reinforcement is set up to teach
the animal that the behavior seems to produce the reinforcer.
It’s perhaps inevitable, then, that many of the central phenomena of instrumental
learning parallel those of classical conditioning. For example, in classical conditioning,
learning trials typically involve the presentation of a CS followed by a US. In
instrumental conditioning, learning trials typically involve a response by the organism
followed by a reward or reinforcer. The reinforcement often involves the presentation
of something good, such as grain to a hungry pigeon. Alternatively, reinforcement may
involve the termination or prevention of something bad, such as the cessation of a loud
noise.
In both forms of conditioning, the more such pairings there are, the stronger the
learning. And if we discontinue these pairings so that the CS is no longer followed by
the US or the response by a reinforcer, the result is extinction.
GENERALIZATION AND DISCRIMINATION
An instrumental response is not directly triggered by an external stimulus, the way a
CR or UR is. But that doesn’t mean external stimuli have no role here. In instrumental
conditioning, external events serve as discriminative stimuli, signaling for an animal
what sorts of behaviors will be rewarded in a given situation. For example, suppose a
pigeon is trained to hop onto a platform to get some grain. When a green light is on,
hopping on the platform pays off. But when a red light is on, hopping gains no reward.
Under these circumstances, the green light becomes a positive discriminative stimulus
and the red light a negative one (usually labeled S+ and S–, respectively). The pigeon
swiftly learns this pattern and so will hop in the presence of the first and not in the
presence of the second.
Other examples are easy to find. A child learns that pinching her sister leads to
punishment when her parents are on the scene but may have no consequences otherwise.
In this situation, the child may learn to behave well in the presence of the S+ (i.e., when
her parents are there) but not in other circumstances. A hypochondriac may learn that
loud groans will garner sympathy and support from others but may bring no benefits
when others are not around. As a result, he may learn to groan in social settings but
not when alone.
BEHAVIOR MOTIVATION
Once we’ve identified a stimulus as a reinforcer, what determines how effective the
reinforcer will be? We know that some reinforcers are more powerful than others—and
so an animal will respond more strongly for a large reward than for a small one.
However, what counts as large or small depends on the context. If a rat is used to
getting 60 food pellets for a response, then 16 pellets will seem measly and the animal will
respond only weakly for this puny reward. But if a rat is used to getting only 4 pellets
for a response, then 16 pellets will seem like a feast and the rat’s response will be fast
and strong (for the classic demonstration of this point, see Crespi, 1942). Thus, the
effectiveness of a reinforcer depends largely on what other rewards are available (or
have recently been available); this effect is known as behavioral contrast.
Contrast effects are important for their own sake, but they may also help explain
another (somewhat controversial) group of findings. In one study, for example, nursery-
school children were given an opportunity to draw pictures. The children seemed to
enjoy this activity and produced a steady stream of drawings. The experimenters then
changed the situation: They introduced an additional reward so that the children now
earned an attractive “Good Player” certificate for producing their pictures. Then, later
on, the children were again given the opportunity to draw pictures—but this time with
no provision for “Good Player” rewards. Remarkably, these children showed
considerably less interest in drawing than they had at the start and chose instead to
spend their time on other activities (see, for example, Lepper, Greene, & Nisbett,
1973; also Kohn, 1993).
Some theorists say these data illustrate the power of behavioral contrast. At the start of
the study, the activity of drawing was presumably maintained by certain reinforcements
in the situation—perhaps encouragement from the teachers or comments by
other children. Whatever the reinforcements were, they were strong enough to
maintain the behavior; we know this because the children were producing drawings at
a steady pace. Later on, though, an additional reinforcement (the "Good Player"
certificate) was added and then removed. At that point the children were back to the same
rewards they'd been getting at the start, but now these rewards seemed puny in
comparison to the greater prize they'd been earning during the time when the "Good
Player” award was available. As a consequence, the initial set of rewards was no
longer enough to motivate continued drawing.
Other theorists interpret these findings differently. In their view, results like this one
suggest that there are actually two different types of reward. One type is merely tacked
onto a behavior and is under the experimenter’s control; it’s the sort of reward that’s in
play when we give a pigeon a bit of food for pecking a key, or hand a factory worker a
paycheck for completing a day’s work. The other type of reward is intrinsic to the
behavior and independent of the experimenter's intentions; these rewards are in play
when someone is engaging in an activity just for the pleasure of the activity itself.
Reinforcement-Based Learning
Rewards and punishers, in contrast, played a pivotal role in the work of
Thorndike, who is often credited with founding the field of instrumental conditioning.
Thorndike published a monograph in 1898 on his studies with animals such as cats.
He set up an experimental apparatus termed a puzzle box: a cage in which the animal
was placed, and which could be escaped through the performance of a simple
response such as pulling on a rope attached to a door. These studies really involved
the first careful, detailed observations of what animals in general learned, as opposed
to anecdotal stories collected of amazing things animals did that obviously proved their
intelligence. (Television still plays into that sort of approach, needless to say!)
Thorndike asked a very simple question: Would escape from a puzzle box exhibit
any signs of intelligence? Would it display evidence of insight, in which the animal
would be able to glance about its environment, understand that the rope was attached
to the door, and realize that it needed only to pull on the rope to get out? To answer
this question, Thorndike repeatedly placed animals in the same puzzle box, and
measured how long it took them to escape. And what he found was that the time to
escape decreased only gradually. By the end of the experiment, after 20 or so trials,
cats would easily leave the box by performing the appropriate response as soon as
they were placed in it. But, their history clearly demonstrated that this had to have been
a learned response. In particular, Thorndike pointed out that an animal making the
correct response on a given trial early in training would not necessarily choose that
same response as its first response on the next trial. So, rather than insight, he
concluded that learning involved trial-and-error.
Put briefly, this law claimed that an association between a stimulus and a response
would strengthen if the response were followed by a satisfactory state of affairs, and
would weaken if the response were followed by an unsatisfactory state of affairs.
Thus, Thorndike deliberately included Bentham's notion of hedonistic value as a
principle governing the formation of an association, in contrast to Watson. Rather than
being a simple contiguity theory, this was a reinforcement theory: In modern terms,
learning of an association will occur when there is a reinforcer following a response.
Here is what Thorndike actually said regarding satisfying and unsatisfying states
(1913, p. 2):
By a satisfying state of affairs is meant one which the animal does nothing to avoid,
often doing things which maintain or renew it. By an annoying state of affairs is meant
one which the animal does nothing to preserve, often doing things which put an end to
it.
Although he was accused of using hopelessly mentalistic terms in describing learning
as depending on satisfactory or unsatisfactory states, his actual definition provided a
clear behavioral test for determining when one or the other state was present. In that
sense, it ought to have troubled people no more than Watson's use of the term
"emotional."
Note too that Thorndike did not include the outcome in the association. As we will
see, other theorists have claimed that associations to the outcome may also form, so
that we can have S-R associations, R-O associations, and even S-O associations. To
anticipate how such a model might differ from Thorndike's, a strong S-R association
may exist despite a highly unpleasant or unsatisfying outcome: The presence of an R-
O association in that event may serve to inhibit the R excited by the presence of an
associated stimulus.
Thorndike also spoke of the value of different satisfactory states, so that strong
satisfiers would do a better job of strengthening an association than weak satisfiers.
And as an interesting historical footnote, he actually contradicted one of the major
principles of strict contiguity by proposing an early version of belongingness by which
some things would be more likely to associate together than others.
One of the most critical challenges in applied empirical research is to draw causal
inference from observational data. Empirical marketing research, which often involves
causal analysis of the impact of marketing strategy, is no exception. A central difficulty
is endogeneity of variables entering the causal relationship, arising from either omitted
variable bias, simultaneity bias, sample selection bias, or measurement errors.
According to recent research, mentions of endogeneity and procedures to address it
have risen 5x across the field’s top four journals (Rutz and Watson (2019)).
Instrumental Variable (IV) methods are among the most frequently used techniques to
address such endogeneity issues. Instruments that are correlated with the endogenous
variable but are otherwise not associated with the outcome variable can be used to
partition the variance of the endogenous variable into endogenous and exogenous
components.
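One common way to operationalize this partitioning is two-stage least squares (2SLS). Below is a minimal sketch on simulated data, where the data-generating process, coefficient values, and variable names are all hypothetical and chosen only to illustrate the logic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

z = rng.normal(size=n)                      # instrument: shifts x but affects y only through x
u = rng.normal(size=n)                      # unobserved confounder (source of endogeneity)
x = 0.8 * z + u + rng.normal(size=n)        # endogenous regressor
y = 2.0 * x + 3.0 * u + rng.normal(size=n)  # true causal effect of x on y is 2.0

def slope(a, b):
    """OLS slope of b regressed on a (with an intercept)."""
    return np.polyfit(a, b, 1)[0]

ols_estimate = slope(x, y)        # biased, because x is correlated with u

# Stage 1: project x onto the instrument to keep only its exogenous variation.
s1, i1 = np.polyfit(z, x, 1)
x_hat = i1 + s1 * z
# Stage 2: regress y on the first-stage fitted values.
iv_estimate = slope(x_hat, y)

print(f"OLS: {ols_estimate:.2f} (biased)   2SLS: {iv_estimate:.2f} (close to 2.0)")
```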
The method of instrumental variables is based on using the variation in the exogenous
component of the endogenous variable, induced by the variation in the instrumental
variable, to make inferences about causal effects. In recent years, use of the IV method has
come under criticism in Marketing (e.g. Rossi (2014)) and multiple other disciplines
(e.g. Bound et al. (1995) and Young (2017) in Economics; Yogo (2004), Stock and
Yogo (2002), and Hausman et al. (2005) in Finance).
Such models have been widely used in marketing literature and remain the workhorse
models for studying many marketing problems (see Goldfarb et al. (2009), Chintagunta
and Nair (2011), and many others). Constructing strong and valid instruments is,
therefore, an important endeavor for causal inference from observational data. In this
paper, we approach the problem of constructing strong instrumental variables from
exogenous information in causal models as a (supervised) machine learning problem.
Related Literature
As discussed above, the standard existing approach to weak IV concerns is
through approximation of optimal instruments. Most work on optimal
instrumental variables in linear models casts the problem as a selection problem among
the available exogenous variables (and their transformations, e.g., b-splines). Early
work on instrument selection goes back to Kloek and Mennes (1960) and Amemiya
(1966), where they studied using
Both principal component analysis and factor analysis are not targeted at
approximating the optimal instruments, but rather at coming up with a low dimensional
vector that summarizes the high-dimensional instruments, which could potentially yield
better performance in terms of bias and mean squared error (Amemiya (1966)). Recent
work on instrument selection assumes strong sparsity of the optimal instrument
structure (i.e. a small set of the available IVs are valid and sufficient for the first stage).
Work by Bai and Ng (2009) demonstrates how boosting can be used for recovering the
sparse structure but does not provide any formal proof. Belloni et al. (2012) explicitly
show how the Lasso can be used for instrument selection among a large set of candidate
instruments under the strong sparsity assumption. Further, they are also able to prove
theoretical consistency and other inference results for their IV estimator.
Their proposed approach does not work as well when sparsity is violated, i.e. most
instruments are weak, as it selects all of the weak IVs or drops them all. Unlike the
extant literature, our learning approach to instrumental variables still exhibits
asymptotic guarantees and does not rely on any sparsity assumption on the optimal
instrument structure. Further, our approach allows the researcher to apply a broad
arsenal of machine learning methods in constructing the instrumental variables.
Reinforcement Schedules
Remember, the best way to teach a person or animal a behavior is to use positive
reinforcement. For example, Skinner used positive reinforcement to teach rats to press
a lever in a Skinner box. At first, the rat might randomly hit the lever while exploring the
box, and out would come a pellet of food. After eating the pellet, what do you think the
hungry rat did next? It hit the lever again, and received another pellet of food. Each
time the rat hit the lever, a pellet of food came out. When an organism receives a
reinforcer each time it displays a behavior, it is called continuous reinforcement.
This reinforcement schedule is the quickest way to teach someone a behavior, and it is
especially effective in training a new behavior. Let’s look back at the dog that was
learning to sit earlier in the module. Now, each time he sits, you give him a treat.
Timing is important here: you will be most successful if you present the reinforcer
immediately after he sits, so that he can make an association between the target
behavior (sitting) and the consequence (getting a treat).
Once a behavior is trained, researchers and trainers often turn to another type of
reinforcement schedule—partial reinforcement. In partial reinforcement, also referred
to as intermittent reinforcement, the person or animal does not get reinforced every
time they perform the desired behavior. There are several different types of partial
reinforcement schedules (Table 1). These schedules are described as either fixed or
variable, and as either interval or ratio. Fixed refers to the number of responses
between reinforcements, or the amount of time between reinforcements, which is set
and unchanging. Variable refers to the number of responses or amount of time
between reinforcements, which varies or changes. Interval means the schedule is
based on the time between reinforcements, and ratio means the schedule is based on
the number of responses between reinforcements.
Table 1. Partial reinforcement schedules
Fixed interval – Reinforcement is delivered at predictable time intervals (e.g., after 5,
10, 15, and 20 minutes). Result: a moderate response rate with significant pauses after
reinforcement. Example: a hospital patient uses patient-controlled, doctor-timed pain
relief.
Variable interval – Reinforcement is delivered at unpredictable time intervals (e.g.,
after 5, 7, 10, and 20 minutes). Result: a moderate yet steady response rate. Example:
checking Facebook.
With a variable interval reinforcement schedule, the person or animal gets the
reinforcement based on varying amounts of time, which are unpredictable. Say that
Manuel is the manager at a fast-food restaurant. Every once in a while someone from
the quality control division comes to Manuel’s restaurant. If the restaurant is clean and
the service is fast, everyone on that shift earns a $20 bonus. Manuel never knows
when the quality control person will show up, so he always tries to keep the restaurant
clean and ensures that his employees provide prompt and courteous service. His
productivity regarding prompt service and keeping a clean restaurant are steady
because he wants his crew to earn the bonus.
With a fixed ratio reinforcement schedule, there are a set number of responses that
must occur before the behavior is rewarded. Carla sells glasses at an eyeglass store,
and she earns a commission every time she sells a pair of glasses. She always tries to
sell people more pairs of glasses, including prescription sunglasses or a backup pair,
so she can increase her commission. She does not care if the person really needs the
prescription sunglasses; Carla just wants her bonus. The quality of what Carla sells
does not matter because her commission is not based on quality; it’s only based on the
number of pairs sold. This distinction in the quality of performance can help determine
which reinforcement method is most appropriate for a particular situation. Fixed ratios
are better suited to optimize the quantity of output, whereas a fixed interval, in which
the reward is not quantity based, can lead to a higher quality of output.
Continuous Reinforcement
Imagine, for example, that you are trying to teach a dog to shake your hand. During the
initial stages of learning, you would stick to a continuous reinforcement schedule to
teach and establish the behavior. This might involve grabbing the dog's paw, shaking
it, saying "shake," and then offering a reward each and every time you perform these
steps. Eventually, the dog will start to perform the action on its own.
Continuous reinforcement schedules are most effective when trying to teach a new
behavior. The term denotes a pattern in which every narrowly defined response is
followed by a narrowly defined consequence.
Partial Reinforcement
Once the response is firmly established, a continuous reinforcement schedule is
usually switched to a partial reinforcement schedule. In partial (or intermittent)
reinforcement, the response is reinforced only part of the time. Learned behaviors are
acquired more slowly with partial reinforcement, but the response is more resistant
to extinction.
Think of the earlier example in which you were training a dog to shake hands. While you
initially used continuous reinforcement, reinforcing the behavior every time is simply
unrealistic. In time, you would switch to a partial schedule to provide additional
reinforcement once the behavior has been established or after considerable time has
passed.
Variable-Ratio Schedules
Variable-ratio schedules occur when a response is reinforced after an unpredictable
number of responses. This schedule creates a high steady rate of responding.
Gambling and lottery games are good examples of a reward based on a variable ratio
schedule. In a lab setting, this might involve delivering food pellets to a rat after one
bar press, again after four bar presses, and then again after two bar presses.
Fixed-Interval Schedules
Fixed-interval schedules are those where the first response is rewarded only after a
specified amount of time has elapsed. This schedule causes high amounts of
responding near the end of the interval but slower responding immediately after the
delivery of the reinforcer. An example of this in a lab setting would be reinforcing a rat
with a lab pellet for the first bar press after a 30-second interval has elapsed.
Variable-Interval Schedules
Variable-interval schedules occur when a response is rewarded after an unpredictable
amount of time has passed. This schedule produces a slow, steady rate of response.
An example of this would be delivering a food pellet to a rat after the first bar press
following a one-minute interval; a second pellet for the first response following a five-
minute interval; and a third pellet for the first response following a three-minute interval.
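To see how these four partial schedules differ mechanically, here is a toy sketch (with arbitrary parameter values, not drawn from any particular study) that decides, for a steady stream of responses, when a reinforcer is earned:

```python
import random

random.seed(0)

def simulate(schedule, n_responses=30, seconds_per_response=1.0):
    """Count reinforcers earned for a stream of evenly spaced responses.

    schedule: a tuple ("fixed" or "variable", "ratio" or "interval", value),
    where value is the required number of responses or the required seconds.
    """
    kind, basis, value = schedule
    responses_since, time_of_last = 0, 0.0
    next_requirement = value if kind == "fixed" else random.uniform(1, 2 * value)
    reinforcers = 0

    for i in range(1, n_responses + 1):
        now = i * seconds_per_response
        responses_since += 1
        earned = (responses_since >= next_requirement if basis == "ratio"
                  else now - time_of_last >= next_requirement)
        if earned:
            reinforcers += 1
            responses_since, time_of_last = 0, now
            if kind == "variable":  # variable schedules redraw an unpredictable requirement
                next_requirement = random.uniform(1, 2 * value)

    return reinforcers

for schedule in [("fixed", "ratio", 5), ("variable", "ratio", 5),
                 ("fixed", "interval", 5), ("variable", "interval", 5)]:
    print(schedule, "->", simulate(schedule), "reinforcers")
```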
In daily life, partial schedules of reinforcement occur much more frequently than do
continuous ones. For example, imagine if you received a reward every time you
showed up to work on time. Over time, instead of the reward being a positive
reinforcement, the denial of the reward could come to be regarded as a negative punishment.
Instead, rewards like these are usually doled out on a much less predictable partial
reinforcement schedule. Not only are these much more realistic, but they also tend to
produce higher response rates while being less susceptible to extinction.
Partial schedules reduce the risk of satiation once a behavior has been established. If
a reward is given without end, the subject may stop performing the behavior if the
reward is no longer wanted or needed.
For example, imagine that you are trying to teach a dog to sit. If you use food as a
reward every time, the dog might stop performing once it is full. In such instances,
something like praise or attention may be more effective in reinforcing an already-
established behavior.
Behaviours by reinforcement
Highlights
• Developmental differences in acquiring and adapting behaviours by reinforcement
were examined.
• Children and adults acquired simple new behaviours by feedback comparably.
• Children's performance was more disrupted than adults' when adapting behaviours.
• P3 ERP changes indicated children consolidated adapted behaviours less than adults.
• FRN ERP changes showed children relied more on feedback than adults in adaptation.
including Tourette syndrome and ADHD (Marsh et al., 2004, Sagvolden et al., 2005),
although the precise deficits in these conditions are unclear. A thorough understanding
of the typical development of reinforcement learning may help clarify these deficits, but
few studies have examined this aspect of cognitive development.
Children show less enhancement of the FRN for negative compared with positive
feedback, suggesting children are poorer at differentiating between types of feedback
than adults (Hämmerer et al., 2010). The authors suggest this may explain why
learning is more disrupted in children when feedback is probabilistic and difficult to
discriminate.
FP amplitude decreases less across learning in children than adults and ERP
correlates of monitoring errors in performance differentiate less between correct and
error responses in children than in adults (Eppinger et al., 2009). Based on these
differences between children and adults, Eppinger et al. (2009) suggested that children
have weaker internal representations of whether a response is correct or erroneous,
resulting in a greater reliance on feedback processing to achieve successful
performance. In a recent review of this literature, Hämmerer and Eppinger
(2012) proposed that increasing reinforcement learning ability reflects developing
efficiency in processing feedback, using reinforcements effectively to guide goal-
directed behaviour, and building internal representations of correct behaviours, as
prefrontal cortical regions mature.
However, due to the scarcity of research in this area further studies are needed
(Hämmerer and Eppinger, 2012). Furthermore, previous research has not addressed
an important aspect of reinforcement learning, that is, the ability to alter and re-learn
behaviours following changes in reinforcements. A robust finding in the executive
function literature is that children are poorer than adults in switching to new behaviours
when prompted by cues (Koolschijn et al., 2011).
This suggests that children will have particular difficulty with learning when
reinforcement contingencies change. Furthermore, the learning tasks used previously
have been complicated, with multiple feedback conditions presented for different S–R
associations within task blocks, creating considerable working memory demands
(Crone et al., 2004, Eppinger et al., 2009, Hämmerer et al., 2010). Crone et al.
(2004) and Eppinger et al. (2009) controlled for this problem by allocating children
extra response time, but nevertheless the difficulty of these tasks may have enhanced
developmental differences.
The intention was to ensure all participants could perform the task adequately
regardless of age so that any ERP differences are more likely to reflect differences in
the recruitment of neural networks underlying task performance, rather than floor or
ceiling effects in one age group. Secondly, to assess developmental differences in the
ability to change and re-learn acquired behaviour in response to altered reinforcement
contingencies we compared children aged 9–11 years with adults aged 21 years and
over.
Our aim was to establish whether children differ from adults in behavioural and brain
correlates of learning before they undergo the significant maturational changes that
take place during adolescence. During EEG recording typically developing children and
adults performed a task in which they learned four S–R associations by positive and
negative feedback and then reversed the associations after an unexpected change in
reinforcement contingencies. Changes in performance and feedback processing,
indexed by the FRN, related to learning and reversal were examined across the task
and between age groups.
2. Method
2.1. Participants
Fourteen 9–11 year olds (12 male, mean age: 10.2 years) and 15 adults (5 male, mean
age: 25.5 years) were recruited from local primary schools and the University of
Nottingham, UK to take part in this study. Participants were typically developing with no
known neurological or psychiatric problems which may have affected brain function,
right-handed (determined by the dominant hand for writing) and had normal or
corrected-to-normal vision. Participants were tested in accordance with procedures
approved by the University of Nottingham Medical School Ethics Committee and/or the
East Midlands NHS Research Ethics Committee. Monetary reimbursement (£10) was
provided for taking part.
Children and adults showed equivalent increases in accuracy and P3 amplitude and
decreases in FRN amplitude as they learned the S–R associations. Therefore, in
contrast to previous research children in this study acquired and consolidated new
behaviours and gradually decreased their use of external feedback at the same rate as
adults. Accuracy significantly correlated with FRN amplitude during the first task block
in children, indicating that feedback processing was related to the correct production of
S–R associations in children in this study. This extends previous research by indicating
that feedback processing and guidance of goal-directed behaviour by reinforcement
information is not deficient in children compared with adults, as has previously been
proposed.
Our findings indicate that when reinforcement learning is non-probabilistic the neural
mechanisms underlying this basic form of learning work as efficiently in children as in
adults. Problems with acquiring new behaviours may only appear in children when
reinforcement learning becomes more complicated, for instance when reinforcements are unclear (for example, probabilistic) and demands on other maturing cognitive functions, such as working memory or executive function, are high. As such, our findings
highlight the importance of ensuring task difficulty is appropriate for children in
developmental investigations of reinforcement learning.
Previous authors have emphasised that difficulties with feedback processing, resulting
from immature performance monitoring functions of the developing prefrontal cortex,
underlie children's poorer reinforcement learning performance (Hämmerer and
Eppinger, 2012, Hämmerer et al., 2010). It has been suggested that children are less
successful than adults in integrating feedback information with motor action plans, or
that children use feedback in a less goal-directed manner than adults (Hämmerer and
Eppinger, 2012, Hämmerer et al., 2010). In contrast to the latter proposal, our findings
suggest that children do use feedback to drive goal-directed learning behaviour.
Changes in FRN amplitude were associated with changes in performance accuracy in
children when most re-learning was occurring (block 4).
Furthermore, FRN changes were largest in children, indicating children were using
feedback more than adults to guide behaviour. However, as children performed more
poorly than adults, children may have had greater difficulty in integrating feedback
information to consolidate S–R associations and so produce the correct behaviours,
consistent with other work using a probabilistic learning task (Van Duijvenvoorde et al.,
2013).
Errors were not sufficiently numerous to allow analysis of the ERN in this study.
However, the profile of P3 and FRN effects here are similar to the ERN and FP findings
reported by Eppinger et al. (2009), and support the proposal put forward by those
authors that children build weaker internal representations of to-be-learned behaviours
and engage in greater processing of external feedback than adults when alterations in
reinforcement learning are required. This may be due to interference arising from the
extra cognitive processing demands of reversing the S–R associations, such as the
requirement to suppress the previously correct behaviours and produce new
responses that conflict with the original S–R associations. A wealth of evidence
demonstrates that such executive functions are poorer in children than adults
(Johnstone et al., 2005, Ladouceur et al., 2007, Rueda et al., 2004). Therefore, it may
be that these additional processing requirements reduce children's cognitive capacity
for learning, decreasing the efficiency of the processes of consolidating the reversed
S–R associations and integrating new feedback information with behaviour plans.
Children may exercise greater feedback processing to compensate for these
difficulties.
The present task was not designed with this question in mind, and further research is needed to investigate the role of the FRN in children in this age range.
Another possible explanation for our findings is that children learn in a different manner
from adults. Research in adults has shown that providing information about reward
likelihood enhances the reinforcement learning process. For example, Li et al.
(2011) and Walsh and Anderson (2011) compared adults’ performance on a
probabilistic S–R learning task when no information about reinforcement probabilities
was given and adults were required to learn the S–R associations solely by feedback,
with a separate condition in which participants were instructed as to the probability that
each S–R pair would be followed by valid feedback, for example that one S–R
association would be correctly reinforced on 30% of trials. Adults’ performance
increased gradually in the no-instruction learning condition, but began and remained at
asymptote in the instruction condition. The enhancing effect of instruction on learning is
suggested to reflect the top-down influence of rules for learning represented in
prefrontal regions on striatal reinforcement learning mechanisms (Li et al., 2011).
In the current study, a rule for how the S–R associations should be re-learned would
have been acquired easily after only a few trials in block 4 based on knowledge of what
the original S–R mappings were and identifying that the mappings simply had to be
reversed. If implemented, this rule would facilitate faster re-learning of the associations.
Adults verbally reported that they realised the S–R combinations in block 4 were simply
the opposite of those in blocks 1–3.
One final observation to discuss is the prolonged negativity following the FRN
observed in the feedback-locked waveforms in all learning blocks in children but not in
adults. A detailed analysis of this component was beyond the scope of this article, but
would be worthy of future research. It is likely that this second negative peak in the
children reflects a second oscillation of the same on-going physiological process
(feedback-processing), and may occur due to additional or more effortful processing of
the feedback information in children to compensate for their greater difficulty in learning
the S–R associations. Alternatively, this negativity might index different learning
strategies used in children compared with adults. A recent study comparing feedback-
locked potentials between groups of adults using different learning strategies to
acquire new behaviours reported strategy-related differences in the morphology of
positive feedback components.
At the same time, the cat also learns what not to do when faced with negative experiences.
Model-Based:
In this Reinforcement Learning method, you need to create a virtual model for each environment. The agent learns to perform in that specific environment.
Negative:
Negative reinforcement is defined as the strengthening of a behaviour that occurs because a negative condition is stopped or avoided. It helps you to define a minimum standard of performance. However, the drawback of this method is that it provides only enough to meet that minimum standard of behaviour.
Here are the major challenges you will face while doing Reinforcement Learning:
Feature/reward design, which can be very involved
Parameters may affect the speed of learning.
Realistic environments can have partial observability.
Too much reinforcement may lead to an overload of states, which can diminish the results.
Realistic environments can be non-stationary.
Summary:
Reinforcement Learning is a Machine Learning method.
It helps you to discover which action yields the highest reward over the long run.
Three methods for reinforcement learning are 1) value-based, 2) policy-based and 3) model-based learning.
Agent, state, reward, environment, value function, model of the environment and model-based methods are some important terms used in the RL learning method.
An example of reinforcement learning is a cat acting as an agent that is exposed to its environment.
The biggest characteristic of this method is that there is no supervisor, only a real number or reward signal.
Two types of reinforcement learning are 1) positive and 2) negative.
Two widely used learning models are 1) the Markov Decision Process and 2) Q-learning (a minimal sketch of Q-learning follows below).
The Reinforcement Learning method works by interacting with the environment, whereas the supervised learning method works on given sample data or examples.
Applications of reinforcement learning methods include robotics for industrial automation and business strategy planning.
You should not use this method when you have enough data to solve the problem.
The biggest challenge of this method is that parameters may affect the speed of learning.
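Since Q-learning is named above as one of the widely used models, the following minimal sketch shows the core update rule on a tiny, invented three-state environment. It is a toy illustration only: the environment, parameter values and variable names are all assumptions, not part of the original text.

import random

# Toy Q-learning sketch on a hypothetical 3-state chain: states 0 -> 1 -> 2.
# Actions: 0 = stay, 1 = move right. Reaching state 2 gives a reward of 1.
N_STATES = 3
ACTIONS = [0, 1]
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    next_state = min(state + action, N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy choice: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge Q(s, a) toward reward + gamma * best value in next state
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)   # after training, "move right" should carry the higher value in states 0 and 1

The essential idea is that each estimate Q(s, a) is repeatedly nudged toward the reward just received plus the discounted value of the best action available in the next state.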
Cognition refers to mental activity including thinking, remembering, learning and using language. When we apply a cognitive approach to learning and teaching, we focus on the understanding of information and concepts. If we are able to understand the connections between concepts, break down information and rebuild it with logical connections, then our retention of material and understanding will increase.
When we are aware of these mental actions, monitor them and control our learning processes, it is called metacognition.
Classifications
Course materials in ETEC 512 classified theorists as follows:
Cognitive
Social Cognitive Theory (Social Learning Theory) by Bandura
Bandura focused on observational learning and self-efficacy (Zeldin,
Britner, & Pajares, 2008).
Information Processing Theories by various theorists
The computer was seen as a metaphor for the mind (Schunk 2004/2007a).
Information was input and the mind processed the information, creating
output (Information Processing, n.d.).
Assimilation Theory (Meaningful Learning) by Ausubel
Ausubel focused on reception learning; he noted that the learner was active
and thus he differentiated between rote and meaningful learning (Novak,
1998/2007).
Ausubel stressed the importance of the advance organizer.
Developmental
Genetic Epistemology by Piaget
Piaget believed that experience with the environment affected knowledge
acquisition.
His four stages of development detail how humans develop cognitively.
Sociocultural Theory by Vygotsky
determine the most effective manner in which to organize and structure new
information to work with these backgrounds/experiences
arrange practice with feedback so that the new information is effectively and
efficiently assimilated and/or accommodated with the learner’s cognitive
structure
Distributed Learning
In a Distributed Learning (DL) program, parents are very involved in helping their
children learn. The parents are not trained teachers, and have difficulty with using
pedagogy to inform their practices at home. The parents find value in the efficient
delivery method inherent in cognitivist approaches. DL students in programs that have
social/interactive components (face-to-face classes or online discussions) can use the
knowledge and skills learned from a cognitivist approach, from content to critical
thinking and problem-solving strategies, to engage in knowledge construction. Pratt
(n.d.) states “that teachers are ‘pedagogical engineers’ with the responsibility to plan a
lesson(s) with the most relevant instructional approaches and technologies at his or
her disposal” (Conclusion).
Latent Learning
Strict behaviorists like Watson and Skinner focused exclusively on studying behavior
rather than cognition (such as thoughts and expectations). In fact, Skinner was such a
staunch believer that cognition didn’t matter that his ideas were considered radical
behaviorism. Skinner considered the mind a “black box”—something completely
unknowable—and, therefore, something not to be studied. However, another
behaviorist, Edward C. Tolman, had a different opinion. Tolman’s experiments with rats
demonstrated that organisms can learn even if they do not receive immediate
reinforcement (Tolman & Honzik, 1930; Tolman, Ritchie, & Kalish, 1946). This finding
was in conflict with the prevailing idea at the time that reinforcement must be
immediate in order for learning to occur, thus suggesting a cognitive aspect to learning.
In Tolman’s studies, rats were placed in a maze each day. After 10 sessions in the maze without reinforcement, food was placed in a goal box at
the end of the maze. As soon as the rats became aware of the food, they were able to
find their way through the maze quickly, just as quickly as the comparison group, which
had been rewarded with food all along. This is known as latent learning: learning that
occurs but is not observable in behavior until there is a reason to demonstrate it.
Latent learning also occurs in humans. Children may learn by watching the actions of
their parents but only demonstrate it at a later date, when the learned material is
needed.
For example, suppose that Ravi’s dad drives him to school every day. In this way, Ravi
learns the route from his house to his school, but he’s never driven there himself, so he
has not had a chance to demonstrate that he’s learned the way. One morning Ravi’s
dad has to leave early for a meeting, so he can’t drive Ravi to school. Instead, Ravi
follows the same route on his bike that his dad would have taken in the car. This
demonstrates latent learning. Ravi had learned the route to school, but had no need to
demonstrate this knowledge earlier.
Psychologist Laura Carlson (2010) suggests that what we place in our cognitive map
can impact our success in navigating through the environment. She suggests that
paying attention to specific features upon entering a building, such as a picture on the
wall, a fountain, a statue, or an escalator, adds information to our cognitive map that
can be used later to help find our way out of the building.
Observational learning
Observational learning is a method of learning that consists of observing and modeling another individual’s behavior, attitudes, or emotional expressions. Although it is
commonly believed that the observer will copy the model, American psychologist Albert
Bandura stressed that individuals may simply learn from the behavior rather than
imitate it. Observational learning is a major component of Bandura’s social
learning theory. He also emphasized that four conditions were necessary in any form
of observing and modeling behavior: attention, retention, reproduction, and motivation.
Unfortunately, this aspect of modeling can also be used in detrimental ways. For
example, if young children witness gang members gaining status or money, they may
imitate those behaviors in an effort to gain similar rewards.
Retention
The second requirement of observational learning is being able to remember the
behavior that was witnessed. If the human or animal does not remember the behavior, it is unlikely that they will imitate it.
Reproduction
This requisite of behavior concerns the physical and mental ability of the individual to
copy the behavior he or she observed. For instance, a young child may observe a
college basketball player dunk a ball. Later, when the child has a basketball, he or she
may attempt to dunk a ball just like the college player. However, the young child is not
nearly as physically developed as the older college player and, no matter how many
times he or she tries, will not be able to reach the basket to dunk the ball.
An older child or an adult might be able to dunk the ball but likely only after quite a bit
of practice. Similarly, a young colt observes another horse in the herd jump over the
creek while running in the pasture. After observing the model’s jumping behavior, the
colt attempts to do the same only to land in the middle of the creek. He simply was not
big enough or did not have long enough legs to clear the water. He could, however,
after physical growth and some practice, eventually be able to replicate the other
horse’s jump.
Motivation
Perhaps the most important aspect of observational learning involves motivation. If the
human or animal does not have a reason for imitating the behavior, then no amount of
attention, retention, or reproduction will overcome the lack of motivation. Bandura
identified several motivating factors for imitation. These include knowing that the model
was previously reinforced for the behavior, being offered an incentive to perform, or
observing the model receiving reinforcement for the behavior. These factors can also
be negative motivations. For instance, if the observer knew that the model was
punished for the behavior, was threatened for exhibiting the behavior, or observed the
model being punished for the behavior, then the probability of mimicking the behavior
is less.
Bandura’s findings in the Bobo doll experiments have greatly influenced children’s
television programming. Bandura filmed his students physically attacking the Bobo doll,
an inflatable doll with a rounded bottom that pops back up when knocked down. A
student was placed in the room with the Bobo doll. The student punched the doll,
yelled “sockeroo” at it, kicked it, hit it with hammers, and sat on it. Bandura then
showed this film to young children. Their behavior was taped when in the room with the
doll. The children imitated the behaviors of the student and at times even became
more aggressive toward the doll than what they had observed. Another group of young
children observed a student being nice to the doll. Ironically, this group of children did
not imitate the positive interaction of the model. Bandura conducted a large number of
varied scenarios of this study and found similar events even when the doll was a live
clown. These findings have prompted many parents to monitor the television shows
their children watch and the friends or peers with whom they associate. Unfortunately, the parental saying “Do as I say, not as I do” does not hold true for children. Children are more likely to imitate the behaviors of their parents than to follow their instructions.
One of the most famous instances of observational learning in animals involves the
blue tit, a small European bird. During the 1920s and through the 1940s, many people
reported that the cream from the top of the milk being delivered to their homes was
being stolen. The cream-stealing incidents spread all over Great Britain. After much
speculation about the missing cream, it was discovered that the blue tit was the culprit.
Specifically, one bird had learned to peck through the foil top of the milk container and
suck the cream out of the bottle. It did not take long before other blue tit birds imitated
the behavior and spread it through the country.
Verbal Learning
Verbal learning is the process of acquiring, retaining and recalling of verbal material. At
its most elementary level, it can be defined as a process of building associations
between a stimulus and a response, with both of them being verbal. At a broader level,
verbal learning includes the processes of organizing the stimulus material by the
learner and the related changes in the learner's behavior.
At different stages of the development of the verbal learning studies, a variety of
aspects of this phenomenon were highlighted. For example, German psychologist
Hermann Ebbinghaus (1850-1909) in his book On Memory (1885), focused on the
processes of association building and recall, which shaped his experiments with verbal
material. Ebbinghaus's work, albeit not dealing explicitly with verbal learning, is
considered as the first seminal work in the field, due to the material and the methods
used.
Verbal learning can be based on different processes and can be classified in several
types. The first process is serial verbal learning. People engage in it when they learn a
list of verbal items (for example, words or syllables) while maintaining the order of the
items. Psychologists test this type of learning by asking subjects to read a list of verbal
stimuli and then reproduce this list while keeping the original order of the items. Such
experiments have been widely used in tests of short- and long-term memory.
A useful strategy that can be used while remembering such lists of words is to build
associations between them. Thus, the first word "anticipates" the second, and,
analogically, every word points to the one after it. Ebbinghaus called this learning
strategy the serial anticipation method. Studies of this type of learning have also
discovered the serial position effect, which says that different parts of the list are
learned with different difficulty. Usually, the first few items are the easiest to learn, then
come those at the end of the list. The hardest to learn are the items just after the
middle.
The second process is paired-associate learning, used, for example, in learning foreign vocabulary: the learner memorizes pairs of known English words and words from another language, building associations between the items of a pair. Then, when presented with a foreign word (stimulus), the pupil has to name the corresponding English word (response).
Free verbal learning is a type of learning that people use when they learn lists of
items regardless of their order. A task used to test this type of learning is free recall.
The subject is asked to recall as many items from a list as possible, regardless of their
sequence. Such tests are often used to establish organizational patterns of learning
and memory. For example, the subject may use a clustering strategy - grouping items
according to their similarity or the number of letters in them.
Discrimination learning
Learning Set and Learning by Exclusion
Discrimination learning can be generalized. The point is illustrated by experiments with
monkeys on learning set. In the typical procedure, a monkey is presented with two
objects; under one is a piece of food. When it learns the discrimination, two new
objects are presented, and so on. Learning proceeds more rapidly after each problem
until it learns new discriminations in a single trial. Learning set has also been
demonstrated in other species and with other tasks.
Subjects are first familiarized with several foods that have known names. Without any further training or prompts, subjects are asked to point to the new food that is given a never-before-heard name. Experienced subjects correctly select the novel food on the first test, and in subsequent tests with that food and new foods they demonstrate retention of the new name. These experiments show that after learning several names, individuals learn the contingency that exists between new names and new objects.
Discrimination Learning
Discrimination learning can be used as a part of training for more difficult tasks,
including the judgement bias tasks and Iowa gambling task described earlier in the
chapter. It can, however, also be used as a task in and of itself, to determine the ability
of animals to discriminate between two stimuli and the capacity of animals to learn and
perform tasks based on discrimination in different modalities.
Discrimination tasks based on auditory stimuli have been more successful, with pigs
showing distinct operant responses to auditory stimuli of different frequencies (Murphy
et al., 2013a). Other modalities, such as odor cues or tactile cues, have yet to be
tested in pigs. Given their strong olfactory and tactile abilities (the snout is particularly
sensitive), this may be an interesting avenue to explore to improve discrimination
learning.
Neural Basis
different stimulus domain (e.g., testing on spatial reversals after having performed
object reversals).
The shelf-life of skills is diminishing. The need for ongoing learning and development is
greater than at any previous point in history. 38% of CEOs believe a shortage of key
skills is the top people-related threat to growth. That’s up from 31% in 2017. With this
in mind, it’s no surprise that building a culture of continuous learning is currently a
priority for L&D leaders. This encompasses just-in-time learning designed to close a
specific knowledge gap in a current role, right through to development of competencies
and behaviours needed for future roles.
Today’s learner expects (and is expected to) continually learn and develop. This culture of continual learning is simultaneously driving, and being reinforced by, these broader workplace trends. Over half of employees (58%) want to learn at their own pace depending on their requirements, development needs and interests. But employees still want their manager’s input on how to improve, with 56% saying they would spend more time learning if their manager suggested activities.
Instead of a ‘one-size-fits-all’ approach, organisations are creating personalised learning paths that develop employees in their current role, next role and future career paths.
4. Social learning
In addition to social collaborative tools, organisations are also experimenting with
cross-functional project-based learning, creating online learning marketplaces and
structured mentoring forums. At the very minimum, employees expect workplace
technologies that allow:
social networking
instant messaging
online collaboration
video conferencing.
Humans are social by nature: 87% of employees say that sharing knowledge with their
team is critical for learning. 34% of organisations are already investing in social
learning tools and over the next few years we anticipate the uptake will accelerate. The
increasing complexity of work, rise of the contingent and freelance workforce, and the
desire to work ‘anywhere, anytime’ will drive the adoption of social collaboration and
knowledge-sharing tools.
5. Employee-curated content
Relevant content is what matters most to employees. Yet less than half (46%) of
employees are satisfied with the relevance of the content provided by
their organisation. How can we help employees cut through the clutter? Employees
want the ability to create their own online content and share learning resources. It’s no
surprise that peer-to-peer learning continues to gain traction – empowering people to
share relevant content with their colleagues.
Organisations are also supporting apps which curate, publish and share content to
keep peers, teammates and managers up to date with the latest and most relevant content.
Crowdsourcing means content is constantly refreshed, removing the barrier of
irrelevant information which can deter time-poor learners.
7. Microlearning
Making time for learning isn’t easy. Our constantly connected lifestyles also mean that attention spans are shrinking. A solution could lie in microlearning: bite-sized chunks of learning content, completed in three to five minutes, that make learning easily digestible. Some examples of successful microlearning can be found on
popular mediums including:
podcasts
blogs
eLearning
videos
As a medium, videos are still a popular way to learn. More organisations are turning to
mobile devices to create quick, inexpensive, easily uploadable content. According
to Training Journal (TJ) “In 2019, we can expect to see even more of this ‘guerilla’ film
production with high-end features with interactivity that will be distributed on mobile
devices and applications for easy accessibility.”
Online content can be parceled into smaller components, so employees can learn
where and when it suits them.
This trend has been driven not only by a better understanding of how we learn, but
also by advances in technology. The uptake of mobile and cloud technologies means
content can be accessed as needed. This can have a positive impact on a company’s
bottom line. Organisations that empower their employees with microlearning
experience a 63% increase in revenue compared to their peers.
Neuro-physiology of Learning
There are many points of contact between studies of the neuro-phenomenological view of knowledge (and the occurrence of states of consciousness in relation to the act of knowing) and studies of the biophysical and neurological phenomena that govern and influence the physiology of learning.
Studies conducted on the subject show that individual neurons are able to recognize people, landscapes, objects and even written words and names. The finding suggests the existence of a consistent and explicit code that could play a role in the transformation of complex visual representations into long-term memories.
This conception of individual neurons as 'thinking cells' - says the neuro-surgeon Itzhak
Fried - represents an important step toward deciphering the code of the cognitive brain.
If we can understand this process, maybe one day we will be able to build cognitive
prostheses to replace functions lost due to brain injury or disease, and perhaps even to
improve memory.
This statement leads to the conclusion that each neuron has its own memory and that groups of neurons fire selectively to images of faces, animals, objects or scenes. From this perspective, two different areas of research, based on two different approaches, are analyzed here: one referring to neuro-phenomenological studies (the total embodiment of Varela and Thompson), and the other to neuro-physiological and bio-medical studies (the neurophysiology of learning of Zhuo Joseph Tsien).
The neural mechanisms by which memories are stored in the brain have long been studied by neuroscientists. Learning and memory are very important in the structuring of knowledge: learning is the process by which one acquires knowledge, and memory is the process by which knowledge is preserved over time. For many years, therefore, researchers have tried to investigate the intricacies of cellular memory and to understand the functional basis of action at the neural level.
Tsien and his team, working in a biomedical field and through combined and complex experiments, have developed an interesting theory of the basic mechanism by which the brain transforms experience into memory. Clans of neurons involved in coding, they say, select which experiences are stored, giving sense to the experience and transforming it into knowledge. This extraordinary research could, in the near future, allow researchers to decipher a universal neural code, allowing the memories of a human being to be read by monitoring brain activity.
Interesting observations concern: a) the nature of the mechanisms of action and the behavior of neural cells, and b) the sophisticated research on these mechanisms of action (covering area CA1 of the hippocampus).
In this theoretical model each event is represented by a group of neuronal clans that
encode different characteristics; a clan is represented by a set of neurons that
responds in a similar manner to each stimulus, working in harmony in the encoding of
events. It is believed, therefore, that it is the clan that generates neural memories, acting in unison on the information conveyed by phenomenal experience. Does this mean that behaviour also derives from the genetic and relational nature of human beings and their predisposition to the “lineage”? (Maturana Dàvila, 2006)
The brain, in this perspective, uses neuronal clans to discriminate between events encoded in memory. In a three-dimensional view, each experience is represented on a pyramid at various levels; each pyramid is considered an integral part of a polyhedron which, in turn, represents the category common to all the pyramids. This model represents the consolidation of memories in a clear way and demonstrates the dynamic nature of the human brain and its extraordinary abilities.
This model is based on organization and categorization as universal principles of the functioning of our brain. In the case of memory, these properties allow the creation of an unlimited number of patterns of neuronal activation (corresponding to the number of experiences that an organism can live). In this perspective, Tsien and his research team in a recent article say: “The ability to learn and remember conspecifics is essential for the establishment and maintenance of social groups.
Many animals, including humans, primates and rodents, depend on stable social
relationships for survival. Social learning and social recognition have become emerging
areas of interest for neuroscientists but are still not well understood. It has been
established that several hormones play a role in the modulation of social recognition
including estrogen, oxytocin and arginine vasopressin.
Relatively few studies have investigated how social recognition might be improved or
enhanced. In this study, we investigate the role of the NMDA receptor in social
recognition memory, specifically the consequences of altering the ratio of the
NR2B:NR2A subunits in forebrain regions on social behavior. We produced
transgenic mice in which the NR2B subunit of the NMDA receptor was overexpressed
postnatally in the excitatory neurons of the forebrain areas including the cortex,
amygdala and hippocampus. We investigated the ability of both our transgenic animals
and their wild-type littermates to learn and remember juvenile conspecifics using both 1-
hr and 24-hr memory tests.
Next is the actual storage, which simply means holding onto the information. For this to take place, the computer must physically write the 1s and 0s onto the hard drive. It is very similar for us, because it means that a physiological change must occur for the memory to be stored. The final process is called retrieval, which is bringing the memory out of storage and reversing the process of encoding. In other words, retrieval returns the information to a form similar to the one in which it was stored.
The major difference between humans and computers in terms of memory has to do
with how the information is stored. For the most part, computers have only two types: permanent storage and permanent deletion. Humans, on the other hand, are more complex in that we have three distinct memory storage capabilities (not including permanent deletion). The first is sensory memory, referring to the information we receive through the senses. This memory is very brief, lasting only a few seconds.
Short Term Memory(STM) takes over when the information in our sensory memory is
transferred to our consciousness or our awareness (Engle, Cantor, & Carullo, 1993;
Laming, 1992). This is the information that is currently active such as reading this
page, talking to a friend, or writing a paper. Short term memory can definitely last longer than sensory memory (up to 30 seconds or so), but it still has a very limited capacity.
If STM lasts only up to 30 seconds, how do we ever get any work done? Wouldn’t we start to lose focus or concentration about twice every minute? This argument prompted
researchers to look at a second phase of STM that is now referred to as Working
Memory. Working Memory is the process that takes place when we continually focus
on material for longer than STM alone will allow (Baddeley, 1992).
What happens when our short term memory is full and another bit of information enters? Displacement means that the new information will push out part of the old information. Suddenly someone says the area code for a phone number you are rehearsing, and almost instantly you forget the last two digits of the number. We can further sharpen our short term memory skills, however, by mastering chunking and using rehearsal (which allows us to visualize, hear, say, or even see the information repeatedly and through different senses).
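As a toy illustration of chunking (an invented example, not from the original text), the short Python sketch below groups a ten-digit string into the familiar 3-3-4 telephone pattern, turning ten separate items into three chunks that are easier to hold in short-term memory. The digits and chunk sizes are arbitrary.

# Hypothetical sketch: chunking a digit string into larger units
# so fewer "items" need to be held in short-term memory.

def chunk(digits, sizes=(3, 3, 4)):
    chunks, start = [], 0
    for size in sizes:
        chunks.append(digits[start:start + size])
        start += size
    return chunks

number = "8005551234"
print(list(number))    # ten separate items: ['8', '0', '0', '5', '5', '5', '1', '2', '3', '4']
print(chunk(number))   # three chunks: ['800', '555', '1234']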
Finally, there is long term memory (LTM), which is most similar to the permanent
storage of a computer. Unlike the other two types, LTM is relatively permanent and
practically unlimited in terms of its storage capacity. It’s been argued that we have
enough space in our LTM to memorize every phone number in the U.S. and still
function normally in terms of remembering what we do now.
Long Term Memory
Information that passes from our short term to our long term
memory is typically that which has some significance attached to it. Imagine how
difficult it would be to forget the day you graduated, or your first kiss. Now think about
how easy it is to forget information that has no significance; the color of the car you
parked next to at the store or what shirt you wore last Thursday. When we process
information, we attach significance to it and information deemed important is
transferred to our long term memory.
There are other reasons information is transferred. As we all know, sometimes our
brains seem full of insignificant facts. Repetition plays a role in this, as we tend to
remember things more the more they are rehearsed. Other times, information is
transferred because it is somehow attached to something significant. You may
remember that it was a warm day when you bought your first car. The temperature
really plays no important role, but is attached to the memory of buying your first car.
Forgetting
You can’t talk about remembering without mentioning its counterpart. It seems that as
much as we do remember, we forget even more. Forgetting isn’t really all that bad, and
is in actuality, a pretty natural phenomenon. Imagine if you remembered every minute
detail of every minute or every hour, of every day during your entire life, no matter how
good, bad, or insignificant. Now imagine trying to sift through it all for the important
stuff like where you left your keys.
There are many reasons we forget things and often these reasons overlap. Like in the
example above, some information never makes it to LTM. Other times, the information
gets there, but is lost before it can attach itself to our LTM. Other reasons include
decay, which means that information that is not used for an extended period of time
decays or fades away over time. It is possible that we are physiologically
preprogrammed to eventually erase data that no longer appears pertinent to us.
Failing to remember something doesn’t mean the information is gone forever though.
Sometimes the information is there but for various reasons we can’t access it. This
could be caused by distractions going on around us or possibly due to an error of
association (e.g., believing something about the data which is not correct causing you
to attempt to retrieve information that is not there). There is also the phenomenon of
repression, which means that we purposefully (albeit subconsciously) push a memory
out of reach because we do not want to remember the associated feelings. This is
often cited in cases where adults ‘forget’ incidents of sexual abuse when they were
children. And finally, amnesia, which can be psychological or physiological in origin.
Forgetting is an all too common part of daily life. Sometimes these memory slips are
simple and fairly innocuous, such as forgetting to return a phone call. Other times,
forgetting can be much more dire and even have serious consequences, such as an
eyewitness forgetting important details about a crime.
Memory failures are an almost daily occurrence. Forgetting is so common that you
probably rely on numerous methods to help you remember important information, such
as jotting down notes in a daily planner or scheduling important events on your phone's
calendar.
As you are frantically searching for your missing car keys, it may seem that the
information about where you left them is permanently gone from your memory.
However, forgetting is generally not about actually losing or erasing this information
from your long-term memory.
To study forgetting, Ebbinghaus memorized lists of nonsense syllables and then tested his memory for them over periods ranging from 20 minutes to 31 days. He then published his findings in 1885 in Memory: A Contribution to Experimental Psychology.
His results, plotted in what is known as the Ebbinghaus forgetting curve, revealed a
relationship between forgetting and time. Initially, information is often lost very quickly
after it is learned. Factors such as how the information was learned and how frequently
it was rehearsed play a role in how quickly these memories are lost. Information stored
in long-term memory is surprisingly stable.
The forgetting curve also showed that memory does not continue to decline until all of the information is lost. At a certain point, the amount of forgetting levels off.
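Ebbinghaus reported his results in terms of savings, but the overall shape of the forgetting curve is often approximated by an exponential decay. The short Python sketch below uses that common approximation with an arbitrary, hypothetical stability constant, purely to illustrate the rapid early loss and later levelling off described above; it is not a reconstruction of Ebbinghaus's actual data.

import math

# Common exponential approximation of the forgetting curve:
# retention R = exp(-t / S), where t is hours since learning and
# S is an illustrative "stability" constant (20 hours here, chosen arbitrarily).

def retention(t_hours, stability=20.0):
    return math.exp(-t_hours / stability)

for t in [0.33, 1, 9, 24, 48, 144, 744]:     # roughly 20 minutes up to 31 days (744 hours)
    print(f"{t:7.2f} h  ->  retention ~ {retention(t):.2f}")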
So how do we know when something has been forgotten? There are a few different
ways to measure this:
Recall: People who have been asked to memorize something, such as a list of
terms, might be asked to recall the list from memory. By seeing how many items
are remembered, researchers are able to identify how much information has
been forgotten. This method might involve the use of free recall (recalling items
without hints) or prompted recall (utilizing hints to trigger memories).
Recognition: This method involves identifying information that was previously
learned. On a test, for example, students might have to recognize which terms
they learned about in a chapter of their assigned reading.
Good encoding techniques include relating new information to what one already
knows, forming mental images, and creating associations among information that
needs to be remembered. The key to good retrieval is developing effective cues that
will lead the rememberer back to the encoded information. Classic mnemonic systems,
known since the time of the ancient Greeks and still used by some today, can greatly
improve one’s memory abilities.
Key Points
The three main stages of memory are encoding, storage, and retrieval. Problems
can occur at any of these stages.
The three main forms of memory storage are sensory memory, short-term
memory, and long-term memory.
Sensory memory is not consciously controlled; it allows individuals to retain
impressions of sensory information after the original stimulus has ceased.
Short-term memory lasts for a very brief time and can only hold 7 +/- 2 pieces of
information at once.
Long-term storage can hold an indefinitely large amount of information and can
last for a very long time.
Implicit and explicit memories are two different types of long-term memory.
Implicit memories are of sensory and automatized behaviors, and explicit
memories are of information, episodes, or events.
Key Terms
memory: The ability of an organism to record information about things or events
with the facility of recalling them later at will.
rehearsal: Repetition of an item in short-term memory in order to store it in long-
term memory.
Memory is the ability to take in information, store it, and recall it at a later time. In psychology, memory is broken into three stages: encoding, storage, and retrieval.
The Memory Process
1. Encoding (or registration): the process of receiving, processing, and combining information. Encoding allows information from the outside world to reach our senses in the form of chemical and physical stimuli. In this first stage we must change the information so that it can be put into the memory system.
2. Storage: the creation of a permanent record of the encoded information. Storage
is the second memory stage or process in which we maintain information over
periods of time.
3. Retrieval (or recall, or recognition): the calling back of stored information in response to some cue for use in a process or activity. The third process is the retrieval of information that we have stored; we must locate it and return it to our consciousness. Some retrieval attempts may be effortless due to the type of information. (The three stages are illustrated in the short sketch after this list.)
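To make the three stages concrete, the toy Python sketch below (an invented illustration, not a model described in the text) treats memory as a simple key-value store: encoding converts raw input into a labelled form, storage holds onto it, and retrieval looks it up again using a cue.

# Toy illustration of encoding, storage and retrieval as operations on a key-value store.

memory_store = {}

def encode(cue, raw_experience):
    # Encoding: convert the raw input into a form the system can keep.
    return cue.lower(), raw_experience.strip()

def store(encoded):
    # Storage: hold onto the encoded record.
    cue, trace = encoded
    memory_store[cue] = trace

def retrieve(cue):
    # Retrieval: bring the stored record back, if the cue matches.
    return memory_store.get(cue.lower(), "no matching memory trace")

store(encode("Graduation day", " sunny afternoon, family photos "))
print(retrieve("graduation DAY"))   # -> sunny afternoon, family photos
print(retrieve("first kiss"))       # -> no matching memory trace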
Problems can occur at any stage of the process, leading to anything from forgetfulness
to amnesia. Distraction can prevent us from encoding information initially; information
might not be stored properly, or might not move from short-term to long-term storage;
and/or we might not be able to retrieve the information once it’s stored.
Memory Encoding
Our memory has three basic functions: encoding, storing, and retrieving information.
Encoding is the act of getting information into our memory system through automatic or
effortful processing. Storage is retention of the information, and retrieval is the act of
getting information out of storage and into conscious awareness through recall,
recognition, and relearning. There are various models that aim to explain how we
utilize our memory. In this section, you’ll learn about some of these models as well as
the importance of recall, recognition, and relearning.
Encoding
We get information into our brains through a process called encoding, which is the
input of information into the memory system. Once we receive sensory information
from the environment, our brains label or code it. We organize the information with
other similar information and connect new concepts to existing concepts. Encoding
information occurs through automatic processing and effortful processing. If someone
asks you what you ate for lunch today, more than likely you could recall this
information quite easily.
This is known as automatic processing, or the encoding of details like time, space,
frequency, and the meaning of words. Automatic processing is usually done without
any conscious awareness. Recalling the last time you studied for a test is another
example of automatic processing. But what about the actual test material you studied?
It probably required a lot of work and attention on your part in order to encode that
information.
What are the most effective ways to ensure that important memories are well
encoded? Even a simple sentence is easier to recall when it is meaningful (Anderson,
1984). Read the following sentences (Bransford & McCarrell, 1974), then look away
and count backwards from 30 by threes to zero, and then try to write down the
sentences (no peeking back at this page!).
1. The notes were sour because the seams split.
2. The voyage wasn’t delayed because the bottle shattered.
3. The haystack was important because the cloth ripped.
How well did you do? By themselves, the statements that you wrote down were most
likely confusing and difficult for you to recall. Now, try writing them again, using the
following prompts: bagpipe, ship christening (shattering a bottle over the bow of the
ship is a symbol of good luck), and parachutist. Next count backwards from 40 by
fours, then check yourself to see how well you recalled the sentences this time. You
can see that the sentences are now much more memorable because each of the
sentences was placed in context. Material is far better encoded when you make it
meaningful.
There are three types of encoding. The encoding of words and their meaning is known
as semantic encoding. It was first demonstrated by William Bousfield (1935) in an
experiment in which he asked people to memorize words. The 60 words were actually
divided into 4 categories of meaning, although the participants did not know this
because the words were randomly presented. When they were asked to remember the
words, they tended to recall them in categories, showing that they paid attention to the
meanings of the words as they learned them.
Visual encoding is the encoding of images, and acoustic encoding is the encoding
of sounds, words in particular. To see how visual encoding works, read over this list of
words: car, level, dog, truth, book, value. If you were asked later to recall the words
from this list, which ones do you think you’d most likely remember? You would
probably have an easier time recalling the words car, dog, and book, and a more
difficult time recalling the words level, truth, and value. Why is this? Because you can
recall images (mental pictures) more easily than words alone. When you read the
words car, dog, and book you created images of these things in your mind. These are
concrete, high-imagery words. On the other hand, abstract words like level,
truth, and value are low-imagery words. High-imagery words are encoded both visually
and semantically (Paivio, 1986), thus building a stronger memory.
Now let’s turn our attention to acoustic encoding. You are driving in your car and a
song comes on the radio that you haven’t heard in at least 10 years, but you sing
along, recalling every word. In the United States, children often learn the alphabet
through song, and they learn the number of days in each month through rhyme: “Thirty
days hath September, / April, June, and November; / All the rest have thirty-one, /
Save February, with twenty-eight days clear, / And twenty-nine each leap year.” These
lessons are easy to remember because of acoustic encoding. We encode the sounds
the words make. This is one of the reasons why much of what we teach young children
is done through song, rhyme, and rhythm.
Which of the three types of encoding do you think would give you the best memory of
verbal information? Some years ago, psychologists Fergus Craik and Endel Tulving
(1975) conducted a series of experiments to find out. Participants were given words
along with questions about them. The questions required the participants to process
the words at one of the three levels. The visual processing questions included such
things as asking the participants about the font of the letters. The acoustic processing
questions asked the participants about the sound or rhyming of the words, and the
semantic processing questions asked the participants about the meaning of the words.
After participants were presented with the words and questions, they were given an
unexpected recall or recognition task.
Words that had been encoded semantically were better remembered than those
encoded visually or acoustically. Semantic encoding involves a deeper level of
processing than the shallower visual or acoustic encoding. Craik and Tulving
concluded that we process verbal information best through semantic encoding,
especially if we apply what is called the self-reference effect. The self-reference effect
is the tendency for an individual to have better memory for information that relates to
oneself in comparison to material that has less personal relevance (Rogers, Kuiper &
Kirker, 1977).
Recoding
The process of encoding is selective, and in complex situations, relatively few of many
possible details are noticed and encoded. The process of encoding always
involves recoding—that is, taking the information from the form it is delivered to us
and then converting it in a way that we can make sense of it. For example, you might
try to remember the colors of a rainbow by using the acronym ROY G BIV (red, orange,
yellow, green, blue, indigo, violet). The process of recoding the colors into a name can help us to remember them.
Varieties of Memory
To be a good chess player you have to learn to increase working memory so you can
plan ahead for several offensive moves while simultaneously anticipating - through use
of memory - how the other player could counter each of your planned moves.
For most of us, remembering digits relies on short-term memory, or working memory—
the ability to hold information in our minds for a brief time and work with it (e.g.,
multiplying 24 x 17 without using paper would rely on working memory). Another type
of memory is
Episodic memory—the ability to remember the episodes of our lives. If you were
given the task of recalling everything you did 2 days ago, that would be a test of
episodic memory; you would be required to mentally travel through the day in your
mind and note the main events.
Semantic memory
is our storehouse of more-or-less permanent knowledge, such as the meanings of
words in a language (e.g., the meaning of “parasol”) and the huge collection of facts
about the world (e.g., there are 196 countries in the world, and 206 bones in your
body). Collective memory refers to the kind of memory that people in a group share
(whether family, community, schoolmates, or citizens of a state or a country). For
example, residents of small towns often strongly identify with those towns,
remembering the local customs and historical events in a unique way. That is, the
community’s collective memory passes stories and recollections between neighbors
and to future generations, forming a memory system unto itself.
Storage Memory
Every experience we have changes our brains. That may seem like a bold, even
strange, claim at first, but it’s true. We encode each of our experiences within the
structures of the nervous system, making new impressions in the process—and each
of those impressions involves changes in the brain. Psychologists (and
neurobiologists) say that experiences leave memory traces, or engrams (the two
terms are synonyms).
Memories have to be stored somewhere in the brain, so in order to do so, the brain
biochemically alters itself and its neural tissue. Just like you might write yourself a note
to remind you of something, the brain “writes” a memory trace, changing its own
physical composition to do so. The basic idea is that events (occurrences in our
environment) create engrams through a process of consolidation: the neural changes
that occur after learning to create the memory trace of an experience. Although
neurobiologists are concerned with exactly what neural processes change when
memories are created, for psychologists, the term memory trace simply refers to the
physical change in the nervous system (whatever that may be, exactly) that represents
our experience.
We often have errors in our memory, which would not exist if memory traces were perfect
packets of information.
Thus, it is wrong to think that remembering involves simply “reading out” a faithful
record of past experience. Rather, when we remember past events, we reconstruct
them with the aid of our memory traces—but also with our current belief of what
happened. For example, if you were trying to recall for the police who started a fight at
a bar, you may not have a memory trace of who pushed whom first. However, let’s say
you remember that one of the guys held the door open for you. When thinking back to
the start of the fight, this knowledge (of how one guy was friendly to you) may
unconsciously influence your memory of what happened in favor of the nice guy. Thus,
memory is a construction of what you actually recall and what you believe happened.
In a phrase, remembering is reconstructive (we reconstruct our past with the aid of
memory traces) not reproductive (a perfect reproduction or recreation of the past).
Psychologists refer to the time between learning and testing as the retention interval.
Memories can consolidate during that time, aiding retention. However, experiences can
also occur that undermine the memory. For example, think of what you had for lunch
yesterday—a pretty easy task. However, if you had to recall what you had for lunch 17
days ago, you may well fail (assuming you don’t eat the same thing every day). The 16
lunches you’ve had since that one have created retroactive interference. Retroactive
interference refers to new activities (i.e., the subsequent lunches) during the retention
interval (i.e., the time between the lunch 17 days ago and now) that interfere with
retrieving the specific, older memory (i.e., the lunch details from 17 days ago). But just
as newer things can interfere with remembering older things, so can the opposite
happen. Proactive interference is when past memories interfere with the encoding of
new ones. For example, if you have ever studied a second language, often times the
grammar and vocabulary of your native language will pop into your head, impairing
your fluency in the foreign language.
Elizabeth Loftus has shown how memory for an event can be changed via misinformation
supplied during the retention interval. For example, if you witnessed a car crash but
subsequently heard people describing it from their own perspective, this new
information may interfere with or disrupt your own personal recollection of the crash. In
fact, you may even come to remember the event happening exactly as the others
described it! This misinformation effect in eyewitness memory represents a type of
retroactive interference that can occur during the retention interval (see Loftus [2005]
for a review). Of course, if correct information is given during the retention interval, the
witness’s memory will usually be improved.
Memory Retrieval
Endel Tulving argued that “the key process in memory is retrieval” (1991, p. 91). Why
should retrieval be given more prominence than encoding or storage? For one thing, if
information were encoded and stored but could not be retrieved, it would be useless.
What factors determine what information can be retrieved from memory? One critical
factor is the type of hints, or cues, in the environment. You may hear a song on the
radio that suddenly evokes memories of an earlier time in your life, even if you were
not trying to remember it when the song came on. Nevertheless, the song is closely
associated with that time, so it brings the experience to mind.
The general principle that underlies the effectiveness of retrieval cues is the encoding
specificity principle (Tulving & Thomson, 1973): when people encode information,
they do so in specific ways. For example, take the song on the radio: perhaps you
heard it while you were at a terrific party, having a great, philosophical conversation
with a friend. Thus, the song became part of that whole complex experience. Years
later, even though you haven’t thought about that party in ages, when you hear the
song on the radio, the whole experience rushes back to you. In general, the encoding
specificity principle states that, to the extent a retrieval cue (the song) matches or
overlaps the memory trace of an experience (the party, the conversation), it will be
effective in evoking the memory. A classic experiment on the encoding specificity
principle had participants memorize a set of words in a unique setting. Later, the
participants were tested on the word sets, either in the same location they learned the
words or a different one. As a result of encoding specificity, the students who took the
test in the same place they learned the words were actually able to recall more words
(Godden & Baddeley, 1975) than the students who took the test in a new setting.
One caution with this principle, though, is that, for the cue to work, it can’t match too
many other experiences (Nairne, 2002; Watkins, 1975). Consider a lab experiment.
Suppose you study 100 items; 99 are words, and one is a picture—of a penguin, item
50 in the list. Afterwards, the cue “recall the picture” would evoke “penguin” perfectly.
No one would miss it. However, if the word “penguin” were placed in the same spot
among the other 99 words, it would not stand out, and its memorability would be much worse.
This outcome shows the power of distinctiveness that we discussed in the section on
encoding: one picture is perfectly recalled from among 99 words because it stands out.
Now consider what would happen if the experiment were repeated, but there were 25
pictures distributed within the 100-item list. Although the picture of the penguin would
still be there, the probability that the cue “recall the picture” (at item 50) would be useful
for the penguin would drop correspondingly. Watkins (1975) referred to this outcome
as demonstrating the cue overload principle. That is, to be effective, a retrieval cue
cannot be overloaded with too many memories. For the cue “recall the picture” to be
effective, it should only match one item in the target set (as in the one-picture, 99-word
case).
To sum up how memory cues function: for a retrieval cue to be effective, a match must
exist between the cue and the desired target memory; furthermore, to produce the best
retrieval, the cue-target relationship should be distinctive. Next, we will see how the
encoding specificity principle can work in practice.
Psychologists measure memory performance by using production tests (involving
recall) or recognition tests (involving the selection of correct from incorrect information,
e.g., a multiple-choice test). For example, with our list of 100 words, one group of
people might be asked to recall the list in any order (a free recall test), while a different
group might be asked to circle the 100 studied words out of a mix with another 100,
unstudied words (a recognition test). In this situation, the recognition test would likely
produce better performance from participants than the recall test.
We usually think of recognition tests as being quite easy, because the cue for retrieval
is a copy of the actual event that was presented for study. After all, what could be a
better cue than the exact target (memory) the person is trying to access? In most
cases, this line of reasoning is true; nevertheless, recognition tests do not provide
perfect indexes of what is stored in memory. That is, you can fail to recognize a target
staring you right in the face, yet be able to recall it later with a different set of cues
(Watkins & Tulving, 1975). For example, suppose you had the task of recognizing the
surnames of famous authors. At first, you might think that being given the actual last
name would always be the best cue. However, research has shown this not
necessarily to be true (Muter, 1984). When given names such as Tolstoy, Shaw,
Shakespeare, and Lee, subjects might well say that Tolstoy and Shakespeare are
famous authors, whereas Shaw and Lee are not. But, when given a cued recall test
using first names, people often recall items (produce them) that they had failed to
recognize before. For example, in this instance, a cue like George Bernard
________ often leads to a recall of “Shaw,” even though people initially failed to
recognize Shaw as a famous author’s name. Yet, when given the cue “William,” people
may not come up with Shakespeare, because William is a common name that matches
many people (the cue overload principle at work). This strange fact—that recall can
sometimes lead to better performance than recognition—can be explained by the
encoding specificity principle. As a cue, George Bernard _________ matches the way
the famous writer is stored in memory better than his surname, Shaw, does (even
though Shaw is the target). Further, the match is quite distinctive with George Bernard
_________, whereas a cue like William matches many other memories.
Remembering can also be reconstructive in a far more dramatic way. Jean Piaget
famously recounted a vivid childhood "memory" of an attempted kidnapping that later
proved to be false:
. . . when a man tried to kidnap me. I was held in by the strap fastened round me while my
nurse bravely tried to stand between me and the thief. She received various scratches,
and I can still vaguely see those on her face. . . . When I was about 15, my parents
received a letter from my former nurse saying that she had been converted to the
Salvation Army. She wanted to confess her past faults, and in particular to return the
watch she had been given as a reward on this occasion. She had made up the whole
story, faking the scratches. I therefore must have heard, as a child, this story, which my
parents believed, and projected it into the past in the form of a visual memory. . . .
Many real memories are doubtless of the same order. (Norman & Schacter, 1997, pp.
187–188)
Piaget’s vivid account represents a case of a pure reconstructive memory. He heard
the tale told repeatedly, and doubtless told it (and thought about it) himself. The
repeated telling cemented the events as though they had really happened, just as we
are all open to the possibility of having “many real memories ... of the same order.” The
fact that one can remember precise details (the location, the scratches) does not
necessarily indicate that the memory is true, a point that has been confirmed in
laboratory studies, too (e.g., Norman & Schacter, 1997).
So, how can these principles be adapted for use in many situations? Let’s go back to
how we started the module, with Simon Reinhard’s ability to memorize huge numbers
of digits. Although it was not obvious, he applied these same general memory
principles, but in a more deliberate way. In fact, all mnemonic devices, or memory
aids/tricks, rely on these fundamental principles. In a typical case, the person learns a
set of cues and then applies these cues to learn and remember information. Consider
the set of 20 items below that are easy to learn and remember (Bower & Reitman,
1972).
1 is a gun.    11 is penny-one, hot dog bun.
2 is a shoe.   12 is penny-two, airplane glue.
3 is a tree.   13 is penny-three, bumble bee.
4 is a door.   14 is penny-four, grocery store.
5 is knives.   15 is penny-five, big beehive.
The way to use the method is to form a vivid image of what you want to remember and
imagine it interacting with your peg words (as many as you need). For example, for
these items, you might imagine a large gun (the first peg word) shooting a loaf of
bread, then a jar of peanut butter inside a shoe, then large bunches of bananas
hanging from a tree, then a door slamming on a head of lettuce with leaves flying
everywhere. The idea is to provide good, distinctive cues (the weirder the better!) for
the information you need to remember while you are learning it. If you do this, then
retrieving it later is relatively easy. You know your cues perfectly (one is gun, etc.), so
you simply go through your cue word list and “look” in your mind’s eye at the image
stored there (bread, in this case).
This peg word method may sound strange at first, but it works quite well, even with little
training (Roediger, 1980). One word of warning, though, is that the items to be
remembered need to be presented relatively slowly at first, until you have practice
associating each with its cue word. People get faster with time. Another interesting
aspect of this technique is that it’s just as easy to recall the items in backwards order
as forwards. This is because the peg words provide direct access to the memorized
items, regardless of order.
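To make the procedure concrete, here is a minimal Python sketch of the peg-word idea, assuming only the pegs for 1–5 from the rhyme above; the grocery items are the ones used as examples in the text plus one invented item (milk). It is an illustration of the pairing logic, not a memory-training tool.

```python
# Minimal sketch of the peg-word mnemonic: pair each item to remember with a
# fixed, well-learned peg word, then imagine the two interacting vividly.
peg_words = {1: "gun", 2: "shoe", 3: "tree", 4: "door", 5: "knives"}  # from the rhyme above

# Items to memorize (examples from the text, plus "milk" invented for the sketch)
shopping_list = ["bread", "peanut butter", "bananas", "lettuce", "milk"]

# "Encoding": attach each item, in order, to its peg word.
images = {n: (peg_words[n], item) for n, item in enumerate(shopping_list, start=1)}

# "Retrieval": walk the pegs in any order; each peg cues exactly one item.
for n in sorted(images, reverse=True):          # backwards recall works just as well
    peg, item = images[n]
    print(f"{n} -> peg '{peg}': imagine the {peg} interacting with the {item}")
```

Because every peg cues exactly one image, the cue–target match stays distinctive, which is the same principle (encoding specificity plus avoiding cue overload) discussed above.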
How did Simon Reinhard remember those digits? Essentially he has a much more
complex system based on these same principles. In his case, he uses “memory
palaces” (elaborate scenes with discrete places) combined with huge sets of images
for digits. For example, imagine mentally walking through the home where you grew up
and identifying as many distinct areas and objects as possible.
Simon has hundreds of such memory palaces that he uses. Next, for remembering
digits, he has memorized a set of 10,000 images. Every four-digit number for him
immediately brings forth a mental image. So, for example, 6187 might recall Michael
Jackson. When Simon hears all the numbers coming at him, he places an image for
every four digits into locations in his memory palace. He can do this at an incredibly
rapid rate, faster than 4 digits per 4 seconds when they are flashed visually, as in the
demonstration at the beginning of the module. As noted, his record is 240 digits,
recalled in exact order. Simon also holds the world record in an event called “speed
cards,” which involves memorizing the precise order of a shuffled deck of cards. Simon
was able to do this in 21.19 seconds! Again, he uses his memory palaces, and he
encodes groups of cards as single images.
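As a rough sketch of the digit-encoding scheme described above (not Simon's actual system), the following Python fragment chunks a digit string into four-digit groups, looks each group up in a small image table, and assigns the images to successive loci. Apart from the 6187 → Michael Jackson pairing mentioned in the text, the images and loci are invented for illustration.

```python
# Toy illustration of the method-of-loci digit encoding: split digits into
# 4-digit chunks, map each chunk to a pre-learned image, and place the images
# at successive locations ("loci") of an imagined memory palace.
images = {"6187": "Michael Jackson",      # pairing taken from the text's example
          "0427": "a rowing boat",        # invented for the sketch
          "9315": "a lighthouse"}         # invented for the sketch
loci = ["front door", "hallway", "kitchen"]   # places in an imagined childhood home

def encode(digits: str):
    chunks = [digits[i:i + 4] for i in range(0, len(digits), 4)]
    # attach one image per chunk to one locus, in walking order
    return [(locus, images.get(chunk, f"<no image learned for {chunk}>"))
            for locus, chunk in zip(loci, chunks)]

print(encode("618704279315"))
# [('front door', 'Michael Jackson'), ('hallway', 'a rowing boat'), ('kitchen', 'a lighthouse')]
```

A real practitioner would have rehearsed an image for every four-digit code and many palaces, so recall is simply a mental walk through the loci in order.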
Many books exist on how to improve memory using mnemonic devices, but all involve
forming distinctive encoding operations and then having an infallible set of memory
cues. We should add that to develop and use these memory systems beyond the basic
peg system outlined above takes a great amount of time and concentration. The World
Memory Championships are held every year and the records keep improving.
However, for most common purposes, just keep in mind that to remember well you
need to encode information in a distinctive way and to have good cues for retrieval.
You can adapt a system that will meet most any purpose.
Sensory Memory
Sensory memory allows individuals to retain impressions of sensory information after
the original stimulus has ceased. One of the most common examples of sensory
memory is fast-moving lights in darkness: if you’ve ever lit a sparkler on the Fourth of
July or watched traffic rush by at night, the light appears to leave a trail. This is
because of “iconic memory,” the visual sensory store. Two other types of sensory
memory have been extensively studied: echoic memory (the auditory sensory store)
and haptic memory (the tactile sensory store). Sensory memory is not involved in
higher cognitive functions like short- and long-term memory; it is not consciously
controlled. The role of sensory memory is to provide a detailed representation of our
entire sensory experience, from which relevant pieces of information are extracted by
short-term memory and processed by working memory.
KEY TAKEAWAYS
Key Points
Sensory memory allows individuals to recall great detail about a complex
stimulus immediately following its presentation.
There are different types of sensory memory, including iconic memory, echoic
memory, and haptic memory.
In sensory memory, no manipulation of the incoming information occurs, and the
input is quickly transferred to the working memory.
Key Terms
sensory memory: The brief storage (in memory) of information experienced by
the senses; typically only lasts up to a few seconds.
iconic: Visually representative.
echoic: Imitative of a sound; onomatopoeic.
Sensory memory allows individuals to retain impressions of sensory information for a
brief time after the original stimulus has ceased. It allows individuals to remember great
detail about a complex stimulus immediately following its presentation.
[Figure] Light trails: In iconic memory, you perceive a moving bright light as forming a
continuous line because of the images retained in sensory memory for milliseconds.
Echoic Memory
Echoic memory is the branch of sensory memory used by the auditory system. Echoic
memory is capable of holding a large amount of auditory information, but only for 3–4
seconds. This echoic sound is replayed in the mind for this brief amount of time
immediately after the presentation of the auditory stimulus.
Haptic Memory
Haptic memory is the branch of sensory memory used by the sense of touch. Sensory
receptors all over the body detect sensations like pressure, itching, and pain, which are
briefly held in haptic memory before vanishing or being transported to short-term
memory. This type of memory seems to be used when assessing the necessary forces
for gripping and interacting with familiar objects. Haptic memory seems to decay after
about two seconds. Evidence of haptic memory has only recently been identified and
not as much is known about its characteristics compared to iconic memory.
Short-Term Memory
Short-term memory is often used interchangeably with working memory, although the
two are distinguished below. It holds only a few items (research shows a range of about
7 ± 2 items) and lasts for only about 20 seconds. However, items can be moved from
short-term memory to long-term memory via encoding and consolidation.
Key Points
Short-term memory acts as a scratchpad for temporary recall of information
being processed. It decays rapidly and has a limited capacity.
Rehearsal and chunking are two ways to make information more likely to be held
in short-term memory.
Working memory is related to short-term memory. It contains a phonological loop
that preserves verbal and auditory data, a visuospatial sketchpad that preserves
visual data, and a central executive that controls attention to the data.
Key Terms
chunking: Grouping individual pieces of information into larger, meaningful units so
that more can be held in short-term memory (see the sketch after this list).
encoding: The process of converting information into a construct that can be
stored within the brain.
consolidation: A process that stabilizes a memory trace after its initial
acquisition.
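As promised above, here is a minimal Python sketch of chunking. The phone number and the group size are arbitrary choices for the example; the point is only that regrouping reduces the number of units that must be held at once.

```python
# Minimal sketch of chunking: regroup a long digit string into a few larger
# units so that fewer "items" need to be held in short-term memory.
def chunk(digits: str, size: int = 3):
    return [digits[i:i + size] for i in range(0, len(digits), size)]

phone = "8005551234"        # 10 separate digits, at or beyond the 7 +/- 2 limit
print(chunk(phone))         # ['800', '555', '123', '4'] -> roughly 4 chunks to remember
```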
Short-term memory is the capacity for holding a small amount of information in an
active, readily available state for a brief period of time. It is separate from our long-term
memory, where lots of information is stored for us to recall at a later time. Unlike
sensory memory, it is capable of temporary storage. How long this storage lasts
depends on conscious effort from the individual; without rehearsal or active
maintenance, the duration of short-term memory is believed to be on the order of
seconds.
The psychologist George Miller suggested that human short-term memory has a
forward memory span of approximately seven items plus or minus two. More recent
research has shown that this number is roughly accurate for college students recalling
lists of digits, but memory span varies widely with populations tested and with material
used.
For example, the ability to recall words in order depends on a number of characteristics
of these words: fewer words can be recalled when the words have longer spoken
duration (this is known as the word-length effect) or when their speech sounds are
similar to each other (this is called the phonological similarity effect). More words can
be recalled when the words are highly familiar or occur frequently in the language.
Working Memory
Though the term “working memory” is often used synonymously with “short-term
memory,” working memory is related to but actually distinct from short-term memory. It
holds temporary data in the mind where it can be manipulated. Baddeley and Hitch’s
1974 model of working memory is the most commonly accepted theory of working
memory today. According to Baddeley, working memory has a phonological loop to
preserve verbal data, a visuospatial sketchpad to hold visual data, and a central
executive to distribute attention between them.
Phonological Loop
The phonological loop is responsible for dealing with auditory and verbal information,
such as phone numbers, people’s names, or general understanding of what other
people are talking about. We could roughly say that it is a system specialized for
language. It consists of two parts: a short-term phonological store with
auditory memory traces that are subject to rapid decay, and an articulatory loop that
can revive these memory traces. The phonological store can only store sounds for
about two seconds without rehearsal, but the articulatory loop can “replay them” internally
to keep them in working memory. The repetition of information deepens the memory.
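A toy simulation can make this decay-and-rehearsal cycle concrete. The roughly two-second store lifetime comes from the text; the time steps, item names, and class structure below are invented purely for illustration and are not a model from the literature.

```python
# Toy model of the phonological store: traces decay after ~2 seconds unless the
# articulatory (rehearsal) loop "replays" them, which resets their lifetime.
STORE_LIFETIME = 2.0  # seconds, as described above

class PhonologicalLoop:
    def __init__(self):
        self.traces = {}                        # item -> time remaining before decay

    def hear(self, item):
        self.traces[item] = STORE_LIFETIME      # a fresh auditory trace

    def rehearse(self, item):
        if item in self.traces:
            self.traces[item] = STORE_LIFETIME  # replaying refreshes the trace

    def tick(self, dt):
        for item in list(self.traces):
            self.traces[item] -= dt
            if self.traces[item] <= 0:
                del self.traces[item]           # unrehearsed trace has decayed

loop = PhonologicalLoop()
loop.hear("phone number")
loop.hear("name")
loop.tick(1.5)
loop.rehearse("phone number")                   # only one item is rehearsed
loop.tick(1.0)
print(list(loop.traces))                        # ['phone number'] - the other trace decayed
```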
Visuospatial Sketchpad
Visual and spatial information is handled in the visuospatial sketchpad. This means
that information about the position and properties of objects can be stored. The
phonological loop and visuospatial sketchpad are semi-independent systems; because
of this, you can increase the amount you can remember by engaging both systems at
once. For instance, you might be better able to remember an entire phone number if
you visualize part of it (using the visuospatial sketchpad) and then say the rest of it out
loud (using the phonological loop).
Central Executive
The central executive connects the phonological loop and the visuospatial sketchpad
and coordinates their activities. It also links the working memory to the long-term
memory, controls the storage of long-term memory, and manages memory retrieval
from storage. The process of storage is influenced by the duration in which information
is held in working memory and the amount that the information is manipulated.
Information is stored for a longer time if it is semantically interpreted and viewed with
relation to other information already stored in long-term memory.
Transfer to Long-Term Memory
The process of transferring information from short-term to long-term memory involves
encoding and consolidation of information. This is a function of time; that is, the longer
the memory stays in the short-term memory the more likely it is to be placed in the
long-term memory. In this process, the meaningfulness or emotional content of an item
may play a greater role in its retention in the long-term memory.
Long-Term Memory
Long-term memories are all the memories we hold for periods of time longer than a few
seconds; long-term memory encompasses everything from what we learned in first
grade to our old addresses to what we wore to work yesterday. Long-term memory has
an incredibly vast storage capacity, and some memories can last from the time they
are created until we die.
There are many types of long-term memory. Explicit or declarative memory requires
conscious recall; it consists of information that is consciously stored or retrieved.
Explicit memory can be further subdivided into semantic memory (facts taken out of
context, such as “Paris is the capital of France”) and episodic memory (personal
experiences, such as “When I was in Paris, I saw the Mona Lisa”).
In contrast to explicit/declarative memory, there is also a system for procedural/implicit
memory. These memories are not based on consciously storing and retrieving
information, but on implicit learning. Often this type of memory is employed in learning
new motor skills. An example of implicit learning is learning to ride a bike: you do not
need to consciously remember how to ride a bike, you simply do. This is because of
implicit memory.
Key Points
Long-term memory is the final, semi-permanent stage of memory; it has a
theoretically infinite capacity, and information can remain there indefinitely.
Long-term memories can be categorized as either explicit or implicit memories.
Explicit memories involve facts, concepts, and events, and must be recalled
consciously.
Explicit memories can be either semantic (abstract, fact-based) or episodic
(based on a specific event).
Implicit memories are procedures for completing motor actions.
Key Terms
long-term memory: Memory in which associations among items are stored
indefinitely; part of the theory of a dual-store memory model.
script: A “blueprint” or routine for dealing with a specific situation.
If we want to remember something tomorrow, we have to consolidate it into long-term
memory today. Long-term memory is the final, semi-permanent stage of memory.
Unlike sensory and short-term memory, long-term memory has a theoretically infinite
capacity, and information can remain there indefinitely. Long-term memory has also
been called reference memory, because an individual must refer to the information in
long-term memory when performing almost any task. Long-term memory can be
broken down into two categories: explicit and implicit memory.
Explicit Memory
Explicit memory, also known as conscious or declarative memory, involves memory of
facts, concepts, and events that require conscious recall of the information. In other
words, the individual must actively think about retrieving the information from memory.
This type of information is explicitly stored and retrieved—hence its name. Explicit
memory can be further subdivided into semantic memory, which concerns facts, and
episodic memory, which concerns primarily personal or autobiographical information.
Semantic Memory
Semantic memory involves abstract factual knowledge, such as “Albany is the capital
of New York.” It is for the type of information that we learn from books and school:
faces, places, facts, and concepts. You use semantic memory when you take a
test. Another type of semantic memory is called a script. Scripts are like blueprints of
what tends to happen in certain situations. For example, what usually happens if you
visit a restaurant? You get the menu, you order your meal, you eat it, and then you pay
the bill. Through practice, you learn these scripts and encode them into semantic
memory.
Episodic Memory
Episodic memory is used for more contextualized memories. They are generally
memories of specific moments, or episodes, in one’s life. As such, they include
sensations and emotions associated with the event, in addition to the who, what,
where, and when of what happened. An example of an episodic memory would be
recalling your family’s trip to the beach. Autobiographical memory (memory for
particular events in one’s own life) is generally viewed as either equivalent to, or a
subset of, episodic memory. One specific type of autobiographical memory is a
flashbulb memory, which is a highly detailed, exceptionally vivid “snapshot” of the
moment and circumstances in which a piece of surprising and consequential (or
emotionally arousing) news was heard. For example, many people remember exactly
where they were and what they were doing when they heard of the terrorist attacks on
September 11, 2001. This is because it is a flashbulb memory.
Semantic and episodic memory are closely related; memory for facts can be enhanced
with episodic memories associated with the fact, and vice versa. For example, the
answer to the factual question “Are all apples red?” might be recalled by remembering
the time you saw someone eating a green apple. Likewise, semantic memories about
certain topics, such as football, can contribute to more detailed episodic memories of a
particular personal event, like watching a football game. A person who barely knows
the rules of football will remember the various plays and outcomes of the game in
much less detail than a football expert.
Implicit Memory
In contrast to explicit (conscious) memory, implicit (also called “unconscious” or
“procedural”) memory involves procedures for completing actions. These actions
develop with practice over time. Athletic skills are one example of implicit memory. You
learn the fundamentals of a sport, practice them over and over, and then they flow
naturally during a game. Rehearsing for a dance or musical performance is another
example of implicit memory. Everyday examples include remembering how to tie your
shoes, drive a car, or ride a bicycle. These memories are accessed without conscious
awareness—they are automatically translated into actions without us even realizing it.
As such, they can often be difficult to teach or explain to other people. Implicit
memories differ from the semantic scripts described above in that they are usually
actions that involve movement and motor coordination, whereas scripts tend to
emphasize social norms or behaviors.
The general consensus regarding working memory supports the idea that working
memory is extensively involved in goal-directed behaviors in which information must be
retained and manipulated to ensure successful task execution. Before the emergence
of other competing models, the concept of working memory was described by the
multicomponent working memory model proposed by Baddeley and Hitch. In the
present article, the authors provide an overview of several working memory-relevant
studies in order to harmonize the findings on working memory from the neuroscience
and psychological standpoints, drawing on evidence from past studies of
healthy, aging, diseased, and/or lesioned brains.
In particular, the theoretical framework behind working memory is presented and
discussed, together with the related domains that are considered to play a part in
different frameworks (such as memory’s capacity limit and temporary storage).
From the neuroscience perspective, it has been established that working memory
activates the fronto-parietal brain regions, including the prefrontal, cingulate, and
parietal cortices. Recent studies have subsequently implicated the roles of subcortical
regions (such as the midbrain and cerebellum) in working memory.
Aging also appears to have modulatory effects on working memory; age interactions
with emotion, caffeine and hormones appear to affect working memory performances
at the neurobiological level. Moreover, working memory deficits are apparent in older
individuals, who are susceptible to cognitive deterioration. Another younger population
with working memory impairment consists of those with mental, developmental, and/or
neurological disorders such as major depressive disorder and others.
A less coherent and organized neural pattern has been consistently reported in these
disadvantaged groups. Working memory of patients with traumatic brain injury was
similarly affected and shown to have unusual neural activity (hyper- or hypoactivation)
as a general observation. Decoding the underlying neural mechanisms of working
memory helps support the current theoretical understandings concerning working
memory, and at the same time provides insights into rehabilitation programs that target
working memory impairments from neurophysiological or psychological aspects.
Working memory has fascinated scholars since its inception in the 1960s (Baddeley,
2010; D’Esposito and Postle, 2015). Indeed, more than a century of scientific studies
revolving around memory in the fields of psychology, biology, or neuroscience have not
completely agreed upon a unified categorization of memory, especially in terms of its
functions and mechanisms (Cowan, 2005, 2008; Baddeley, 2010).
From Hermann Ebbinghaus’s pioneering experimental studies of memory in the 1880s, to the
distinction made between primary and secondary memory by William James in 1890,
and to the now widely accepted and used categorizations of memory that include:
short-term, long-term, and working memories, studies that have tried to decode and
understand this abstract concept called memory have been extensive (Cowan,
2005, 2008).
The terms short-term and long-term memory suggest that the difference between the two
lies in how long the encoded information is retained. Beyond that, long-term memory has
been unanimously understood as a huge reserve of knowledge about past events, and
its existence in a functioning human being is without dispute (Cowan, 2008). Further
categorizations of long-term memory include several categories:
(1) episodic;
(2) semantic;
(3) Pavlovian; and
(4) procedural memory (Humphreys et al., 1989). For example, understanding and
using language in reading and writing demonstrates long-term storage of semantics.
Meanwhile, short-term memory was defined as temporarily accessible information that
has a limited storage time (Cowan, 2008).
Holding a string of meaningless numbers in the mind for brief delays reflects this short-
term component of memory.
Thus, the concept of working memory emerged: it shares similarities with short-term
memory but attempts to address the oversimplification of short-term memory by
introducing the role of information manipulation (Baddeley, 2012). This article seeks to
present an up-to-date introductory overview of the realm of working memory by
outlining several working memory studies from the psychological and neurosciences
perspectives in an effort to refine and unite the scientific knowledge concerning
working memory.
Episodic memory
Episodic memory refers to any events that can be reported from a person’s life.
This covers information such as the times and places involved – for example, when you
went to the zoo with a friend last week. It is a type of ‘declarative’ memory, i.e. it can
be explicitly inspected and recalled consciously. Episodic memory can be split further
into autobiographical episodic memory (memories of specific episodes of one’s life)
and experimental episodic memory (where learning a fact [a semantic memory, below]
has been associated with memory of the specific life episode when it was
learned). Flashbulb memories are detailed autobiographical episodic memories that
are stored permanently in LTM when they are first learned, often because they were of
emotional or historical importance in that person’s life (e.g. a birth or a death).
Semantic memory
Like episodic memory, semantic memory is also a type of ‘declarative’ (explicit,
consciously recalled) memory.
However, the conscious recall here is of facts that have meaning, as opposed to the
recall of past life events associated with episodic memory. For instance, recalling that
you listen to music using your ears does not require knowing when or where you first
learned this fact.
Procedural memory
Procedural memory describes our implicit knowledge of tasks that usually do not
require conscious recall to perform them. One example would be riding a bike: you
might struggle to consciously recall how to manage the task, but you can
[unconsciously] perform it with relative ease.
But as intervening days pass, the memories of all the other meals you have eaten
since then start to interfere with your memory of that one particular meal. This is a
good example of what psychologists call the interference theory of forgetting.4
According to interference theory, forgetting is the result of different memories
interfering with one another. The more similar two or more events are to one another,
the more likely interference will occur.
It is difficult to remember what happened on an average school day two months ago
because so many other days have occurred since then. Unique and distinctive events,
however, are less likely to suffer from interference. Your high school graduation,
wedding, and the birth of your first child are much more likely to be recalled because
they are singular events—days like no other.
Interference also plays a role in what is known as the serial position effect, or the
tendency to recall the first and last items of a list.5 For example, imagine that you wrote
down a shopping list but forgot to take it with you to the store. In all likelihood, you will
probably be able to easily recall the first and last items on your list, but you might forget
many of the items that were in the middle.
The first thing you wrote down and the last thing you wrote down stand out as being
more distinct, while the fourth item and seventh item might seem so similar that they
interfere with each other. There are two basic types of interference that can occur:4
Retroactive interference happens when newly acquired information interferes
with old memories. For example, a teacher learning the names of her new class
of students at the start of a school year might find it more difficult to recall the
names of the students in her class last year. The new information interferes with
the old information.
Proactive interference occurs when previously learned information makes it
more difficult to form new memories. Learning a new phone number or locker
combination might be more difficult, for example, because your memories of your
old phone number and combination interfere with the new information.
Eliminating interference altogether is impossible, but there are a few things you can do
to minimize its effects. One of the best things you can do is rehearse new information
in order to better commit it to memory. In fact, many experts
recommend overlearning important information, which involves rehearsing the material
over and over again until it can be reproduced perfectly with no errors.6
Another tactic to fight interference is to switch up your routine and avoid studying
similar material back to back. For example, don't try to study vocabulary terms for your
Spanish language class right after studying terms for your German class. Break up the
material and switch to a completely different subject each study session.
Sleep also plays an essential role in memory formation. Researchers suggest
that sleeping after you learn something new is one of the best ways to turn new
memories into lasting ones.
Trace decay theory proposes that the length of time between the memory and recalling that
information determines whether the information will be retained or forgotten. If the time
interval is short, more information will be recalled. If a longer period of time passes,
more information will be forgotten and memory will be poorer.
The idea that memories fade over time is hardly new. The Greek philosopher Plato
suggested such a thing more than 2,500 years ago. Later, experimental research by
psychologists such as Ebbinghaus bolstered this theory.2
One of the problems with this theory is that it is difficult to demonstrate that time alone
is responsible for declines in recall. In real-world situations, many things happen
between the formation of a memory and the recall of that information. A student who
learns something in class, for example, might have hundreds of unique and individual
experiences between learning that information and having to recall it on an exam.
Was forgetting the date that the American Revolutionary War began due to the length
of time between learning the date in your American History class and being tested on
it? Or did the multitude of information acquired during that interval of time play a role?
Testing this can be exceedingly difficult. It is nearly impossible to eliminate all the
information that might have an influence on the creation of the memory and the recall
of the memory.
Another problem with decay theory is it does not account for why some memories fade
quickly while others linger. Novelty is one factor that plays a role. For example, you are
more likely to remember your very first day of college than all of the intervening days
between it and graduation. That first day was new and exciting, but all the following
days probably seem quite similar to each other.
Retrieval cues can also be internal or state-based – coming from inside us, e.g. our physical or emotional state, mood, or intoxication.
There is considerable evidence that information is more likely to be retrieved from long-
term memory if appropriate retrieval cues are present. This evidence comes from both
laboratory experiments and everyday experience. A retrieval cue is a hint or clue that
can help retrieval.
Tulving (1974) argued that information would be more readily retrieved if the cues
present when the information was encoded were also present when its retrieval is
required. For example, if you proposed to your partner when a certain song was
playing on the radio, you will be more likely to remember the details of the proposal
when you hear the same song again. The song is a retrieval cue - it was present when
the information was encoded and retrieved.
Tulving suggested that information about the physical surroundings (external context)
and about the physical or psychological state of the learner (internal context) is stored
at the same time as information is learned. Reinstating the state or context makes
recall easier by providing relevant information, while retrieval failure occurs when
appropriate cues are not present. For example, when we are in a different context (i.e.
situation) or state.
Context also refers to the way information is presented. For example, words may be
printed, spoken or sung, they may be presented in meaningful groups - in categories
such as lists of animals or furniture - or as a random collection without any link
between them. Evidence indicates that retrieval is more likely when the context at
encoding matches the context at retrieval.
You may have experienced the effect of context on memory if you have ever visited a
place where you once lived (or an old school). Often such a visit helps people recall
lots of experiences about the time they spent there which they did not realize were
stored in their memory.
A number of experiments have indicated the importance of context-based cues for
retrieval. An experiment conducted by Tulving and Pearlstone (1966) asked
participants to learn lists of words belonging to different categories, for example names
of animals, clothing and sports.
Participants were then asked to recall the words. Those who were given the category
names recalled substantially more words than those who were not. The categories
provided a context, and naming the categories provided retrieval cues. Tulving and
Pearlstone argued that cue-dependent forgetting explains the difference between the
two groups of participants. Those who recalled fewer words lacked appropriate
retrieval cues.
In a classic study, Godden and Baddeley (1975) asked divers to learn lists of words
either on the beach or underwater. Half of the underwater group remained there to
recall the words and the others had to recall on the beach. The results show that those
who recalled in the same environment (i.e. context) in which they had learned recalled
40% more words than those recalling in a
different environment. This suggests that the retrieval of information is improved if it
occurs in the context in which it was learned.
For example, if someone tells you a joke on Saturday night after a few drinks, you'll be
more likely to remember it when you're in a similar state - at a later date after a few
more drinks. Stone cold sober on Monday morning, you'll be more likely to forget the
joke.
Retrieval cues may also be based on state – the physical or psychological state of the
person when information is encoded and retrieved. For example, a person may be
alert, tired, happy, sad, drunk or sober when the information was encoded. They will
be more likely to retrieve the information when they are in a similar state.
Tulving and Pearlstone’s (1966) study involved external cues (e.g. presenting
category names). However, cue-dependent forgetting has also been shown
with internal cues (e.g. mood state). Information about current mood state is often
stored in the memory trace, and there is more forgetting if the mood state at the time of
retrieval is different. The notion that there should be less forgetting when the mood
state at learning and at retrieval is the same is generally known as mood-state-
dependent memory.
Mood-state-dependent effects tend to be stronger when people are in a
positive mood than a negative mood. They are also greater when people try to
remember events having personal relevance.
Evaluation
According to retrieval-failure theory, forgetting occurs when information is available in
LTM but is not accessible. Accessibility depends in large part on retrieval cues.
Forgetting is greatest when context and state are very different at encoding and
retrieval. In this situation, retrieval cues are absent and the likely result is cue-
dependent forgetting.
A math achievement growth score was devised from a regression model of fall math
achievement predicting spring achievement. Results show that children with higher
math self-perceptions showed reduced growth in math achievement across the school
year as a function of math anxiety. Children with lower math interest self-perceptions
did not show this relationship. Results serve as a proof-of-concept for a scientific
account of motivated forgetting within the context of education.
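The "growth score" described here is a regression-based (residualized) gain: spring achievement is regressed on fall achievement, and each child's deviation from the predicted spring score is treated as growth. The article's exact computation is not given in this excerpt, so the following Python sketch is only a minimal illustration of that general approach, using simulated data and invented variable names.

```python
# Minimal sketch of a regression-based growth score: regress spring math
# achievement on fall achievement and treat each child's residual as "growth"
# beyond what the fall score predicts. Data and names are simulated/invented.
import numpy as np

rng = np.random.default_rng(0)
fall = rng.normal(50, 10, size=200)                    # fall math achievement
spring = 5 + 0.9 * fall + rng.normal(0, 5, size=200)   # spring achievement

# Ordinary least squares fit: spring = b0 + b1 * fall
b1, b0 = np.polyfit(fall, spring, deg=1)
predicted_spring = b0 + b1 * fall
growth_score = spring - predicted_spring               # residualized growth

print(round(b1, 2), round(growth_score.mean(), 4))     # slope ~0.9, residual mean ~0
```

In such a design, a negative growth score means a child gained less over the year than their fall score would predict, which is how reduced growth as a function of math anxiety could be detected.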
Introduction
Despite all of the effort that students put into studying, they commonly report that
knowledge is rapidly lost once a course is over. While the belief in a total loss of
formally acquired knowledge is false (Bahrick, 1979), it is true that students experience
a significant amount of forgetting soon after the completion of a course (Conway et al.,
1991; Kamuche and Ledman, 2011). Research on long-term retention of classroom
knowledge reports that forgetting arises because of blocked learning schedules
(Landauer and Bjork, 1978; Dempster, 1992), a lack of subsequent relearning (Bahrick
and Phelps, 1987; Bahrick and Hall, 1991; Cooper et al., 2000; Deslauriers and
Wieman, 2011), poor initial knowledge structures and shallow levels of understanding
gained during the course itself (Conway et al., 1991). In this article, I consider an
alternative explanation. I draw on the suppression and threat based coping literature to
argue that students themselves may be motivated to forget due to negative academic
experiences that threaten their self-perceptions.
Motivated Forgetting
Motivated forgetting is the active process of forgetting memories that are unpleasant,
painful, or generally threatening to the self-image that individuals strive to maintain (Tajfel
and Turner, 1986; Thompson et al., 1997). Research in cognitive psychology
demonstrates that people can intentionally down-prioritize unwanted memories from
entering consciousness via control processes.
Current Study
To summarize, memory research suggests that people are capable of intentional
forgetting. Threat-based theories identify self-perceptions and experiences that
Limitations
The current study holds several limitations which warrant attention. For instance, the
data collection timeline makes it difficult to make a strong case that the winter break
period was responsible for allowing students to engage in motivated forgetting; all of
the students in the study underwent a winter break period. Future research should
examine achievement immediately prior to and after the winter break period using
assessments that measure what children are learning in their specific class. The
reliability for my measure of math self-perceptions could have been higher if children
had been presented with more items, as well. The correlational research design also
did not allow me to manipulate extensive rest periods or identity threat, which could
have provided causal evidence for the account I put forth. Lastly, I was also not able to
assess perceived threat, intention to forget, or suppression-avoidance processes,
which limited my ability to provide greater evidence for the mechanism of motivated
forgetting.
1. When an object is moved farther away, we tend to see it as more or less invariant in size. This is due to:
(1) Shape Constancy
(2) Colour Constancy
(3) Size Constancy
(4) Brightness Constancy
(5) None of the above
Answer: (3)

2. Reversible goblet is a favourite demonstration of:
(1) A figure-ground reversal
(2) A focus-margin reversal
(3) A shape-size reversal
(4) A size-contour reversal
(5) None of the above
Answer: (1)

3. A simpler form of stroboscopic motion is:
(1) Psychokinesis
(2) Autokinetic Effect
(3) Phi-Phenomenon
(4) Illusion
(5) Hallucination
Answer: (3)

4. Empiricists (Berkeley, Locke) maintained that we learn our ways of perceiving through:
(1) Eyes
(2) Motivation
(3) Experience
(4) Learning
(5) None of the above
Answer: (3)

5. William James characterised the perception of an infant as a:
(1) Blooming buzzing confusion
(2) Haphazard Stimulation of nerve cells
(3) Stimulation of nerve cells in the eyes
(4) Stimulation of rods and cones
(5) None of the above
Answer: (1)

6. Behaviourists have:
(1) No theory of learning
(2) No theory of thinking
(3) No theory of memory
(4) No theory of perception
(5) None of the above
Answer: (4)

7. The process by which the eyes get prepared to see very dim light is known as:
(1) Light Adaptation
(2) Dark Adaptation
(3) Brightness Adaptation
(4) Colour Adaptation
(5) None of the above
Answer: (2)

8. The partially colour blind people are known as:
(1) Blind
(2) Colour-blind
(3) Achromats
(4) Dichromats
(5) None of the above
Answer: (4)

9. Totally colour blind people are otherwise known as:
(1) Colour-blind
(2) Achromats
(3) Monochromats
(4) Dichromats
(5) None of the above
Answer: (3)
10. The phenomenon of shifting from one picture to another is known as:
(1) Retinal Rivalry
(2) Eyes Rivalry
(3) Attention Rivalry
(4) Perception Rivalry
(5) None of the above
Answer: (1)

11. When the lens cannot bulge out to the extent necessary due to muscular defects, the individual suffers from:
(1) Double Vision
(2) Astigmatism
(3) Myopia
(4) Farsightedness
(5) None of the above
Answer: (4)

12. Due to the irregularities in the formation of the lens or the cornea, the object viewed will be partly clear and partly blurred. This occurs when the individual suffers from:
(1) Astigmatism
(2) Colour-blindness
(3) Distraction
(4) Retinal Disparity
(5) None of the above
Answer: (1)

13. The length of a car number has reference to:
(1) Shifting of Attention
(2) Distraction
(3) Span of Attention
(4) Focus and Margin
(5) Focus of Consciousness
Answer: (3)

14. Sensations of movement from inside our bodies are called:
(1) Proprioception
(2) Perception
(3) Interoception
(4) Sensation
(5) Attention
Answer: (1)

15. Sensation increases by a constant amount each time the stimulus is doubled. This is called:
(1) Lewin-Zeigarnik Effect
(2) Lewin-Prentice Effect
(3) Malinowski Law
(4) Weber-Fechner Law
(5) None of the above
Answer: (4)

16. The experiments which tell us about the relationship between the intensity of stimulus and the consequent changes in the intensity of sensation are included in:
(1) Psychoanalysis
(2) Gestalt Psychology
(3) Parapsychology
(4) Psychophysics
(5) None of the above
Answer: (4)

17. The study on the Zulu tribes of Africa revealed that the Zulu individuals would be less susceptible to the:
(1) Figure-Ground Phenomenon
(2) Zollner Illusion
(3) Ponzo Illusion
(4) Muller-Lyer Illusion
(5) None of the above
Answer: (4)
29. Visual reaction of the dark adapted eye to very feeble light is called:
(1) Photopic Vision
(2) Scotopic Vision
(3) Visual Acuity
(4) Colourless Vision
(5) None of the above
Answer: (2)

30. Visual angle varies inversely with:
(1) Acuity
(2) Light
(3) Brightness
(4) Length of cones
(5) Length of rods
Answer: (1)

33. Cases of yellow-blue colour blindness are:
(1) Maximum
(2) Exceedingly Rare
(3) Mostly found in children
(4) Mostly found in women
(5) Mostly found in men
Answer: (2)

34. "Interest is latent attention and attention is interest in action." This statement deals with the:
(1) Objective determinants of attention
(2) Span of attention
(3) Subjective determinants of attention
(4) Shifting of attention
(5) None of the above
Answer: (3)
35. "It is not a different process; it is just attention to irrelevant stimuli that are not a part of the main assigned task." Then what is it?
(1) Distraction
(2) Shifting of Attention
(3) Span of attention
(4) Involuntary Attention
(5) Voluntary Attention
Answer: (1)

36. Simultaneous focussing on two separate activities is otherwise known as:
(1) Span of attention
(2) Shifting of attention
(3) Division of attention
(4) Distraction
(5) None of the above
Answer: (3)

37. When familiar large objects look smaller than they are known to be, they are regarded as being at a distance. This is an instance of what is called:
(1) Visual Acuity
(2) Monocular Parallax
(3) Linear Perspective
(4) Scotopic Vision
(5) Photopic Vision
Answer: (3)

38. Movement Parallax is a monocular cue of distance or depth and for this reason it is also called:
(1) Visual Acuity
(2) Monocular Parallax
(3) Linear Perspective
(4) Scotopic Vision
(5) Photopic Vision
Answer: (2)

39. Certain alterations in the colour of objects occur depending upon their relative distances from the observer. For this reason, distant hills look blue on account of the light rays travelling through haze. This illustrates:
(1) Linear Perspective
(2) Monocular Parallax
(3) Aerial Perspective
(4) Visual Acuity
(5) Scotopic Vision
Answer: (3)

40. A set of depth cues consisting of some sort of arrangement of proportional rise and fall in the compactness of designs, which is related to perspectives, is called:
(1) Rods
(2) Cones
(3) Gradients
(4) Perspectives
(5) None of the above
Answer: (3)

41. The apparent displacement of an object resulting from an actual change of the observer's position is known as:
(1) Parallax
(2) Acuity
(3) Scotopic Vision
(4) Photopic Vision
(5) None of the above
Answer: (1)

42. The phenomenon of "induced movement" occurs when there is some real movement which is attributed to:
(1) A right object
(2) A real object
(3) A wrong object
43. "It has been said that beauty is in the eye of the beholder." With what factors of perception does this statement deal?
(1) Objective Factors
(2) Figure and Ground
(3) Phi-phenomenon
(4) Functional Factors
(5) None of the above
Answer: (4)

44. According to Woodworth (1938), the first systematic experiment that attempted to measure the span of apprehension was carried out by:
(1) Jevons
(2) Paulham
(3) Jastrow
(4) Cairnes
(5) Hersey
Answer: (1)

45. The first experiment to measure span of attention (apprehension) was designed by:
(1) Coren and Porac in 1979
(2) Coren and Girgus in 1978
(3) Jevons in 1871
(4) Watkins in 1973
(5) None of the above
Answer: (3)

46. Who was/were the Subject (S) in Jevons' first experiment on span of apprehension?
(1) Boys from different SES
(2) Girls from different SES
(3) A friend of Jevons

47. The first person to document the existence of the sensory register and to explore its properties was:
(1) George Sperling (1960)
(2) Paulham (1887)
(3) Cairnes (1891)
(4) Hersey (1936)
(5) O'comer (1958)
Answer: (1)

48. From his experiments on the sensory register, Sperling suggested that there is some visual trace available to the Subject that prolongs the life of the image. He calls this visual trace the:
(1) Sensory Information Centre
(2) Perceptual Information Centre
(3) Attending Information Centre
(4) Apprehension Information Centre
(5) None of the above
Answer: (1)

49. In 1976, Neisser introduced a term for "Sensory Information Store." That term is called:
(1) Focus
(2) Margin
(3) Prepotency
(4) Icon
(5) Extensity
Answer: (4)

50. When tachistoscope exposures are short and there is no post-exposure masking field, we can be fairly sure that the Subject is actually reading the stimulus:
(1) From the icon rather than from the visual image itself
(2) From the image
(3) From the short-term memory storage
(4) From the long-term memory storage
(5) None of the above
Answer: (1)

51. A basis on which one stream of information can be segregated and attended to while others can be ignored is known as:
(1) Basilar Membrane
(2) Cochlea
(3) Tympanic Membrane
(4) Channel
(5) Eustachian tube
Answer: (4)

52. The most important school of psychology which has contributed a lot toward perception is:
(1) Psychoanalysis
(2) Behaviouristic School
(3) Structuralistic School
(4) Gestalt Psychology
(5) Functionalistic School
Answer: (4)

53. Autokinetic movement does not occur if there is a fixed:
(1) Frame of Reference
(2) Illusion
(3) Vision
(4) Distance between stimulus and the eye
(5) None of the above
Answer: (1)

54. A stimulus is any change in external energy that activates:
(1) An effector organ
(2) A sense organ and its receptors
(3) A cell
(4) A neuron
(5) Any cell of the sense organs
Answer: (2)

55. Hue, saturation and brightness are the conventional terms which are used to characterise the attributes of:
(1) Brightness
(2) Colours
(3) Light
(4) Darkness
(5) None of the above
Answer: (2)

56. Vision in the ordinary ranges of daylight from fairly faint twilight up to the brightest blaze of the sun is called:
(1) Photopic Vision
(2) Scotopic Vision
(3) Autokinetic Effect
(4) Phi-phenomenon
(5) Illusion
Answer: (1)

57. The important part of the inner ear for hearing is the snail-shaped:
(1) Cochlea
(2) Round Window
(3) Oval Window
(4) Semicircular Canals
(5) Auditory Nerve
Answer: (1)

58. The most primitive and oldest feature of music is:
(1) Harmony
(2) Melody
(3) Rhythm
(4) Song
86. The Gestalt Psychologists learned their "Principles of Organization" from the study of:
(1) Perception
(2) Sensory Experience
(3) Attention
(4) Consciousness
(5) Insightful Learning
Answer: (2)

87. Reinforcing factors in perceptual organization are analogous to the:
(1) Reinforcement of a conditioned response
(2) The Law of Effect
(3) Both (1) and (2)
(4) The Law of Exercise
(5) None of the above
Answer: (3)

88. Wavelength is obtained by dividing the speed of the light by the:
(1) Frequency
(2) Brightness
(3) Illumination
(4) Colour
(5) None of the above
Answer: (1)

90. Outline or boundary of an object is called:
(1) Contour
(2) Figure
(3) Ground
(4) Brightness
(5) None of the above
Answer: (1)

91. Prolongation or renewal of a sensory experience after the stimulus has ceased to affect the sense organ is called:
(1) After image
(2) Illusion
(3) Hallucination
(4) Autokinetic Effect
(5) Stroboscopic Motion
Answer: (1)

92. A familiar study on perception which has shown that poor children overestimated the size of coins to a greater degree than wealthy children was done by:
(1) Bruner and Goodman
(2) Osgood
(3) Dember
(4) Murray
(5) McGinnis
Answer: (1)
95. Sense organs in the muscles, tendons and joints tell us about the position of our limbs and the state of tension in the muscles. They serve the sense called:
(1) Kinesthesis
(2) Transduction
(3) Vision
(4) Auditory Sense
(5) None of the above
Answer: (1)

96. The process of converting physical energy into nervous system activity is called:
(1) Transmission
(2) Nerve Impulse
(3) Inhibition
(4) Transduction
(5) None of the above
Answer: (4)

97. Receptor cells convert physical energy into an electric voltage or potential called:

99. The entire range of wavelengths is called the:
(1) Electromagnetic Spectrum
(2) Visible Spectrum
(3) Photosensitive Area
(4) Blind Spot
(5) None of the above
Answer: (1)

100. The tendency to perceive a line that starts in one way as continuing in the same way is called the principle of:
(1) Proximity
(2) Similarity
(3) Closure
(4) Continuation
(5) None of the above
Answer: (4)

101. The idea that we may be 'ready' and 'primed for' certain kinds of sensory input is known as:
(1) Set
144. The tendency to see the immobility of objects when one moves about is called:
(1) Location Constancy
(2) Size Constancy
(3) Shape Constancy
(4) Depth Constancy
(5) None of the above
Answer: (1)

145. A contour is the boundary between:
(1) A figure and its ground
(2) A figure and another figure’s ground
(3) Two similar figures
(4) Two dissimilar figures
(5) None of the above
Answer: (1)

146. An object that has been constituted perceptually as a permanent and stable thing continues to be perceived as such, regardless of illumination, position, distance, etc. This kind of stability of the environment experienced by human beings is termed:
(1) Depth Constancy
(2) Perceptual Constancy

148. A red ball looks red in broad daylight as well as on a dark night. This is due to:
(1) Light Constancy
(2) Brightness Constancy
(3) Size Constancy
(4) Colour Constancy
(5) None of the above
Answer: (1)

149. Soldiers dressed in uniforms whose colours merge with the background illustrate:
(1) Colour Matching
(2) Colour Constancy
(3) Brightness Constancy
(4) Camouflage
(5) None of the above
Answer: (4)

150. When there is a deliberate confusion of figure and ground and it is difficult to organise form and distinguish objects from one another, it is called:
(1) Camouflage
(2) Phi-phenomenon
(3) Colour Constancy
(4) Brightness Constancy
152. Stimuli that make the lowest interruptions in contour also tend to be grouped together. The tendency to organize the fragmentary stimuli into a familiar pattern is called the principle of:
(1) Similarity
(2) Proximity
(3) Closure
(4) Continuation
(5) Prägnanz
Answer: (4)

153. “Why do things look the way they do?” — This question was asked by the Gestalt psychologist:
(1) W. G. Kohler
(2) M. Wertheimer
(3) Kurt Lewin
(4) K. Koffka
(5) None of the above
Answer: (4)

154. An illusion is not a trick or misperception. It is a/an:
(1) Attention
(2) Sensation
(3) Perception

156. The events we perceive clearly are at the:
(1) Margin
(2) Centre
(3) Side
(4) Focus
(5) None of the above
Answer: (4)

157. Attention is the term given to the processes that select certain inputs for inclusion in the focus of:
(1) Sensation
(2) Consciousness
(3) Unconsciousness
(4) Experience
(5) None of the above
Answer: (2)

158. The most fundamental process in form perception is the recognition of:
(1) A figure on a ground
(2) A picture without background
(3) A figure without ground
(4) The contour of a figure
(5) None of the above
Answer: (1)
sensory input that occur in the process of perceiving the world?
(1) Hallucinations
(2) Sensations
(3) Illusions
(4) Conation
(5) Attention
Answer: (3)

168. Who first pointed out that the ‘whole’ is more than the sum total of its parts?
(1) Kohler
(2) Wertheimer
(3) Kurt Lewin
(4) Kurt Koffka
(5) Otto Rank
Answer: (4)

169. Which one of the following is formed whenever a marked difference occurs in the brightness or colour of the background?
(1) Sizes
(2) Shapes
(3) Contours
(4) Sets
(5) None of the above
Answer: (3)

170. Perceived motion also occurs without any energy movement across the receptor surface. This type of motion is called:
(1) Constant Motion
(2) Retinal Disparity
(3) Real Motion
(4) Apparent Motion
(5) None of the above
Answer: (4)

171. Which one of the following is an increase in the ability to extract information from the environment as a result of experience or practice with the stimulation coming from it?
(1) Convergence
(2) Divergence
(3) Perceptual Learning
(4) Plasticity of Perception
(5) None of the above
Answer: (3)

172. The ability to read other people’s thoughts is called:
(1) Precognition
(2) Telepathy
(3) Psychokinesis
(4) Clairvoyance
(5) None of the above
Answer: (2)

173. “We perceive things as we are.” This statement emphasises:
(1) Functional Factors in Perception
(2) Objective Patterns in Perception
(3) Organizational Factors in Perception
(4) Voluntary Attention
(5) Involuntary Attention
Answer: (1)

174. The fact that the moon looks larger near the horizon than high in the sky is called the:
(1) Muller-Lyer Illusion
(2) Jastrow Illusion
(3) Ponzo Illusion
(4) Moon Illusion
(5) Height-Width Illusion
Answer: (4)
(1) Mood
(2) Emotion
(3) Sensation
(4) Habit
(5) Span
Answer: (4)

176. The old saying “Seeing is believing” does not hold good in the case of:
(1) Hallucination
(2) Illusion
(3) Affection
(4) Conation
(5) Stimulation
Answer: (2)

177. The stimulus is explicit in:
(1) Hallucination
(2) Illusion
(3) Affection
(4) Conation
(5) Stimulation
Answer: (2)

178. In illusion the stimulation is usually external, while the stimulation in hallucination is:
(1) In the person himself
(2) In the stimulus itself
(3) Both in stimulus and perceiver
(4) In the external world
(5) None of the above
Answer: (1)

179. The Muller-Lyer illusion is:
(1) Also a hallucination
(2) An individual illusion
(3) Otherwise called the Jastrow illusion
(4) An optical illusion
(5) None of the above
Answer: (4)

180. We are able to separate forms from the general ground in our visual perception only because we can perceive:
(1) Size
(2) Shape
(3) Side
(4) Contours
(5) Colours
Answer: (4)

181. Thought perception without any known means of communication is known as:
(1) Precognition
(2) Psychokinesis (PK)
(3) Clairvoyance
(4) Telepathy
(5) None of the above
Answer: (4)

182. The perception of future events or happenings through dreams or hallucinations is known as:
(1) Psychokinesis (PK)
(2) Clairvoyance
(3) Precognition
(4) Telepathy
(5) None of the above
Answer: (3)

183. Attention divides our perceived world into:
(1) Focus and Margin
(2) Margin and Centre
(3) Nucleus and Focus
(4) Focus and Centre
(5) None of the above
Answer: (1)
184. The tendency to see the colour of a familiar object as the same, regardless of the actual light conditions, is called:
(1) Colour Constancy
(2) Brightness Constancy
(3) Light Constancy
(4) Dark Constancy
(5) None of the above
Answer: (1)

185. By comparing experimental outcomes, which model provides more accurate predictions?
(1) Averaging Model
(2) Additive Model
(3) Self-attribution Model
(4) Personal-attribution Model
(5) None of the above
Answer: (1)

186. Experimental evidence suggests that people use a weighted averaging model to combine:
(1) Type information
(2) Attribute information
(3) Non-common Effects
(4) Trait information
(5) None of the above
Answer: (4)

187. The way in which individuals focus on specific traits to form an overall impression of others is known as:
(1) Social Perception
(2) Perceptual Organization
(3) Person Perception
(4) Phi-phenomenon
(5) Perceptual Constancy
Answer: (3)

188. Averaging models suggest that the mean of the information is:
(1) Most appropriate
(2) Not most appropriate
(3) Predictable
(4) Unpredictable
(5) None of the above
Answer: (1)
(A brief worked illustration of the weighted-averaging idea follows after Q.192 below.)

189. “Schemas” are organized bodies of information stored in:
(1) Perceptual Field
(2) Cognitive Field
(3) Emotion Field
(4) Memory
(5) Sensation
Answer: (4)

190. In the case of personality traits, we organize information into schemas called:
(1) Common Effects
(2) Non-common Effects
(3) Prototypes
(4) Somatotypes
(5) Stereotypes
Answer: (3)

191. Prototypes are schemas that organize a group of personality traits into a/an:
(1) Meaningful Personality Type
(2) Meaningless Personality Type
(3) Emotional Trauma
(4) Avoidance conflicting situation
(5) None of the above
Answer: (1)

192. The personality types that we derive in the case of person perception are organized into schemas known as:
(1) Prototypes
(2) Stereotypes
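As a rough illustration of the weighted-averaging idea behind Q.186–188 (the trait values and weights below are invented for illustration and are not taken from the text): an overall impression I is the weighted mean of the individual trait scale values,

I = (Σ wᵢ sᵢ) / (Σ wᵢ)

For example, if a highly relevant trait is rated 6 with weight 3 and a less relevant trait is rated 2 with weight 1, then I = (3 × 6 + 1 × 2) / (3 + 1) = 20 / 4 = 5, whereas a purely additive model would simply sum the values to give 6 + 2 = 8. Comparing such predictions against people’s actual impression ratings is the kind of test referred to in Q.185.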
194. At what level does the prototype consist of different types of committed individuals, like monks, nuns and activists?
(1) Personal Level
(2) Environmental Level
(3) Subordinate Level
(4) Secondary Level
(5) Primary Level
Answer: (3)

195. Information-processing capabilities are enhanced through the use of:
(1) Stereotypes
(2) Prototypes
(3) Prejudices
(4) Attitudes
(5) None of the above
Answer: (2)

196. As with any schema, prototypes help us to organize:
(1) The social world around us
(2) The psychological world around us
(3) The psychophysical world around us
(4) The physical world around us

198. The “Covariation Principle” states that the cause chosen to explain an effect is one that is present when the effect is present and absent when the effect is absent. This principle was introduced by:
(1) T. D. Wilson
(2) Keith Davis
(3) Harold Kelley
(4) E. E. Jones
(5) I. J. Stone
Answer: (3)

199. Consensus is the degree to which other people react similarly in the:
(1) Same situation
(2) Different situation
(3) Different emotional setup
(4) Different sensational setup
(5) None of the above
Answer: (1)

200. Consistency refers to the degree to which the actor behaves in the same way in:
(1) Different situations
(2) Similar situations
(3) Both similar and dissimilar situations
(4) Other situations
(5) None of the above
Answer: (4)

201. “A goal refers to some substance, object or environmental condition capable of reducing or temporarily eliminating the complex of internal conditions which initiated action.” This definition of “goal” was given by:
(1) Janis & Mann (1977)
(2) Ruch (1970)
(3) Solomon and Corbit (1974)
(4) Neal Miller (1959)
(5) None of the above
Answer: (2)

202. Cannon called the concept of internal equilibrium and function:
(1) Imprinting
(2) Instinct
(3) Homeostasis
(4) Substitute Behaviour
(5) None of the above
Answer: (3)

203. The expectation or goal that one sets to achieve in the future, keeping in view one’s past performance, is called:
(1) Valence
(2) Vector
(3) Vigilance
(4) Level of Aspiration
(5) None of the above
Answer: (4)

204. “The need for achievement” was first defined largely on the basis of clinical studies done by:
(1) Murray (1938)
(2) Janis and Mann (1977)
(3) Solomon (1974)
(4) Corbit (1974)
(5) None of the above
Answer: (1)

205. The achievement motivation theory of McClelland is explained in terms of:
(1) “Affective Arousal Model of Motivation”
(2) Action-Specific Energy
(3) Innate Releasing Mechanism
(4) Displacement Behaviour
(5) Opponent Process Theory
Answer: (1)

206. Intrinsic Motivational Theory was propounded by:
(1) McClelland
(2) Maslow
(3) Harry Harlow
(4) Solomon
(5) Corbit
Answer: (3)

207. Psychoanalytic theory of motivation was developed by:
(1) Sigmund Freud
(2) Maslow
(3) Harry Harlow
(4) McClelland
(5) None of the above
Answer: (1)

208. The goals which a person tries to escape are called:
(1) Positive goals
(2) Vectors
(3) Valences
(4) Negative goals
(5) None of the above
209. A person’s need for feeling competent and self-determining in dealing with his environment is called:
(1) Intrinsic Motivation
(2) Instinct
(3) Imprinting
(4) Coolidge Effect
(5) None of the above
Answer: (1)

210. When the motive is directed towards goals external to the person, such as money or grades, it is called:
(1) Extrinsic Motivation
(2) Intrinsic Motivation
(3) Imprinting
(4) Instinct
(5) None of the above
Answer: (1)

211. Steers and Porter (1975), in their text entitled “Motivation and Work Behaviour”, identified:
(1) Two major components of motivation
(2) Four major components of motivation
(3) Five major components of motivation
(4) Three major components of motivation
(5) None of the above
Answer: (4)

212. The conditions which influence the arousal, direction and maintenance of behaviours relevant in work settings are called:
(1) Work Motivation
(2) Drive Stimuli
(3) Substitute Behaviour
(4) Consummatory Behaviour
(5) None of the above

213. Intrinsic motivation as currently conceived is championed by:
(1) Janis (1977)
(2) Solomon (1974)
(3) Deci (1975)
(4) Mann (1977)
(5) Corbit (1974)
Answer: (3)

214. Most of the research on intrinsic motivation has concentrated on the interaction between:
(1) Intrinsic and extrinsic rewards
(2) Instinct and imprinting
(3) Action-specific energy and balance sheet grid
(4) Substitute behaviour and consummatory behaviour
(5) None of the above
Answer: (1)

215. An individual’s affective orientation towards particular outcomes is called the:
(1) Vector of the outcome
(2) Approach gradient of the outcome
(3) Valence of the outcome
(4) Avoidance gradient of the outcome
(5) None of the above
Answer: (3)

216. Dipboye (1977) distinguished between the strong and weak versions of:
(1) Achievement theory
(2) Two-factor theory
(3) Valence theory
(4) Consistency theory
(5) None of the above
Answer: (4)
236. An object or thing which directs or stimulates behaviour is called:
(1) Instinct
(2) Incentive
(3) Need
(4) Motive
(5) Drive
Answer: (2)

238. A motive that is primarily learned rather than based on biological needs is known as:
(1) Physical Motive
(2) Psychological Motive
(3) Neurophysiological Motive
(4) Psychological Motive
(5) None of these
Answer: (4)

240. The hypothalamus plays an important role in the regulation of:
(1) Food intake
(2) Water intake
(3) Alcohol intake
(4) Both food and water intake
(5) None of the above
Answer: (1)

242. Research evidence indicates that the ventromedial hypothalamus (VMH):
(1) Facilitates eating
(2) Expedites eating
(3) Both facilitates and expedites eating
(4) Inhibits eating
(5) None of the above
Answer: (4)
251. Realistic anxiety is otherwise known as:
(1) Objective anxiety
(2) Subjective anxiety
(3) Psychic anxiety
(4) Ego defenses
Answer: (1)

252. In “moral anxiety”, the ego’s dependence upon:
(1) Superego is found
(2) Id is found
(3) Sex is found
(4) Unconscious is found
Answer: (1)

253. Neurotic anxiety is one in which there occurs an emotional response to the threat to the ego that the impulses may break through into:
(1) Consciousness
(2) Unconsciousness
(3) Subconsciousness
(4) Superego
Answer: (1)

254. Sometimes the superego gives threats to punish the ego. This causes an emotional response called:
(1) Moral Anxiety
(2) Realistic Anxiety
(3) Objective Anxiety
(4) Neurotic Anxiety
Answer: (1)

255. We always want to protect the ego from the ensuing anxiety. For doing this, the ego adopts some strategies which are called:
(1) Defense mechanisms
(2) Sex energy
(3) Instincts
(4) Dreams
Answer: (1)

256. Defense mechanisms help the person in protecting the ego from the open expression of id impulses and from opposing:
(1) Superego directives
(2) Death Instinct
(3) Life Instinct
(4) Unconscious mind
Answer: (1)

257. Defense mechanisms operate at the unconscious level. They occur without the awareness of the individual. Hence they are:
(1) Self-explanatory
(2) Self-deceptive
(3) Self-expressive
(4) Self-dependent
Answer: (2)

258. A child scolded by his father may hit his younger siblings. This is an example of:
(1) Displacement
(2) Rationalization
(3) Regression
(4) Repression
Answer: (1)

259. “A young woman, after fighting with her husband, returned to her parents’ home only to allow her parents to ‘baby’ her and fulfil her every wish like that of a child.” This is an illustration of:
(1) Repression
(2) Regression
(3) Fixation
(4) Reaction Formation
Answer: (2)
262. In the book “Group Psychology and the Analysis of the Ego”, Freud explained the formation of:
(1) Personality
(2) Group
(3) Society
(4) Gang
Answer: (2)

263. Freud published the book “Totem and Taboo” in 1913. By publishing this book, he showed his concern for:
(1) Social Psychology
(2) Abnormal Psychology
(3) Industrial Psychology
(4) Child Psychology
Answer: (1)

264. Who viewed that “A person is born with sex, lives in sex and finally dies in sex”?
(1) J. Herbart
(2) Sigmund Freud

267. Homosexuality is a derivative of:
(1) Electra Complex
(2) Oedipus Complex
(3) Libido
(4) Death Instinct
Answer: (2)

268. The Oral, Anal and Phallic stages of Psychosexual Development are called the:
(1) Pregenital Period
(2) Sexual Genesis
(3) Life Instinct
(4) Latency Period
Answer: (1)

269. The genital stage is generally characterized by object choices rather than by:
(1) Libido
(2) Narcissism
(3) Personality
(4) Superego
270. In the Anal Stage of Psychosexual Development, pleasure is derived from:
(1) Thinking
(2) Libido
(3) Emotion
(4) Expulsion and Retention
Answer: (4)

271. The “Super ego” is the equivalent of what is more commonly known as the:
(1) Conscience
(2) Personality
(3) Libido
(4) Narcissism
Answer: (1)

272. The psychoanalysis performed in a controlled setting is known as:
(1) Psychotherapy
(2) Chemotherapy
(3) Hypoanalysis
(4) Hyperanalysis
Answer: (3)

273. A state of deep unconsciousness, with non-responsiveness to stimulation, is known as:
(1) Coma
(2) Fixation
(3) Hypnotism
(4) Trauma
Answer: (1)

274. In 1895, Freud and Breuer published a book entitled:
(1) Studies in Hysteria
(2) Interpretation of Dreams
(3) Moses and Monotheism
(4) Psychopathology of Everyday Life

275. Studies of Freud and Breuer reported successful treatment of hysterical symptoms by a method called:
(1) Hypnosis
(2) Free Association
(3) Catharsis
(4) Dream Analysis
Answer: (3)

276. The success of the cathartic method was regarded by Freud as evidence of the:
(1) Unconscious
(2) Conscious
(3) Subconscious
(4) Libido
Answer: (1)

277. From the experiences in hypnotism and catharsis, Freud’s theory of:
(1) Unconscious was derived
(2) Conscious was derived
(3) Narcissism was derived
(4) Dream was derived
Answer: (1)

278. Dreams represent demands or wishes stemming from the:
(1) Unconscious
(2) Conscious
(3) Preconscious
(4) Death Instinct
Answer: (1)

279. In a special book, Freud analyzed the psychology of error and found the source of errors in the conflict between:
(1) Ego and Super ego
(2) Unconscious wish and conscious censorship
281. The main erotogenic zone of our body is:
(1) Mouth
(2) Genitals
(3) Anal Zones
(4) Lips
Answer: (2)

282. According to Freud, the entire activity of men is bent upon procuring pleasure and avoiding pain. This activity is controlled by the:
(1) Reality Principle
(2) Pleasure Principle
(3) Primary Narcissism
(4) Secondary Narcissism
Answer: (2)

283. The urethral development stage is an introductory period to the:
(1) Oral Stage
(2) Phallic Stage
(3) Genital Stage
(4) Latency Stage
Answer: (2)

284. The very term “Phallic” is derived from “Phallos”, which means:

286. According to Freud, the negative Oedipus complex may lead to:
(1) Heterosexuality
(2) Homosexuality
(3) Narcissism
(4) Castration
Answer: (2)

287. The idea of developmental stages was borrowed by Freud from:
(1) Biology
(2) Sociology
(3) Anthropology
(4) Physics
Answer: (1)

288. The diversion of a part of the sexual energy into non-sexual activities is called:
(1) Repression
(2) Regression
(3) Rationalization
(4) Sublimation
Answer: (4)

289. The term “defense mechanism” was introduced by:
(1) Freud in 1894
(2) Jung in 1902