
LEARNING AND MEMORY

THE BEGINNING

Ciccarelli and White define learning as a relatively permanent change in behavior or knowledge
that occurs as a result of experience or practice. This definition highlights key aspects of
learning:

1. Relatively Permanent: Learning is not momentary; it tends to have lasting effects on behavior or knowledge.
2. Change in Behavior or Knowledge: Learning influences how we act, think, or understand things.
3. Result of Experience or Practice: Learning arises from interactions with the environment, either through direct experience or structured practice.

In the early 1900s, research scientists were unhappy with psychology’s focus on mental activity. Many
were looking for a way to bring some kind of objectivity and scientific research to the field. It was a
Russian physiologist (a person who studies the workings of the body) named Ivan Pavlov (1849–1936)
who pioneered the empirical study of the basic principles of a particular kind of learning (Pavlov, 1906,
1926).

While studying the digestive system of dogs, Pavlov and his assistants built a device to accurately
measure the amount of saliva the dogs produced when they were fed a measured amount of
food. Normally, when food is placed in the mouth of any animal, the salivary glands
automatically start releasing saliva to help with chewing and digestion. This is a normal reflex—
an unlearned, involuntary response that is not under personal control or choice, and one of many
that occur in both animals and humans. The food causes a particular reaction, salivation. A
stimulus can be defined as any object, event, or experience that causes a response—the
reaction of an organism. In the case of Pavlov’s dogs, the food is the stimulus and salivation
is the response.
Pavlov soon discovered that the dogs began salivating when they weren’t supposed to be
salivating. Some dogs would start salivating when they saw a lab assistant bringing their food,
others when they heard the clatter of the food bowl in the kitchen, and still others when it was the
time of day they were usually fed. Shifting his focus, Pavlov spent the rest of his career studying
what he eventually termed classical conditioning, learning to elicit an involuntary, reflex-like
response to a stimulus other than the original, natural stimulus that normally produces the
response.

ASSOCIATIVE LEARNING
Associative learning is a way in which we naturally connect things in our minds to make sense of
the world. Through this process, we learn to predict what might happen next, to understand
patterns, and to make decisions based on our experiences. This kind of learning isn’t about
memorizing facts; it’s about linking events, actions, or ideas that happen around us, so we can
react or prepare for what comes next.
Key principles in associative learning include:

1. Contiguity
Contiguity is the idea that associations are most easily formed when stimuli or events
occur close together in time. When one stimulus consistently follows another, the brain
begins to link them, expecting one to predict the other. This principle helps organisms
identify causal relationships or sequence patterns.
2. Frequency
The strength of an association typically increases with the frequency of pairing. The more
often two events or stimuli are paired, the more likely the association becomes robust and
lasting. For instance, a frequent pairing of certain behaviors with rewards strengthens that
behavioral response, which is why repetition is crucial for effective learning.
3. Salience
Salience refers to the intensity or distinctiveness of stimuli. More intense or noticeable
stimuli tend to form associations more readily because they attract attention, making the
learning process more effective. For example, a loud sound following a specific action is
more likely to be remembered than a faint noise, leading to stronger associative learning.
4. Biological Preparedness
Biological preparedness is the concept that some associations are more easily formed due
to evolutionary adaptations. Certain stimuli naturally provoke responses without much
experience, like a fear response to potentially dangerous animals. This predisposition
influences how quickly and easily organisms form associations, particularly in survival-
related scenarios.

Here’s a look at the different types of associative learning and how they help us in everyday life:

1. Classical Conditioning
This type of learning happens when we start to connect two things because they
repeatedly happen together. For example, if you hear your favorite song every time you
go to a certain café, you might start feeling happy just by stepping inside the café—even
before you hear the song. The association becomes automatic, forming a link that
reminds you of the pleasant experience whenever you’re there.
2. Operant Conditioning
In operant conditioning, we learn by understanding the outcomes of our actions. If a
certain behavior brings a reward or positive outcome, we’re more likely to repeat it. For
instance, if you study hard and get praised for good grades, you’re more likely to keep
putting in the effort. On the other hand, if an action leads to something unpleasant, like
getting a parking ticket, you might try to avoid repeating it. This type of learning helps us
figure out what’s worth doing and what’s better left avoided.
3. Sensitization and Habituation
These are types of “tuning” our responses to what’s around us.
o Sensitization: Sometimes, our reaction to a certain event grows stronger the more
it happens, like being more startled by a creaking door late at night if we’re
already on edge.
o Habituation: Other times, we might stop reacting to things that aren’t meaningful
or helpful. For instance, after hearing the same car honk repeatedly in a busy
street, we may start tuning it out, allowing us to focus on more important sounds.

PAVLOV'S CLASSICAL CONDITIONING EXPERIMENT: AN IN-DEPTH LOOK


Background
Ivan Pavlov, a Russian physiologist, initially set out to study the digestive processes in dogs. His
research focused on how the body responds to food and the physiological processes involved in
digestion. However, his observations led him to a groundbreaking discovery about learning and
behavior that would become known as classical conditioning.
The Experiment Setup
Pavlov's experiment began with the observation that dogs would salivate not only when food was
presented but also in anticipation of food. He noted that the dogs would start salivating upon
seeing the lab assistant who regularly fed them or even hearing the sound of footsteps. Intrigued
by this behavior, Pavlov devised a systematic experiment to explore this phenomenon further.
To conduct his experiment, Pavlov introduced a neutral stimulus (the sound of a bell) alongside
the unconditioned stimulus (the presentation of food). The unconditioned stimulus naturally
elicited an unconditioned response—in this case, salivation. The bell, which initially had no
effect on salivation, was rung just before the dogs were given food.
Conditioning Process
The conditioning process involved several key stages:
1. Before Conditioning: The unconditioned stimulus (food) produced an unconditioned
response (salivation). The bell was a neutral stimulus with no effect on salivation.
2. During Conditioning: Pavlov repeatedly paired the bell (neutral stimulus) with the
presentation of food (unconditioned stimulus). After several trials, the dogs began to
associate the sound of the bell with the imminent arrival of food.
3. After Conditioning: Eventually, the bell alone became a conditioned stimulus, eliciting
a conditioned response—salivation—even when no food was presented. This
demonstrated that learning had occurred through association.
Results
The results of Pavlov's experiments were profound. He found that after sufficient pairings of the
bell and food, the dogs would salivate in response to the bell alone. This indicated that they had
formed an association between the previously neutral stimulus (the bell) and the unconditioned
stimulus (food).
ELEMENTS OF CLASSICAL CONDITIONING

Pavlov identified several key elements for conditioning to occur:

● Unconditioned Stimulus (UCS): The naturally occurring stimulus that triggers an
involuntary response. In Pavlov’s experiment, the food was the UCS because it naturally
caused the dogs to salivate.
● Unconditioned Response (UCR): The automatic, involuntary response to the UCS. In
this case, the dogs salivating in response to food was the UCR, an unlearned behavior
that happens due to biological reflexes.
● Conditioned Stimulus (CS): A previously neutral stimulus (NS) that becomes associated
with the UCS through repeated pairings. Pavlov used several neutral stimuli, including
the ticking of a metronome. Initially, the metronome was a neutral stimulus because it
didn’t cause any salivation. After being paired with the UCS (food) several times, it
became a CS.
● Conditioned Response (CR): The learned response to the CS, which is similar to the
UCR but occurs as a result of conditioning. After learning took place, the dogs began
salivating (CR) at the sound of the metronome (CS), even without food present.

Classical Conditioning Terminology Chart:

Full Term                  Abbreviation

Unconditioned Stimulus     UCS
Unconditioned Response     UCR
Conditioned Stimulus       CS
Conditioned Response       CR

Classical Conditioning Process:

● Before Conditioning:
o NS: Metronome → No Salivation
o UCS: Food → UCR: Salivation
● During Conditioning:
o NS: Metronome + UCS: Food → UCR: Salivation
● After Conditioning:
o CS: Metronome → CR: Salivation

In Pavlov’s experiment, acquisition occurred when the dogs learned to associate the neutral
stimulus (metronome) with the unconditioned stimulus (food) through repeated pairings. Before
conditioning, the ticking of the metronome was a neutral stimulus and had no effect. But after
being paired multiple times with food, the ticking alone caused salivation.
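The shape of acquisition described above can be sketched numerically. The sketch below uses the Rescorla-Wagner learning rule, a later formal model of conditioning (not part of Pavlov's own work); the learning rate and maximum-strength values here are illustrative choices, not figures from the source.

```python
# Illustrative sketch of acquisition using the Rescorla-Wagner rule
# (a later formal model of conditioning, not Pavlov's own analysis).
# On each CS-UCS pairing, associative strength V rises by a fraction
# (the learning rate) of the remaining distance to its maximum, lam.

def acquire(trials, learning_rate=0.3, lam=1.0):
    """Return associative strength after each CS-UCS pairing."""
    v = 0.0  # the metronome starts as a neutral stimulus: no association
    history = []
    for _ in range(trials):
        v += learning_rate * (lam - v)  # delta-V = alpha * (lam - V)
        history.append(round(v, 3))
    return history

curve = acquire(10)
# Strength grows quickly on early pairings, then levels off near lam,
# matching the negatively accelerated acquisition curve.
```

Running this shows why repeated pairings matter: each trial adds less than the one before, so the association builds rapidly at first and then plateaus.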

Classical conditioning is an automatic form of learning. It occurs when a neutral stimulus
becomes associated with a meaningful stimulus and starts to trigger the same response. Examples
of classical conditioning in everyday life include feeling anxious when hearing the dentist’s drill
(after painful past experiences) or salivating when seeing a favorite food on TV (after associating
the image with real food).
Principles of Classical Conditioning:
1. The CS must precede the UCS for conditioning to occur. For example, if the food was presented
first, the dogs wouldn’t have learned to salivate at the metronome.
2. The CS and UCS must be presented close in time, ideally within 5 seconds, to form a strong
association.
3. The neutral stimulus must be paired with the UCS multiple times for learning to occur.
Pavlov found that repeated pairings were crucial for the dogs to make the association between the
metronome and food.
4. The CS should be distinctive, meaning it stands out from other stimuli in the environment. In
Pavlov’s experiment, the ticking metronome was a sound that wasn’t normally present in the lab,
making it easier for the dogs to associate with food.

STIMULUS GENERALIZATION AND DISCRIMINATION


Pavlov did find that similar ticking sounds would (at least at first) produce a similar conditioned
response from his dogs. He and other researchers found that the strength of the response to
similar sounds was not as strong as it was to the original one, but the more similar the other
sound was to the original sound (be it a metronome or any other kind of sound), the more similar
the strength of the response was.
The tendency to respond to a stimulus that is similar to the original conditioned stimulus is called
stimulus generalization. For example, a person who reacts with anxiety to the sound of a
dentist’s drill might react with some slight anxiety to a similar-sounding machine, such as an
electric coffee grinder.
Of course, Pavlov did not give the dogs any food after the similar ticking sound. They only got
food following the correct CS. It didn’t take long for the dogs to stop responding (generalizing)
to the “fake” ticking sounds altogether. Because only the real CS was followed with food, they
learned to tell the difference, or to discriminate, between the fake ticking and the CS ticking, a
process called stimulus discrimination.
Stimulus discrimination occurs when an organism learns to respond to different stimuli in
different ways. For example, although the sound of the coffee grinder might produce a little
anxiety in the dental-drill-hating person, after a few uses that sound will no longer produce
anxiety because it isn’t associated with dental pain.
EXTINCTION AND SPONTANEOUS RECOVERY
If Pavlov stopped giving food after the metronome's ticking (the conditioned stimulus, or CS),
the dogs gradually stopped salivating. This process is known as extinction. The CS (metronome)
was presented without the unconditioned stimulus (UCS, food), leading to the decline of the
conditioned response (CR, salivation). Removing the UCS leads to extinction because the CS
alone no longer predicts the UCS. During extinction, the learned association between the CS and
UCS weakens. For Pavlov's dogs, they learned not to salivate to the metronome’s ticking, as it no
longer meant food was coming.
The term "extinction" might suggest that the CR is completely gone. However, learning is
usually a lasting change in behavior. Once something is learned, it is difficult to "unlearn." New
learning may replace it, but the original response can remain in memory. After extinction, Pavlov
waited several weeks without exposing the dogs to the metronome. When he reintroduced the
metronome, the dogs salivated again, although the response was weak and brief. This is called
spontaneous recovery—the CR can return when the CS is presented again, even after a period
of extinction. Key concepts to remember include extinction, which is the decrease in CR when
the UCS is no longer presented with the CS, and spontaneous recovery, which is the
reappearance of a weakened CR when the CS is reintroduced after a period of absence.
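Extinction and spontaneous recovery can be sketched with a similar toy model. The "residual" floor below is an illustrative assumption standing in for the idea that extinction suppresses rather than erases the original learning; the parameter values are not from the source.

```python
# Toy sketch of extinction: the CS (metronome) is presented without the
# UCS (food), so response strength decays toward zero trial by trial.
# The small "residual" floor is an illustrative stand-in for the idea
# that extinction suppresses rather than erases the original learning,
# which is why a weak response can return (spontaneous recovery).

def extinguish(v_start, trials, decay=0.25, residual=0.05):
    """Return conditioned-response strength across CS-alone trials."""
    v = v_start
    history = []
    for _ in range(trials):
        v = max(residual, v - decay * v)  # CS alone: strength declines
        history.append(round(v, 3))
    return history

after_extinction = extinguish(v_start=1.0, trials=15)
# Salivation to the metronome fades but never reaches zero here,
# consistent with the weak, brief response seen on spontaneous recovery.
```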
TEMPORAL ARRANGEMENT IN CLASSICAL CONDITIONING
Temporal arrangement refers to the timing and sequence in which a conditioned stimulus (CS)
and an unconditioned stimulus (US) are presented during classical conditioning. The
effectiveness of conditioning largely depends on how these stimuli are arranged in time. Here are
the main types of temporal arrangements commonly discussed in classical conditioning:
1. Forward Conditioning: In forward conditioning, the CS is presented before the US. This
is the most effective and commonly used method in classical conditioning. The CS
predicts the occurrence of the US, allowing the organism to associate the two. For
example, if a bell (CS) is rung before presenting food (US) to a dog, the dog learns to
associate the bell with food.
2. Simultaneous Conditioning: In simultaneous conditioning, the CS and US are presented
at the same time. While this method can lead to some conditioning, it is generally less
effective than forward conditioning because the CS does not reliably predict the US. For
example, if the bell rings exactly as the food is presented, the dog may not learn to
associate the bell with the food as strongly as in forward conditioning.
3. Backward Conditioning: In backward conditioning, the US is presented before the CS.
This arrangement is typically ineffective for establishing a conditioned response. For
instance, if food (US) is presented before the bell (CS), the dog may not learn to associate
the bell with the food because the food does not serve as a predictor for the bell. This can
sometimes result in a conditioned response to the US instead of the CS, leading to
confusion or an unexpected response.
4. Trace Conditioning: In trace conditioning, the CS is presented and then removed before
the US is presented. The time gap (the trace interval) between the CS and the US can
affect the strength of the conditioning. If the interval is too long, the organism may have
difficulty making the association. For example, if a bell rings and then, after a brief
pause, food is presented, the dog may not associate the sound of the bell with the food as
strongly as in forward conditioning.
5. Delayed Conditioning: In delayed conditioning, the CS is presented and remains present
until the US is presented. This method is effective because the CS continues to provide a
signal for the upcoming US. For example, if the bell rings and continues to ring until the
food is presented, the dog is likely to form a strong association between the bell and the
food.
Importance of Temporal Arrangement
The arrangement of stimuli in time is crucial for establishing conditioned responses. Researchers
have found that forward and delayed conditioning generally yield the strongest associations,
while backward and simultaneous conditioning are less effective. Understanding temporal
arrangements helps psychologists design effective learning and behavior modification strategies,
such as in therapies for phobias or anxiety disorders.
HIGHER-ORDER CONDITIONING

Another concept in classical conditioning is higher-order conditioning. This occurs when a
strong conditioned stimulus (CS) is paired with a neutral stimulus (NS). The strong CS can
actually serve as an unconditioned stimulus (UCS), allowing the previously neutral stimulus to
become a second conditioned stimulus.

For example, consider the scenario where Pavlov has already conditioned his dogs to salivate at
the sound of the metronome. If, just before turning on the metronome, he snaps his fingers, the
sequence would now be “snap-ticking-salivation,” or “NS–CS–CR” (neutral
stimulus/conditioned stimulus/conditioned response).

If this sequence occurs enough times, the finger snap will eventually produce a salivation
response. The finger snap becomes associated with the ticking through the same process that
initially connected the ticking with the food. However, the food (UCS) would need to be
presented periodically to maintain the original conditioned response to the metronome's ticking.
Without the UCS, the higher-order conditioning would be difficult to sustain and would
gradually fade away.

WHY DOES CLASSICAL CONDITIONING WORK?

Pavlov believed that the conditioned stimulus (CS) activates the same area in the animal's brain
that the unconditioned stimulus (UCS) originally activated, a process he termed stimulus
substitution. However, if mere temporal association is sufficient, why does conditioning not
occur when the CS follows the UCS immediately? Robert Rescorla (1988) discovered that the
CS must provide information about the upcoming UCS to achieve conditioning; in other words,
the CS must predict the UCS's arrival.

In one study, Rescorla exposed one group of rats to a tone, followed by an electric shock while
the tone was still audible. These rats became agitated, exhibiting a conditioned emotional
response by shivering and squealing at the tone's onset. In contrast, another group of rats
received the shock only after the tone stopped. This group responded with fear when the tone
ceased. The tone provided different information for the two groups: for the first, it indicated an
impending shock, while for the second, it signaled the absence of a shock during the tone. This
expectancy, determined by the tone's timing relative to the shock, influenced the rats' responses.
This cognitive perspective emphasizes the mental activity of consciously anticipating an event as
an explanation for classical conditioning.

THE LITTLE ALBERT EXPERIMENT


The Little Albert experiment, conducted by John B. Watson and Rosalie Rayner in 1920, is a
seminal study in psychology that demonstrated classical conditioning in humans. The experiment
involved a nine-month-old infant known as Little Albert, who initially showed no fear of various
stimuli, including a white rat. Watson and Rayner aimed to condition a fear response by pairing
the presentation of the rat with a loud, frightening noise created by striking a steel bar with a
hammer. After several pairings, Albert began to cry and exhibit fear at the sight of the rat alone,
indicating that he had developed a conditioned response.

Key Findings

1. Conditioning of Fear: The primary finding was that emotional responses, specifically
fear, could be conditioned through association. Albert learned to associate the previously
neutral stimulus (the rat) with the unconditioned stimulus (the loud noise), resulting in a
conditioned response (fear).
2. Generalization of Fear: The experiment also demonstrated that fear could generalize to
other similar stimuli. Albert exhibited fear not only toward the rat but also toward other
furry objects, such as a rabbit and even a Santa Claus mask.
3. Implications for Understanding Phobias: Watson's work suggested that many phobias
could be rooted in similar conditioning processes, highlighting the role of environmental
factors in shaping emotional responses.

Ethical Concerns

The Little Albert experiment has faced significant ethical criticism:

● Child Welfare: The study involved inducing fear in an infant without any subsequent
attempt to decondition him. This raises concerns about the psychological harm inflicted
on Little Albert.
● Lack of Informed Consent: Although permission was obtained from Albert's mother,
ethical standards today would require more stringent measures to ensure the child's
welfare and understanding.
● Long-Term Effects: The long-term psychological impact on Albert remains unknown, as
he was removed from the study before any follow-up deconditioning could occur.
COUNTERCONDITIONING
Counterconditioning is a technique used to change behavior by pairing a stimulus with a new, opposite
reaction. Like extinction, it reduces the impact of the original learned behavior but doesn't completely
erase it. However, while extinction has gained a lot of attention in research, counterconditioning hasn’t
been studied as much.

For over a century, psychologists have been exploring how to effectively and permanently eliminate
maladaptive behaviors and unwanted memories. This has clear clinical relevance, as many mental health
disorders involve disruptions in learning, memory, and behavior. Examples include intrusive memories in
trauma-related disorders, addiction, excessive worry in generalized anxiety disorder, compulsive
behaviors in OCD, and fear or avoidance in phobias and panic disorders. This drives the need to apply
findings from learning and memory research to better understand the neurobiological mechanisms behind
these disorders and develop innovative treatments. However, despite significant advances in psychology
and neuroscience, techniques to consistently modify maladaptive behaviors in humans remain elusive.
Most research in this area focuses on extinction of conditioned behaviors, but an important
alternative approach is counterconditioning.
Counterconditioning, unlike standard extinction, involves replacing an expected outcome with one of the
opposite feeling. It's the basis for therapies like systematic desensitization or aversion therapy, where
unwanted responses are reduced by triggering an opposite reaction. For example, someone afraid of dogs
after being bitten might receive extinction treatment by safely being exposed to dogs. With
counterconditioning, they would gradually encounter dogs while feeling relaxed or doing something
enjoyable, which helps reduce fear. Both methods create new learning (e.g., "dogs are safe") that
competes with the old association (e.g., "dogs are dangerous"). While counterconditioning seems to
prevent relapse better than extinction in some cases, research on long-term effects is mixed.

Counterconditioning comes from research in Pavlovian and operant conditioning. In Pavlovian
counterconditioning, a neutral stimulus (like a tone) is first paired with something meaningful, either
positive (like food) or negative (like a shock), and then it's paired with the opposite, like switching from
shock to food. In rodents, researchers measure responses like freezing, head-jerks, or entering the food
area. In operant conditioning, performance is tested through tasks like avoiding a shock or stopping
certain behaviors. In both methods, the stimulus begins to trigger a response based on the new
association. Clinically, counterconditioning helps change negative or harmful associations, like reducing
fear (as in desensitization) or making harmful behaviors unpleasant (as in aversion therapy).

LITTLE PETER EXPERIMENT


The "Little Peter" experiment, conducted by Mary Cover Jones in 1924, is one of the earliest examples of
counterconditioning in humans and is considered a landmark study in behavior modification. It followed
the famous "Little Albert" experiment by Watson and Rayner (1920), where a young child, Albert, was
conditioned to fear a white rat by associating it with a loud, frightening noise.

In the "Little Peter" experiment, a three-year-old boy named Peter showed fear towards various stimuli,
including a white rabbit. To reduce his fear, Jones used counterconditioning, gradually pairing Peter’s
exposure to the rabbit with a positive experience—eating candy, which Peter found enjoyable. Initially,
the rabbit was placed at a distance so that Peter could see it while eating, allowing the positive association
to form without triggering too much fear. Over multiple sessions, the rabbit was moved closer and closer
to Peter. Eventually, Peter's fear decreased to the point where he allowed the rabbit to nibble his fingers
without feeling afraid.

This experiment is a foundational example of behavior modification and a precursor to therapies like
systematic desensitization. The key principle demonstrated by the "Little Peter" study was that positive or
pleasurable stimuli (like eating candy) could be used to counteract negative emotional reactions (like
fear). The idea behind this approach is known as reciprocal inhibition, where one emotional system, such
as pleasure, inhibits the activation of another, like fear. The success of Peter's treatment laid the
groundwork for modern desensitization therapies, which are still widely used to treat phobias and anxiety
disorders by gradually exposing patients to feared stimuli in the presence of relaxing or positive
experiences.
Criticisms of Counterconditioning Theory
1. Limited Long-Term Effectiveness
Counterconditioning may not always lead to lasting behavior change. Research shows
that in some cases, the original negative association (e.g., fear) can resurface over time,
especially in different contexts or when the positive stimulus (like relaxation) is absent.
2. Context Dependency
The success of counterconditioning can be highly dependent on the environment in which
it occurs. Changes in context, such as moving from a therapeutic setting to real-life
situations, may lead to the return of the original fear or behavior, limiting its practical
applicability.
3. Competing Responses
Reciprocal inhibition relies on activating a stronger positive response (e.g., relaxation) to
override the negative one (e.g., fear). However, if the negative response is too strong,
such as in cases of severe phobias or trauma, counterconditioning may be less effective
because the anxiety may overpower the positive stimulus.
4. Over-simplification of Complex Behaviors
Critics argue that counterconditioning oversimplifies complex emotional and behavioral
issues, assuming that merely associating a positive stimulus with a negative one will
create lasting change. For deeper-rooted psychological problems, this method may not
address underlying causes.
5. Difficulty in Application to Certain Disorders
Some mental health conditions, like obsessive-compulsive disorder or severe trauma,
may not respond well to counterconditioning. In these cases, more comprehensive
approaches, such as cognitive-behavioral therapy or exposure therapy, may be more
effective for managing symptoms.
CLASSICAL CONDITIONING APPLIED TO HUMAN BEHAVIOR

Pavlov’s concepts were later expanded by scientists to explain human behavior, particularly
emotional responses. A key study demonstrating this is the Little Albert experiment, which
applied classical conditioning to human emotional responses, notably phobias.
Phobias and the "Little Albert" Experiment
Phobias are irrational fear responses, and John B. Watson and Rosalie Rayner's experiment with
"Little Albert" demonstrated how such fears can be conditioned. In the experiment, Watson and
Rayner paired a white rat (neutral stimulus) with a loud, scary noise (unconditioned stimulus or
UCS). Although Albert was not initially afraid of the rat, he was naturally scared of the loud
noise (unconditioned response or UCR). After seven pairings, Albert began to fear the rat, which
had now become the conditioned stimulus (CS). The resulting fear of the rat was the conditioned
response (CR), showcasing the conditioning of a phobia.
Stimulus Generalization in Little Albert’s Case
Following the conditioning, Little Albert also demonstrated fear of other stimuli, such as a
rabbit, a dog, and a sealskin coat. This is an example of stimulus generalization, where the fear
response generalized to similar objects, though some researchers question whether true
generalization occurred.
Conditioned Emotional Responses (CER)
Phobias are a type of conditioned emotional response (CER), which is one of the easiest forms
of classical conditioning. Common examples of CERs include a child's fear of the doctor’s
office, a person's fear of dogs, or a puppy’s fear of a rolled-up newspaper. Vicarious
conditioning can also occur when individuals develop a conditioned emotional response simply
by observing another person's reaction to a stimulus.
Vicarious Conditioning
In vicarious conditioning, people can develop emotional responses by observing others'
reactions. For example, if a child observes a parent reacting with fear to stray dogs, the child may
also develop a fear of dogs, even if they have never been attacked.
Classical Conditioning in Advertising
Advertisers frequently use classical conditioning to evoke emotional responses in consumers.
By associating products with stimuli that generate positive emotions (e.g., cute animals or
attractive models), advertisers hope to condition viewers to associate these emotions with their
products. Vicarious classical conditioning is often used, where viewers observe emotional
reactions in the advertisements and develop similar feelings toward the product.
Treatment of Phobias Using Classical Conditioning
The principles of classical conditioning can also be applied in the treatment of phobias and
anxiety disorders. Therapies based on classical conditioning help individuals unlearn irrational
fears or conditioned responses through repeated exposure to the feared stimuli without the
unconditioned stimulus.
Conditioned Taste Aversions
A specific form of classical conditioning is conditioned taste aversion, where an individual
develops an aversion to a particular food or drink after a negative experience, such as nausea.
Research has shown that rats can develop taste aversions after consuming food up to six hours
before becoming nauseous, and similar aversions occur in humans undergoing chemotherapy or
alcoholics undergoing aversion therapy.

Biological Preparedness and Classical Conditioning


Some stimuli are more easily associated with certain responses due to biological preparedness.
For example, mammals are more likely to associate taste with illness, while birds associate visual
cues with illness. This tendency is rooted in evolutionary importance—fear and nausea are
adaptive responses that increase an organism's chances of survival.
Phobias and Evolutionary Significance
Biological preparedness explains why some objects, such as snakes and crocodiles, are more
easily conditioned to induce fear in humans and animals. In contrast, non-threatening objects,
like flowers or rabbits, are more challenging to associate with fear. This is illustrated by studies
on vicarious classical conditioning in monkeys, where fear of toy snakes and crocodiles was
easily learned, but fear of flowers was not.
Classical Conditioning and Drug Dependency
The surroundings and objects associated with drug use can become conditioned stimuli,
triggering cravings and making it harder for individuals to quit. This occurs because the mind
and body associate certain environmental cues with the effects of the drug, creating a
conditioned "high" response even without the substance.
By understanding these mechanisms, we can see how classical conditioning plays a significant
role in both the development and treatment of phobias, as well as its broader implications in
emotional responses, learning, and even addiction.
CRITICISM
Classical conditioning, while foundational in understanding learning processes, has faced several
criticisms that highlight its limitations.
● Lack of Consideration for Complex Human Behavior: One major criticism is that
classical conditioning oversimplifies the learning process by neglecting the complexity of
human cognition. It fails to account for higher-order mental functions such
as thinking, reasoning, and memory, which also play crucial roles in how individuals
learn and respond to stimuli. This reductionist view suggests that learning is merely a
product of stimulus-response associations, ignoring the nuances of conscious thought and
decision-making.
● Assumption of Determinism: Critics argue that classical conditioning implies a lack of
free will, suggesting that individuals have no control over their reactions to stimuli. This
deterministic perspective undermines the role of personal agency and choice in behavior,
which can be particularly problematic in understanding human actions that are influenced
by context, emotions, and individual experiences.
● Biological Limitations: Another significant limitation is the biological constraints on
what can be conditioned. Certain responses are more readily associated with specific
stimuli due to innate predispositions. For instance, taste aversion demonstrates that
organisms are biologically programmed to associate certain tastes with nausea more
readily than with other stimuli. This indicates that not all potential associations can be
learned equally, challenging the universality of classical conditioning principles.
● Ethical Concerns: The ethical implications of using classical conditioning techniques,
particularly in animal studies or behavioral modification practices, have also been a point
of contention. Critics highlight concerns about the potential for exploitation and
manipulation, raising questions about the morality of conditioning methods used in
various contexts.
● Variability in Learning: Classical conditioning does not adequately explain individual
differences in learning. Factors such as personality traits, previous experiences, and
genetic predispositions can significantly influence how different individuals respond to
conditioning. This variability suggests that a one-size-fits-all approach to learning may
not be effective or appropriate.
● Alternative Theories: Finally, alternative learning theories, such as operant conditioning
and cognitive-behavioral approaches, offer more comprehensive frameworks for
understanding behavior. These theories emphasize the role of consequences
(reinforcement or punishment) and cognitive processes in shaping behavior, providing a
broader perspective than classical conditioning alone.

OPERANT CONDITIONING
There are two kinds of behavior that all organisms are capable of doing: involuntary and
voluntary. If Inez blinks her eyes because a gnat flies close to them, that’s a reflex and totally
involuntary. But if she then swats at the gnat to frighten it, that’s a voluntary choice. She had to
blink, but she chose to swat. Classical conditioning is the kind of learning that occurs with
automatic, involuntary behavior. In this section we’ll describe the kind of learning that applies to
voluntary behavior, which is both different from and similar to classical conditioning.

As discussed earlier, Thorndike (1874–1949) was one of the first researchers to study voluntary
learning, which later became known as operant conditioning. In his famous experiment, a hungry
cat was placed in a "puzzle box" with a lever that, when pressed, allowed it to escape and access
food. Initially, the cat’s actions were random, but over time, it accidentally pressed the lever and
learned to escape faster. This led to Thorndike’s law of effect: actions followed by pleasurable
outcomes are more likely to be repeated, while those followed by unpleasant outcomes are less
likely to be repeated.

Thorndike also proposed other laws of learning:

● Law of Readiness: Learning occurs when an individual is ready to act. If not, actions
may lead to frustration.
● Law of Exercise: Repetition strengthens learning; the more an action is practiced, the
stronger the association.
● Law of Use and Disuse: Frequently used connections become stronger, while unused
ones weaken over time.

B. F. Skinner (1904–1990) was the behaviorist who assumed leadership of the field after John
Watson. He was even more determined than Watson that psychologists should study only
measurable, observable behavior. In addition to his knowledge of Pavlovian classical
conditioning, Skinner found in the work of Thorndike a way to explain all behavior as the
product of learning. He even gave the learning of voluntary behavior a special name: operant
conditioning (Skinner, 1938).

Voluntary behavior is what people and animals do to operate in the world. When people perform
a voluntary action, it is to get something they want or to avoid something they don’t want, right?
So voluntary behavior, for Skinner, is operant behavior, and the learning of such behavior is
operant conditioning. The heart of operant conditioning is the effect of consequences on
behavior. Thinking back to the section on classical conditioning, learning an involuntary
behavior really depends on what comes before the response—the unconditioned stimulus (UCS)
and what will become the conditioned stimulus (CS). These two stimuli are the antecedent
stimuli (antecedent means something that comes before another thing). But in operant
conditioning, learning depends on what happens after the response—the consequence. In a way,
operant conditioning could be summed up as this: “If I do this, what’s in it for me?”
B.F. Skinner, a prominent figure in behavioral psychology, is best known for his work on
operant conditioning rather than classical conditioning, which was primarily developed by Ivan
Pavlov. However, we can draw parallels and discuss the concepts of conditioned response (CR),
unconditioned response (UCR), conditioned stimulus (CS), and unconditioned stimulus (UCS) in
the context of Pavlov's classical conditioning experiments, as well as how Skinner's work relates
to operant conditioning.
Pavlov's Classical Conditioning Components
1. Unconditioned Stimulus (UCS): In Pavlov's experiment, the UCS is the stimulus that
naturally and automatically triggers a response without any prior learning. In his classic
study, the UCS was food, which naturally elicited salivation from dogs.
2. Unconditioned Response (UCR): The UCR is the natural, unlearned reaction to the
UCS. In Pavlov’s experiment, the UCR was the salivation that occurred in response to the
food presented to the dogs.
3. Conditioned Stimulus (CS): The CS is a previously neutral stimulus that, after being
paired repeatedly with the UCS, begins to elicit a conditioned response. In Pavlov's
study, the CS was the bell sound, which initially did not trigger salivation.
4. Conditioned Response (CR): The CR is the learned response to the previously neutral
stimulus (CS) after conditioning has occurred. In Pavlov’s experiment, the CR was the
salivation in response to the bell, even when food was not presented.
Skinner's Operant Conditioning
While Skinner did not conduct experiments involving these classical conditioning terms, he
emphasized the importance of reinforcement and punishment in shaping behavior. In Skinner's
work:
● Operant Response: The behavior that is strengthened or weakened through
reinforcement (positive or negative) or punishment.
● Reinforcement: A stimulus that follows a behavior and increases the likelihood of that
behavior occurring again in the future.
● Punishment: A stimulus that follows a behavior and decreases the likelihood of that
behavior occurring again.

Relationship between the Two Concepts


Although Skinner focused on operant conditioning, which involves voluntary behaviors and their
consequences, understanding the principles of classical conditioning (CR, UCR, CS, UCS) can
provide a foundational context for the broader field of behaviorism. Both classical and operant
conditioning contribute to our understanding of learning and behavior modification, but they
differ in mechanisms:

● Classical Conditioning: Involves involuntary responses and is based on the association between two stimuli.
● Operant Conditioning: Involves voluntary behaviors and is based on the consequences of those behaviors (reinforcement and punishment).

THE CONCEPT OF REINFORCEMENT


“What’s in it for me?” represents the concept of reinforcement, a key contribution by B.F.
Skinner to behaviorism. Reinforcement refers to anything that, when following a response,
makes that response more likely to occur again. Essentially, it strengthens the behavior.
Reinforcement is typically a pleasurable consequence for the organism, aligning with
Thorndike’s law of effect, where the "pleasurable consequence" motivates behavior. This could
range from receiving food when hungry to avoiding unpleasant tasks, such as chores.

In Thorndike’s puzzle-box experiment, the cat’s reinforcement was both escaping the box and
receiving food. Each time the cat escaped, its lever-pushing behavior was reinforced. According
to Skinner, this reinforcement is the key to why the cat learned to repeat the behavior. In operant
conditioning, reinforcement is crucial for learning.

Skinner designed his own research tool called the Skinner box or operant conditioning chamber.
In these experiments, he trained animals, like rats, to press a lever to receive food, demonstrating
how reinforcement strengthens voluntary behavior.

PRIMARY AND SECONDARY REINFORCERS


The events or items used to reinforce behavior are not all the same. For instance, if a friend asks
you to help her move books from her car to her apartment, offering you a choice between $25 and
a candy bar, you’ll likely choose the money. With $25, you could buy several candy bars.
However, if the same offer is made to a 3-year-old child living nearby, the child will likely
choose the candy bar, as they don't fully grasp the value of money. This illustrates the difference
between two types of reinforcers—items or events that, when following a response, strengthen it.
The candy bar, which fulfills a basic need like hunger, is an example of a primary reinforcer.
Primary reinforcers satisfy basic needs such as food (for hunger), liquid (for thirst), or touch (for
comfort). Infants, young children, and animals are easily reinforced by these primary reinforcers.
It’s important to remember that reinforcers are not just "rewards"; removing pain, for example,
also fills a basic need, making freedom from pain a primary reinforcer.
On the other hand, a secondary reinforcer—like money—gets its value from being associated
with primary reinforcers in the past. A child who learns that money can be exchanged for candy
will come to find money reinforcing on its own. Similarly, if a puppy is praised while being
petted (touch being a primary reinforcer), the praise alone will eventually become reinforcing,
making the puppy happy.

Comparing Two Kinds of Conditioning

Operant Conditioning
● End result is an increase in the rate of an already-occurring response.
● Responses are voluntary, emitted by the organism.
● Consequences are important in forming an association.
● Reinforcement should be immediate.
● An expectancy develops for reinforcement to follow a correct response.

Classical Conditioning
● End result is the creation of a new response to a stimulus that did not normally produce that response.
● Responses are involuntary and automatic, elicited by a stimulus.
● Antecedent stimuli are important in forming an association.
● CS must occur immediately before the UCS.
● An expectancy develops for UCS to follow CS.
The Neural Bases of Learning

As new methods for studying the brain and neuronal functions evolve, researchers are exploring
the neural bases of both classical and operant conditioning (Gallistel & Matzel, 2013). One
critical area involved in learning is the anterior cingulate cortex (ACC), located in the frontal
lobe above the front of the corpus callosum (Apps et al., 2015). The ACC connects to the nucleus
accumbens, which we discussed in relation to drug dependence and the reward pathway in
Chapter Four (see Learning Objective 4.11). Both the ACC and nucleus accumbens are involved
in dopamine release (Gale et al., 2016; Morita et al., 2013; Yavuz et al., 2015).

Dopamine plays a significant role in the reinforcement process, as it amplifies some input signals
while decreasing the intensity of others in the nucleus accumbens (Floresco, 2015). For example,
when you hear a specific sound from your cell phone indicating an incoming message, you may
feel compelled to check it. This sound—whether it's a chime or a ding—has become a
conditioned stimulus (CS) associated with a pleasurable outcome. The actual message serves as
an unconditioned stimulus (UCS) for pleasure, leading to a conditioned response (CR) of
enjoyment.

When you hear that sound followed by rewarding activities, excitatory activity occurs in several
brain areas, along with increased dopamine activity. This signals that the behavior was
beneficial, encouraging you to repeat it. Just as dopamine and the reward pathway are implicated
in drug dependency, they also play a crucial role in our “learned” addictions.

Positive and Negative Reinforcement


Reinforcers can be categorized based on how they influence behavior, specifically through
positive reinforcement and negative reinforcement.

Positive Reinforcement
Positive reinforcement involves the addition of a pleasurable consequence following a response,
which increases the likelihood of that response being repeated. This can be understood as a
reward for desired behavior.
Example: Consider a student who studies hard for an exam and receives an A as a result. The
positive feedback from getting a good grade serves as a reward, encouraging the student to
continue studying diligently in the future. Other common examples include receiving praise,
bonuses, or tokens for good behavior. For instance, a child who cleans their room may receive
praise or a small treat, reinforcing the behavior of tidying up.

Negative Reinforcement
In contrast, negative reinforcement involves the removal or escape from an unpleasant stimulus,
which also increases the likelihood of the associated behavior being repeated. This concept may
initially seem counterintuitive; however, it emphasizes that eliminating discomfort can motivate
individuals to repeat a particular behavior.
Example: If a person has a headache and takes medication to alleviate the pain, the relief from
the headache reinforces the behavior of taking medication when experiencing pain. Similarly,
consider a situation where a student submits their assignment on time to avoid losing points for
lateness. The act of submitting the assignment is reinforced by the removal of the unpleasant
consequence (the penalty for late submission).

● Both positive and negative reinforcement serve to strengthen behaviors, albeit in different ways.
● Positive reinforcement adds a desirable outcome, while negative reinforcement removes an undesirable one.

To illustrate the differences more clearly, let's analyze a few examples:


1. Pedro’s Example: Pedro’s father nags him to wash his car. To stop the nagging (an
unpleasant stimulus), Pedro washes the car. Here, the removal of nagging reinforces the
behavior of washing the car, making it a case of negative reinforcement.
2. Napoleon’s Example: Napoleon discovers that talking in a funny voice garners attention
from his classmates. The attention serves as a reward for his behavior, reinforcing his
tendency to talk that way more often. This is an example of positive reinforcement.
3. Allen’s Example: Allen, a server at a restaurant, adopts a pleasant demeanor because he
notices it leads to bigger tips. The increased tips act as positive reinforcement for his
behavior of being friendly and smiling.
4. An Li’s Example: An Li turns in her report on the due date to avoid a penalty for
lateness (marked down a grade). By submitting her paper on time, she removes the
unpleasant consequence, exemplifying negative reinforcement.
Positive and Negative Punishment
Punishment is a concept in behaviorism that aims to decrease the likelihood of a behavior being
repeated. It can be categorized into two types: positive punishment and negative punishment.
Each serves the purpose of reducing undesirable behaviors but does so in different ways.

Positive Punishment
Positive punishment involves the addition of an unpleasant consequence following a behavior,
which decreases the likelihood of that behavior being repeated. This form of punishment is often
seen as a way to discourage unwanted actions by introducing an aversive stimulus.
Example: Consider a child who touches a hot stove. If the child experiences pain from the burn,
that painful experience serves as a positive punishment. The addition of pain (an aversive
consequence) following the behavior of touching the stove decreases the likelihood that the child
will touch the stove again in the future. Other examples include receiving a speeding ticket for
driving too fast or a reprimand from a teacher for talking out of turn. In each case, the addition of
an unpleasant consequence aims to deter the undesirable behavior.
Negative Punishment
Negative punishment, on the other hand, involves the removal of a pleasurable consequence
following a behavior, which also decreases the likelihood of that behavior being repeated. This
form of punishment works by taking away something valued or desired, thereby discouraging the
unwanted behavior.
Example: Imagine a teenager who stays out past curfew. As a result, their parents take away
their driving privileges for a week. The removal of the privilege to drive serves as negative
punishment because it eliminates a pleasurable activity in response to the undesirable behavior of
breaking curfew. Another example is when a child loses access to their favorite toy for
misbehaving. In both scenarios, the removal of a positive experience is intended to reduce the
likelihood of the undesirable behavior occurring again.

● Positive punishment adds an aversive stimulus to decrease behavior, while negative punishment removes a pleasurable stimulus to achieve the same goal.
● Both forms of punishment can be effective but can also lead to negative emotions, such
as fear or resentment, if used excessively or inappropriately.
● Immediate Consequence: Punishment should follow the undesired behavior
immediately to create a clear association. Delayed punishment diminishes its
effectiveness, just as with reinforcement. For example, if a child misbehaves, the
consequence should occur right away to reinforce the connection between the behavior
and the punishment.
● Consistency: Consistency in punishment is crucial. First, if a specific punishment is
promised for a particular behavior, it must be enacted without fail. Second, the intensity
of punishment should remain the same or increase slightly over time, never decrease. For
instance, if a child is scolded for jumping on the bed, the same or a more severe
consequence should follow if the behavior is repeated. If the child receives a spanking
once and only a mild scolding the next time, they may begin to “gamble” on the
likelihood of punishment.
● Pairing with Reinforcement: Whenever possible, punishment for an undesired behavior
should be paired with reinforcement of a desired behavior. For example, instead of
simply yelling at a child for eating with their fingers, a parent might gently remove the
child's hand from the plate while saying, “No, we do not eat with our fingers. We eat with
our fork,” and then hand the fork back to the child with praise. This approach reinforces
the correct behavior while using a milder form of punishment, effectively teaching the
desired behavior rather than merely suppressing the unwanted one.

Positive reinforcement
● Definition: Adding a pleasant stimulus to increase the likelihood of a behaviour.
● Example: Giving a child praise for completing their homework.
● Effect: Increases the desired behaviour.

Negative reinforcement
● Definition: Removing an unpleasant stimulus to increase the likelihood of a behaviour.
● Example: Taking away extra chores when a child improves their grade.
● Effect: Increases the desired behaviour.

Positive punishment
● Definition: Adding an unpleasant stimulus to decrease the likelihood of a behaviour.
● Example: Scolding a student for talking during class.
● Effect: Decreases the likelihood of the behaviour.

Negative punishment
● Definition: Removing a pleasant stimulus to decrease the likelihood of a behaviour.
● Example: Taking away a teenager's phone for speaking rudely to their parents.
● Effect: Decreases the likelihood of the behaviour.
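These four consequence types form a simple 2x2 scheme: a stimulus is either added or removed, and the goal is either to increase or decrease a behaviour. As a rough illustration (the function and argument names below are ours, not from the text), the scheme can be expressed as a small lookup:

```python
# Classify an operant consequence into one of the four quadrants:
# whether a stimulus is added or removed, crossed with whether the
# aim is to increase or decrease the behaviour.

def classify_consequence(stimulus_change: str, behaviour_goal: str) -> str:
    """stimulus_change: 'add' or 'remove'; behaviour_goal: 'increase' or 'decrease'."""
    quadrants = {
        ("add", "increase"): "positive reinforcement",
        ("remove", "increase"): "negative reinforcement",
        ("add", "decrease"): "positive punishment",
        ("remove", "decrease"): "negative punishment",
    }
    return quadrants[(stimulus_change, behaviour_goal)]

# Praise is added to increase homework completion:
print(classify_consequence("add", "increase"))     # positive reinforcement
# A phone is taken away to decrease rude speech:
print(classify_consequence("remove", "decrease"))  # negative punishment
```

Note that "positive" and "negative" here describe only whether a stimulus is added or removed, never whether the consequence is pleasant.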

UNDERSTANDING THE DIFFERENCE BETWEEN NEGATIVE REINFORCEMENT AND PUNISHMENT
Negative reinforcement strengthens a behavior by removing or avoiding an unpleasant
stimulus. Its goal is to increase the likelihood of a behavior occurring. For example, if a student
completes their homework to avoid extra chores, the removal of chores is negative reinforcement
because it encourages the behavior of doing homework.

On the other hand, punishment (whether positive or negative) aims to reduce or suppress a
behavior. Positive punishment involves adding something unpleasant (like scolding), while
negative punishment involves taking away something pleasant (like losing phone privileges).
Both types of punishment decrease the likelihood of a behavior happening again.

SCHEDULES OF REINFORCEMENT

The timing of reinforcement significantly influences the speed of learning and the strength of the
learned response. Research by Skinner (1956) highlights that reinforcing every response is not
the most effective strategy for fostering long-lasting learning. This phenomenon is illustrated by
the partial reinforcement effect. For instance, Alicia receives a quarter every night for putting
her dirty clothes in the hamper, while Bianca earns a dollar only if she consistently puts her
clothes away each night for a week. Initially, Alicia learns faster because every response is
reinforced (continuous reinforcement). However, when reinforcement ceases, Alicia is more likely to stop the behavior,
whereas Bianca may continue for a while longer, demonstrating greater resistance to extinction.
This illustrates that responses reinforced intermittently are more enduring than those reinforced
continuously.

Partial reinforcement can occur through various patterns or schedules. It can focus on the time
interval, known as an interval schedule, or the number of responses required, called a ratio
schedule. Schedules can also be classified as fixed (consistent requirements) or variable
(changing requirements). Thus, we have four main types of reinforcement schedules: fixed
interval, variable interval, fixed ratio, and variable ratio.
1. Fixed Interval Schedule: In this schedule, reinforcement is provided after a set period.
For example, receiving a paycheck every week follows a fixed interval. In a controlled
setting, if a rat must press a lever at least once every two minutes to receive food, this
exemplifies a fixed interval schedule. While this schedule leads to slower response rates,
it produces a scalloped response pattern as the rat becomes more active just before the
end of the interval, akin to how workers may speed up just before payday.
2. Variable Interval Schedule: This schedule involves unpredictable reinforcement at
varying time intervals. An example is a pop quiz, where students study regularly because
they don’t know when a quiz might occur. In a controlled experiment, a rat may receive
food every 5 minutes on average, but the actual intervals may vary. This results in a
steady but slower response rate, as the subject cannot predict when reinforcement will
occur.
3. Fixed Ratio Schedule: Here, reinforcement is given after a specific number of responses.
An example is piecework, where a worker is paid after completing a set number of tasks.
In a controlled setting, a rat may receive a food pellet after pressing a lever ten times.
This schedule produces high response rates, as the rat quickly pushes the lever to reach
the next reinforcement, leading to short pauses after each reinforcer.
4. Variable Ratio Schedule: This schedule involves reinforcement after an unpredictable
number of responses, resulting in high and steady response rates. A common example is
gambling, where players may win after an unpredictable number of bets. In an
experiment, a rat may have to push a lever an average of 20 times to receive food, but the
actual number of presses varies. This unpredictability keeps the rat continuously
responding, as it cannot afford to take breaks.
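The two ratio schedules described above can be sketched as a short simulation. This is an illustrative sketch, not a standard implementation: the function names are ours, and it assumes each variable-ratio requirement is drawn uniformly around the average (as in the rat example, where about 20 presses are needed on average).

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def fixed_ratio(n_responses: int, ratio: int = 10) -> int:
    """Reinforcers earned on a fixed-ratio schedule: one per `ratio` responses."""
    return n_responses // ratio

def variable_ratio(n_responses: int, mean_ratio: int = 20) -> int:
    """Reinforcers earned when each one requires an unpredictable number
    of responses, uniform on 1..(2*mean_ratio - 1), averaging mean_ratio."""
    reinforcers = 0
    responses_needed = random.randint(1, 2 * mean_ratio - 1)
    for _ in range(n_responses):
        responses_needed -= 1
        if responses_needed == 0:
            reinforcers += 1
            responses_needed = random.randint(1, 2 * mean_ratio - 1)
    return reinforcers

print(fixed_ratio(100))      # exactly 10: one reinforcer per 10 lever presses
print(variable_ratio(1000))  # roughly 1000/20 = 50, but the exact count varies
```

The contrast mirrors the behavioural finding: on the fixed schedule the payoff is fully predictable, while on the variable schedule the subject can never tell how close the next reinforcer is, which is why responding stays high and steady.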

Regardless of the reinforcement schedule used, effective reinforcement relies on two additional
factors: timing and the reinforcement of only the desired behavior. Immediate reinforcement is
more effective, especially for animals and young children. Furthermore, reinforcing the specific
behavior we wish to encourage is crucial; otherwise, mixed signals can undermine the learning
process.

Ratio schedules (based on the number of responses)
● Fixed Ratio (FR): Reinforcement/punishment is delivered after a set number of responses. For example, a student receives a sticker for every 5 math problems they solve; a child loses their TV privilege after failing 3 tests.
● Variable Ratio (VR): Reinforcement/punishment is delivered after an unpredictable number of responses. For example, a slot machine pays out after an unpredictable number of lever pulls; a student is reprimanded after an unpredictable number of cheating instances.

Interval schedules (based on a time interval)
● Fixed Interval (FI): Reinforcement/punishment is delivered after a fixed period of time, as long as the behaviour occurs at least once. For example, a worker gets paid every two weeks; a student loses recess if they misbehave during a 10-minute observation period.
● Variable Interval (VI): Reinforcement/punishment is delivered after an unpredictable time interval, provided that the behaviour occurs at least once. For example, a supervisor randomly checks in and gives praise at unpredictable times; a teacher gives surprise quizzes at unpredictable times.

STIMULUS GENERALIZATION, DISCRIMINATION, SPONTANEOUS RECOVERY, AND EXTINCTION
A discriminative stimulus is any stimulus that provides an organism with a cue for making a
certain response in order to obtain reinforcement. Specific cues lead to specific responses, and
discriminating between the cues leads to success. For example, a police car is a discriminative
stimulus for slowing down, and a red stoplight is a cue for stopping because both actions are
usually followed by negative reinforcement—people don’t get a ticket or don’t get hit by another
vehicle. Similarly, a doorknob is a cue for where to grab the door in order to successfully open it.
Extinction in operant conditioning involves the removal of the reinforcement. For example, if
a child throws a temper tantrum in the checkout line for candy or a toy, and the parent gives in,
they positively reinforce the tantrum. To stop the tantrum, the parent must remove the
reinforcement, meaning no candy, no treat, and ideally, no attention. This can be challenging, as
the tantrum behavior may worsen before it extinguishes.
Operantly conditioned responses can also be generalized to stimuli that are only similar to the
original stimulus. For example, when a baby first learns to say “Dada” in response to her father,
she may generalize this response to any man. As other men fail to reinforce her for this response,
she will learn to discriminate among them, eventually only calling her father “Dada.” In this
way, her father becomes a discriminative stimulus.
Spontaneous recovery, similar to classical conditioning, occurs when an operant response
reappears after extinction. For instance, when training animals to perform various tricks, animals
may initially try to get reinforcers by performing their old tricks, demonstrating spontaneous
recovery of previous responses.
CRITICISMS OF OPERANT CONDITIONING
● Oversimplification of Behavior: Critics argue that operant conditioning reduces complex human behaviors to simple stimulus-response patterns, ignoring the cognitive processes that influence learning and behavior. It may not adequately account for the nuances of human decision-making.
● Neglect of Internal States: Operant conditioning primarily focuses on external behaviors and observable stimuli, often neglecting internal cognitive states, emotions, and motivations that can significantly influence behavior. This lack of attention to mental processes can limit the understanding of more complex behaviors.
● Ethical Concerns: The use of punishment as a form of behavior modification raises ethical questions. Critics argue that relying on punishment can lead to negative side effects, such as increased aggression, fear, and avoidance behaviors, and may not effectively promote long-term behavior change.
● Contextual Limitations: Operant conditioning often emphasizes controlled laboratory settings, which may not translate well to real-world situations. Behavior may be influenced by a variety of environmental factors that are not accounted for in controlled experiments.
● Overemphasis on Reinforcement and Punishment: Some critics suggest that operant conditioning places too much emphasis on reinforcement and punishment as the primary drivers of behavior. Other factors, such as social influences and intrinsic motivation, may play a more significant role in shaping behavior.
● Limitations in Explaining Complex Behaviors: Operant conditioning may struggle to explain certain complex behaviors that are not easily reinforced or punished, such as creativity, insight learning, and behaviors that arise from intrinsic motivation.
● Reductionism: The approach is sometimes criticized for being reductionist, as it simplifies behavior to basic principles of reinforcement and punishment without considering the influence of social, cultural, and environmental contexts.
● Inapplicability to All Learning Situations: While operant conditioning can be effective in some contexts, it may not be applicable to all learning situations, particularly in cases involving higher-order thinking, problem-solving, or social learning.
● Potential for Misapplication: Misunderstandings of operant conditioning principles can lead to ineffective or harmful applications in educational, clinical, or parenting contexts, particularly if reinforcement and punishment are not used appropriately.

BEHAVIOUR MODIFICATION
Operant conditioning is more than just the reinforcement of simple responses. It can be used to
modify the behavior of both animals and humans.
Behaviour modification is the field of psychology concerned with analyzing and modifying
human behaviour. Analyzing means identifying the functional relationship between
environmental events and a particular behaviour to understand the reasons for behaviour or to
determine why a person behaved as he or she did. Modifying means developing and
implementing procedures to help people change their behaviour. It involves altering
environmental events so as to influence behaviour. Behaviour modification procedures are
developed by professionals and used to change socially significant behaviours, with the goal of
improving some aspect of a person’s life.
CHARACTERISTICS OF BEHAVIOUR MODIFICATION
1. Focus on behaviour:
Behaviour modification procedures are designed to change behaviour, not a personal
characteristic or trait. Therefore, behaviour modification deemphasizes labelling. For
example, behaviour modification is not used to change autism (a label); rather, behaviour
modification is used to change problem behaviours exhibited by children with autism.
Behavioural excesses and deficits are targets for change with behaviour modification
procedures. In behaviour modification, the behaviour to be modified is called the target
behaviour. A behavioural excess is an undesirable target behaviour the person wants to
decrease in frequency, duration, or intensity. Smoking is an example of a behavioural
excess. A behavioural deficit is a desirable target behaviour the person wants to increase
in frequency, duration, or intensity. Exercise and studying are possible examples of
behavioural deficits.
2. Procedures based on behavioural principles:
Behaviour modification is the application of basic principles originally derived from
experimental research with laboratory animals. The scientific study of behaviour is called
the experimental analysis of behaviour, or behaviour analysis. The scientific study of
human behaviour is called the experimental analysis of human behaviour, or applied
behaviour analysis. Behaviour modification procedures are based on research in applied
behaviour analysis that has been conducted for more than 40 years.
3. Emphasis on current environmental events:
Behaviour modification involves assessing and modifying the current environmental
events that are functionally related to the behaviour. Human behaviour is controlled by
events in the immediate environment, and the goal of behaviour modification is to
identify those events. Once these controlling variables have been identified, they are
altered to modify the behaviour. Successful behaviour modification procedures alter the
functional relationships between the behaviour and the controlling variables in the
environment to produce a desired change in the behaviour. Sometimes labels are
mistakenly identified as the causes of behaviour. For example, a person might say that a
child with autism engages in problem behaviours (such as screaming, hitting himself,
refusal to follow instructions) because the child is autistic. In other words, the person is
suggesting that autism causes the child to engage in the behaviour. However, autism is
simply a label that describes the pattern of behaviours the child engages in. The label
cannot be the cause of the behaviour because the label does not exist as a physical entity
or event. The causes of the behaviour must be found in the environment (including the
biology of the child).
4. Precise description of behaviour modification procedures:
Behaviour modification procedures involve specific changes in environmental events that
are functionally related to the behaviour. For the procedures to be effective each time
they are used, the specific changes in environmental events must occur each time. By
describing procedures precisely, researchers and other professionals make it more likely
that the procedures will be used correctly each time.
5. Treatment implemented by people in everyday life:
Behaviour modification procedures are developed by professionals or paraprofessionals
trained in behaviour modification. However, behaviour modification procedures often are
implemented by people such as teachers, parents, job supervisors, or others to help people
change their behaviour. People who implement behaviour modification procedures
should do so only after sufficient training. Precise descriptions of procedures and
professional supervision make it more likely that parents, teachers, and others will
implement procedures correctly.
6. Measurement of behaviour change:
One of the hallmarks of behaviour modification is its emphasis on measuring the
behaviour before and after intervention to document the behaviour change resulting from
the behaviour modification procedures. In addition, ongoing assessment of the behaviour
is done well beyond the point of intervention to determine whether the behaviour change
is maintained in the long run. If a supervisor is using behaviour modification procedures
to increase work productivity (to increase the number of units assembled each day), he or
she would record the workers’ behaviours for a period before implementing the
procedures. The supervisor would then implement the behaviour modification procedures
and continue to record the behaviours. This recording would establish whether the
number of units assembled increased. If the workers’ behaviours changed after the
supervisor’s intervention, he or she would continue to record the behaviour for a further
period. Such long term observation would demonstrate whether the workers continued to
assemble units at the increased rate or whether further intervention was necessary.
7. De-emphasis on past events as causes of behaviour:
As stated earlier, behaviour modification places emphasis on recent environmental events
as the causes of behaviour. However, knowledge of the past also provides useful
information about environmental events related to the current behaviour. For example,
previous learning experiences have been shown to influence current behaviour.
Therefore, understanding these learning experiences can be valuable in analysing current
behaviour and choosing behaviour modification procedures. Although information on
past events is useful, knowledge of current controlling variables is most relevant to
developing effective behaviour modification interventions because those variables, unlike
past events, can still be changed.
8. Rejection of hypothetical underlying causes of behaviour:
Although some fields of psychology, such as Freudian psychoanalytic approaches, might
be interested in hypothesised underlying causes of behaviour, such as an unresolved
Oedipus complex, behaviour modification rejects such hypothetical explanations of
behaviour. Skinner (1974) has called such explanations “explanatory fictions” because
they can never be proved or disproved, and thus are unscientific. These supposed
underlying causes can never be measured or manipulated to demonstrate a functional
relationship to the behaviour they are intended to explain.
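The before-and-after measurement practice described in characteristic 6 (record a baseline, implement the procedure, continue recording, compare the two phases) can be sketched in a few lines of Python. This is a hypothetical illustration only; the daily counts, the function names, and the one-unit decision threshold are all invented for the example.

```python
# Hypothetical sketch of the measurement logic from characteristic 6:
# record a baseline phase, implement the intervention, keep recording,
# and compare the two phases. All numbers here are invented.

def mean(values):
    """Average units assembled per day over a recording phase."""
    return sum(values) / len(values)

def behavior_changed(baseline, intervention, threshold=1.0):
    """True if the intervention-phase mean exceeds the baseline mean
    by at least `threshold` units per day (an invented decision rule)."""
    return mean(intervention) - mean(baseline) >= threshold

# Units assembled per day, before and after the procedures were implemented.
baseline_days = [10, 11, 9, 10, 10]
intervention_days = [13, 14, 15, 14, 16]

print(behavior_changed(baseline_days, intervention_days))  # True
```

In the supervisor example, continued recording after the intervention would simply mean calling the same comparison again on a later block of days to check whether the gain is maintained.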
HISTORY OF BEHAVIOUR MODIFICATION
The development of behaviour modification is rooted in significant historical
contributions from key figures. Ivan P. Pavlov uncovered respondent conditioning
through experiments demonstrating conditioned reflexes, while Edward L. Thorndike
established the law of effect, showing that behaviours producing favorable outcomes are
likely to be repeated. John B. Watson promoted behaviorism, emphasizing observable
behaviour and environmental control, and B. F. Skinner expanded this field by
distinguishing between respondent and operant conditioning, laying the groundwork for
behaviour modification. Following Skinner's principles, researchers in the 1950s applied
these concepts to human behaviour, leading to thousands of studies validating behaviour
modification techniques. Influential publications and professional organizations, such as
the Society for the Experimental Analysis of Behaviour and the Journal of Applied
Behaviour Analysis, further advanced the field by supporting research and disseminating
findings in behaviour analysis and modification.
TYPES OF BEHAVIOUR MODIFICATION
1. SHAPING: Shaping is a process in operant conditioning used to teach animals complex
behaviors through the reinforcement of successive approximations, or small steps, that
lead to a desired goal. For instance, when training an animal to perform tricks in a circus
or zoo, each step toward the final behavior is rewarded until the entire behavior is
achieved.
To illustrate, let’s say Jody wants to teach his dog, Rover, to jump through a hoop. Jody
would start by using a behavior that Rover is already capable of, such as walking through
a hoop placed on the ground. By using a treat to entice Rover to step through the hoop,
Jody can reward him with the treat (positive reinforcement) once he successfully walks
through. Gradually, Jody would raise the hoop slightly, rewarding Rover each time he
steps through. Eventually, the dog would be jumping through the hoop to receive the
treat. This method of reinforcing each small step that gets closer to the desired goal is
known as shaping (Skinner, 1974).
Trainers often use a sound, like a whistle or clicker, paired with food as a secondary
reinforcer. This technique allows trainers to avoid overfeeding while still reinforcing the
behavior.
While many animals can learn behaviors through operant conditioning, there are
biological limits to what each species can learn, as discussed further in the concept of
biological constraints.
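The hoop-training procedure above can be sketched as a simple loop. This is a hypothetical simplification: the step size, the guaranteed success at each height, and the function name are invented, whereas real shaping reinforces whichever response comes closest to the goal on a given trial.

```python
# A minimal sketch of shaping with Jody and Rover's hoop example.
# Each pass through the loop is one successive approximation that is
# reinforced; success at every step is assumed for simplicity.

def train_by_shaping(goal_height, step=10):
    """Raise the hoop in small increments, reinforcing each successful
    pass, until the dog jumps through at the goal height. Returns the
    number of reinforced approximations."""
    height = 0          # hoop starts on the ground
    reinforcements = 0
    while height < goal_height:
        height += step       # next successive approximation
        reinforcements += 1  # treat (positive reinforcement) at this step
    return reinforcements

print(train_by_shaping(50))  # 5 reinforced steps from ground to goal
```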
2. CHAINING is a behavioral technique used in operant conditioning that involves linking
together a series of individual behaviors to form a more complex behavior or skill. Each
step in the chain is called a "link," and it builds upon the previous steps. Chaining can be
used in both forward and backward directions:

● Forward Chaining: This approach starts with the first behavior in the sequence. The
trainer reinforces the learner for completing the first step, then introduces the second step,
reinforcing the learner for completing the first and second steps together. This process
continues until the entire sequence is learned.
● Backward Chaining: In this method, the last behavior in the sequence is taught first. The
trainer reinforces the learner for completing the last step, then introduces the second-to-
last step, reinforcing the learner for completing both the second-to-last and last steps
together. This continues until the learner can complete the entire sequence from start to
finish.

Example of Chaining: Teaching a child to brush their teeth might involve chaining the steps of
the process:

● Step 1: Wet the toothbrush.


● Step 2: Apply toothpaste.
● Step 3: Brush teeth.
● Step 4: Rinse mouth.
● Step 5: Clean the toothbrush.
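The tooth-brushing steps above can illustrate both chaining directions in a short sketch. The step names come from the list; the training loop itself and the function names are hypothetical simplifications (each partial chain would be reinforced on completion).

```python
# Hypothetical sketch of forward and backward chaining using the
# tooth-brushing steps. Each list returned is one training stage:
# the partial chain the learner practices before reinforcement.

STEPS = ["wet toothbrush", "apply toothpaste", "brush teeth",
         "rinse mouth", "clean toothbrush"]

def forward_chain(steps):
    """Forward chaining: practice link 1, then links 1-2, and so on
    until the full sequence is performed."""
    return [steps[:i] for i in range(1, len(steps) + 1)]

def backward_chain(steps):
    """Backward chaining: practice the last link first, then the last
    two, working backward to the full sequence."""
    return [steps[len(steps) - i:] for i in range(1, len(steps) + 1)]

print(forward_chain(STEPS)[0])   # ['wet toothbrush']
print(backward_chain(STEPS)[0])  # ['clean toothbrush']
```

Note that both procedures end at the same place, the complete five-step chain; they differ only in which end of the sequence is taught first.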

Key Differences between Chaining and Shaping:

1. Nature of Behavior:
o Chaining involves linking multiple discrete behaviors to form a sequence, while
shaping focuses on developing a single behavior through successive
approximations.
2. Reinforcement:
o In chaining, each step is typically reinforced until the entire sequence is learned,
while in shaping, reinforcement is provided for any behavior that is closer to the
desired outcome, regardless of whether it is part of a sequence.
3. Learning Process:
o Chaining requires the learner to learn a complete sequence of actions, while
shaping allows for gradual learning, where each approximation builds on the
previous behavior until the final behavior is achieved.

3. INSTINCTIVE DRIFT: Instinctive drift refers to the tendency for animals to revert to
genetically determined behaviors, even after learning new behaviors through conditioning. This
concept was identified by the Brelands, who were trained by B.F. Skinner, during their studies
on animal training. They observed that animals, despite being trained to perform specific tasks,
would often fall back into instinctive behaviors that were inherent to their species.

For example, raccoons were trained to place coins in a container but would instinctively rub the
coins with their paws, mimicking their natural behavior of washing food before eating. Similarly,
pigs trained to pick up objects would revert to their instinct of rooting and throwing food around.
These behaviors were not taught but were part of the animals’ innate tendencies. The Brelands'
research revealed several key points:

1. Animals are not blank slates and cannot be taught just any behavior.
2. Species differences are crucial in determining what behaviors can be conditioned.
3. Not all behaviors are equally trainable for all species.

These findings suggest that biological instincts can sometimes limit or override the effects of
conditioning, showing that nature and genetics play a significant role in learning. This principle
applies not only to animals but also to humans, where instinctive or ingrained behaviors may
resist modification despite attempts at conditioning.

4. TOKEN ECONOMY: A token economy is a behavioral modification technique that uses
tokens or symbolic rewards to reinforce desired behaviors. Tokens, such as gold stars, points, or
punches on a card, act as secondary reinforcers, which can later be exchanged for primary
reinforcers like treats, privileges, or other rewards.

For example, in a classroom setting, a teacher may use a token economy to encourage a child to
focus during lessons. The teacher would first select a target behavior, like making eye contact
during instruction. Each time the child successfully engages in the target behavior, they receive a
token, such as a gold star on a chart. After accumulating a certain number of tokens, the child
can exchange them for a predetermined reward, such as extra playtime or a treat.

This system operates similarly to how money functions in society. People work to earn money,
which they can then exchange for goods and services. Token economies are used in various
settings, such as schools, mental health facilities, and rehabilitation programs, to reinforce
positive behaviors.
Additionally, credit card companies and airlines use similar systems, offering reward points or
frequent flyer miles that can be exchanged for goods, services, or discounts. This encourages
desired behaviors, such as using a particular service or spending more money. By consistently
reinforcing behaviors with tokens, individuals can develop long-term behavioral changes.

In combination with other behavior modification tools like time-out (which removes attention as
a form of mild punishment), token economies are effective ways to increase desirable behaviors
and decrease unwanted ones.
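The classroom example (gold stars exchanged for extra playtime) can be sketched as a small Python class. The class name, exchange price, and reward string are invented for illustration; the point is simply that tokens are secondary reinforcers earned per occurrence of the target behavior and traded at a preset rate for a backup reinforcer.

```python
# A minimal, hypothetical token-economy sketch: tokens (secondary
# reinforcers) are delivered for the target behavior and exchanged
# at a preset price for a backup reward.

class TokenEconomy:
    def __init__(self, exchange_price, reward):
        self.exchange_price = exchange_price  # tokens needed per reward
        self.reward = reward                  # predetermined backup reward
        self.tokens = 0

    def reinforce(self):
        """Deliver one token each time the target behavior occurs."""
        self.tokens += 1

    def exchange(self):
        """Trade accumulated tokens for the reward, if enough are saved."""
        if self.tokens >= self.exchange_price:
            self.tokens -= self.exchange_price
            return self.reward
        return None

chart = TokenEconomy(exchange_price=5, reward="extra playtime")
for _ in range(5):       # the child makes eye contact five times
    chart.reinforce()    # one gold star per occurrence
print(chart.exchange())  # extra playtime
```

The same structure describes frequent-flyer miles or credit card points: a high exchange price simply means more instances of the target behavior are required before the backup reinforcer is delivered.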

5. APPLIED BEHAVIOR ANALYSIS (ABA) is a modern form of behavior modification that
involves analyzing current behavior and using behavioral techniques to address socially relevant
issues. In ABA, complex skills are broken down into small, manageable steps and taught through
reinforcement. Prompts, like guiding a child's focus back to a task, are used when necessary and
gradually removed as the child gains mastery. ABA has become especially useful in treating
children with developmental disorders, such as Autism Spectrum Disorder (ASD), where
techniques like shaping are used to teach social and communication skills. Reinforcers like treats
or praise encourage children with ASD to learn behaviors such as making eye contact or using
language, as pioneered by Dr. O. Ivar Lovaas. ABA is widely applied in various fields, including
animal training, behavior management in schools, and therapy for individuals with mental health
issues.
6. BIOFEEDBACK is a technique that uses feedback from biological processes to bring
involuntary responses, such as heart rate or muscle tension, under conscious control. For
example, a person can learn to control their blood pressure or reduce muscle tension by receiving
real-time feedback from sensors monitoring their body. This method has been used for decades
to treat conditions such as high blood pressure, anxiety, and chronic pain by helping individuals
learn to relax or control physiological responses through operant conditioning. The concept of
biofeedback helps people manage responses that are typically considered automatic, and it has
been instrumental in addressing a range of physical and mental health issues.

7. NEUROFEEDBACK, a newer form of biofeedback, focuses on modifying brain activity.
Using technologies like EEG, which records electrical brain activity, neurofeedback helps
individuals learn to regulate their brainwaves to achieve desired states, such as increased focus or
relaxation. This method can be particularly helpful for conditions like ADHD, where patients can
train themselves to produce brain activity associated with improved attention. Modern
neurofeedback systems often integrate with video games or computer programs, allowing users
to interact with their brain activity in real-time. Neurofeedback is also being explored for treating
disorders such as chronic pain, epilepsy, and antisocial personality disorder. Studies using
advanced imaging techniques like MRI and fMRI are further investigating how neurofeedback
affects brain function, offering promising new applications for this technique.

8. CONTINGENCY CONTRACTING: Contingency contracting is a variation of operant
procedures that formalizes expected reinforcements and punishments into a contract clearly
understood by all parties involved. This approach often involves negotiation between the
behavior modifier and participants, enhancing the sense of ownership and responsibility. In
classroom settings, teachers can utilize this method by collaboratively setting expectations for
academic and behavioral performance with students, which helps to establish a positive
classroom environment. By discussing desired reinforcements, such as special projects or time in
reward areas, both teachers and students work together to create a mutually satisfactory
agreement, ultimately reducing behavioral issues and increasing student engagement.

Consistency is crucial in contingency contracting, as it allows individuals to understand the
operative contingencies and feel secure in their environment. Inconsistent application can lead to
anxiety and rule-testing behavior, while consistent enforcement fosters a sense of reliability. This
mutual reciprocity, where both parties fulfill their obligations, enhances the effectiveness of the
contract. Contingency contracting not only promotes accountability in educational settings but
has also proven beneficial in contexts like marriage counseling and family dynamics, reinforcing
the importance of consistency in fostering positive relationships.
AREAS OF APPLICATION

Behaviour modification procedures have been used in many areas to help people change a vast
array of problematic behaviours.

1. Developmental Disabilities: More behaviour modification research has been conducted
in the field of developmental disabilities than perhaps any other area. People with
developmental disabilities often have serious behavioural deficits, and behaviour
modification has been used to teach a variety of functional skills to overcome these
deficits. In addition, people with developmental disabilities may exhibit serious problem
behaviours such as self-injurious behaviours, aggressive behaviours, and destructive
behaviours. A wealth of research in behaviour modification demonstrates that these
behaviours often can be controlled or eliminated with behavioural interventions (Barrett,
1986; VanHouten & Axelrod, 1993). Behaviour modification procedures also are used
widely in staff training and staff management in the field of developmental disabilities
(Reid, Parsons, & Green, 1989).
2. Mental Illness: Behaviour modification has been used with patients with chronic mental
illness to modify such behaviours as daily living skills, social behaviour, aggressive
behaviour, treatment compliance, psychotic behaviours, and work skills. One particularly
important contribution of behaviour modification was the development of a motivational
procedure for institutional patients called a token economy (Ayllon & Azrin, 1968).
Token economies are still widely used in a variety of treatment settings.
3. Education: Great strides have been made in the field of education because of behaviour
modification research. Researchers have analysed student–teacher interactions in the
classroom, improved teaching methods, and developed procedures for reducing problem
behaviours in the classroom (Becker & Carnine, 1981; Madsen, Becker, & Thomas,
1968). Behaviour modification procedures have also been used in higher education to
improve instructional techniques, and thus improve student learning.
4. Rehabilitation: Rehabilitation is the process of helping people regain normal function
after an injury or trauma, such as a head injury from an accident or brain damage from a
stroke. Behaviour modification is used in rehabilitation to promote compliance with
rehabilitation routines such as physical therapy, to teach new skills that can replace skills
lost through the injury or trauma, to decrease problem behaviours, to help manage
chronic pain, and to improve memory performance.
5. Community Psychology: Within community psychology, behavioural interventions are
designed to influence the behaviour of large numbers of people in ways that benefit
everybody. Some targets of behavioural community interventions include reducing
littering, increasing recycling, reducing energy consumption, reducing unsafe driving,
reducing illegal drug use, increasing the use of seat belts, decreasing illegal parking in
spaces for the disabled, and reducing speeding.
6. Clinical Psychology: In clinical psychology, psychological principles and procedures are
applied to help people with personal problems. Typically, clinical psychology involves
individual or group therapy conducted by a psychologist. Behaviour modification in
clinical psychology, often called behaviour therapy, has been applied to the treatment of a
wide range of human problems.
7. Business, Industry and Human Services: The use of behaviour modification in the field
of business, industry, and human services is called organisational behaviour modification
or organisational behaviour management. Behaviour modification procedures have been
used to improve work performance and job safety and to decrease tardiness, absenteeism,
and accidents on the job. In addition, behaviour modification procedures have been used
to improve supervisors’ performances. The use of behaviour modification in business and
industry has resulted in increased productivity and profits for organisations and increased
job satisfaction for workers.
8. Child Management: Numerous applications of behaviour modification to the
management of child behaviour exist. Parents and teachers can learn to use behaviour
modification procedures to help children overcome bedwetting, nail-biting, temper
tantrums, noncompliance, aggressive behaviours, bad manners, stuttering, and other
common problems.
9. Sports: Behaviour modification is also used widely in the field of sports psychology and
is also used to promote health-related behaviours by increasing healthy lifestyle
behaviours (such as exercise and proper nutrition) and decreasing unhealthy behaviours
(such as smoking, drinking, and overeating).
10. Medical Problems: Behaviour modification procedures are also used to promote
behaviours that have a positive influence on physical or medical problems—such as
decreasing frequency and intensity of headaches, lowering blood pressure, and reducing
gastrointestinal disturbances—and to increase compliance with medical regimens.
Applying behaviour modification to health-related behaviours is called behavioural
medicine or health psychology.

COGNITIVE LEARNING THEORY

In the early days of behaviorism, psychologists like Watson and Skinner focused solely on
observable, measurable behavior, ignoring internal mental processes. However, theorists such as
Edward Tolman and the Gestalt psychologist Wolfgang Köhler remained interested in how the
mind influences behavior, studying how people perceive and organize stimuli. By the 1950s and
1960s, the rise of computers led to comparisons between the human mind and machines, shifting
psychology’s focus to cognitive processes. This gave rise to cognitive learning theory,
emphasizing the role of thoughts, feelings, and expectations in behavior. Martin Seligman later
contributed to this theory through his research on learned helplessness.

● TOLMAN’S MAZE-RUNNING RATS: LATENT LEARNING

Edward Tolman conducted a well-known experiment on latent learning using
three groups of rats (Tolman & Honzik, 1930). Each group was placed in the same maze, but their
experiences differed.

● Group 1: Rats received food as reinforcement for successfully exiting the maze. They
were placed back in the maze repeatedly and reinforced until they could solve the maze
without errors.
● Group 2: Rats did not receive any reinforcement until the 10th day of the experiment.
They were simply placed in the maze and allowed to explore, without reinforcement until
later.
● Group 3 (Control): Rats received no reinforcement for the entire duration of the
experiment.
According to strict behaviorist theory, only the first group of rats should have learned the maze
effectively, as behaviorists believed learning required reinforcement. Initially, this seemed true—
Group 1 solved the maze after repeated trials, while Groups 2 and 3 wandered aimlessly.
However, on the 10th day, when Group 2 finally received reinforcement, they solved the maze
much faster than expected. Instead of needing as many trials as Group 1, they began solving the
maze almost immediately.
Tolman concluded that the rats in Group 2 had formed a "cognitive map" of the maze during the
first nine days, learning the layout without showing it since there was no reason to do so. Once
reinforcement was provided, this hidden or latent learning emerged. Tolman's experiment
challenged traditional operant conditioning, as it demonstrated that learning could occur without
reinforcement, only becoming visible when there was motivation to use the knowledge.
● KÖHLER’S SMART CHIMP: INSIGHT LEARNING

Wolfgang Köhler, a Gestalt psychologist, explored cognitive learning through his studies
with chimpanzees while marooned on an island during World War I. In a well-known
experiment (Köhler, 1925), a chimp named Sultan was presented with a challenge: a banana
placed just out of reach outside his cage. Initially, Sultan solved the problem by using a stick
to retrieve the banana—simple trial-and-error learning.

The challenge was then made harder. With the banana still out of reach, Sultan had two sticks
in his cage that could be joined together to create a longer pole. After unsuccessfully trying
each stick individually, Sultan suddenly had an "aha" moment. He realized he could combine
the sticks to make a longer tool to reach the banana. Köhler referred to this sudden
understanding as insight, where the solution to a problem comes from perceiving the
relationships between elements.
Köhler’s experiment demonstrated that insight is not purely the result of trial-and-error
learning, but a cognitive process involving the sudden integration of information. While early
learning theorists like Thorndike believed animals couldn’t show insight, Köhler’s work
challenged this view, sparking debate on the role of insight in animal learning.

● SELIGMAN’S DEPRESSED DOGS: LEARNED HELPLESSNESS

In Martin Seligman’s classic experiments, dogs first exposed to inescapable electric shocks later
failed to escape even when escape became possible, a phenomenon Seligman termed learned
helplessness (Seligman & Maier, 1967). More recent research underscores the importance of
dopamine signals from the nucleus accumbens, with low dopamine levels being linked to both
depression and a diminished ability to avoid threatening situations (Wenzel et al., 2018). This
suggests that learned helplessness, a condition in which individuals feel they have no control
over outcomes, can be tied to these biological processes.
Learned helplessness plays a significant role in coping with chronic or acute health conditions,
impacting both the individual with the disorder and family members making critical medical
decisions (Camacho et al., 2013; Sullivan et al., 2012).

The concept of learned helplessness also applies in educational settings. Many students believe
they are poor at subjects like math due to past failures. This belief may cause them to exert less
effort, reinforcing a cycle of failure. Such thinking reflects learned helplessness, where previous
negative experiences create a mental block, preventing students from trying harder.
Alternatively, it could be that they simply haven’t experienced enough success or feelings of
control in the subject.

Cognitive learning, which involves understanding the mental processes underlying behavior,
extends beyond just learned helplessness. In the next section, we explore observational learning,
often summarized as “monkey see, monkey do,” where individuals learn by watching others’
actions.

● OBSERVATIONAL LEARNING

Observational learning is the learning of new behavior through watching the actions of a model
(someone else who is doing that behavior). Sometimes that behavior is desirable, and sometimes it is
not, as the next section describes.

Bandura and the Bobo Doll Experiment

Albert Bandura’s classic study on observational learning involved preschool children watching
a model interact with toys (Bandura et al., 1961). In one condition, the model played non-
aggressively, ignoring a "Bobo" doll. In another, the model was aggressive toward the doll,
kicking, yelling, and hitting it with a hammer.

When left alone, children exposed to the aggressive model imitated the behavior, attacking the
doll. However, those who observed the non-aggressive model did not exhibit such behavior. This
demonstrated that learning can occur without direct reinforcement—a concept known as
learning/performance distinction.

In a later version of the study, children watched a film where the aggressive model was either
rewarded or punished. Children who saw the model rewarded replicated the aggression, while
those who saw the model punished did not, until they themselves were offered a reward to
imitate the behavior. This confirmed that all of the children had learned the aggression; whether
they performed it depended on the expectation of reward.
Bandura’s research has raised concerns about violent media exposure and its influence on
children's aggression. Studies over decades have linked media violence with increased
aggression in children and young adults (Allen et al., 2018; Anderson et al., 2015). On the
positive side, prosocial behavior has also been shown to increase when children watch media that
models helping behavior (Anderson et al., 2015; Prot et al., 2014).

The Four Elements of Observational Learning


Albert Bandura (1986) identified four essential elements for observational learning:
1. Attention: The learner must first pay attention to the model. Factors such as similarity to
the model and the model's attractiveness can increase attention. For example, a person at
a formal dinner party will observe someone familiar with etiquette to learn the correct
utensil to use.
2. Memory: The learner needs to retain what was observed. This might involve
remembering the steps of preparing a recipe from a cooking show or recalling how a
teacher solved an equation.
3. Imitation: The learner must be capable of reproducing the observed actions. For
instance, a 2-year-old might observe shoelace tying but lack the fine motor skills to
replicate it. Similarly, someone with weak ankles may remember a ballet move but be
unable to execute it.
4. Desire: The learner must have the motivation to imitate the behavior. If the learner
expects a reward, as in Bandura’s study, or observes the model being rewarded, they are
more likely to reproduce the behavior. People rarely imitate those who are unsuccessful
or punished.
A helpful way to remember these four elements is using the acronym AMID: Attention,
Memory, Imitation, and Desire.
VERBAL LEARNING

Verbal learning, distinct from conditioning, is a form of learning predominantly seen in humans.
Unlike conditioning, where associations are formed through repeated stimulus-response pairings,
verbal learning involves acquiring knowledge about objects, events, and their properties
primarily through language. This process enables individuals to understand and categorize the
world in terms of words and symbols, allowing complex information to be stored, organized, and
recalled.
In verbal learning, words do not function merely as labels; they develop associations with each
other, forming intricate mental networks. For example, hearing the word "sun" may evoke
related terms like "warmth," "light," or "summer." This associative learning mechanism supports
memory, comprehension, and communication, as humans are able to recall and relate words in
meaningful ways.
Psychologists have developed a range of experimental methods to explore how verbal learning
occurs, often in controlled laboratory environments. These methods focus on understanding how
different types of verbal materials are learned and recalled. For example, nonsense syllables
(combinations of letters that do not form actual words, like "BAF" or "VIK") are often used to
study basic memory processes without the influence of prior knowledge or meaning. Similarly,
familiar and unfamiliar words are used to observe the effects of prior experience on learning,
with familiar words typically being easier to learn and recall.
Other materials, such as sentences and paragraphs, allow researchers to examine more
complex language processing and the effects of context on memory.

METHODS USED IN STUDYING VERBAL LEARNING

 Paired-Associates Learning: This method resembles stimulus-stimulus (S-S) conditioning
and stimulus-response (S-R) learning and is commonly used for learning foreign-language
vocabulary by pairing words with their native-language equivalents. A list of paired associates is
created, where the first term acts as the stimulus and the second as the response; the pairs can
come from the same language or from different languages (see Table 6.3 for examples). In a
typical laboratory version, the first terms (stimuli) are nonsense syllables (consonant-vowel-
consonant), while the second terms (responses) are English nouns. Initially, both terms of each
pair are shown together, and the learner is instructed to recall the response when later presented
with the stimulus alone. The learning trials then begin: the stimulus words are shown one at a
time, and the participant attempts to recall the corresponding response term. If the response is
incorrect, the correct response is shown. This sequence continues until the participant recalls all
responses accurately without error. The total number of trials needed to reach this criterion is
the measure of paired-associates learning.
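The trial procedure just described can be sketched as a short simulation. This is an illustrative sketch only: the syllable-noun pairs and the toy "learner" (which memorizes a pair the first time it is corrected) are hypothetical, not part of any standard laboratory apparatus.

```python
import random

def paired_associates_trials(pairs, seed=0):
    """Simulate the paired-associates procedure: on each trial every
    stimulus is shown alone, an error triggers corrective feedback,
    and trials repeat until one full pass is error-free. The trial
    count is the measure of learning."""
    rng = random.Random(seed)
    learned = {}   # the learner's current stimulus -> response memory
    trials = 0
    while True:
        trials += 1
        errors = 0
        order = list(pairs)
        rng.shuffle(order)   # presentation order varies across trials
        for stimulus, response in order:
            if learned.get(stimulus) != response:
                errors += 1                   # incorrect or no answer...
                learned[stimulus] = response  # ...so the correct pair is shown
        if errors == 0:      # criterion: one errorless pass through the list
            return trials

# Hypothetical list: nonsense-syllable stimuli paired with English nouns
pairs = [("BAF", "chair"), ("VIK", "river"), ("ZOT", "apple")]
print(paired_associates_trials(pairs))  # 2: one corrective pass, then one errorless pass
```

Because this toy learner encodes a pair the moment it is corrected, it always reaches criterion in two trials; a real participant would need more, which is exactly what the trial count measures.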
 Serial Learning: This method is used to understand how participants learn and recall lists of
verbal items, such as nonsense syllables, familiar or unfamiliar words, or related words, in a
specific order. The participant is shown the entire list and asked to reproduce the items in the
same order. During the first trial, the initial item is shown, and the participant is asked to provide
the next item. If they cannot recall it, the experimenter provides the correct item, which then
becomes the new stimulus, and the participant is prompted to recall the subsequent item. This
"serial anticipation method" is repeated, with learning trials continuing until the participant
accurately recalls the entire list in the correct order.
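The serial anticipation method can be sketched the same way: each list item becomes the cue for the next, failed anticipations are corrected, and trials repeat until the whole list is produced in order. The syllable list and the toy learner below are hypothetical.

```python
def serial_anticipation_trials(items):
    """Simulate the serial anticipation method: each item cues the next;
    a failed anticipation is corrected by the experimenter, and trials
    continue until the list is reproduced in order without error. The
    toy learner memorizes each forward link (item -> next item) the
    first time it is corrected."""
    links = {}   # learner's memory of item -> following item
    trials = 0
    while True:
        trials += 1
        errors = 0
        for cue, target in zip(items, items[1:]):
            if links.get(cue) != target:
                errors += 1          # experimenter supplies the correct item
                links[cue] = target  # which then serves as the next stimulus
        if errors == 0:
            return trials

syllables = ["BAF", "VIK", "ZOT", "KEM"]       # hypothetical list
print(serial_anticipation_trials(syllables))   # 2: one corrective pass, then one errorless pass
```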
 Free Recall: In this method, participants are presented with a list of words at a fixed pace
and read each word aloud. Immediately after the presentation, they are asked to recall the
words in any order they prefer. The list may contain related or unrelated words,
generally consisting of more than ten items, with the presentation order varying across trials.
This method helps researchers study how participants organize words in memory for effective
recall. Studies reveal that words positioned at the beginning or end of the list are generally easier
to recall, while those in the middle are more challenging to remember.
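The finding that beginning and end items are recalled best can be made visible by scoring recall protocols by serial position. The sketch below is illustrative; the study list and the three participants' recall protocols are made-up data.

```python
def recall_by_position(study_list, recall_protocols):
    """Score free-recall data by serial position: for each position in
    the studied list, compute the proportion of participants who
    recalled that word (recall order does not matter)."""
    n = len(recall_protocols)
    return [
        sum(word in protocol for protocol in recall_protocols) / n
        for word in study_list
    ]

# Hypothetical data: three participants, any-order recall of a five-word list
study = ["pen", "dog", "cup", "map", "key"]
recalls = [
    ["pen", "key", "dog"],
    ["key", "pen"],
    ["pen", "map", "key"],
]
print(recall_by_position(study, recalls))
```

In this made-up data the first and last positions are recalled by everyone while the middle position is recalled by no one, the U-shaped pattern the text describes.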
DETERMINANTS OF VERBAL LEARNING

Verbal learning has undergone extensive experimental research, revealing that various factors
influence the learning process. Key determinants include the features of the material to be
learned, such as the length of the list and the meaningfulness of the content. Meaningfulness is
assessed through several metrics, including the number of associations elicited in a set time,
familiarity, frequency of usage, relationships among the words, and the sequential dependency of
each word on the previous ones. Nonsense syllables are often used in studies, with lists available
at different association levels. For consistent results, nonsense syllables should have uniform
association values.

Research has yielded several generalizations: as the list length and the occurrence of low-
association words increase, learning time also increases. However, this extended time
strengthens learning, aligning with the total time principle, which states that a fixed amount of
time is necessary to learn a given amount of material, regardless of how many trials it takes.
Essentially, the more time spent, the stronger the learning.

When participants are permitted free recall rather than following a fixed sequence, verbal
learning becomes organizational. In free recall, participants reorder items, often grouping them
by category. Bousfield first observed this in an experiment using a list of 60 words from four
semantic categories (names, animals, professions, vegetables) presented randomly. Participants
tended to cluster words by category in their recall, demonstrating a phenomenon called
category clustering. This finding underscores that even randomly presented words can be
organized during recall, driven by either category-based or subjective organization. Subjective
organization indicates that individuals may recall words in personally meaningful ways,
reflecting their unique organizational patterns.
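Category clustering in a recall protocol can be quantified very simply by counting adjacent same-category pairs, in the spirit of Bousfield's repetition measure. The categories and recall orders below are hypothetical, and this index is a simplified sketch rather than Bousfield's exact statistic.

```python
def category_repetitions(recall_order, category_of):
    """Count adjacent pairs in a recall protocol that come from the same
    semantic category: a simple clustering index, where higher values
    mean more clustering than a category-scattered ordering."""
    return sum(
        category_of[a] == category_of[b]
        for a, b in zip(recall_order, recall_order[1:])
    )

# Hypothetical recall protocols drawn from two of Bousfield's categories
category_of = {
    "carrot": "vegetable", "pea": "vegetable", "onion": "vegetable",
    "doctor": "profession", "lawyer": "profession", "nurse": "profession",
}
clustered = ["carrot", "pea", "onion", "doctor", "lawyer", "nurse"]
scattered = ["carrot", "doctor", "pea", "lawyer", "onion", "nurse"]
print(category_repetitions(clustered, category_of))   # 4 same-category adjacencies
print(category_repetitions(scattered, category_of))   # 0
```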

While verbal learning is typically intentional, it may also occur incidentally. During incidental
learning, participants might notice word features such as rhyming, identical starting letters, or
shared vowels. Thus, verbal learning encompasses both intentional and incidental aspects,
allowing participants to recognize specific features of words consciously or subconsciously.

DISCRIMINATION LEARNING

Discrimination learning refers to the process through which individuals learn to differentiate
between various stimuli and respond appropriately based on those distinctions. It involves
recognizing specific features that distinguish one stimulus from another, enabling a person to
make informed decisions or react differently depending on the stimulus presented. This type of
learning is critical for survival and adaptation, allowing individuals to navigate their environment
effectively.
Mechanisms of Discrimination Learning
1. Stimulus Discrimination:
This involves learning to respond differently to similar stimuli. For instance, a person
may learn to distinguish between different types of vehicles by their horn sounds. This
ability relies on the recognition of distinctive features of each stimulus.
2. Reinforcement:
Reinforcement plays a crucial role in discrimination learning. When a response to a
specific stimulus is rewarded (positive reinforcement) or a negative outcome is avoided
(negative reinforcement), the likelihood of that response occurring again increases. Over
time, this reinforcement strengthens the association between the stimulus and the
appropriate response.
3. Generalization:
Discrimination learning is often contrasted with generalization, where a response is
made to similar stimuli. For example, if a dog learns to sit when commanded, it may also
sit when hearing similar commands. Discrimination learning involves narrowing down
responses to specific stimuli while minimizing responses to others.
Applications of Discrimination Learning
1. Animal Training:
Discrimination learning is widely used in training animals. Trainers teach pets to respond
to specific commands or cues by rewarding them for correctly identifying the stimulus.
For example, a dog may learn to differentiate between the command "sit" and "stay."
2. Cognitive Development:
In humans, discrimination learning is essential for cognitive development. Children learn
to differentiate between letters, numbers, and shapes, which lays the foundation for
reading and mathematical skills.
3. Therapeutic Settings:
In psychological therapy, discrimination learning techniques can help individuals with
anxiety or phobias learn to differentiate between real threats and harmless stimuli. For
example, a person who fears dogs may undergo desensitization training to learn that not
all dogs pose a danger.
Factors Influencing Discrimination Learning
1. Salience of Stimuli:
More salient (noticeable) stimuli are easier to discriminate. For example, bright colors or
loud sounds attract more attention and facilitate faster learning.
2. Complexity of Stimuli:
Simpler stimuli are typically easier to discriminate. For instance, distinguishing between
two different shades of color may be more challenging than identifying a red light from a
green light.
3. Practice and Experience:
Repeated exposure to stimuli enhances the ability to discriminate. The more often an
individual encounters and interacts with different stimuli, the better their ability to
differentiate between them.
4. Feedback:
Receiving feedback after making a response can significantly improve discrimination
learning. Correct responses should be reinforced, while incorrect responses should be
corrected to facilitate better understanding.

Recent Trends in Learning in Psychology


The field of psychology is continuously evolving, influenced by advancements in technology,
research methodologies, and theoretical perspectives. Recent trends in learning encompass
various approaches, theories, and applications that reflect these changes. Here’s an in-depth look
at some of the most notable trends:

1. Neuroscientific Approaches to Learning


Recent developments in neuroscience have provided deeper insights into the biological
mechanisms underlying learning processes. Techniques like fMRI and EEG allow researchers to
observe brain activity in real-time, enabling them to understand how different areas of the brain
are activated during learning tasks. Key trends include:
● Understanding Memory Formation: Research is increasingly focused on the
neurobiological processes involved in memory formation, consolidation, and retrieval.
Studies examine how neurotransmitters like dopamine and glutamate influence learning
and memory.
● Brain Plasticity: The concept of neuroplasticity highlights the brain's ability to adapt
and reorganize itself. This trend emphasizes that learning can lead to structural changes in
the brain, supporting lifelong learning and recovery from injuries.

2. Cognitive and Constructivist Learning Theories


Cognitive and constructivist theories emphasize the active role of learners in constructing
knowledge. Recent trends include:
● Metacognition: There is growing interest in the role of metacognition—the awareness
and regulation of one's own learning processes. Research highlights strategies for
fostering metacognitive skills, such as self-assessment and reflection, which enhance
learning outcomes.
● Collaborative Learning: Emphasizing social interactions, collaborative learning
environments encourage students to work together, share perspectives, and construct
knowledge collectively. This trend aligns with constructivist principles, supporting the
idea that learning is a social process.

3. Technology-Enhanced Learning
The integration of technology in education and psychology has transformed learning
experiences:
● E-Learning and Online Education: The rise of e-learning platforms and online courses
allows for flexible and accessible learning. These platforms often employ multimedia
resources, interactive assessments, and social networking features to enhance engagement
and retention.
● Gamification: Incorporating game design elements into learning activities has become
increasingly popular. Gamification leverages competition, rewards, and engaging
narratives to motivate learners and enhance their overall experience.
● Artificial Intelligence (AI) and Adaptive Learning: AI-driven tools personalize
learning experiences by assessing individual strengths and weaknesses. Adaptive
learning systems adjust content and pacing based on real-time performance, fostering
personalized learning paths.

4. Focus on Emotional and Social Learning


Emotional and social dimensions of learning are gaining recognition in psychology:
● Emotional Intelligence (EI): Recent research highlights the importance of EI in
educational settings. Developing skills like empathy, self-regulation, and social
awareness can enhance students’ learning experiences and interpersonal relationships.
● Social-Emotional Learning (SEL): SEL programs aim to equip learners with essential
life skills such as emotional regulation, conflict resolution, and effective communication.
This trend acknowledges that emotional and social competencies are crucial for academic
success and overall well-being.

5. Mindfulness and Well-Being in Learning


Mindfulness practices are being integrated into educational settings to support mental health and
enhance learning:
● Mindfulness-Based Interventions: Programs focusing on mindfulness promote
attention, focus, and stress reduction. Research shows that mindfulness practices can
improve emotional regulation, enhance cognitive flexibility, and foster a positive learning
environment.
● Positive Psychology: This trend emphasizes strengths, resilience, and well-being in
educational settings. By fostering a growth mindset and focusing on positive outcomes,
educators can create an environment that supports motivation and lifelong learning.

6. Culturally Responsive Teaching


Culturally responsive teaching recognizes the diverse backgrounds of learners and emphasizes
the importance of inclusivity in education:
● Cultural Competence: Educators are encouraged to develop cultural competence by
understanding and respecting the diverse cultural backgrounds of their students. This
trend promotes teaching practices that are relevant and responsive to students' cultural
contexts.
● Social Justice Education: This approach integrates social justice issues into the
curriculum, encouraging students to critically examine societal structures and advocate
for equity and inclusion.

7. Interdisciplinary Approaches
There is a growing trend toward interdisciplinary research in learning, blending insights from
psychology, education, neuroscience, and sociology:
● Integrative Learning Models: Researchers are developing models that combine
cognitive, emotional, and social aspects of learning. This holistic approach acknowledges
that learning is influenced by a range of factors, including environmental, cultural, and
individual differences.
● Real-World Applications: Collaborative research initiatives focus on applying
psychological principles to address real-world challenges in education, such as dropout
rates, literacy development, and educational disparities.
GARCIA EFFECT
The Garcia Effect, named after psychologist John Garcia, refers to a phenomenon in which
animals, including humans, develop a strong aversion to a taste or food that has been associated
with illness or negative consequences. This conditioned taste aversion can occur even if the
illness happens hours after the consumption of the food. For example, if someone eats a
particular food and then becomes ill (due to a virus, not the food itself), they may develop an
aversion to that food in the future. This effect illustrates how powerful associations can be
formed between stimuli (in this case, taste) and experiences, highlighting the role of
evolutionary factors in learning. It suggests that organisms are biologically prepared to learn
certain associations more readily than others, emphasizing the importance of survival
mechanisms.
PREMACK PRINCIPLE (GRANDMA'S RULE)
The Premack Principle, often referred to as Grandma’s Rule, states that a more probable
behavior can be used to reinforce a less probable behavior. In simpler terms, it suggests that
individuals are more likely to engage in a less desired activity if it is followed by a more desired
activity. For example, a child may be encouraged to finish their homework (less preferred
activity) before they can play video games (more preferred activity). This principle underscores
the importance of motivation in learning, as it highlights how preferences can be utilized to
encourage specific behaviors or tasks.
Motivational Conflicts
Motivational conflicts occur when an individual experiences competing desires or goals that
may be mutually exclusive, leading to a struggle in decision-making. These conflicts can
significantly influence behavior, emotional well-being, and overall motivation. Understanding
the dynamics of these conflicts is essential for psychologists, educators, and anyone interested in
human behavior. Motivational conflicts can be categorized into three main types:
1. Approach-Approach Conflict
This type of conflict arises when an individual must choose between two desirable options. Both
choices are appealing, making the decision process difficult. For example, someone might face
an approach-approach conflict when deciding between two job offers that both promise
exciting career opportunities, attractive salaries, and positive work environments.
Implications:

● Emotional Tension: The individual may feel anxiety or indecision as they weigh the pros and
cons of each option.
● Satisfaction: Once a choice is made, individuals often experience relief and satisfaction, but
some may also experience regret over the option they did not choose.

2. Avoidance-Avoidance Conflict
This conflict occurs when an individual must choose between two undesirable options. Each
option presents its own set of negative consequences, making the decision even more
challenging. For instance, a student may face an avoidance-avoidance conflict when deciding
whether to complete a difficult assignment or take a failing grade in a course.
Implications:

● Procrastination: Individuals may delay making a decision or resort to avoidance behaviors, such
as procrastination, as they dread both options.
● Emotional Distress: This type of conflict can lead to increased stress, anxiety, and feelings of
helplessness, as the individual feels trapped between two unpleasant outcomes.

3. Approach-Avoidance Conflict
This conflict involves a single option that has both positive and negative consequences.
Individuals are drawn to the option due to its attractive aspects but simultaneously repelled by its
drawbacks. For example, a person may feel an approach-avoidance conflict when considering
accepting a job offer that requires relocation. The job may offer a significant salary increase and
exciting new challenges, but the move could also bring about stress, leaving behind familiar
surroundings and social networks.
Implications:

● Cognitive Dissonance: Individuals may experience cognitive dissonance as they struggle to
reconcile their desires with their fears and concerns.
● Decision Difficulty: The dual nature of the option can lead to prolonged indecision, as
individuals may oscillate between wanting to pursue the opportunity and fearing the associated
challenges.

4. Double Approach-Avoidance Conflict

Double approach-avoidance conflict occurs when an individual is faced with two options, each
having both positive and negative consequences. This type of conflict creates a more complex
decision-making scenario, as the individual must weigh the appealing and unappealing aspects of
both choices.
Example:
Consider a student deciding between two universities.

● Option 1: University A offers a prestigious program and excellent job prospects (positive
aspects) but is located far from home and has a high cost of living (negative aspects).
● Option 2: University B is closer to home, making it more affordable and providing emotional
support (positive aspects), but it lacks the same level of prestige and networking opportunities
(negative aspects).

In this scenario, the student faces a double approach-avoidance conflict as they evaluate the
attractive and undesirable features of both universities, leading to increased emotional tension
and complexity in their decision-making process.
Learning Styles
Learning styles refer to the preferred methods or approaches individuals use to acquire, process,
and retain new information. Understanding these styles can enhance educational strategies and
improve learning outcomes. Commonly referenced learning styles include:
● Auditory Learners: These individuals prefer listening to information and benefit from
discussions, lectures, and audio materials. They often find it easier to remember
information presented verbally.
● Visual Learners: Visual learners grasp concepts best through visual aids, such as
diagrams, charts, and videos. They often use colors, illustrations, and other visual tools to
enhance their understanding.
● Kinesthetic Learners: Kinesthetic learners prefer hands-on experiences and learn
through physical activity and manipulation of objects. They thrive in environments where
they can engage in experiments, role-playing, or other interactive activities.
● Tactile Learners: Similar to kinesthetic learners, tactile learners benefit from using
touch and movement to learn. They often engage in activities that allow them to handle
materials, conduct experiments, or perform tasks to understand concepts better.
COGNITIVE STYLES
Cognitive Styles are the characteristic ways in which individuals think, perceive, and remember
information. These styles impact how individuals approach problem-solving, learning, and
interaction with new material. Cognitive styles are relatively stable over time and differ from
person to person, often influencing educational and work performance. Here are some commonly
discussed cognitive styles:
● Field-Dependent vs. Field-Independent: Field-dependent individuals rely on external
cues and surrounding context to interpret information. They tend to excel in social
situations and group learning environments. Field-independent individuals, on the other
hand, prefer to analyze information in isolation from external context and tend to be self-
directed learners who thrive in analytical tasks.
● Verbal vs. Visual: Verbal learners process information more effectively through words,
whether written or spoken. They benefit from reading, writing, and verbal instructions.
Visual learners, in contrast, understand concepts best when presented with images, charts,
and other visual aids, allowing them to make connections through imagery rather than
language.
● Sequential vs. Global: Sequential learners favor a linear, step-by-step approach,
focusing on details in a logical order. They are methodical in their problem-solving and
prefer tasks that follow a clear, ordered path. Global learners, however, need to
understand the overall concept first. They tend to view information holistically,
synthesizing information broadly before attending to specific details.

MEMORY
Memory is an active system that receives information from the senses, puts that information
into a usable form, organizes it as it stores it away, and then retrieves the information from
storage.

STAGES OF MEMORY
Although there are several different models of how memory works, all of them involve the same
three processes: getting the information into the memory system, storing it there, and getting it
back out.

PUTTING IT IN: ENCODING


The first process in the memory system is to get sensory information (sight, sound, etc.) into a
form that the brain can use. This is called encoding. Encoding is the set of mental operations that
people perform on sensory information to convert that information into a form that is usable in
the brain’s storage systems. For example, when people hear a sound, their ears turn the vibrations
in the air into neural messages from the auditory nerve (transduction), which make it possible for
the brain to interpret that sound.

It sounds like memory encoding works just like the senses—is there a difference?
Encoding is not limited to turning sensory information into signals for the brain. Encoding is
accomplished differently in each of three different storage systems of memory. In one system,
encoding may involve rehearsing information over and over to keep it in memory, whereas in
another system, encoding involves elaborating on the meaning of the information—but let’s
elaborate on that later.

KEEPING IT IN: STORAGE


The next step in memory is to hold on to the information for some period of time in a process
called storage. The period of time will actually be of different lengths, depending on the system
of memory being used. For example, in one system of memory, people hold on to information
just long enough to work with it, about 20 seconds or so. In another system of memory, people
hold on to information more or less permanently.

GETTING IT OUT: RETRIEVAL


The biggest problem many people have is retrieval, that is, getting the information they know
they have out of storage.

TYPES OF MEMORY
1. Sensory Memory

Sensory memory holds sensory information (visual, auditory, tactile) for a very brief period,
from a fraction of a second to a few seconds. It serves as a buffer for incoming sensory stimuli,
allowing the brain to process them before they decay.

Iconic memory (visual) and echoic memory (auditory) are the two main types of sensory
memory. While brief, sensory memory is crucial for attention and helps direct our focus to
relevant stimuli.
2. Short-Term Memory (STM)

STM holds limited information for a short period, typically 15-30 seconds. It is often referred to
as "working memory" because it actively holds and manipulates information temporarily.

STM capacity is limited (often around 7±2 items), but techniques like chunking can increase this
capacity. STM plays a crucial role in immediate tasks, such as mental calculations and
comprehending language.
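The effect of chunking on STM capacity can be illustrated with a short sketch: a 12-digit string exceeds the 7±2 limit when held as individual digits, but regrouped into a few larger chunks it fits comfortably. The digit string and the chunk size of four are arbitrary choices for illustration.

```python
def chunk(digits, size):
    """Regroup a digit string into fixed-size chunks, reducing the
    number of items that must be held in short-term memory."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

number = "149217761066"          # 12 items: beyond the 7±2 span
chunks = chunk(number, 4)        # 3 items: "1492", "1776", "1066"
print(len(number), len(chunks))  # 12 3
print(chunks)
```

Here the 12 digits collapse into three familiar dates, showing that the 7±2 limit applies to chunks rather than to raw items.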

3. Working Memory

A subset of STM, working memory involves not just holding information but actively processing
it. Working memory is essential for complex cognitive tasks like reasoning, comprehension, and
problem-solving.

It includes different components, such as the phonological loop (for auditory information),
visuospatial sketchpad (for visual and spatial data), and central executive (for managing attention
and integrating information).

4. Long-Term Memory (LTM)

LTM stores vast amounts of information over extended periods, from minutes to an entire
lifetime. Unlike STM, LTM has a theoretically unlimited capacity.

LTM is divided into declarative (explicit) memory and non-declarative (implicit) memory.
Declarative memory includes facts and events, while non-declarative memory involves skills and
learned behaviors that do not require conscious recall (e.g., riding a bike).

Declarative (Explicit) Memory:

● Episodic Memory: Stores personal experiences, including specific events and situations.
Episodic memory enables people to recall past events with context and details like time
and place.
● Semantic Memory: Stores general knowledge about the world, such as concepts,
language, and facts, independent of personal experience.

Non-Declarative (Implicit) Memory:

● Procedural Memory: Involves memory for motor skills and actions, like riding a bike or
typing. Procedural memory is largely unconscious, as these skills become automatic with
practice.
● Priming: Occurs when exposure to a stimulus influences a response to a subsequent
stimulus, enhancing memory recall or recognition.
● Conditioned Responses: Formed through classical and operant conditioning, these
memories involve learned associations, like salivating at the smell of food.
NEURAL BASIS OF MEMORY

Memory processes are supported by specific brain regions:

1. Hippocampus

Critical for encoding and transferring new explicit memories to long-term storage. It is
particularly involved in spatial memory and contextual learning, helping individuals remember
where and when events occurred. The hippocampus is highly plastic and involved in memory
consolidation, a process where memories are stabilized after initial acquisition.

2. Amygdala

Plays a role in emotional memories, particularly those associated with fear and reward. The
amygdala interacts with the hippocampus to enhance the consolidation of emotionally significant
memories, making them more vivid and lasting.

3. Prefrontal Cortex

Involved in working memory and executive functions, such as decision-making, planning, and
regulating attention. The prefrontal cortex manages attention and helps retrieve information from
long-term memory, particularly for tasks requiring sustained focus and manipulation of
information.

4. Cerebellum

Essential for procedural memory and motor learning, the cerebellum aids in the coordination of
complex motor actions and muscle memory. It allows individuals to perform learned skills, such
as riding a bike, without conscious effort.

5. Basal Ganglia

Also involved in procedural memory, particularly habit formation and learned movements. It
works with the cerebellum to refine motor skills through practice and repetition.

6. Temporal Lobes

Home to the hippocampus and other memory-associated structures, the temporal lobes are
integral for long-term memory, especially episodic and semantic memories.

IMPACT OF EMOTIONS ON MEMORY

Emotions play a significant role in memory, influencing how information is encoded, stored, and
recalled. Strong emotional experiences tend to be remembered more vividly and for longer
durations. This is due to the amygdala’s involvement in memory consolidation, which prioritizes
emotionally charged memories. However, extreme stress or trauma can impair memory
encoding, leading to incomplete or fragmented recollections, as seen in some cases of trauma-
related disorders.

Retrieval Cues

Retrieval cues play a significant role in how we remember information by helping us access
memories stored in long-term memory (LTM). While maintenance rehearsal, or repeating
information, offers only one type of retrieval cue (the sound of the word or phrase), elaborative
rehearsal provides multiple retrieval cues by linking the information to its meaning and
connecting it with existing knowledge. This depth of encoding makes information easier to
retrieve since multiple cues are stored alongside it (Karpicke, 2012; Pyc et al., 2014; Robin &
Moscovitch, 2017).

Retrieval cues aren’t limited to direct associations with the material. According to the encoding
specificity principle, our environment and context at the time of learning can also become
retrieval cues, aiding memory recall when those conditions are recreated. This phenomenon is
known as context-dependent memory, where the physical environment present during learning
serves as a powerful retrieval aid.

For example, if you learned information in a specific room, taking a test in the same room might
improve recall. Similarly, in a classic study by Godden and Baddeley (1975), scuba divers
learned a list of words either underwater or on land. The divers recalled the words significantly
better when their recall environment matched the environment in which they originally learned
the words. This illustrates how environmental cues, such as being in water or on land, can
facilitate memory retrieval.

Additionally, state-dependent learning refers to cues related to internal states, such as mood or
physiological condition. Memories formed in a particular emotional or psychological state are
often easier to retrieve when in a similar state. For example, Eich and Metcalfe (1989) showed
that participants recalled words better when their mood matched the mood they were in during
learning. This research underscores the impact of retrieval cues, which may be either external
(environment) or internal (state), on enhancing memory access.

RECALL AND RECOGNITION

There are two kinds of retrieval of memories: recall and recognition. It is the difference between
these two retrieval methods that makes some exams seem harder than others. In recall, memories
are retrieved with few or no external cues, like filling in blanks on an application form.
Recognition, on the other hand, involves looking at or hearing information and matching it to
what is already in memory. A word-search puzzle, where the words are already written down and
simply need to be circled, is an example of recognition.

Recall: HMM... LET ME THINK


When someone is asked a question such as "Where were you born?" the question acts as the cue
for retrieval of the answer. This is an example of recall, as are essay, short-answer, and fill-in-
the-blank tests used to measure memory for information (Borges et al., 1977; Gillund & Shiffrin,
1984; Raaijmakers & Shiffrin, 1992). Whenever people struggle for an answer, recall has failed
(at least temporarily). Sometimes the answer feels "on the tip of the tongue"—the tip-of-the-tongue (TOT) phenomenon (Brown & McNeill, 1966; Burke et al., 1991; Forseth et al., 2018). People may know certain characteristics of the word, such as its first letter or number of syllables, yet be unable to retrieve its full sound or spelling. TOT states become more common with age but aren't a sign of dementia unless they increase suddenly (Ossher et al., 2012).

Serial Position Effect

An interesting feature of recall is a kind of "prejudice" in memory retrieval, where information at the beginning and end of a list tends to be remembered more easily and accurately. This is
known as the serial position effect (Murdock, 1962). Words at the beginning of a list are
remembered better than those in the middle—this is the primacy effect (Craik, 1970; Murdock,
1962). The end of the list shows an increase in recall, known as the recency effect, as the last
few words were just heard and remain in short-term memory (Bjork & Whitten, 1974; Murdock,
1962). Business schools sometimes advise students to avoid the middle slots in a day of job interviews, since candidates who go first or last tend to be more memorable to interviewers.

Recognition: HEY, DON’T I KNOW YOU FROM SOMEWHERE?

The other form of memory retrieval is recognition—the ability to match a piece of information
or a stimulus to a stored image or fact (Borges et al., 1977; Gillund & Shiffrin, 1984;
Raaijmakers & Shiffrin, 1992). Recognition is usually easier than recall because the cue is the
actual object, word, or sound to be detected as familiar and known. Tests like multiple-choice,
matching, and true–false rely on recognition. Recognition is particularly accurate for images,
especially human faces (Russell et al., 2009; Standing et al., 1970).

However, recognition isn't foolproof. Sometimes there is enough similarity between a new stimulus and one already in memory to create a false positive—believing one recognizes something that is not actually in memory (Kersten & Earles, 2016; Muter, 1978). False
positives can have serious consequences, as illustrated in a Delaware case where eyewitnesses
mistakenly identified an innocent priest as a robbery suspect due to suggestive lineup practices.

SERIAL POSITION EFFECT


The serial position effect explains why we remember items at the beginning and end of a list
better than those in the middle. This phenomenon is influenced by the primacy and recency
effects.
 Primacy Effect: This refers to improved recall for the first items in a list. These items
receive more rehearsal and have a better chance of moving into long-term memory, as
they don’t compete with later information.
 Recency Effect: This effect is observed for items at the end of a list, which are often still
in short-term memory during recall. With no new items to displace them, they are readily
accessible, making them easier to remember.

AUTOMATIC ENCODING: FLASHBULB MEMORIES

Some long-term memories require effortful encoding or maintenance rehearsal to move from
short-term memory (STM) into long-term memory (LTM). However, many memories are stored
with little effort through a process known as automatic encoding (Kvavilashvili et al., 2009;
Mandler, 1967; Schneider et al., 1984). People often remember everyday details, such as the
passage of time, spatial layout, or frequency of events, without conscious effort. For example, a
person might not try to remember how many times cars have passed by but could still estimate it
as “often” or “hardly any.”

A unique type of automatic encoding happens when an unexpected, emotionally charged event
creates a highly vivid memory. Known as flashbulb memories (Hirst & Phelps, 2016; Kraha &
Boals, 2014; Neisser, 1982), these memories capture details of intense moments, almost like a
mental “flash picture.” Flashbulb memories are often linked to significant, widely-shared events.
For instance, Baby Boomers might remember where they were when President Kennedy was
shot, while Millennials may recall the September 11 attacks. More recent generations may
remember tragic events like the 2018 Parkland shooting. Personal flashbulb memories also
occur, capturing details of meaningful or emotional experiences such as first dates or
graduations.

The vividness of flashbulb memories is thought to be tied to the strong emotions felt during the
event. Emotional reactions trigger the release of hormones that enhance memory formation
(Dolcos et al., 2005; McEwen, 2000; McGaugh, 2004), and memory-enhancing proteins may
also play a role (Korneev et al., 2018). However, research shows that while flashbulb memories
feel vivid, they are not always accurate. Despite feeling real, these memories can decay or
change over time, just like other memories (Neisser & Harsch, 1992). Studies indicate that even
memories of stressful events, such as witnessing a crime, are often less accurate than other
memories (Loftus, 1975), reminding us that no memory is entirely immune to the effects of time
and alteration.

RECONSTRUCTIVE NATURE OF LONG-TERM MEMORY

Many people believe that recalling a memory is like an “instant replay.” However, as new long-
term memories are formed, old ones can be altered or lost (Baddeley, 1988). In reality,
memories, including vivid flashbulb memories, are never completely accurate, and inaccuracies
tend to increase over time. Sir Frederic Bartlett (1932), a memory schema theorist, viewed
memory as a storytelling process, where retrieval is akin to problem-solving. Individuals
reconstruct past events by inferring from current knowledge and available evidence (Kihlstrom,
2002a).

Elizabeth Loftus and other researchers (Hyman, 1993; Hyman & Loftus, 1998, 2002) support the
idea of constructive processing in memory retrieval. In this view, memories are literally “built”
or reconstructed from information encoded earlier. Each time a memory is retrieved, it may be
altered or revised to incorporate new details or exclude previously included ones.

An example of memory reconstruction is when individuals, after learning new information, revise their memories to reflect a belief that they "knew it all along." They may discard incorrect
details and replace them with more accurate post-event information. This phenomenon, known as
hindsight bias (Bahrick et al., 1996; Hoffrage et al., 2000; Von der Beck et al., 2017), illustrates
how people often fall victim to the belief that they could have predicted an outcome beforehand,
as seen in "Monday morning quarterbacking" when discussing sports outcomes.

MEMORY RETRIEVAL PROBLEMS


Despite some claims of "total recall," true perfect memory is unlikely. Several factors contribute
to difficulties in accurate recall.

The Misinformation Effect

The misinformation effect occurs when misleading information presented after an event alters a
person's memory of that event (Loftus et al., 1978). For example, if eyewitnesses to a crime talk
to each other, one person's description may influence another's recall. In a study, participants
viewed a slide presentation of a traffic accident featuring a stop sign. However, a subsequent
summary incorrectly referred to it as a yield sign. Participants exposed to this misleading
information were less accurate in recalling the sign's nature compared to those who received no
such misinformation. This illustrates how new information, even in a different format (e.g.,
written vs. visual), can reconstruct memories inaccurately.

False Memory Syndrome

False memory syndrome refers to the development of inaccurate memories through suggestion,
often during hypnosis (Frenda et al., 2014). While hypnosis can aid in recalling real memories, it
can also lead to the creation of false ones, increasing confidence in both true and false memories
(Bowman, 1996). Therapists may unintentionally induce false memories during sessions.
Research indicates that even mindfulness meditation can enhance the likelihood of false
memories (Wilson et al., 2015).

False memories are constructed in the brain similarly to real ones, particularly when visual
imagery is involved (Gonsalves et al., 2004). For example, studies using fMRI scans show that
people cannot always distinguish between real and imagined visual images. This can explain
how prompting someone to recall a specific person at a crime scene may lead them to
inaccurately remember that individual being present.

In her "Lost in the Mall" study, Elizabeth Loftus explored how memories can be manipulated
and how people can come to remember events that never actually happened. The experiment
aimed to implant a false memory into participants’ minds: being lost in a shopping mall as a
child. Here’s how it worked:

Participants were asked to recall four childhood events that family members provided. Three of
these stories were real, and one—the story of getting lost in the mall—was entirely fabricated.
Family members played a role in making the false story feel plausible, adding details that made it
sound real. When asked to recall these events, many participants were able to provide details
about the imaginary mall experience. They remembered things like how they felt, what they saw,
and even what the "rescuer" looked like.

This study showed how easily memories can be distorted or created from scratch. Loftus
demonstrated that through suggestion and association with familiar memories, it’s possible for
someone to "remember" vivid details of an event that never occurred. The implications extend
beyond childhood memories: they raise questions about the reliability of eyewitness testimony
and suggest that our memories are not static records but are instead open to revision,
reinterpretation, and even invention.

Trustworthiness of Memories

While false memories complicate the reliability of recollections, certain factors affect their
plausibility. Research by Kathy Pezdek indicates that only plausible events can typically be
implanted as false memories. In her studies, children were more likely to “remember” plausible
false events (like getting lost) compared to implausible ones (like receiving a rectal enema)
(Hyman et al., 1998; Pezdek et al., 1997).

However, Loftus’s earlier work suggests that even implausible memories can be implanted with
the right suggestion, especially if the event is framed as typical of others’ experiences. Two key
steps must occur for individuals to reinterpret their thoughts about false events as real memories:

1. The event must be made plausible.


2. Individuals must receive information that encourages them to believe it could have happened to
them.

The personality traits of those reporting such memories also play a role. For example, individuals
claiming to have experienced alien abductions were more likely to recall false memories
compared to controls (Clancy et al., 2002). Factors such as susceptibility to hypnosis, depression
symptoms, and unusual beliefs predict higher rates of false recall.

Parallel Distributed Processing (PDP) Model

The Parallel Distributed Processing (PDP) model, also known as the connectionist model,
presents a framework for understanding cognitive processes through a network of computational
elements called units, inspired by neural functioning. This model posits that information is
represented in the brain through various activation patterns among these units, which can have
activation values ranging from 0 to 1. The connections between units can be either positive,
enhancing activation, or negative, diminishing it. Knowledge is stored within these connections,
influencing how input data is processed and how memories are recalled.

For example, if a person is amidst two groups discussing a topic, they may recall certain
information from both groups simultaneously due to the overlapping activation patterns. This
reflects the nature of parallel processing, where multiple processes occur simultaneously,
contrasting with serial processing, which handles tasks in a sequential manner and often leads to
slower and less accurate results.

Main Principles of the PDP Framework

1. Cognitive Processes: The PDP model suggests that all cognitive behaviors are governed
by the same underlying principle: units adjust their activation levels based on the total
input they receive through their connections. Different models use different rules for
aggregating inputs and updating activations, and these rules shape how activation flows
through the network.
2. Interactive Processing: The processing in PDP models is dynamic and interactive. The
transfer of activation is bi-directional; when a signal is sent from one unit to another, the
receiving unit also sends feedback back. This interaction allows for ongoing adjustments
and modifications to the data, enhancing the model's dynamism.
3. Knowledge Encoding: Unlike traditional models that store data in separate structures,
knowledge in the PDP framework is encoded directly in the connections between units.
This means that the way the network behaves is determined by the interconnections and
their activation patterns during processing.
4. Continuous Processing: In the PDP model, processing, learning, and representations are
continuous. Representations are coded as distributed activation patterns across units,
allowing for similarities among representations. These similarities form the basis for
generalizing activations within the network, indicating that similar activation patterns
yield familiar outputs.
5. Environmental Dependence: The PDP model emphasizes the importance of the
environment's statistical structure in understanding cognitive processing. Learning occurs
through the activation patterns formed in neural networks, with daily experiences
contributing to errors and expectations that shape processing.
6. Patterns of Activation: Within the PDP framework, stimuli are represented by patterns
of activation across multiple units. Each input corresponds to unique activation patterns
among numerous neurons, with each neuron contributing to the representation of various
cognitive contents such as colors, images, words, and structures.
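The core mechanics described above—units with activations between 0 and 1, positive and negative connections, and simultaneous (parallel) updating—can be illustrated with a minimal toy sketch. This is not the full PDP framework, only an invented three-unit example; the weights and unit roles are my own assumptions for demonstration.

```python
import math

def sigmoid(x):
    # Squashes any weighted input into the 0..1 activation range.
    return 1.0 / (1.0 + math.exp(-x))

def update(activations, weights):
    # One parallel step: every unit recomputes its activation from the
    # weighted sum of ALL inputs at once, rather than one unit at a time.
    n = len(activations)
    return [sigmoid(sum(weights[i][j] * activations[j] for j in range(n)))
            for i in range(n)]

# Three toy units: unit 0 excites unit 1 (weight +2.0, a positive
# connection), while unit 2 would inhibit it (weight -2.0, negative).
weights = [
    [0.0, 0.0, 0.0],
    [2.0, 0.0, -2.0],
    [0.0, 0.0, 0.0],
]
acts = update([1.0, 0.5, 0.0], weights)
```

With unit 0 fully active and the inhibitory unit 2 silent, unit 1's activation rises toward 1; had unit 2 been active instead, the negative connection would have pushed unit 1's activation down, illustrating how knowledge stored in the weights determines the resulting activation pattern.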

LEVELS OF PROCESSING (LOP) THEORY

The Levels of Processing (LOP) Theory by Craik and Lockhart (1972) posits that memory
retention is influenced by the depth at which information is processed. The theory contends that
the more deeply information is processed, the more enduring the resulting memory trace, in
contrast to the structure-based Multi-Store Model, which emphasizes separate short-term and
long-term memory systems. Instead, LOP theory suggests memory is simply the by-product of
information processing and lacks distinct structures like STM and LTM.

Depth of Processing

Craik and Lockhart defined depth as the degree of meaningful engagement with a stimulus,
rather than the number of analyses performed on it. They proposed that memory retention
depends on whether information undergoes shallow or deep processing:
1. Shallow Processing: This involves basic sensory analysis, such as:
o Structural Processing: Encoding only physical qualities (e.g., typeface).
o Phonemic Processing: Encoding based on sounds (e.g., rhyming).
o Shallow processing often involves maintenance rehearsal, or mere repetition, leading to
short-lived memory traces.
2. Deep Processing: This involves semantic encoding, where one engages in meaningful
analysis, linking information with prior knowledge or other concepts. For example,
understanding the meaning of a word and relating it to other concepts leads to better
recall, as it involves elaborative rehearsal, a process associated with more sustained
memory retention.

Key Study by Craik and Tulving (1975)

Craik and Tulving investigated how different types of processing impact recall. Participants
processed words under three conditions: structural, phonemic, and semantic. After a distraction
task, they were asked to identify the originally presented words from a larger list. The study
found that participants recalled more semantically processed words than phonemically or
structurally processed words. This confirmed that deeper processing, involving elaboration and
meaning, results in improved recall.

Real-Life Applications

The LOP model has practical applications in enhancing memory retention:

 Reworking: Rephrasing information in one’s own words.


 Method of Loci: Associating items with familiar locations.
 Imagery: Visualizing concepts to aid recall.

These strategies encourage deep, semantic processing, making information easier to remember and useful for activities like studying.

Strengths

LOP theory shifted memory research away from structure-based models toward understanding
processing depth. This was significant in demonstrating that memory depends on more than just
storing information in distinct memory types (STM/LTM). It also led to numerous studies
supporting the effectiveness of deep, semantic processing in enhancing memory recall.
Additionally, LOP theory has practical value in education and study techniques, as deeper
processing aids in better retention.

Weaknesses

Despite its contributions, LOP theory has several limitations:

 Lack of Explanation: The theory describes depth but fails to explain why deeper processing
results in better memory.
 Vague Concept of Depth: The concept is difficult to objectively measure, and it is challenging to
isolate depth from other factors like effort or time spent on processing.
 Circular Reasoning: Deep processing is predicted to yield better memory, but memory
effectiveness is also used to define depth, creating a potential circular argument.
 Evidence for Memory Structures: Studies such as those on H.M. and the serial position effect
suggest that distinct memory structures (STM/LTM) are present, which LOP theory overlooks.

HIERARCHICAL NETWORK MODEL OF SEMANTIC MEMORY (COLLINS & QUILLIAN, 1969)

The Hierarchical Network Model of Semantic Memory by Collins and Quillian (1969)
explains how our memory for general knowledge (like facts, colors, and sounds) is organized.
Unlike personal memories, semantic memory holds information we learn over time.

How the Model Works

 Nodes: These are main concepts, like "bird" or "animal."
 Properties: These are features of the concepts, like "brown" or "has wings."

When we try to remember something, the brain activates a “node,” which spreads to related
ideas. This makes it easier to retrieve information from memory.

Why It’s Useful in Classrooms

This model suggests students may need more time to respond to questions. In classrooms, teachers often expect answers within a second, which may not be enough time for students to find and say their answers. According to the model, remembering involves activating linked ideas, which can take a few extra moments.

Cognitive Economy

Cognitive economy means storing information efficiently. For example, instead of repeating that fish and birds are “animals” in every category, this model keeps general facts at a higher level. This structure helps us remember without unnecessary repetition.

Think about how we categorize animals in our minds:

 At the Top: Imagine the broad category Animal. This is where we group all types of living
creatures.
o Branching Out: From Animal, we can break it down into different groups, like:
 Birds: These are creatures that can fly and often have feathers.
 Properties of Birds:
 They have wings.
 They often build nests.
 Fish: These are the ones that live underwater.
 Properties of Fish:
 They breathe through gills.
 They typically have fins.
 Mammals: This group includes animals like dogs and cats.
 Dogs:
 They have fur.
 They bark to communicate.
 Cats:
 They also have fur.
 They meow and are often independent.

How It Works:

When you think about a dog, your brain quickly activates related concepts. You might remember
that a dog is a mammal and also an animal. This network of associations makes it easier to
recall details about dogs—like that they have fur and bark.

DEFINITION OF FORGETTING

Forgetting can be defined as the loss or inability to retrieve information that was once available
in memory. It is a natural and adaptive process that allows individuals to prioritize and manage
information by discarding details that are no longer relevant, thus freeing up cognitive resources
for more critical and current information. This selective memory process helps prevent mental
overload and is essential for effective functioning. As William James suggested, the ability to
forget is nearly as crucial as remembering, as it enables us to focus on information that supports
current goals and decisions, while reducing mental clutter from less relevant or outdated
information.

EBBINGHAUS AND THE FORGETTING CURVE

Hermann Ebbinghaus was one of the first to look deeply into how and why we forget things. To
do this, he took an unusual approach by creating lists of "nonsense syllables"—strings of letters
like "GEX" and "WOL" that had no meaning. He wanted to see how well he could memorize
them without any familiar words to help him out. After memorizing a list, he’d take a break, then
test himself to see what he remembered. He recorded his results, and the outcome was eye-
opening: he noticed that most forgetting happens quickly, especially within the first hour of
learning. After that, the rate of forgetting slows down, creating what we now call the "curve of
forgetting." This pattern holds even when we study meaningful material, although we forget
meaningful information more slowly than nonsense.
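The shape of the curve—steep loss at first, then a flattening—is often approximated in textbooks as exponential decay, R = e^(-t/S), where R is the fraction retained, t is time since learning, and S is a "memory strength" constant. Note this is a standard simplification, not Ebbinghaus's own equation, and the hour-scale parameter below is an arbitrary choice for illustration.

```python
import math

def retention(t_hours, strength=1.0):
    # Fraction of material still retained t hours after learning,
    # under the exponential approximation R = e^(-t/S).
    return math.exp(-t_hours / strength)

# Most forgetting happens early; later intervals lose far less.
drop_first_hour = retention(0) - retention(1)  # large early loss
drop_next_hour = retention(1) - retention(2)   # much smaller loss
```

With strength S = 1, roughly 63% of the material is lost in the first hour but only about 23% more in the second, reproducing the curve's characteristic early plunge and later flattening. Slower-decaying meaningful material would correspond to a larger S.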

Ebbinghaus also found that how we study matters a lot. Instead of cramming all at once, he
found it’s much better to space out study sessions—this is called “distributed practice.” Research
since then has shown that studying in smaller chunks with breaks leads to better memory
retention than trying to take in all the information at once. For example, a 3-hour cram session
might feel productive, but breaking it up into shorter, 30- to 60-minute sessions with breaks in
between actually helps your brain hold onto the material longer. This approach takes the pressure
off and gives your brain time to process and store information more effectively.

Massed Practice:
Massed practice, often known as cramming, involves studying a large amount of material in a
single, uninterrupted session. Although this approach can give the illusion of productivity, it
generally results in poorer long-term retention. Without breaks, the brain becomes fatigued,
making it harder to absorb and retain information effectively. While massed practice might lead
to short-term recall, it often doesn’t support lasting memory.

Distributed Practice:
Distributed practice involves breaking study sessions into smaller chunks with breaks in
between. This approach allows the brain to consolidate information over time, making it easier to
retain in the long run. Studies have shown that distributed practice significantly enhances
memory retention compared to massed practice. By spacing out learning, individuals can avoid
mental fatigue and improve their ability to recall information later.

REASONS FOR FORGETTING

There are several theories that explain why people forget things. Here are three significant ones:

1. Encoding Failure
One simple reason for forgetting is encoding failure, where some information never gets encoded
into memory. For instance, if a friend speaks to you while you're distracted, you may not truly
process what they said. This means the information doesn’t pass beyond sensory memory. A
classic study by Nickerson and Adams (1979) tested this by asking people to identify a correct
view of a penny, an object they see often. Most people struggle with this because they haven't
paid enough attention to encode the details into long-term memory.

2. Memory Trace Decay Theory


Another theory involves memory traces—physical changes in the brain that occur when a
memory is formed. Over time, if these traces are not used, they may decay and fade away. This is
similar to how a path in grass becomes less visible if it is not frequently walked on. Forgetting in
sensory and short-term memory can easily be explained as decay; information that isn’t actively
attended to will fade. However, decay theory (or disuse) raises questions regarding long-term
memory, as sometimes people can recall memories they thought were lost.

3. Interference Theory
Interference theory suggests that forgetting occurs when other information interferes with the
retrieval of stored memories. Long-term memories might be stored permanently but can become
inaccessible due to interference from newer or older information. There are two types of
interference:

 Proactive Interference: This occurs when older information interferes with the learning
or retrieval of new information. For example, if you need to remember a new password
but keep recalling your old one, you are experiencing proactive interference. Another
common example is when someone changes their phone number but continues to
remember their old number instead of the new one.
 Retroactive Interference: This happens when new information interferes with the
retrieval of older information. For instance, if you need to recall an old password but only
remember your new one, the new information is retroactively interfering with your
memory of the old password.

ENGRAMS AND KARL LASHLEY'S CONTRIBUTION TO DECAY THEORY

Engrams are theoretical constructs in the study of memory, representing the physical traces or
changes in the brain that correspond to the storage of memories. The concept of engrams was
significantly developed by psychologist Karl Lashley in the early to mid-20th century. His
research aimed to understand how memories are stored in the brain and what happens to them
over time.

Lashley’s Research

Lashley conducted a series of experiments with rats in the 1920s and 1930s, where he trained
them to navigate mazes. After they learned the maze, he would perform lesions on various parts
of their brains to determine where memories were stored. His findings were surprising; he
discovered that no single area of the brain was solely responsible for the storage of memories.
Instead, he found that memory traces (engrams) seemed to be distributed across different
regions of the brain. This led him to propose the idea of mass action, suggesting that the amount
of memory loss correlated with the amount of brain tissue removed, rather than the specific
location of the lesion.

MOTIVATED FORGETTING
Motivated forgetting is a psychological phenomenon where individuals consciously or
unconsciously forget information, memories, or experiences that are emotionally distressing,
threatening, or uncomfortable. This process can be understood through two primary mechanisms:
repression and suppression.

Mechanisms of Motivated Forgetting

1. Repression:
o Definition: Repression is an unconscious defense mechanism proposed by Sigmund
Freud. It involves the automatic forgetting of distressing memories or thoughts that are
deemed too painful or anxiety-provoking to recall.
o Function: This mechanism serves to protect the individual from emotional pain or
psychological distress. For example, a person who has experienced trauma may
unconsciously block out memories of the event to avoid the associated pain.
o Example: A child who experiences abuse might repress those memories, making it
difficult for them to recall the details of the abuse later in life.
2. Suppression:
o Definition: Suppression is a conscious effort to forget unwanted memories or thoughts.
Unlike repression, suppression involves a deliberate attempt to avoid thinking about
specific experiences or emotions.
o Function: This mechanism can be useful for managing anxiety or stress in the short
term, allowing individuals to focus on more immediate tasks or responsibilities without
being distracted by distressing thoughts.
o Example: A student who receives a poor grade might consciously choose to suppress
thoughts about it in order to concentrate on studying for an upcoming exam.

The Role of Emotion

Motivated forgetting is often linked to emotional experiences. Memories associated with strong
negative emotions, such as fear, shame, or grief, are more likely to be subjected to motivated
forgetting. This is particularly evident in cases of trauma, where individuals may find it
challenging to confront painful memories directly.

Evidence and Research

Research on motivated forgetting includes various experimental paradigms, such as the think/no-think task, where participants are instructed to either think about or avoid thinking
about specific memories. Studies have shown that people can successfully suppress memories
through intentional efforts, leading to poorer recall of those memories later.

Additionally, neuroscientific research has indicated that different brain regions are involved in
the processes of suppression and the retrieval of memories, highlighting the complexity of how
memories are managed in the mind.

Implications

Understanding motivated forgetting has significant implications for therapy and mental health.
For individuals struggling with traumatic memories, therapeutic approaches, such as cognitive-
behavioral therapy (CBT) or trauma-focused therapy, may help them process and confront
repressed or suppressed memories, leading to healing and recovery. Conversely, motivated
forgetting can also hinder emotional healing if it leads to the avoidance of addressing important
issues.

Distorted Memories

Distorted memories occur when the details of a memory are altered, leading to a recall that does
not accurately reflect what actually happened. This can happen due to several factors:

1. Misinformation Effect: When individuals are exposed to misleading information after an event, it can distort their memory of the original event. For instance, if eyewitnesses
are given incorrect details about a crime scene, their recollections may change to
incorporate this new, inaccurate information.
o Example: In a classic study by Loftus and Palmer (1974), participants viewed a video of a
car accident and were later asked questions that included misleading information, such
as the speed of the vehicles involved. Those who were asked about the “smashed” cars
reported higher speeds and more severe accidents compared to those asked about “hit”
cars.
2. Memory Reconsolidation: Every time a memory is recalled, it may be altered before it
is stored again. This means that memories are not static; they can be updated or changed
each time they are retrieved. As a result, the memory may become distorted over time.
3. Emotional Influence: Strong emotions associated with an event can impact how
memories are encoded and retrieved. For example, traumatic events may be remembered
vividly, but details surrounding those events can be distorted due to the emotional
intensity involved.
4. Schemas and Scripts: Our pre-existing knowledge and expectations (schemas) can shape
how we interpret and remember new information. When we encounter new experiences,
we may unintentionally fill in gaps based on these schemas, leading to distorted
memories.
5. Confabulation: This involves the unintentional creation of false memories to fill in gaps
in recollection. Individuals may confidently recall events that never happened or distort
real memories without realizing they are fabricating details. Confabulation often occurs
in people with certain memory disorders but can happen to anyone when trying to make
sense of incomplete memories.

Misattributed Memories

Misattributed memories involve recalling a memory but incorrectly assigning it to the wrong
source or context. This can happen for various reasons:
1. Source Amnesia: This refers to the inability to remember where, when, or how one
learned something, leading to confusion about the origin of a memory. Individuals may
remember the content of a memory but fail to accurately attribute it to the correct source.
o Example: Someone might hear a piece of information from a friend and later recall it as
something they read in a book or saw in a movie, leading to misattribution.
2. False Memories: These are recollections of events that did not actually occur or are
significantly altered from reality. False memories can be created through suggestive
questioning, leading individuals to believe they experienced events that never happened.
o Example: In research by Loftus and Pickrell (1995), participants were asked to recall
childhood events, including a fictitious event of getting lost in a shopping mall. Many
participants later recalled details of the fabricated event, demonstrating how easily false
memories can be formed.
3. Confusing Familiarity: Sometimes, individuals might confuse the familiarity of a
memory with its accuracy. For instance, a person may feel that they’ve seen a face before
and mistakenly believe they know the person, even if they’ve never met them.
4. Cross-Race Effect: This phenomenon occurs when individuals have difficulty accurately
identifying faces of people from racial groups other than their own. This can lead to
misattributions in
eyewitness testimony, where witnesses may mistakenly identify an individual based on
limited exposure or familiarity with a particular racial group.

TIP-OF-THE-TONGUE PHENOMENON

The tip-of-the-tongue (TOT) phenomenon is a common experience in which an individual is
unable to retrieve a word or piece of information from memory, even though they feel that they
are on the verge of recalling it. This feeling of impending retrieval is often accompanied by a
strong sense of familiarity and the belief that the information is readily accessible, yet it remains
elusive. Here are some key aspects of the phenomenon:

1. Characteristics: During a TOT experience, individuals may recall certain details related
to the target word or information, such as its initial sound, the number of syllables, or
similar words. Despite these partial recollections, they are unable to produce the full
word or information.
2. Frequency: The TOT phenomenon is quite common and can occur at any age, though it
may become more frequent with aging. Many people report experiencing this sensation
multiple times a week or even daily.
3. Causes:
o Interference: TOT occurrences can arise due to interference from similar words or
concepts that compete for retrieval, making it difficult to access the specific target.
o Inadequate Retrieval Cues: Sometimes, the cues available for retrieval may not be
strong enough to trigger the full memory, leading to a sense of being "stuck."
o Brain Function: Research suggests that the TOT phenomenon may be related to how the
brain encodes and retrieves information. Studies have shown that TOT experiences are
often linked to difficulties in the retrieval process rather than a complete loss of the
information.
4. Resolution: The TOT phenomenon typically resolves on its own, and individuals often
find that the word or information comes to mind after a short period of time. Engaging in
a different activity or discussing related topics can sometimes facilitate the retrieval
process.
5. Psychological and Social Aspects: The TOT experience can be frustrating and can lead
to social embarrassment, especially in conversations. However, it also highlights the
complexities of memory retrieval and the cognitive processes involved in accessing
stored information.

IMPROVING MEMORY TECHNIQUES


1. Mnemonics: Mnemonics are memory aids that use associations to help recall
information. They often involve acronyms, rhymes, or visual imagery to make complex
information more memorable. For example, using the acronym "HOMES" to remember
the Great Lakes (Huron, Ontario, Michigan, Erie, Superior) simplifies the recall process.
2. Linking and Story Method: This technique involves creating a narrative or story that
links together the items or concepts to be remembered. By associating each item with a
vivid image or a sequential part of the story, the information becomes easier to recall. For
example, if you need to remember a grocery list (milk, bread, eggs), you might create a
story about a character who needs to buy each item in a particular order.
3. Peg Word Method: The peg word method associates new information with a
predetermined list of "peg" words that are easy to remember. For example, using
numbers as pegs (one is a bun, two is a shoe) allows you to link new information to these
words. To remember a list of items, you would visualize each item in relation to the peg
word. For example, if you need to remember "apples" and "bananas," you might visualize
an apple in a bun and bananas in a shoe.
4. Method of Loci: Also known as the memory palace technique, this method involves
visualizing a familiar place and associating the items to be remembered with specific
locations within that place. As you mentally walk through the location, you can recall the
associated items. For instance, if you need to remember a speech, you might visualize
each point placed in different rooms of your home.
5. Verbal/Rhythmic Organization: This technique uses rhythm, rhyme, or music to
enhance memory retention. By organizing information into a catchy rhythm or melody, it
becomes easier to memorize. For example, setting information to a familiar tune or
creating a rap can help in memorizing facts or concepts.

ORGANIC AMNESIA
There are two forms of severe memory loss caused by problems in the functioning of the
memory areas of the brain. These problems can result from concussions, traumatic brain
injuries, alcoholism (Korsakoff’s syndrome), or disorders of the aging brain.

RETROGRADE AMNESIA
If the hippocampus is that important to the formation of declarative memories, what would
happen if it got temporarily “disconnected”? People who are in accidents in which they’ve
received a head injury often are unable to recall the accident itself. Sometimes they cannot
remember the last several hours or even days before the accident. This type of amnesia (literally,
“without memory”) is called retrograde amnesia, which is loss of memory from the point of
injury backward (Hodges, 1994). What apparently happens in this kind of memory loss is that
the consolidation process, which was busy making the physical changes to allow new memories
to be stored, gets disrupted and loses everything that was not already nearly “finished.”

Think about this: You are working on your computer, trying to finish a history paper that is due
tomorrow. Your computer saves the document every 10 minutes, but you are working so
furiously that you’ve written a lot in the last 10 minutes. Then the power goes out—horrors!
When the power comes back on, you find that while all the files you had already saved are still
intact, your history paper is missing that last 10 minutes’ worth of work. This is similar to what
happens when someone’s consolidation process is disrupted. All memories that were in the
process of being stored—but are not yet permanent—are lost.

One of the therapies for severe depression is ECT, or electroconvulsive therapy, in use for this
purpose for many decades. One of the common side effects of this therapy is the loss of memory,
specifically retrograde amnesia (Meeter et al., 2011; Sackeim et al., 2007; Squire & Alvarez,
1995; Squire et al., 1975). While the effects of the induced seizure seem to significantly ease the
depression, the shock also seems to disrupt the memory consolidation process for memories
formed prior to the treatment. While some researchers in the past found that the memory loss can
go back as far as three years for certain kinds of information (Squire et al., 1975), later research
suggests that the loss may not be a permanent one (Meeter et al., 2011; Ziegelmayer et al., 2017).

ANTEROGRADE AMNESIA
Concussions can also cause a more temporary version of the kind of amnesia experienced by
H.M. This kind of amnesia is called anterograde amnesia, or the loss of memories from the point
of injury or illness forward (Squire & Slater, 1978). People with this kind of amnesia, like H.M.,
have difficulty remembering anything new. One of your authors knows a young man who was
struck by lightning in the summer of 2018. He remembers walking behind his brother, pulling a
cart they had their tools in. His next memories start about two months later. He cannot remember
the lightning strike nor his time in the hospital or other experiences that occurred in the two
months following the strike. This is also the kind of amnesia most often seen in people with
dementia, a neurocognitive disorder, or decline in cognitive functioning, in which severe
forgetfulness, mental confusion, and mood swings are the primary symptoms. (Dementia patients
also may suffer from retrograde amnesia in addition to anterograde amnesia.)

If retrograde amnesia is like losing a document in the computer because of a power loss,
anterograde amnesia is like discovering that your hard drive has become defective—you can read
data that are already on the hard drive, but you can’t store any new information. As long as you
are looking at the data in your open computer window (i.e., attending to it), you can access it, but
as soon as you close that window (stop thinking about it), the information is lost because it was
never transferred to the hard drive (long-term memory). This makes for some very repetitive
conversations, such as being told the same story or being asked the same question numerous
times in the space of a 20-minute conversation.
ALZHEIMER’S DISEASE

Nearly 5.7 million Americans have Alzheimer's disease, the most common type of dementia,
accounting for 60 to 80 percent of all dementia cases. Approximately 1 in 10 people over 65 has
Alzheimer's, which is the sixth-leading cause of death in the U.S. and the fifth for those aged 65
and older.

In the early stages of Alzheimer's, the primary memory issue is anterograde amnesia, where
individuals struggle to form new memories. Memory loss starts mild but becomes severe, leading
to dangerous forgetfulness, such as taking extra medication or leaving food unattended. As the
disease progresses, retrograde amnesia can also occur, erasing past memories.

The causes of Alzheimer's are not fully understood. While the formation of beta-amyloid plaques
and tau tangles is normal with aging, individuals with Alzheimer's have significantly more of
these. The neurotransmitter acetylcholine, crucial for memory formation in the hippocampus,
breaks down early in the disease. Some forms of Alzheimer's have a genetic basis, but this
accounts for fewer than 5 percent of cases. Other forms of dementia can arise from strokes,
dehydration, and medications.

Treatments can slow but not halt or reverse the disease. Currently, six drugs are approved for
treatment, which only extend symptom relief for an average of 6 to 12 months. However,
manageable risk factors include high cholesterol, high blood pressure, smoking, obesity, Type II
diabetes, and lack of exercise. Mental stimulation through continued learning can also support
cognitive health. Research suggests that certain drugs may help restore memory in Alzheimer's-
affected brain cells.

Myths about Alzheimer's causation include concerns over aluminum cookware, artificial
sweeteners, silver dental fillings, and flu shots—all unfounded.

INFANTILE AMNESIA
Most people cannot recall events from their infancy, typically before age 3. Claims of such
memories often stem from family retellings rather than genuine recollection. These "memories"
may feel like watching a movie rather than reliving the experience firsthand.

Infantile amnesia may occur because early memories are implicit and difficult to bring to
consciousness. Explicit memory, which develops after age 2 with the maturation of the
hippocampus and language skills, enables the formation of autobiographical memories through
social interactions with caregivers.

Autobiographical Memory

Definition and Nature


Autobiographical memory refers to the memory system that enables individuals to recall
personal experiences and specific events from their lives. It encompasses both episodic memory
(memories of specific events) and semantic memory (general knowledge about the world and
oneself). Autobiographical memories are unique to each individual and are often tied to personal
significance, providing a narrative of one’s life history.

Development
Autobiographical memory begins to develop in early childhood, usually around age 2 to 3, when
children start forming a sense of self and can recount personal experiences. This development is
influenced by language acquisition and social interactions, particularly discussions about past
events with caregivers, which help children organize their memories. The ability to narrate
personal experiences contributes to a coherent sense of identity and self-concept.

Components

1. Episodic Memories: These are vivid recollections of specific events, including details
about the context (time, place, people involved). For example, a birthday party or a
family vacation.
2. Semantic Memories: These include facts and general knowledge about oneself, such as
one’s name, age, or the names of family members. They provide context for episodic
memories and contribute to self-identity.
3. Emotional Significance: Autobiographical memories are often emotionally charged and
can significantly impact a person’s mood and behavior. Emotional experiences tend to be
remembered more vividly, contributing to their lasting presence in memory.

Functions
Autobiographical memory serves several critical functions:

 Self-Identity: It helps individuals understand who they are by connecting past experiences to
present identities.
 Cognitive Organization: It allows for the organization of knowledge and experiences, aiding in
decision-making and problem-solving.
 Social Connection: Sharing autobiographical memories strengthens social bonds and facilitates
communication with others, allowing individuals to relate their experiences.
Influence of Culture
Cultural factors can shape how autobiographical memories are formed and recalled. Different
cultures emphasize various aspects of memory, such as the importance of family narratives or
individual achievements, which can influence the content and emotional tone of memories. For
instance, collectivist cultures may focus more on shared experiences, while individualist cultures
may highlight personal achievements.
