LEARNING AND MEMORY
THE BEGINNING
Ciccarelli and White define learning as a relatively permanent change in behavior or knowledge
that occurs as a result of experience or practice. This definition highlights two key aspects of
learning: the change is enduring rather than momentary, and it arises from experience or practice
rather than from maturation, injury, or drugs.
In the early 1900s, research scientists were unhappy with psychology’s focus on mental activity. Many
were looking for a way to bring some kind of objectivity and scientific research to the field. It was a
Russian physiologist (a person who studies the workings of the body) named Ivan Pavlov (1849–1936)
who pioneered the empirical study of the basic principles of a particular kind of learning (Pavlov, 1906,
1926).
While studying the digestive system of dogs, Pavlov and his assistants built a device to accurately
measure the amount of saliva produced by the dogs when they were fed a measured amount of
food. Normally, when food is placed in the mouth of any animal, the salivary glands
automatically start releasing saliva to help with chewing and digestion. This is a normal reflex—
an unlearned, involuntary response that is not under personal control or choice, and one of many
that occur in both animals and humans. The food causes a particular reaction, salivation. A
stimulus can be defined as any object, event, or experience that causes a response—the
reaction of an organism. In the case of Pavlov’s dogs, the food is the stimulus and salivation
is the response.
Pavlov soon discovered that the dogs began salivating when they weren’t supposed to be
salivating. Some dogs would start salivating when they saw a lab assistant bringing their food,
others when they heard the clatter of the food bowl in the kitchen, and still others when it was the
time of day they were usually fed. Shifting his focus, Pavlov spent the rest of his career studying
what he eventually termed classical conditioning, learning to elicit an involuntary, reflex-like
response to a stimulus other than the original, natural stimulus that normally produces the
response.
ASSOCIATIVE LEARNING
Associative learning is a way in which we naturally connect things in our minds to make sense of
the world. Through this process, we learn to predict what might happen next, to understand
patterns, and to make decisions based on our experiences. This kind of learning isn’t about
memorizing facts; it’s about linking events, actions, or ideas that happen around us, so we can
react or prepare for what comes next.
Key principles in associative learning include:
1. Contiguity
Contiguity is the idea that associations are most easily formed when stimuli or events
occur close together in time. When one stimulus consistently follows another, the brain
begins to link them, expecting one to predict the other. This principle helps organisms
identify causal relationships or sequence patterns.
2. Frequency
The strength of an association typically increases with the frequency of pairing. The more
often two events or stimuli are paired, the more likely the association becomes robust and
lasting. For instance, a frequent pairing of certain behaviors with rewards strengthens that
behavioral response, which is why repetition is crucial for effective learning.
3. Salience
Salience refers to the intensity or distinctiveness of stimuli. More intense or noticeable
stimuli tend to form associations more readily because they attract attention, making the
learning process more effective. For example, a loud sound following a specific action is
more likely to be remembered than a faint noise, leading to stronger associative learning.
4. Biological Preparedness
Biological preparedness is the concept that some associations are more easily formed due
to evolutionary adaptations. Certain stimuli naturally provoke responses without much
experience, like a fear response to potentially dangerous animals. This predisposition
influences how quickly and easily organisms form associations, particularly in survival-
related scenarios.
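The roles of frequency and salience described above can be illustrated with a toy simulation (a sketch with made-up numbers, not a model from the text): every pairing strengthens the association a little, and more salient stimuli strengthen it faster.

```python
# Toy model (illustrative only): association strength grows with each pairing
# (frequency), and more salient stimuli produce larger gains per pairing.
def update_strength(strength, salience, learning_rate=0.3, max_strength=1.0):
    """One stimulus pairing: move strength toward its ceiling, scaled by salience."""
    return strength + learning_rate * salience * (max_strength - strength)

faint, loud = 0.0, 0.0
for pairing in range(10):                            # ten pairings = the frequency principle
    faint = update_strength(faint, salience=0.2)     # a faint noise
    loud = update_strength(loud, salience=0.9)       # a loud, distinctive sound

print(round(faint, 2), round(loud, 2))               # prints 0.46 0.96
```

After the same number of pairings, the salient stimulus has formed a much stronger association, mirroring the salience principle; running the loop longer would let both values approach the ceiling, mirroring the frequency principle.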
Here’s a look at the different types of associative learning and how they help us in everyday life:
1. Classical Conditioning
This type of learning happens when we start to connect two things because they
repeatedly happen together. For example, if you hear your favorite song every time you
go to a certain café, you might start feeling happy just by stepping inside the café—even
before you hear the song. The association becomes automatic, forming a link that
reminds you of the pleasant experience whenever you’re there.
2. Operant Conditioning
In operant conditioning, we learn by understanding the outcomes of our actions. If a
certain behavior brings a reward or positive outcome, we’re more likely to repeat it. For
instance, if you study hard and get praised for good grades, you’re more likely to keep
putting in the effort. On the other hand, if an action leads to something unpleasant, like
getting a parking ticket, you might try to avoid repeating it. This type of learning helps us
figure out what’s worth doing and what’s better left avoided.
3. Sensitization and Habituation
These are types of “tuning” our responses to what’s around us.
o Sensitization: Sometimes, our reaction to a certain event grows stronger the more
it happens, like being more startled by a creaking door late at night if we’re
already on edge.
o Habituation: Other times, we might stop reacting to things that aren’t meaningful
or helpful. For instance, after hearing the same car honk repeatedly in a busy
street, we may start tuning it out, allowing us to focus on more important sounds.
Key: NS = Neutral Stimulus; UCS = Unconditioned Stimulus; UCR = Unconditioned
Response; CS = Conditioned Stimulus; CR = Conditioned Response.
● Before Conditioning:
o NS: Metronome → No Salivation
o UCS: Food → UCR: Salivation
● During Conditioning:
o NS: Metronome + UCS: Food → UCR: Salivation
● After Conditioning:
o CS: Metronome → CR: Salivation
In Pavlov’s experiment, acquisition occurred when the dogs learned to associate the neutral
stimulus (metronome) with the unconditioned stimulus (food) through repeated pairings. Before
conditioning, the ticking of the metronome was a neutral stimulus and had no effect. But after
being paired multiple times with food, the ticking alone caused salivation.
For example, consider the scenario where Pavlov has already conditioned his dogs to salivate at
the sound of the metronome. If, just before turning on the metronome, he snaps his fingers, the
sequence would now be “snap-ticking-salivation,” or “NS–CS–CR” (neutral
stimulus/conditioned stimulus/conditioned response).
If this sequence occurs enough times, the finger snap will eventually produce a salivation
response. This is called higher-order conditioning: the finger snap becomes associated with the
ticking through the same process that initially connected the ticking with the food. However, the
food (UCS) would need to be
presented periodically to maintain the original conditioned response to the metronome's ticking.
Without the UCS, the higher-order conditioning would be difficult to sustain and would
gradually fade away.
Pavlov believed that the conditioned stimulus (CS) activates the same area in the animal's brain
that the unconditioned stimulus (UCS) originally activated, a process he termed stimulus
substitution. However, if mere temporal association is sufficient, why does conditioning not
occur when the CS follows the UCS immediately? Robert Rescorla (1988) discovered that the
CS must provide information about the upcoming UCS to achieve conditioning; in other words,
the CS must predict the UCS's arrival.
In one study, Rescorla exposed one group of rats to a tone, followed by an electric shock while
the tone was still audible. These rats became agitated, exhibiting a conditioned emotional
response by shivering and squealing at the tone's onset. In contrast, another group of rats
received the shock only after the tone stopped. This group responded with fear when the tone
ceased. The tone provided different information for the two groups: for the first, it indicated an
impending shock, while for the second, it signaled the absence of a shock during the tone. This
expectancy, determined by the tone's timing relative to the shock, influenced the rats' responses.
This cognitive perspective emphasizes the mental activity of consciously anticipating an event as
an explanation for classical conditioning.
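Rescorla's insight that the CS must carry information about the UCS was later formalized in the Rescorla-Wagner model, in which learning is driven by prediction error. The sketch below is a simplified illustration of that idea (it does not reproduce Rescorla's actual shock procedure): a tone that always precedes the outcome gains strong associative strength, while a tone followed by the outcome only half the time remains a weak predictor.

```python
# Simplified Rescorla-Wagner update: the change in associative strength V is
# proportional to the prediction error (actual outcome minus expected outcome).
def rescorla_wagner(V, outcome_present, alpha=0.3, lam=1.0):
    outcome = lam if outcome_present else 0.0
    return V + alpha * (outcome - V)

# Tone 1 is always followed by the outcome: a reliable predictor.
V_predictive = 0.0
for trial in range(20):
    V_predictive = rescorla_wagner(V_predictive, outcome_present=True)

# Tone 2 is followed by the outcome on only half the trials: a poor predictor,
# so negative prediction errors on the "no outcome" trials hold V down.
V_uninformative = 0.0
for trial in range(20):
    V_uninformative = rescorla_wagner(V_uninformative, outcome_present=(trial % 2 == 0))

print(round(V_predictive, 2), round(V_uninformative, 2))
```

The reliable tone's strength approaches the maximum (1.0) while the uninformative tone's hovers well below it, echoing Rescorla's conclusion that contiguity alone is not enough; the CS must predict the UCS.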
Key Findings of Watson and Rayner's "Little Albert" Experiment
1. Conditioning of Fear: The primary finding was that emotional responses, specifically
fear, could be conditioned through association. Albert learned to associate the previously
neutral stimulus (the rat) with the unconditioned stimulus (the loud noise), resulting in a
conditioned response (fear).
2. Generalization of Fear: The experiment also demonstrated that fear could generalize to
other similar stimuli. Albert exhibited fear not only toward the rat but also toward other
furry objects, such as a rabbit and even a Santa Claus mask.
3. Implications for Understanding Phobias: Watson's work suggested that many phobias
could be rooted in similar conditioning processes, highlighting the role of environmental
factors in shaping emotional responses.
Ethical Concerns
● Child Welfare: The study involved inducing fear in an infant without any subsequent
attempt to decondition him. This raises concerns about the psychological harm inflicted
on Little Albert.
● Lack of Informed Consent: Although permission was obtained from Albert's mother,
ethical standards today would require more stringent measures to ensure the child's
welfare and understanding.
● Long-Term Effects: The long-term psychological impact on Albert remains unknown, as
he was removed from the study before any follow-up deconditioning could occur.
COUNTER CONDITIONING
Counterconditioning is a technique used to change behavior by pairing a stimulus with a new, opposite
reaction. Like extinction, it reduces the impact of the original learned behavior but doesn't completely
erase it. However, while extinction has gained a lot of attention in research, counterconditioning hasn’t
been studied as much.
For over a century, psychologists have been exploring how to effectively and permanently eliminate
maladaptive behaviors and unwanted memories. This has clear clinical relevance, as many mental health
disorders involve disruptions in learning, memory, and behavior. Examples include intrusive memories in
trauma-related disorders, addiction, excessive worry in generalized anxiety disorder, compulsive
behaviors in OCD, and fear or avoidance in phobias and panic disorders. This drives the need to apply
findings from learning and memory research to better understand the neurobiological mechanisms behind
these disorders and develop innovative treatments. However, despite significant advances in psychology
and neuroscience, techniques to consistently modify maladaptive behaviors in humans remain elusive.
Most research in this area focuses on extinction of conditioned behaviors, but here, we review the
alternative approach of counterconditioning.
Counterconditioning, unlike standard extinction, involves replacing an expected outcome with one of the
opposite feeling. It's the basis for therapies like systematic desensitization or aversion therapy, where
unwanted responses are reduced by triggering an opposite reaction. For example, someone afraid of dogs
after being bitten might receive extinction treatment by safely being exposed to dogs. With
counterconditioning, they would gradually encounter dogs while feeling relaxed or doing something
enjoyable, which helps reduce fear. Both methods create new learning (e.g., "dogs are safe") that
competes with the old association (e.g., "dogs are dangerous"). While counterconditioning seems to
prevent relapse better than extinction in some cases, research on long-term effects is mixed.
In the "Little Peter" experiment, conducted by Mary Cover Jones, a three-year-old boy named Peter
showed fear towards various stimuli, including a white rabbit. To reduce his fear, Jones used
counterconditioning, gradually pairing Peter’s
exposure to the rabbit with a positive experience—eating candy, which Peter found enjoyable. Initially,
the rabbit was placed at a distance so that Peter could see it while eating, allowing the positive association
to form without triggering too much fear. Over multiple sessions, the rabbit was moved closer and closer
to Peter. Eventually, Peter's fear decreased to the point where he allowed the rabbit to nibble his fingers
without feeling afraid.
This experiment is a foundational example of behavior modification and a precursor to therapies like
systematic desensitization. The key principle demonstrated by the "Little Peter" study was that positive or
pleasurable stimuli (like eating candy) could be used to counteract negative emotional reactions (like
fear). The idea behind this approach is known as reciprocal inhibition, where one emotional system, such
as pleasure, inhibits the activation of another, like fear. The success of Peter's treatment laid the
groundwork for modern desensitization therapies, which are still widely used to treat phobias and anxiety
disorders by gradually exposing patients to feared stimuli in the presence of relaxing or positive
experiences.
Criticisms of Counterconditioning Theory
1. Limited Long-Term Effectiveness
Counterconditioning may not always lead to lasting behavior change. Research shows
that in some cases, the original negative association (e.g., fear) can resurface over time,
especially in different contexts or when the positive stimulus (like relaxation) is absent.
2. Context Dependency
The success of counterconditioning can be highly dependent on the environment in which
it occurs. Changes in context, such as moving from a therapeutic setting to real-life
situations, may lead to the return of the original fear or behavior, limiting its practical
applicability.
3. Competing Responses
Reciprocal inhibition relies on activating a stronger positive response (e.g., relaxation) to
override the negative one (e.g., fear). However, if the negative response is too strong,
such as in cases of severe phobias or trauma, counterconditioning may be less effective
because the anxiety may overpower the positive stimulus.
4. Over-simplification of Complex Behaviors
Critics argue that counterconditioning oversimplifies complex emotional and behavioral
issues, assuming that merely associating a positive stimulus with a negative one will
create lasting change. For deeper-rooted psychological problems, this method may not
address underlying causes.
5. Difficulty in Application to Certain Disorders
Some mental health conditions, like obsessive-compulsive disorder or severe trauma,
may not respond well to counterconditioning. In these cases, more comprehensive
approaches, such as cognitive-behavioral therapy or exposure therapy, may be more
effective for managing symptoms.
CLASSICAL CONDITIONING APPLIED TO HUMAN BEHAVIOR
Pavlov’s concepts were later expanded by scientists to explain human behavior, particularly
emotional responses. A key study demonstrating this is the Little Albert experiment, which
applied classical conditioning to human emotional responses, notably phobias.
Phobias and the "Little Albert" Experiment
Phobias are irrational fear responses, and John B. Watson and Rosalie Rayner's experiment with
"Little Albert" demonstrated how such fears can be conditioned. In the experiment, Watson and
Rayner paired a white rat (neutral stimulus) with a loud, scary noise (unconditioned stimulus or
UCS). Although Albert was not initially afraid of the rat, he was naturally scared of the loud
noise (unconditioned response or UCR). After seven pairings, Albert began to fear the rat, which
had now become the conditioned stimulus (CS). The resulting fear of the rat was the conditioned
response (CR), showcasing the conditioning of a phobia.
Stimulus Generalization in Little Albert’s Case
Following the conditioning, Little Albert also demonstrated fear of other stimuli, such as a
rabbit, a dog, and a sealskin coat. This is an example of stimulus generalization, where the fear
response generalized to similar objects, though some researchers question whether true
generalization occurred.
Conditioned Emotional Responses (CER)
Phobias are a type of conditioned emotional response (CER), which is one of the easiest forms
of classical conditioning. Common examples of CERs include a child's fear of the doctor’s
office, a person's fear of dogs, or a puppy’s fear of a rolled-up newspaper. Vicarious
conditioning can also occur when individuals develop a conditioned emotional response simply
by observing another person's reaction to a stimulus.
Vicarious Conditioning
In vicarious conditioning, people can develop emotional responses by observing others'
reactions. For example, if a child observes a parent reacting with fear to stray dogs, the child may
also develop a fear of dogs, even if they have never been attacked.
Classical Conditioning in Advertising
Advertisers frequently use classical conditioning to evoke emotional responses in consumers.
By associating products with stimuli that generate positive emotions (e.g., cute animals or
attractive models), advertisers hope to condition viewers to associate these emotions with their
products. Vicarious classical conditioning is often used, where viewers observe emotional
reactions in the advertisements and develop similar feelings toward the product.
Treatment of Phobias Using Classical Conditioning
The principles of classical conditioning can also be applied in the treatment of phobias and
anxiety disorders. Therapies based on classical conditioning help individuals unlearn irrational
fears or conditioned responses through repeated exposure to the feared stimuli without the
unconditioned stimulus.
Conditioned Taste Aversions
A specific form of classical conditioning is conditioned taste aversion, where an individual
develops an aversion to a particular food or drink after a negative experience, such as nausea.
Research has shown that rats can develop taste aversions after consuming food up to six hours
before becoming nauseous, and similar aversions occur in humans undergoing chemotherapy or
alcoholics undergoing aversion therapy.
OPERANT CONDITIONING
There are two kinds of behavior that all organisms are capable of doing: involuntary and
voluntary. If Inez blinks her eyes because a gnat flies close to them, that’s a reflex and totally
involuntary. But if she then swats at the gnat to frighten it, that’s a voluntary choice. She had to
blink, but she chose to swat. Classical conditioning is the kind of learning that occurs with
automatic, involuntary behavior. In this section we’ll describe the kind of learning that applies to
voluntary behavior, which is both different from and similar to classical conditioning.
As discussed earlier, Thorndike (1874–1949) was one of the first researchers to study voluntary
learning, which later became known as operant conditioning. In his famous experiment, a hungry
cat was placed in a "puzzle box" with a lever that, when pressed, allowed it to escape and access
food. Initially, the cat’s actions were random, but over time, it accidentally pressed the lever and
learned to escape faster. This led to Thorndike’s law of effect: actions followed by pleasurable
outcomes are more likely to be repeated, while those followed by unpleasant outcomes are less
likely to be repeated.
Thorndike also proposed several related laws of learning:
● Law of Readiness: Learning occurs when an individual is ready to act. If not, actions
may lead to frustration.
● Law of Exercise: Repetition strengthens learning; the more an action is practiced, the
stronger the association.
● Law of Use and Disuse: Frequently used connections become stronger, while unused
ones weaken over time.
B. F. Skinner (1904–1990) was the behaviorist who assumed leadership of the field after John
Watson. He was even more determined than Watson that psychologists should study only
measurable, observable behavior. In addition to his knowledge of Pavlovian classical
conditioning, Skinner found in the work of Thorndike a way to explain all behavior as the
product of learning. He even gave the learning of voluntary behavior a special name: operant
conditioning (Skinner, 1938).
Voluntary behavior is what people and animals do to operate in the world. When people perform
a voluntary action, it is to get something they want or to avoid something they don’t want, right?
So voluntary behavior, for Skinner, is operant behavior, and the learning of such behavior is
operant conditioning. The heart of operant conditioning is the effect of consequences on
behavior. Thinking back to the section on classical conditioning, learning an involuntary
behavior really depends on what comes before the response—the unconditioned stimulus (UCS)
and what will become the conditioned stimulus (CS). These two stimuli are the antecedent
stimuli (antecedent means something that comes before another thing). But in operant
conditioning, learning depends on what happens after the response—the consequence. In a way,
operant conditioning could be summed up as this: “If I do this, what’s in it for me?”
B.F. Skinner, a prominent figure in behavioral psychology, is best known for his work on
operant conditioning rather than classical conditioning, which was primarily developed by Ivan
Pavlov. However, we can draw parallels and discuss the concepts of conditioned response (CR),
unconditioned response (UCR), conditioned stimulus (CS), and unconditioned stimulus (UCS) in
the context of Pavlov's classical conditioning experiments, as well as how Skinner's work relates
to operant conditioning.
Pavlov's Classical Conditioning Components
1. Unconditioned Stimulus (UCS): In Pavlov's experiment, the UCS is the stimulus that
naturally and automatically triggers a response without any prior learning. In his classic
study, the UCS was food, which naturally elicited salivation from dogs.
2. Unconditioned Response (UCR): The UCR is the natural, unlearned reaction to the
UCS. In Pavlov’s experiment, the UCR was the salivation that occurred in response to the
food presented to the dogs.
3. Conditioned Stimulus (CS): The CS is a previously neutral stimulus that, after being
paired repeatedly with the UCS, begins to elicit a conditioned response. In Pavlov's
study, the CS was the bell sound, which initially did not trigger salivation.
4. Conditioned Response (CR): The CR is the learned response to the previously neutral
stimulus (CS) after conditioning has occurred. In Pavlov’s experiment, the CR was the
salivation in response to the bell, even when food was not presented.
Skinner's Operant Conditioning
While Skinner did not conduct experiments involving these classical conditioning terms, he
emphasized the importance of reinforcement and punishment in shaping behavior. In Skinner's
work:
● Operant Response: The behavior that is strengthened or weakened through
reinforcement (positive or negative) or punishment.
● Reinforcement: A stimulus that follows a behavior and increases the likelihood of that
behavior occurring again in the future.
● Punishment: A stimulus that follows a behavior and decreases the likelihood of that
behavior occurring again.
In Thorndike’s puzzle-box experiment, the cat’s reinforcement was both escaping the box and
receiving food. Each time the cat escaped, its lever-pushing behavior was reinforced. According
to Skinner, this reinforcement is the key to why the cat learned to repeat the behavior. In operant
conditioning, reinforcement is crucial for learning.
Skinner designed his own research tool called the Skinner box or operant conditioning chamber.
In these experiments, he trained animals, like rats, to press a lever to receive food, demonstrating
how reinforcement strengthens voluntary behavior.
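How reinforcement strengthens a voluntary response can be sketched in a toy simulation (hypothetical numbers, not Skinner's data): each reinforced lever press makes the next press slightly more likely.

```python
import random

random.seed(42)                      # fixed seed so the run is reproducible

press_prob = 0.1                     # at first, the rat rarely presses the lever
history = []
for trial in range(200):
    pressed = random.random() < press_prob
    if pressed:                      # a press is reinforced with food...
        press_prob = min(1.0, press_prob + 0.05)   # ...so pressing becomes more likely
    history.append(press_prob)

print(history[0], round(history[-1], 2))   # response probability rises over trials
```

The upward drift in `press_prob` is the signature of operant conditioning: the consequence (food) that follows the response makes the response itself more frequent.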
In operant conditioning, responses are voluntary, emitted by the organism. In classical
conditioning, responses are involuntary and automatic, elicited by a stimulus.
As new methods for studying the brain and neuronal functions evolve, researchers are exploring
the neural bases of both classical and operant conditioning (Gallistel & Matzel, 2013). One
critical area involved in learning is the anterior cingulate cortex (ACC), located in the frontal
lobe above the front of the corpus callosum (Apps et al., 2015). The ACC connects to the nucleus
accumbens, which we discussed in relation to drug dependence and the reward pathway in
Chapter Four (see Learning Objective 4.11). Both the ACC and nucleus accumbens are involved
in dopamine release (Gale et al., 2016; Morita et al., 2013; Yavuz et al., 2015).
Dopamine plays a significant role in the reinforcement process, as it amplifies some input signals
while decreasing the intensity of others in the nucleus accumbens (Floresco, 2015). For example,
when you hear a specific sound from your cell phone indicating an incoming message, you may
feel compelled to check it. This sound—whether it's a chime or a ding—has become a
conditioned stimulus (CS) associated with a pleasurable outcome. The actual message serves as
an unconditioned stimulus (UCS) for pleasure, leading to a conditioned response (CR) of
enjoyment.
When you hear that sound followed by rewarding activities, excitatory activity occurs in several
brain areas, along with increased dopamine activity. This signals that the behavior was
beneficial, encouraging you to repeat it. Just as dopamine and the reward pathway are implicated
in drug dependency, they also play a crucial role in our “learned” addictions.
Positive Reinforcement
Positive reinforcement involves the addition of a pleasurable consequence following a response,
which increases the likelihood of that response being repeated. This can be understood as a
reward for desired behavior.
Example: Consider a student who studies hard for an exam and receives an A as a result. The
positive feedback from getting a good grade serves as a reward, encouraging the student to
continue studying diligently in the future. Other common examples include receiving praise,
bonuses, or tokens for good behavior. For instance, a child who cleans their room may receive
praise or a small treat, reinforcing the behavior of tidying up.
Negative Reinforcement
In contrast, negative reinforcement involves the removal or escape from an unpleasant stimulus,
which also increases the likelihood of the associated behavior being repeated. This concept may
initially seem counterintuitive; however, it emphasizes that eliminating discomfort can motivate
individuals to repeat a particular behavior.
Example: If a person has a headache and takes medication to alleviate the pain, the relief from
the headache reinforces the behavior of taking medication when experiencing pain. Similarly,
consider a situation where a student submits their assignment on time to avoid losing points for
lateness. The act of submitting the assignment is reinforced by the removal of the unpleasant
consequence (the penalty for late submission).
Positive Punishment
Positive punishment involves the addition of an unpleasant consequence following a behavior,
which decreases the likelihood of that behavior being repeated. This form of punishment is often
seen as a way to discourage unwanted actions by introducing an aversive stimulus.
Example: Consider a child who touches a hot stove. If the child experiences pain from the burn,
that painful experience serves as a positive punishment. The addition of pain (an aversive
consequence) following the behavior of touching the stove decreases the likelihood that the child
will touch the stove again in the future. Other examples include receiving a speeding ticket for
driving too fast or a reprimand from a teacher for talking out of turn. In each case, the addition of
an unpleasant consequence aims to deter the undesirable behavior.
Negative Punishment
Negative punishment, on the other hand, involves the removal of a pleasurable consequence
following a behavior, which also decreases the likelihood of that behavior being repeated. This
form of punishment works by taking away something valued or desired, thereby discouraging the
unwanted behavior.
Example: Imagine a teenager who stays out past curfew. As a result, their parents take away
their driving privileges for a week. The removal of the privilege to drive serves as negative
punishment because it eliminates a pleasurable activity in response to the undesirable behavior of
breaking curfew. Another example is when a child loses access to their favorite toy for
misbehaving. In both scenarios, the removal of a positive experience is intended to reduce the
likelihood of the undesirable behavior occurring again.
In summary, punishment (whether positive or negative) aims to reduce or suppress a
behavior. Positive punishment involves adding something unpleasant (like scolding), while
negative punishment involves taking away something pleasant (like losing phone privileges).
Both types of punishment decrease the likelihood of a behavior happening again.
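The four consequences above differ along just two dimensions: whether a stimulus is added or removed, and whether the behavior becomes more or less likely afterward. This small lookup (the function name and labels are ours, for illustration) makes the grid explicit:

```python
# Operant-conditioning quadrants: (what happens to the stimulus, what happens
# to the behavior) -> the name of the consequence.
QUADRANTS = {
    ("added",   "increases"): "positive reinforcement",
    ("removed", "increases"): "negative reinforcement",
    ("added",   "decreases"): "positive punishment",
    ("removed", "decreases"): "negative punishment",
}

def classify(stimulus_change, effect_on_behavior):
    return QUADRANTS[(stimulus_change, effect_on_behavior)]

print(classify("added", "increases"))    # praise for cleaning the room
print(classify("removed", "increases"))  # headache relief after taking medication
print(classify("added", "decreases"))    # a speeding ticket
print(classify("removed", "decreases"))  # losing driving privileges
```

Note that "positive" and "negative" here describe adding versus removing a stimulus, not whether the experience is pleasant; reinforcement always increases behavior and punishment always decreases it.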
SCHEDULES OF REINFORCEMENT
The timing of reinforcement significantly influences the speed of learning and the strength of the
learned response. Research by Skinner (1956) highlights that reinforcing every response is not
the most effective strategy for fostering long-lasting learning. This phenomenon is illustrated by
the partial reinforcement effect. For instance, Alicia receives a quarter every night for putting
her dirty clothes in the hamper, while Bianca earns a dollar only if she consistently puts her
clothes away each night for a week. Initially, Alicia learns faster due to immediate
reinforcement. However, when reinforcement ceases, Alicia is more likely to stop the behavior,
whereas Bianca may continue for a while longer, demonstrating greater resistance to extinction.
This illustrates that responses reinforced intermittently are more enduring than those reinforced
continuously.
Partial reinforcement can occur through various patterns or schedules. It can focus on the time
interval, known as an interval schedule, or the number of responses required, called a ratio
schedule. Schedules can also be classified as fixed (consistent requirements) or variable
(changing requirements). Thus, we have four main types of reinforcement schedules: fixed
interval, variable interval, fixed ratio, and variable ratio.
1. Fixed Interval Schedule: In this schedule, reinforcement is provided after a set period.
For example, receiving a paycheck every week follows a fixed interval. In a controlled
setting, if a rat must press a lever at least once every two minutes to receive food, this
exemplifies a fixed interval schedule. While this schedule leads to slower response rates,
it produces a scalloped response pattern as the rat becomes more active just before the
end of the interval, akin to how workers may speed up just before payday.
2. Variable Interval Schedule: This schedule involves unpredictable reinforcement at
varying time intervals. An example is a pop quiz, where students study regularly because
they don’t know when a quiz might occur. In a controlled experiment, a rat may receive
food every 5 minutes on average, but the actual intervals may vary. This results in a
steady but slower response rate, as the subject cannot predict when reinforcement will
occur.
3. Fixed Ratio Schedule: Here, reinforcement is given after a specific number of responses.
An example is piecework, where a worker is paid after completing a set number of tasks.
In a controlled setting, a rat may receive a food pellet after pressing a lever ten times.
This schedule produces high response rates, as the rat quickly pushes the lever to reach
the next reinforcement, leading to short pauses after each reinforcer.
4. Variable Ratio Schedule: This schedule involves reinforcement after an unpredictable
number of responses, resulting in high and steady response rates. A common example is
gambling, where players may win after an unpredictable number of bets. In an
experiment, a rat may have to push a lever an average of 20 times to receive food, but the
actual number of presses varies. This unpredictability keeps the rat continuously
responding, as it cannot afford to take breaks.
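The four schedules above can be captured as small decision rules. The following Python sketch is our own illustrative construction (the function names are invented, not from the text): each schedule decides, for a given response and the elapsed time, whether that response earns a reinforcer.

```python
import random

def fixed_ratio(n):
    """Reinforce every n-th response, regardless of timing."""
    count = 0
    def schedule(_elapsed):
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            return True
        return False
    return schedule

def variable_ratio(mean_n):
    """Reinforce after an unpredictable number of responses (mean = mean_n)."""
    target = random.randint(1, 2 * mean_n - 1)
    count = 0
    def schedule(_elapsed):
        nonlocal count, target
        count += 1
        if count >= target:
            count = 0
            target = random.randint(1, 2 * mean_n - 1)
            return True
        return False
    return schedule

def fixed_interval(period):
    """Reinforce the first response after `period` time units have elapsed."""
    last = 0.0
    def schedule(elapsed):
        nonlocal last
        if elapsed - last >= period:
            last = elapsed
            return True
        return False
    return schedule

def variable_interval(mean_period):
    """Reinforce the first response after an unpredictable interval."""
    next_wait = random.uniform(0, 2 * mean_period)
    last = 0.0
    def schedule(elapsed):
        nonlocal next_wait, last
        if elapsed - last >= next_wait:
            last = elapsed
            next_wait = random.uniform(0, 2 * mean_period)
            return True
        return False
    return schedule

# A rat pressing the lever once per "second" for 60 seconds on FR-5:
fr5 = fixed_ratio(5)
rewards = sum(fr5(t) for t in range(1, 61))
print(rewards)  # 12: exactly one reinforcer per 5 responses
```

Note how the ratio schedules ignore the clock entirely while the interval schedules ignore the response count; this difference is what produces the contrasting response patterns described above.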
Regardless of the reinforcement schedule used, effective reinforcement relies on two additional
factors: timing and the reinforcement of only the desired behavior. Immediate reinforcement is
more effective, especially for animals and young children. Furthermore, reinforcing the specific
behavior we wish to encourage is crucial; otherwise, mixed signals can undermine the learning
process.
Ratio schedules (based on the number of responses):
● Fixed Ratio (FR): Reinforcement/punishment is delivered after a set number of responses. For example, a student receives a sticker for every 5 math problems they solve; a child loses their TV privilege after failing 3 tests.
● Variable Ratio (VR): Reinforcement/punishment is delivered after an unpredictable number of responses. For example, a slot machine pays out after an unpredictable number of lever pulls; a student is reprimanded after an unpredictable number of cheating instances.
Interval schedules (based on a time interval):
● Fixed Interval (FI): Reinforcement/punishment is delivered after a fixed period of time, as long as the behaviour occurs at least once. For example, a worker gets paid every two weeks; a student loses recess if they misbehave during a 10-minute observation period.
● Variable Interval (VI): Reinforcement/punishment is delivered after an unpredictable time interval, provided that the behaviour occurs at least once. For example, a supervisor checks in and gives praise at unpredictable times; a teacher gives surprise quizzes at unpredictable times.
BEHAVIOUR MODIFICATION
Operant conditioning is more than just the reinforcement of simple responses. It can be used to
modify the behavior of both animals and humans.
Behaviour modification is the field of psychology concerned with analyzing and modifying
human behaviour. Analyzing means identifying the functional relationship between
environmental events and a particular behaviour to understand the reasons for behaviour or to
determine why a person behaved as he or she did. Modifying means developing and
implementing procedures to help people change their behaviour. It involves altering
environmental events so as to influence behaviour. Behaviour modification procedures are
developed by professionals and used to change socially significant behaviours, with the goal of
improving some aspect of a person’s life.
CHARACTERISTICS OF BEHAVIOUR MODIFICATION
1. Focus on behaviour:
Behaviour modification procedures are designed to change behaviour, not a personal
characteristic or trait. Therefore, behaviour modification deemphasizes labelling. For
example, behaviour modification is not used to change autism (a label); rather, behaviour
modification is used to change problem behaviours exhibited by children with autism.
Behavioural excesses and deficits are targets for change with behaviour modification
procedures. In behaviour modification, the behaviour to be modified is called the target
behaviour. A behavioural excess is an undesirable target behaviour the person wants to
decrease in frequency, duration, or intensity. Smoking is an example of a behavioural
excess. A behavioural deficit is a desirable target behaviour the person wants to increase
in frequency, duration, or intensity. Exercise and studying are possible examples of
behavioural deficits.
2. Procedures based on behavioural principles:
Behaviour modification is the application of basic principles originally derived from
experimental research with laboratory animals. The scientific study of behaviour is called
the experimental analysis of behaviour, or behaviour analysis. The scientific study of
human behaviour is called the experimental analysis of human behaviour, or applied
behaviour analysis. Behaviour modification procedures are based on research in applied
behaviour analysis that has been conducted for more than 40 years.
3. Emphasis on current environmental events:
Behaviour modification involves assessing and modifying the current environmental
events that are functionally related to the behaviour. Human behaviour is controlled by
events in the immediate environment, and the goal of behaviour modification is to
identify those events. Once these controlling variables have been identified, they are
altered to modify the behaviour. Successful behaviour modification procedures alter the
functional relationships between the behaviour and the controlling variables in the
environment to produce a desired change in the behaviour. Sometimes labels are
mistakenly identified as the causes of behaviour. For example, a person might say that a
child with autism engages in problem behaviours (such as screaming, hitting himself,
refusal to follow instructions) because the child is autistic. In other words, the person is
suggesting that autism causes the child to engage in the behaviour. However, autism is
simply a label that describes the pattern of behaviours the child engages in. The label
cannot be the cause of the behaviour because the label does not exist as a physical entity
or event. The causes of the behaviour must be found in the environment (including the
biology of the child).
4. Precise description of behaviour modification procedures:
Behaviour modification procedures involve specific changes in environmental events that
are functionally related to the behaviour. For the procedures to be effective each time
they are used, the specific changes in environmental events must occur each time. By
describing procedures precisely, researchers and other professionals make it more likely
that the procedures will be used correctly each time.
5. Treatment implemented by people in everyday life:
Behaviour modification procedures are developed by professionals or paraprofessionals
trained in behaviour modification. However, behaviour modification procedures often are
implemented by people such as teachers, parents, job supervisors, or others to help people
change their behaviour. People who implement behaviour modification procedures
should do so only after sufficient training. Precise descriptions of procedures and
professional supervision make it more likely that parents, teachers, and others will
implement procedures correctly.
6. Measurement of behaviour change:
One of the hallmarks of behaviour modification is its emphasis on measuring the
behaviour before and after intervention to document the behaviour change resulting from
the behaviour modification procedures. In addition, ongoing assessment of the behaviour
is done well beyond the point of intervention to determine whether the behaviour change
is maintained in the long run. If a supervisor is using behaviour modification procedures
to increase work productivity (to increase the number of units assembled each day), he or
she would record the workers’ behaviours for a period before implementing the
procedures. The supervisor would then implement the behaviour modification procedures
and continue to record the behaviours. This recording would establish whether the
number of units assembled increased. If the workers’ behaviours changed after the
supervisor’s intervention, he or she would continue to record the behaviour for a further
period. Such long-term observation would demonstrate whether the workers continued to
assemble units at the increased rate or whether further intervention was necessary.
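The before, during, and after recording described above amounts to comparing phase averages. A minimal Python sketch follows; all daily counts are invented purely for illustration.

```python
# Hypothetical daily counts of units assembled (illustrative data only).
baseline     = [41, 39, 43, 40, 42]   # before the procedures are implemented
intervention = [48, 52, 50, 53, 51]   # while reinforcement is in place
follow_up    = [49, 50, 52, 48, 51]   # well after the intervention

def mean(xs):
    """Average units assembled per day in a phase."""
    return sum(xs) / len(xs)

# Behaviour change is documented by comparing the phase means:
print(mean(baseline))      # 41.0
print(mean(intervention))  # 50.8
print(mean(follow_up))     # 50.0
# A gain that persists at follow-up suggests the change was maintained.
```

If the follow-up mean had dropped back toward the baseline, the supervisor would know further intervention was necessary.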
7. De-emphasis on past events as causes of behaviour:
As stated earlier, behaviour modification places emphasis on recent environmental events
as the causes of behaviour. However, knowledge of the past also provides useful
information about environmental events related to the current behaviour. For example,
previous learning experiences have been shown to influence current behaviour.
Therefore, understanding these learning experiences can be valuable in analysing current
behaviour and choosing behaviour modification procedures. Although information on
past events is useful, knowledge of current controlling variables is most relevant to
developing effective behaviour modification interventions because those variables, unlike
past events, can still be changed.
8. Rejection of hypothetical underlying causes of behaviour:
Although some fields of psychology, such as Freudian psychoanalytic approaches, might
be interested in hypothesised underlying causes of behaviour, such as an unresolved
Oedipus complex, behaviour modification rejects such hypothetical explanations of
behaviour. Skinner (1974) has called such explanations “explanatory fictions” because
they can never be proved or disproved, and thus are unscientific. These supposed
underlying causes can never be measured or manipulated to demonstrate a functional
relationship to the behaviour they are intended to explain.
HISTORY OF BEHAVIOUR MODIFICATION
The development of behaviour modification is rooted in significant historical
contributions from key figures. Ivan P. Pavlov uncovered respondent conditioning
through experiments demonstrating conditioned reflexes, while Edward L. Thorndike
established the law of effect, showing that behaviours producing favorable outcomes are
likely to be repeated. John B. Watson promoted behaviorism, emphasizing observable
behaviour and environmental control, and B. F. Skinner expanded this field by
distinguishing between respondent and operant conditioning, laying the groundwork for
behaviour modification. Following Skinner's principles, researchers in the 1950s applied
these concepts to human behaviour, leading to thousands of studies validating behaviour
modification techniques. Influential publications and professional organizations, such as
the Society for the Experimental Analysis of Behaviour and the Journal of Applied
Behaviour Analysis, further advanced the field by supporting research and disseminating
findings in behaviour analysis and modification.
TYPES OF BEHAVIOUR MODIFICATION
● Forward Chaining: This approach starts with the first behavior in the sequence. The
trainer reinforces the learner for completing the first step, then introduces the second step,
reinforcing the learner for completing the first and second steps together. This process
continues until the entire sequence is learned.
● Backward Chaining: In this method, the last behavior in the sequence is taught first. The
trainer reinforces the learner for completing the last step, then introduces the second-to-
last step, reinforcing the learner for completing both the second-to-last and last steps
together. This continues until the learner can complete the entire sequence from start to
finish.
Example of Chaining: Teaching a child to brush their teeth might involve chaining the steps of the process: picking up the toothbrush, applying toothpaste, brushing each area of the teeth, rinsing, and putting the toothbrush away. Chaining is often contrasted with shaping, and the two differ in several ways:
1. Nature of Behavior:
o Chaining involves linking multiple discrete behaviors to form a sequence, while
shaping focuses on developing a single behavior through successive
approximations.
2. Reinforcement:
o In chaining, each step is typically reinforced until the entire sequence is learned,
while in shaping, reinforcement is provided for any behavior that is closer to the
desired outcome, regardless of whether it is part of a sequence.
3. Learning Process:
o Chaining requires the learner to learn a complete sequence of actions, while
shaping allows for gradual learning, where each approximation builds on the
previous behavior until the final behavior is achieved.
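The forward-chaining procedure described above can be sketched as a simple training loop. This Python sketch is our own illustration (the step list and function names are invented): steps are taught cumulatively, with reinforcement delivered each time the learner completes the current partial chain.

```python
# Steps of the tooth-brushing chain, taught in forward order.
STEPS = ["pick up toothbrush", "apply toothpaste", "brush teeth", "rinse mouth"]

def forward_chain(mastery_trials=3):
    """Yield (trial, steps_practised, reinforced) until the full chain is learned.

    The learner practises steps 1..k for a fixed number of trials, then the
    next step is added, mirroring the forward-chaining description above.
    """
    trial = 0
    for k in range(1, len(STEPS) + 1):
        for _ in range(mastery_trials):   # practise the current partial chain
            trial += 1
            yield trial, STEPS[:k], True  # reinforce completing steps 1..k

history = list(forward_chain())
print(len(history))    # 12 trials: 3 per partial chain, 4 chain lengths
print(history[-1][1])  # the final trial practises the full sequence
```

Backward chaining would be the mirror image: iterate over `STEPS[-k:]` instead, so the last step is mastered first.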
3. INSTINCTIVE DRIFT: Instinctive drift refers to the tendency for animals to revert to
genetically determined behaviors, even after learning new behaviors through conditioning. This
concept was identified by the Brelands, who were trained by B.F. Skinner, during their studies
on animal training. They observed that animals, despite being trained to perform specific tasks,
would often fall back into instinctive behaviors that were inherent to their species.
For example, raccoons were trained to place coins in a container but would instinctively rub the
coins with their paws, mimicking their natural behavior of washing food before eating. Similarly,
pigs trained to pick up objects would revert to their instinct of rooting and throwing food around.
These behaviors were not taught but were part of the animals’ innate tendencies. The Brelands'
research revealed several key points:
1. Animals are not blank slates and cannot be taught just any behavior.
2. Species differences are crucial in determining what behaviors can be conditioned.
3. Not all behaviors are equally trainable for all species.
These findings suggest that biological instincts can sometimes limit or override the effects of
conditioning, showing that nature and genetics play a significant role in learning. This principle
applies not only to animals but also to humans, where instinctive or ingrained behaviors may
resist modification despite attempts at conditioning.
4. TOKEN ECONOMY: A token economy is a behaviour modification technique in which tokens earned for desirable behaviour can later be exchanged for rewards. For example, in a classroom setting, a teacher may use a token economy to encourage a child to
focus during lessons. The teacher would first select a target behavior, like making eye contact
during instruction. Each time the child successfully engages in the target behavior, they receive a
token, such as a gold star on a chart. After accumulating a certain number of tokens, the child
can exchange them for a predetermined reward, such as extra playtime or a treat.
This system operates similarly to how money functions in society. People work to earn money,
which they can then exchange for goods and services. Token economies are used in various
settings, such as schools, mental health facilities, and rehabilitation programs, to reinforce
positive behaviors.
Additionally, credit card companies and airlines use similar systems, offering reward points or
frequent flyer miles that can be exchanged for goods, services, or discounts. This encourages
desired behaviors, such as using a particular service or spending more money. By consistently
reinforcing behaviors with tokens, individuals can develop long-term behavioral changes.
In combination with other behavior modification tools like time-out (which removes attention as
a form of mild punishment), token economies are effective ways to increase desirable behaviors
and decrease unwanted ones.
Behaviour modification procedures have been used in many areas to help people change a vast
array of problematic behaviours.
In the early days of behaviorism, psychologists like Watson and Skinner focused solely on
observable, measurable behavior, ignoring internal mental processes. However, cognitively
oriented researchers such as Edward Tolman, along with Gestalt psychologists such as Wolfgang
Köhler, remained interested in how the mind influences behavior, studying how people perceive
and organize stimuli. By the 1950s and
1960s, the rise of computers led to comparisons between the human mind and machines, shifting
psychology’s focus to cognitive processes. This gave rise to cognitive learning theory,
emphasizing the role of thoughts, feelings, and expectations in behavior. Martin Seligman later
contributed to this approach through his research on learned helplessness.
Edward Tolman, a cognitive behaviorist influenced by Gestalt psychology, conducted a well-known experiment on latent learning using
three groups of rats (Tolman & Honzik, 1930). Each group was placed in the same maze, but their
experiences differed.
● Group 1: Rats received food as reinforcement for successfully exiting the maze. They
were placed back in the maze repeatedly and reinforced until they could solve the maze
without errors.
● Group 2: Rats did not receive any reinforcement until the 10th day of the experiment.
They were simply placed in the maze and allowed to explore, without reinforcement until
later.
● Group 3 (Control): Rats received no reinforcement for the entire duration of the
experiment.
According to strict behaviorist theory, only the first group of rats should have learned the maze
effectively, as behaviorists believed learning required reinforcement. Initially, this seemed true—
Group 1 solved the maze after repeated trials, while Groups 2 and 3 wandered aimlessly.
However, on the 10th day, when Group 2 finally received reinforcement, they solved the maze
much faster than expected. Instead of needing as many trials as Group 1, they began solving the
maze almost immediately.
Tolman concluded that the rats in Group 2 had formed a "cognitive map" of the maze during the
first nine days, learning the layout without showing it since there was no reason to do so. Once
reinforcement was provided, this hidden or latent learning emerged. Tolman's experiment
challenged traditional operant conditioning, as it demonstrated that learning could occur without
reinforcement, only becoming visible when there was motivation to use the knowledge.
● KÖHLER’S SMART CHIMP: INSIGHT LEARNING
Wolfgang Köhler, a Gestalt psychologist, explored cognitive learning through his studies
with chimpanzees while marooned on an island during World War I. In a well-known
experiment (Köhler, 1925), a chimp named Sultan was presented with a challenge: a banana
placed just out of reach outside his cage. Initially, Sultan solved the problem by using a stick
to retrieve the banana—simple trial-and-error learning.
The challenge was then made harder. With the banana still out of reach, Sultan had two sticks
in his cage that could be joined together to create a longer pole. After unsuccessfully trying
each stick individually, Sultan suddenly had an "aha" moment. He realized he could combine
the sticks to make a longer tool to reach the banana. Köhler referred to this sudden
understanding as insight, where the solution to a problem comes from perceiving the
relationships between elements.
Köhler’s experiment demonstrated that insight is not purely the result of trial-and-error
learning, but a cognitive process involving the sudden integration of information. While early
learning theorists like Thorndike believed animals couldn’t show insight, Köhler’s work
challenged this view, sparking debate on the role of insight in animal learning.
Research underscores the importance of dopamine signals from the nucleus accumbens, with low
dopamine levels being linked to both depression and a diminished ability to avoid threatening
situations (Wenzel et al., 2018). This suggests that learned helplessness—a condition where
individuals feel they have no control over outcomes—can be tied to these biological processes.
Learned helplessness plays a significant role in coping with chronic or acute health conditions,
impacting both the individual with the disorder and family members making critical medical
decisions (Camacho et al., 2013; Sullivan et al., 2012).
The concept of learned helplessness also applies in educational settings. Many students believe
they are poor at subjects like math due to past failures. This belief may cause them to exert less
effort, reinforcing a cycle of failure. Such thinking reflects learned helplessness, where previous
negative experiences create a mental block, preventing students from trying harder.
Alternatively, it could be that they simply haven’t experienced enough success or feelings of
control in the subject.
Cognitive learning, which involves understanding the mental processes underlying behavior,
extends beyond just learned helplessness. In the next section, we explore observational learning,
often summarized as “monkey see, monkey do,” where individuals learn by watching others’
actions.
● OBSERVATIONAL LEARNING
Observational learning is the learning of new behavior through watching the actions of a model
(someone else who is doing that behavior). Sometimes that behavior is desirable, and sometimes it is
not, as the next section describes.
Albert Bandura’s classic study on observational learning involved preschool children watching
a model interact with toys (Bandura et al., 1961). In one condition, the model played non-
aggressively, ignoring a "Bobo" doll. In another, the model was aggressive toward the doll,
kicking, yelling, and hitting it with a hammer.
When left alone, children exposed to the aggressive model imitated the behavior, attacking the
doll. However, those who observed the non-aggressive model did not exhibit such behavior. This
demonstrated that learning can occur without direct reinforcement, a concept known as the
learning/performance distinction.
In a later version of the study, children watched a film in which the aggressive model was either
rewarded or punished. Children who saw the model rewarded replicated the aggression, while
those who saw the model punished did not, until they themselves were offered a reward to imitate
the behavior. This confirmed that all of the children had learned the aggression; the observed
consequences for the model affected only whether they performed it.
Bandura’s research has raised concerns about violent media exposure and its influence on
children's aggression. Studies over decades have linked media violence with increased
aggression in children and young adults (Allen et al., 2018; Anderson et al., 2015). On the
positive side, prosocial behavior has also been shown to increase when children watch media that
models helping behavior (Anderson et al., 2015; Prot et al., 2014).
VERBAL LEARNING
Verbal learning, distinct from conditioning, is a form of learning predominantly seen in humans.
Unlike conditioning, where associations are formed through repeated stimulus-response pairings,
verbal learning involves acquiring knowledge about objects, events, and their properties
primarily through language. This process enables individuals to understand and categorize the
world in terms of words and symbols, allowing complex information to be stored, organized, and
recalled.
In verbal learning, words do not function merely as labels; they develop associations with each
other, forming intricate mental networks. For example, hearing the word "sun" may evoke
related terms like "warmth," "light," or "summer." This associative learning mechanism supports
memory, comprehension, and communication, as humans are able to recall and relate words in
meaningful ways.
Psychologists have developed a range of experimental methods to explore how verbal learning
occurs, often in controlled laboratory environments. These methods focus on understanding how
different types of verbal materials are learned and recalled. For example, nonsense syllables
(combinations of letters that do not form actual words, like "BAF" or "VIK") are often used to
study basic memory processes without the influence of prior knowledge or meaning. Similarly,
familiar and unfamiliar words are used to observe the effects of prior experience on learning,
with familiar words typically being easier to learn and recall.
Other materials, such as sentences and paragraphs, allow researchers to examine more
complex language processing and the effects of context on memory.
Verbal learning has undergone extensive experimental research, revealing that various factors
influence the learning process. Key determinants include the features of the material to be
learned, such as the length of the list and the meaningfulness of the content. Meaningfulness is
assessed through several metrics, including the number of associations elicited in a set time,
familiarity, frequency of usage, relationships among the words, and the sequential dependency of
each word on the previous ones. Nonsense syllables are often used in studies, with lists available
at different association levels. For consistent results, nonsense syllables should have uniform
association values.
Research has yielded several generalizations: as the list length and the occurrence of low-
association words increase, learning time also increases. However, this extended time
strengthens learning, aligning with the total time principle, which states that a fixed amount of
time is necessary to learn a given amount of material, regardless of how many trials it takes.
Essentially, the more time spent, the stronger the learning.
When participants are permitted free recall rather than following a fixed sequence, verbal
learning becomes organizational. In free recall, participants reorder items, often grouping them
by category. Bousfield first observed this in an experiment using a list of 60 words from four
semantic categories (names, animals, professions, vegetables) presented randomly. Participants
tended to cluster words by category in their recall, demonstrating a phenomenon called
category clustering. This finding underscores that even randomly presented words can be
organized during recall, driven by either category-based or subjective organization. Subjective
organization indicates that individuals may recall words in personally meaningful ways,
reflecting their unique organizational patterns.
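Category clustering can be quantified with a simple index; this minimal version is our own construction, not a standard published measure. It counts how many adjacent pairs in a sequence come from the same semantic category, letting us compare the random presentation order with the clustered recall order.

```python
def adjacent_repetitions(sequence, category_of):
    """Count adjacent pairs of items that share a semantic category."""
    return sum(
        category_of[a] == category_of[b]
        for a, b in zip(sequence, sequence[1:])
    )

# A miniature Bousfield-style list: two items from each of four categories.
category_of = {
    "Ivan": "name", "Rita": "name",
    "zebra": "animal", "otter": "animal",
    "doctor": "profession", "baker": "profession",
    "carrot": "vegetable", "onion": "vegetable",
}

# Words presented in random order, but recalled grouped by category:
presented = ["zebra", "doctor", "Ivan", "carrot", "otter", "Rita", "onion", "baker"]
recalled  = ["Ivan", "Rita", "zebra", "otter", "doctor", "baker", "carrot", "onion"]

print(adjacent_repetitions(presented, category_of))  # 0 same-category neighbours
print(adjacent_repetitions(recalled, category_of))   # 4: recall is clustered
```

A recall score well above the presentation score is the signature of category clustering; subjective organization would show the same effect, but with groupings idiosyncratic to the participant.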
While verbal learning is typically intentional, it may also occur incidentally. During incidental
learning, participants might notice word features such as rhyming, identical starting letters, or
shared vowels. Thus, verbal learning encompasses both intentional and incidental aspects,
allowing participants to recognize specific features of words consciously or subconsciously.
DISCRIMINATION LEARNING
Discrimination learning refers to the process through which individuals learn to differentiate
between various stimuli and respond appropriately based on those distinctions. It involves
recognizing specific features that distinguish one stimulus from another, enabling a person to
make informed decisions or react differently depending on the stimulus presented. This type of
learning is critical for survival and adaptation, allowing individuals to navigate their environment
effectively.
Mechanisms of Discrimination Learning
1. Stimulus Discrimination:
This involves learning to respond differently to similar stimuli. For instance, a person
may learn to distinguish between different types of vehicles by their horn sounds. This
ability relies on the recognition of distinctive features of each stimulus.
2. Reinforcement:
Reinforcement plays a crucial role in discrimination learning. When a response to a
specific stimulus is rewarded (positive reinforcement) or a negative outcome is avoided
(negative reinforcement), the likelihood of that response occurring again increases. Over
time, this reinforcement strengthens the association between the stimulus and the
appropriate response.
3. Generalization:
Discrimination learning is often contrasted with generalization, where a response is
made to similar stimuli. For example, if a dog learns to sit when commanded, it may also
sit when hearing similar commands. Discrimination learning involves narrowing down
responses to specific stimuli while minimizing responses to others.
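The role of reinforcement described above can be illustrated with a toy simulation (our own construction, not from the text): an agent keeps a response strength for each stimulus, strengthens responding when a response is reinforced, and weakens it when reinforcement is withheld, so it gradually responds to the reinforced stimulus ("S+") but not the similar unreinforced one ("S-").

```python
import random

random.seed(0)  # reproducible run

# Probability of responding to each stimulus (starts undifferentiated).
strength = {"S+": 0.5, "S-": 0.5}
LEARNING_RATE = 0.2

for _ in range(500):
    stimulus = random.choice(["S+", "S-"])
    if random.random() < strength[stimulus]:      # the agent responds
        # Responses to S+ are reinforced; responses to S- are not.
        target = 1.0 if stimulus == "S+" else 0.0
        strength[stimulus] += LEARNING_RATE * (target - strength[stimulus])

# Responding to S+ should now be strong, responding to S- weak.
print(round(strength["S+"], 2))  # high: responding to S+ is maintained
print(round(strength["S-"], 2))  # low: responding to S- has weakened
```

Note that generalization appears at the start (both stimuli are responded to equally) and discrimination emerges only through differential reinforcement, matching the contrast drawn in the text.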
Applications of Discrimination Learning
1. Animal Training:
Discrimination learning is widely used in training animals. Trainers teach pets to respond
to specific commands or cues by rewarding them for correctly identifying the stimulus.
For example, a dog may learn to differentiate between the command "sit" and "stay."
2. Cognitive Development:
In humans, discrimination learning is essential for cognitive development. Children learn
to differentiate between letters, numbers, and shapes, which lays the foundation for
reading and mathematical skills.
3. Therapeutic Settings:
In psychological therapy, discrimination learning techniques can help individuals with
anxiety or phobias learn to differentiate between real threats and harmless stimuli. For
example, a person who fears dogs may undergo desensitization training to learn that not
all dogs pose a danger.
Factors Influencing Discrimination Learning
1. Salience of Stimuli:
More salient (noticeable) stimuli are easier to discriminate. For example, bright colors or
loud sounds attract more attention and facilitate faster learning.
2. Complexity of Stimuli:
Simpler stimuli are typically easier to discriminate. For instance, distinguishing between
two different shades of color may be more challenging than identifying a red light from a
green light.
3. Practice and Experience:
Repeated exposure to stimuli enhances the ability to discriminate. The more often an
individual encounters and interacts with different stimuli, the better their ability to
differentiate between them.
4. Feedback:
Receiving feedback after making a response can significantly improve discrimination
learning. Correct responses should be reinforced, while incorrect responses should be
corrected to facilitate better understanding.
3. Technology-Enhanced Learning
The integration of technology in education and psychology has transformed learning
experiences:
● E-Learning and Online Education: The rise of e-learning platforms and online courses
allows for flexible and accessible learning. These platforms often employ multimedia
resources, interactive assessments, and social networking features to enhance engagement
and retention.
● Gamification: Incorporating game design elements into learning activities has become
increasingly popular. Gamification leverages competition, rewards, and engaging
narratives to motivate learners and enhance their overall experience.
● Artificial Intelligence (AI) and Adaptive Learning: AI-driven tools personalize
learning experiences by assessing individual strengths and weaknesses. Adaptive
learning systems adjust content and pacing based on real-time performance, fostering
personalized learning paths.
7. Interdisciplinary Approaches
There is a growing trend toward interdisciplinary research in learning, blending insights from
psychology, education, neuroscience, and sociology:
● Integrative Learning Models: Researchers are developing models that combine
cognitive, emotional, and social aspects of learning. This holistic approach acknowledges
that learning is influenced by a range of factors, including environmental, cultural, and
individual differences.
● Real-World Applications: Collaborative research initiatives focus on applying
psychological principles to address real-world challenges in education, such as dropout
rates, literacy development, and educational disparities.
GARCIA EFFECT
The Garcia Effect, named after psychologist John Garcia, refers to a phenomenon in which
animals, including humans, develop a strong aversion to a taste or food that has been associated
with illness or negative consequences. This conditioned taste aversion can occur even if the
illness happens hours after the consumption of the food. For example, if someone eats a
particular food and then becomes ill (due to a virus, not the food itself), they may develop an
aversion to that food in the future. This effect illustrates how powerful associations can be
formed between stimuli (in this case, taste) and experiences, highlighting the role of
evolutionary factors in learning. It suggests that organisms are biologically prepared to learn
certain associations more readily than others, emphasizing the importance of survival
mechanisms.
PREMACK PRINCIPLE (GRANDMA'S RULE)
The Premack Principle, often referred to as Grandma’s Rule, states that a more probable
behavior can be used to reinforce a less probable behavior. In simpler terms, it suggests that
individuals are more likely to engage in a less desired activity if it is followed by a more desired
activity. For example, a child may be encouraged to finish their homework (less preferred
activity) before they can play video games (more preferred activity). This principle underscores
the importance of motivation in learning, as it highlights how preferences can be utilized to
encourage specific behaviors or tasks.
Motivational Conflicts
Motivational conflicts occur when an individual experiences competing desires or goals that
may be mutually exclusive, leading to a struggle in decision-making. These conflicts can
significantly influence behavior, emotional well-being, and overall motivation. Understanding
the dynamics of these conflicts is essential for psychologists, educators, and anyone interested in
human behavior. Motivational conflicts can be categorized into three main types:
1. Approach-Approach Conflict
This type of conflict arises when an individual must choose between two desirable options. Both
choices are appealing, making the decision process difficult. For example, someone might face
an approach-approach conflict when deciding between two job offers that both promise
exciting career opportunities, attractive salaries, and positive work environments.
Implications:
● Emotional Tension: The individual may feel anxiety or indecision as they weigh the pros and
cons of each option.
● Satisfaction: Once a choice is made, individuals often experience relief and satisfaction, but
some may also experience regret over the option they did not choose.
2. Avoidance-Avoidance Conflict
This conflict occurs when an individual must choose between two undesirable options. Each
option presents its own set of negative consequences, making the decision even more
challenging. For instance, a student may face an avoidance-avoidance conflict when deciding
whether to complete a difficult assignment or take a failing grade in a course.
Implications:
● Procrastination: Individuals may delay making a decision or resort to avoidance behaviors, such
as procrastination, as they dread both options.
● Emotional Distress: This type of conflict can lead to increased stress, anxiety, and feelings of
helplessness, as the individual feels trapped between two unpleasant outcomes.
3. Approach-Avoidance Conflict
This conflict involves a single option that has both positive and negative consequences.
Individuals are drawn to the option due to its attractive aspects but simultaneously repelled by its
drawbacks. For example, a person may feel an approach-avoidance conflict when considering
accepting a job offer that requires relocation. The job may offer a significant salary increase and
exciting new challenges, but the move could also bring about stress, leaving behind familiar
surroundings and social networks.
Double Approach-Avoidance Conflict
A double approach-avoidance conflict occurs when an individual is faced with two options, each
having both positive and negative consequences. This type of conflict creates a more complex
decision-making scenario, as the individual must weigh the appealing and unappealing aspects of
both choices.
Example:
Consider a student deciding between two universities.
● Option 1: University A offers a prestigious program and excellent job prospects (positive
aspects) but is located far from home and has a high cost of living (negative aspects).
● Option 2: University B is closer to home, making it more affordable and providing emotional
support (positive aspects), but it lacks the same level of prestige and networking opportunities
(negative aspects).
In this scenario, the student faces a double approach-avoidance conflict as they evaluate the
attractive and undesirable features of both universities, leading to increased emotional tension
and complexity in their decision-making process.
Learning Styles
Learning styles refer to the preferred methods or approaches individuals use to acquire, process,
and retain new information. Understanding these styles can enhance educational strategies and
improve learning outcomes. Commonly referenced learning styles include:
● Auditory Learners: These individuals prefer listening to information and benefit from
discussions, lectures, and audio materials. They often find it easier to remember
information presented verbally.
● Visual Learners: Visual learners grasp concepts best through visual aids, such as
diagrams, charts, and videos. They often use colors, illustrations, and other visual tools to
enhance their understanding.
● Kinesthetic Learners: Kinesthetic learners prefer hands-on experiences and learn
through physical activity and manipulation of objects. They thrive in environments where
they can engage in experiments, role-playing, or other interactive activities.
● Tactile Learners: Similar to kinesthetic learners, tactile learners benefit from using
touch and movement to learn. They often engage in activities that allow them to handle
materials, conduct experiments, or perform tasks to understand concepts better.
COGNITIVE STYLES
Cognitive Styles are the characteristic ways in which individuals think, perceive, and remember
information. These styles impact how individuals approach problem-solving, learning, and
interaction with new material. Cognitive styles are relatively stable over time and differ from
person to person, often influencing educational and work performance. Here are some commonly
discussed cognitive styles:
● Field-Dependent vs. Field-Independent: Field-dependent individuals rely on external
cues and surrounding context to interpret information. They tend to excel in social
situations and group learning environments. Field-independent individuals, on the other
hand, prefer to analyze information in isolation from external context and tend to be self-
directed learners who thrive in analytical tasks.
● Verbal vs. Visual: Verbal learners process information more effectively through words,
whether written or spoken. They benefit from reading, writing, and verbal instructions.
Visual learners, in contrast, understand concepts best when presented with images, charts,
and other visual aids, allowing them to make connections through imagery rather than
language.
● Sequential vs. Global: Sequential learners favor a linear, step-by-step approach,
focusing on details in a logical order. They are methodical in their problem-solving and
prefer tasks that follow a clear, ordered path. Global learners, however, need to
understand the overall concept first. They tend to view information holistically,
synthesizing information broadly before attending to specific details.
MEMORY
Memory is an active system that receives information from the senses, puts that information
into a usable form, organizes it as it stores it away, and then retrieves the information from
storage.
STAGES OF MEMORY
Although there are several different models of how memory works, all of them involve the same
three processes: getting the information into the memory system, storing it there, and getting it
back out.
It sounds like memory encoding works just like the senses—is there a difference?
Encoding is not limited to turning sensory information into signals for the brain. Encoding is
accomplished differently in each of three different storage systems of memory. In one system,
encoding may involve rehearsing information over and over to keep it in memory, whereas in
another system, encoding involves elaborating on the meaning of the information—but let’s
elaborate on that later.
TYPES OF MEMORY
1. Sensory Memory
Sensory memory holds sensory information (visual, auditory, tactile) for a very brief period,
usually less than a second. It serves as a buffer for incoming sensory stimuli, allowing the brain
to process them before they decay.
Iconic memory (visual) and echoic memory (auditory) are the two main types of sensory
memory. While brief, sensory memory is crucial for attention and helps direct our focus to
relevant stimuli.
2. Short-Term Memory (STM)
STM holds limited information for a short period, typically 15-30 seconds. It is often referred to
as "working memory" because it actively holds and manipulates information temporarily.
STM capacity is limited (often around 7±2 items), but techniques like chunking can increase this
capacity. STM plays a crucial role in immediate tasks, such as mental calculations and
comprehending language.
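The chunking technique mentioned above can be illustrated with a short sketch; the digit string and group size below are arbitrary choices, not values from the research literature:

```python
def chunk(digits, size=3):
    # Group single digits into larger units so fewer "items" compete
    # for STM's limited 7±2 slots; the group size here is arbitrary.
    return [digits[i:i + size] for i in range(0, len(digits), size)]

phone_like = "4971862305"       # ten separate items: near STM's limit
grouped = chunk(phone_like)     # four items: "497", "186", "230", "5"
```

Ten individual digits press against the 7±2 limit, but the same digits regrouped into four chunks fit comfortably.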
3. Working Memory
A subset of STM, working memory involves not just holding information but actively processing
it. Working memory is essential for complex cognitive tasks like reasoning, comprehension, and
problem-solving.
It includes different components, such as the phonological loop (for auditory information),
visuospatial sketchpad (for visual and spatial data), and central executive (for managing attention
and integrating information).
4. Long-Term Memory (LTM)
LTM stores vast amounts of information over extended periods, from minutes to an entire
lifetime. Unlike STM, LTM has a theoretically unlimited capacity.
LTM is divided into declarative (explicit) memory and non-declarative (implicit) memory.
Declarative memory includes facts and events, while non-declarative memory involves skills and
learned behaviors that do not require conscious recall (e.g., riding a bike).
● Episodic Memory: Stores personal experiences, including specific events and situations.
Episodic memory enables people to recall past events with context and details like time
and place.
● Semantic Memory: Stores general knowledge about the world, such as concepts,
language, and facts, independent of personal experience.
● Procedural Memory: Involves memory for motor skills and actions, like riding a bike or
typing. Procedural memory is largely unconscious, as these skills become automatic with
practice.
● Priming: Occurs when exposure to a stimulus influences a response to a subsequent
stimulus, enhancing memory recall or recognition.
● Conditioned Responses: Formed through classical and operant conditioning, these
memories involve learned associations, like salivating at the smell of food.
NEURAL BASIS OF MEMORY
1. Hippocampus
Critical for encoding and transferring new explicit memories to long-term storage. It is
particularly involved in spatial memory and contextual learning, helping individuals remember
where and when events occurred. The hippocampus is highly plastic and involved in memory
consolidation, a process where memories are stabilized after initial acquisition.
2. Amygdala
Plays a role in emotional memories, particularly those associated with fear and reward. The
amygdala interacts with the hippocampus to enhance the consolidation of emotionally significant
memories, making them more vivid and lasting.
3. Prefrontal Cortex
Involved in working memory and executive functions, such as decision-making, planning, and
regulating attention. The prefrontal cortex manages attention and helps retrieve information from
long-term memory, particularly for tasks requiring sustained focus and manipulation of
information.
4. Cerebellum
Essential for procedural memory and motor learning, the cerebellum aids in the coordination of
complex motor actions and muscle memory. It allows individuals to perform learned skills, such
as riding a bike, without conscious effort.
5. Basal Ganglia
Also involved in procedural memory, particularly habit formation and learned movements. It
works with the cerebellum to refine motor skills through practice and repetition.
6. Temporal Lobes
Home to the hippocampus and other memory-associated structures, the temporal lobes are
integral for long-term memory, especially episodic and semantic memories.
EMOTION AND MEMORY
Emotions play a significant role in memory, influencing how information is encoded, stored, and
recalled. Strong emotional experiences tend to be remembered more vividly and for longer
durations. This is due to the amygdala’s involvement in memory consolidation, which prioritizes
emotionally charged memories. However, extreme stress or trauma can impair memory
encoding, leading to incomplete or fragmented recollections, as seen in some cases of trauma-
related disorders.
Retrieval Cues
Retrieval cues play a significant role in how we remember information by helping us access
memories stored in long-term memory (LTM). While maintenance rehearsal, or repeating
information, offers only one type of retrieval cue (the sound of the word or phrase), elaborative
rehearsal provides multiple retrieval cues by linking the information to its meaning and
connecting it with existing knowledge. This depth of encoding makes information easier to
retrieve since multiple cues are stored alongside it (Karpicke, 2012; Pyc et al., 2014; Robin &
Moscovitch, 2017).
Retrieval cues aren’t limited to direct associations with the material. According to the encoding
specificity principle, our environment and context at the time of learning can also become
retrieval cues, aiding memory recall when those conditions are recreated. This phenomenon is
known as context-dependent memory, where the physical environment present during learning
serves as a powerful retrieval aid.
For example, if you learned information in a specific room, taking a test in the same room might
improve recall. Similarly, in a classic study by Godden and Baddeley (1975), scuba divers
learned a list of words either underwater or on land. The divers recalled the words significantly
better when their recall environment matched the environment in which they originally learned
the words. This illustrates how environmental cues, such as being in water or on land, can
facilitate memory retrieval.
Additionally, state-dependent learning refers to cues related to internal states, such as mood or
physiological condition. Memories formed in a particular emotional or psychological state are
often easier to retrieve when in a similar state. For example, Eich and Metcalfe (1989) showed
that participants recalled words better when their mood matched the mood they were in during
learning. This research underscores the impact of retrieval cues, which may be either external
(environment) or internal (state), on enhancing memory access.
There are two kinds of retrieval of memories: recall and recognition. It is the difference between
these two retrieval methods that makes some exams seem harder than others. In recall, memories
are retrieved with few or no external cues, like filling in blanks on an application form.
Recognition, on the other hand, involves looking at or hearing information and matching it to
what is already in memory. A word-search puzzle, where the words are already written down and
simply need to be circled, is an example of recognition.
More formally, recognition is the ability to match a piece of information or a stimulus to a stored
image or fact (Borges et al., 1977; Gillund & Shiffrin, 1984; Raaijmakers & Shiffrin, 1992).
Recognition is usually easier than recall because the cue is the actual object, word, or sound to be
identified as familiar. Tests like multiple-choice,
matching, and true–false rely on recognition. Recognition is particularly accurate for images,
especially human faces (Russell et al., 2009; Standing et al., 1970).
However, recognition isn't foolproof. Sometimes, there’s enough similarity between a new
stimulus and one in memory to create a false positive—when a person believes they recognize
something but does not actually have it in memory (Kersten & Earles, 2016; Muter, 1978). False
positives can have serious consequences, as illustrated in a Delaware case where eyewitnesses
mistakenly identified an innocent priest as a robbery suspect due to suggestive lineup practices.
AUTOMATIC ENCODING
Some long-term memories require effortful encoding or maintenance rehearsal to move from
short-term memory (STM) into long-term memory (LTM). However, many memories are stored
with little effort through a process known as automatic encoding (Kvavilashvili et al., 2009;
Mandler, 1967; Schneider et al., 1984). People often remember everyday details, such as the
passage of time, spatial layout, or frequency of events, without conscious effort. For example, a
person might not try to remember how many times cars have passed by but could still estimate it
as “often” or “hardly any.”
A unique type of automatic encoding happens when an unexpected, emotionally charged event
creates a highly vivid memory. Known as flashbulb memories (Hirst & Phelps, 2016; Kraha &
Boals, 2014; Neisser, 1982), these memories capture details of intense moments, almost like a
mental “flash picture.” Flashbulb memories are often linked to significant, widely-shared events.
For instance, Baby Boomers might remember where they were when President Kennedy was
shot, while Millennials may recall the September 11 attacks. More recent generations may
remember tragic events like the 2018 Parkland shooting. Personal flashbulb memories also
occur, capturing details of meaningful or emotional experiences such as first dates or
graduations.
The vividness of flashbulb memories is thought to be tied to the strong emotions felt during the
event. Emotional reactions trigger the release of hormones that enhance memory formation
(Dolcos et al., 2005; McEwen, 2000; McGaugh, 2004), and memory-enhancing proteins may
also play a role (Korneev et al., 2018). However, research shows that while flashbulb memories
feel vivid, they are not always accurate. Despite feeling real, these memories can decay or
change over time, just like other memories (Neisser & Harsch, 1992). Studies indicate that even
memories of stressful events, such as witnessing a crime, are often less accurate than other
memories (Loftus, 1975), reminding us that no memory is entirely immune to the effects of time
and alteration.
Constructive Processing of Memories
Many people believe that recalling a memory is like an “instant replay.” However, as new
long-term memories are formed, old ones can be altered or lost (Baddeley, 1988). In reality,
memories, including vivid flashbulb memories, are never completely accurate, and inaccuracies
tend to increase over time. Sir Frederic Bartlett (1932), a memory schema theorist, viewed
memory as a storytelling process, where retrieval is akin to problem-solving. Individuals
reconstruct past events by inferring from current knowledge and available evidence (Kihlstrom,
2002a).
Elizabeth Loftus and other researchers (Hyman, 1993; Hyman & Loftus, 1998, 2002) support the
idea of constructive processing in memory retrieval. In this view, memories are literally “built”
or reconstructed from information encoded earlier. Each time a memory is retrieved, it may be
altered or revised to incorporate new details or exclude previously included ones.
The misinformation effect occurs when misleading information presented after an event alters a
person's memory of that event (Loftus et al., 1978). For example, if eyewitnesses to a crime talk
to each other, one person's description may influence another's recall. In a study, participants
viewed a slide presentation of a traffic accident featuring a stop sign. However, a subsequent
summary incorrectly referred to it as a yield sign. Participants exposed to this misleading
information were less accurate in recalling the sign's nature compared to those who received no
such misinformation. This illustrates how new information, even in a different format (e.g.,
written vs. visual), can reconstruct memories inaccurately.
False Memory Syndrome
False memory syndrome refers to the development of inaccurate memories through suggestion,
often during hypnosis (Frenda et al., 2014). While hypnosis can aid in recalling real memories, it
can also lead to the creation of false ones, increasing confidence in both true and false memories
(Bowman, 1996). Therapists may unintentionally induce false memories during sessions.
Research indicates that even mindfulness meditation can enhance the likelihood of false
memories (Wilson et al., 2015).
False memories are constructed in the brain similarly to real ones, particularly when visual
imagery is involved (Gonsalves et al., 2004). For example, studies using fMRI scans show that
people cannot always distinguish between real and imagined visual images. This can explain
how prompting someone to recall a specific person at a crime scene may lead them to
inaccurately remember that individual being present.
In her "Lost in the Mall" study, Elizabeth Loftus explored how memories can be manipulated
and how people can come to remember events that never actually happened. The experiment
aimed to implant a false memory into participants’ minds: being lost in a shopping mall as a
child. Here’s how it worked:
Participants were asked to recall four childhood events that family members provided. Three of
these stories were real, and one—the story of getting lost in the mall—was entirely fabricated.
Family members played a role in making the false story feel plausible, adding details that made it
sound real. When asked to recall these events, many participants were able to provide details
about the imaginary mall experience. They remembered things like how they felt, what they saw,
and even what the "rescuer" looked like.
This study showed how easily memories can be distorted or created from scratch. Loftus
demonstrated that through suggestion and association with familiar memories, it’s possible for
someone to "remember" vivid details of an event that never occurred. The implications extend
beyond childhood memories: they raise questions about the reliability of eyewitness testimony
and suggest that our memories are not static records but are instead open to revision,
reinterpretation, and even invention.
Trustworthiness of Memories
While false memories complicate the reliability of recollections, certain factors affect their
plausibility. Research by Kathy Pezdek indicates that only plausible events can typically be
implanted as false memories. In her studies, children were more likely to “remember” plausible
false events (like getting lost) compared to implausible ones (like receiving a rectal enema)
(Hyman et al., 1998; Pezdek et al., 1997).
However, Loftus’s earlier work suggests that even implausible memories can be implanted with
the right suggestion, especially if the event is framed as typical of others’ experiences. Two key
steps must occur for individuals to reinterpret their thoughts about false events as real memories:
first, coming to believe the event is plausible, and second, misattributing imagined details of the
event to actual experience.
The personality traits of those reporting such memories also play a role. For example, individuals
claiming to have experienced alien abductions were more likely to recall false memories
compared to controls (Clancy et al., 2002). Factors such as susceptibility to hypnosis, depression
symptoms, and unusual beliefs predict higher rates of false recall.
THE PARALLEL DISTRIBUTED PROCESSING (PDP) MODEL
The Parallel Distributed Processing (PDP) model, also known as the connectionist model,
presents a framework for understanding cognitive processes through a network of computational
elements called units, inspired by neural functioning. This model posits that information is
represented in the brain through various activation patterns among these units, which can have
activation values ranging from 0 to 1. The connections between units can be either positive,
enhancing activation, or negative, diminishing it. Knowledge is stored within these connections,
influencing how input data is processed and how memories are recalled.
For example, a person standing between two groups discussing a topic may recall information
from both conversations at once because of overlapping activation patterns. This reflects the
nature of parallel processing, where multiple processes occur simultaneously, contrasting with
serial processing, which handles tasks in a sequential manner and often leads to slower and less
accurate results. Key features of the PDP model include:
1. Cognitive Processes: The PDP model suggests that all cognitive behaviors are governed
by the same underlying principle. Units adjust their activation levels based on the total
inputs they receive through their connections. Various models utilize different methods
for aggregating inputs and adjusting activations, while cognitive filtering facilitates the
flow of activations in neural networks.
2. Interactive Processing: The processing in PDP models is dynamic and interactive. The
transfer of activation is bi-directional; when a signal is sent from one unit to another, the
receiving unit also sends feedback back. This interaction allows for ongoing adjustments
and modifications to the data, enhancing the model's dynamism.
3. Knowledge Encoding: Unlike traditional models that store data in separate structures,
knowledge in the PDP framework is encoded directly in the connections between units.
This means that the way the network behaves is determined by the interconnections and
their activation patterns during processing.
4. Continuous Processing: In the PDP model, processing, learning, and representations are
continuous. Representations are coded as distributed activation patterns across units,
allowing for similarities among representations. These similarities form the basis for
generalizing activations within the network, indicating that similar activation patterns
yield familiar outputs.
5. Environmental Dependence: The PDP model emphasizes the importance of the
environment's statistical structure in understanding cognitive processing. Learning occurs
through the activation patterns formed in neural networks, with daily experiences
contributing to errors and expectations that shape processing.
6. Patterns of Activation: Within the PDP framework, stimuli are represented by patterns
of activation across multiple units. Each input corresponds to unique activation patterns
among numerous neurons, with each neuron contributing to the representation of various
cognitive contents such as colors, images, words, and structures.
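The unit-and-connection idea above can be sketched in a few lines of Python. The unit names, weight values, and the logistic squashing function are illustrative assumptions, not part of any particular published PDP model:

```python
import math

def sigmoid(x):
    # Squash a unit's total input into the 0-1 activation range.
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical three-unit network. Positive weights enhance activation;
# negative weights diminish it. Connections run in both directions, so
# a receiving unit also sends feedback back.
weights = {
    ("dog", "mammal"): 0.9,   # excitatory connection
    ("mammal", "dog"): 0.9,
    ("dog", "fish"): -0.6,    # inhibitory connection
    ("fish", "dog"): -0.6,
}
activation = {"dog": 1.0, "mammal": 0.0, "fish": 0.5}

def update(activation, weights):
    # Every unit adjusts its activation in parallel, based on the
    # total input arriving over its connections.
    total = {unit: 0.0 for unit in activation}
    for (src, dst), w in weights.items():
        total[dst] += w * activation[src]
    return {unit: sigmoid(total[unit]) for unit in activation}

activation = update(activation, weights)
```

After one parallel update, activity has spread from the active "dog" unit to the positively connected "mammal" unit, while the negatively connected "fish" unit is suppressed.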
LEVELS OF PROCESSING (LOP) THEORY
The Levels of Processing (LOP) Theory by Craik and Lockhart (1972) posits that memory
retention is influenced by the depth at which information is processed. The theory contends that
the more deeply information is processed, the more enduring the resulting memory trace, in
contrast to the structure-based Multi-Store Model, which emphasizes separate short-term and
long-term memory systems. Instead, LOP theory suggests memory is simply the by-product of
information processing and lacks distinct structures like STM and LTM.
Depth of Processing
Craik and Lockhart defined depth as the degree of meaningful engagement with a stimulus,
rather than the number of analyses performed on it. They proposed that memory retention
depends on whether information undergoes shallow or deep processing:
1. Shallow Processing: This involves basic sensory analysis, such as:
o Structural Processing: Encoding only physical qualities (e.g., typeface).
o Phonemic Processing: Encoding based on sounds (e.g., rhyming).
o Shallow processing often involves maintenance rehearsal, or mere repetition, leading to
short-lived memory traces.
2. Deep Processing: This involves semantic encoding, where one engages in meaningful
analysis, linking information with prior knowledge or other concepts. For example,
understanding the meaning of a word and relating it to other concepts leads to better
recall, as it involves elaborative rehearsal, a process associated with more sustained
memory retention.
Craik and Tulving (1975) investigated how different types of processing impact recall. Participants
processed words under three conditions: structural, phonemic, and semantic. After a distraction
task, they were asked to identify the originally presented words from a larger list. The study
found that participants recalled more semantically processed words than phonemically or
structurally processed words. This confirmed that deeper processing, involving elaboration and
meaning, results in improved recall.
Strengths
LOP theory shifted memory research away from structure-based models toward understanding
processing depth. This was significant in demonstrating that memory depends on more than just
storing information in distinct memory types (STM/LTM). It also led to numerous studies
supporting the effectiveness of deep, semantic processing in enhancing memory recall.
Additionally, LOP theory has practical value in education and study techniques, as deeper
processing aids in better retention.
Weaknesses
Lack of Explanation: The theory describes depth but fails to explain why deeper processing
results in better memory.
Vague Concept of Depth: The concept is difficult to objectively measure, and it is challenging to
isolate depth from other factors like effort or time spent on processing.
Circular Reasoning: Deep processing is predicted to yield better memory, but memory
effectiveness is also used to define depth, creating a potential circular argument.
Evidence for Memory Structures: Studies such as those on H.M. and the serial position effect
suggest that distinct memory structures (STM/LTM) are present, which LOP theory overlooks.
HIERARCHICAL NETWORK MODEL OF SEMANTIC MEMORY
The Hierarchical Network Model of Semantic Memory by Collins and Quillian (1969)
explains how our memory for general knowledge (like facts, colors, and sounds) is organized.
Unlike personal memories, semantic memory holds information we learn over time.
When we try to remember something, the brain activates a “node,” which spreads to related
ideas. This makes it easier to retrieve information from memory.
Why It’s Useful in Classrooms
This model suggests students may need more time to respond to questions. In classrooms,
teachers often expect answers within a second, which may not be enough time for students to
find and say their answers. According to the model, remembering involves activating linked
ideas, which can take a few extra moments.
Cognitive Economy
Cognitive economy means storing information efficiently. For example, instead of repeating that
fish and birds are “animals” in every category, this model keeps general facts at a higher level.
This structure helps us remember without unnecessary repetition.
● At the Top: Imagine the broad category Animal. This is where we group all types of living
creatures.
● Branching Out: From Animal, we can break it down into different groups:
o Birds: creatures that can fly and often have feathers. Properties: they have wings and
often build nests.
o Fish: creatures that live underwater. Properties: they breathe through gills and typically
have fins.
o Mammals: this group includes animals like dogs and cats. Dogs have fur and bark to
communicate; cats also have fur, meow, and are often independent.
How It Works:
When you think about a dog, your brain quickly activates related concepts. You might remember
that a dog is a mammal and also an animal. This network of associations makes it easier to
recall details about dogs—like that they have fur and bark.
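The inheritance idea behind cognitive economy can be sketched as a small lookup structure. The concepts and properties below are an illustrative fragment, not the network Collins and Quillian actually tested:

```python
# Hypothetical fragment of a Collins-and-Quillian-style hierarchy.
# Each concept stores only its own properties; shared facts live once,
# at the highest node where they apply (cognitive economy).
network = {
    "animal": {"parent": None, "props": {"is alive", "can move"}},
    "bird":   {"parent": "animal", "props": {"has wings", "builds nests"}},
    "fish":   {"parent": "animal", "props": {"breathes through gills"}},
    "mammal": {"parent": "animal", "props": {"has fur"}},
    "dog":    {"parent": "mammal", "props": {"barks"}},
}

def properties(concept):
    # Retrieval activates a node and spreads up the "is-a" links,
    # collecting the properties stored at each level along the way.
    collected = set()
    while concept is not None:
        collected |= network[concept]["props"]
        concept = network[concept]["parent"]
    return collected
```

Asking for the properties of "dog" walks up through "mammal" to "animal", which is why "has fur" and "is alive" never need to be stored on the dog node itself.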
DEFINITION OF FORGETTING
Forgetting can be defined as the loss or inability to retrieve information that was once available
in memory. It is a natural and adaptive process that allows individuals to prioritize and manage
information by discarding details that are no longer relevant, thus freeing up cognitive resources
for more critical and current information. This selective memory process helps prevent mental
overload and is essential for effective functioning. As William James suggested, the ability to
forget is nearly as crucial as remembering, as it enables us to focus on information that supports
current goals and decisions, while reducing mental clutter from less relevant or outdated
information.
Hermann Ebbinghaus was one of the first to look deeply into how and why we forget things. To
do this, he took an unusual approach by creating lists of "nonsense syllables"—strings of letters
like "GEX" and "WOL" that had no meaning. He wanted to see how well he could memorize
them without any familiar words to help him out. After memorizing a list, he’d take a break, then
test himself to see what he remembered. He recorded his results, and the outcome was eye-
opening: he noticed that most forgetting happens quickly, especially within the first hour of
learning. After that, the rate of forgetting slows down, creating what we now call the "curve of
forgetting." This pattern holds even when we study meaningful material, although we forget
meaningful information more slowly than nonsense.
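The shape of the curve of forgetting is often approximated as exponential decay. The sketch below uses that common approximation; the `strength` parameter is an illustrative stand-in for how well the material was learned, not a value Ebbinghaus reported:

```python
import math

def retention(hours, strength=1.0):
    # A common exponential approximation of the forgetting curve:
    # retention falls fast at first, then the curve levels off.
    return math.exp(-hours / strength)

drop_first_hour = retention(0) - retention(1)   # steep early loss
drop_ninth_hour = retention(8) - retention(9)   # curve has flattened
```

Under this approximation, far more is lost in the first hour after learning than in any later hour, matching the pattern Ebbinghaus observed.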
Ebbinghaus also found that how we study matters a lot. Instead of cramming all at once, he
found it’s much better to space out study sessions—this is called “distributed practice.” Research
since then has shown that studying in smaller chunks with breaks leads to better memory
retention than trying to take in all the information at once. For example, a 3-hour cram session
might feel productive, but breaking it up into shorter, 30- to 60-minute sessions with breaks in
between actually helps your brain hold onto the material longer. This approach takes the pressure
off and gives your brain time to process and store information more effectively.
Massed Practice:
Massed practice, often known as cramming, involves studying a large amount of material in a
single, uninterrupted session. Although this approach can give the illusion of productivity, it
generally results in poorer long-term retention. Without breaks, the brain becomes fatigued,
making it harder to absorb and retain information effectively. While massed practice might lead
to short-term recall, it often doesn’t support lasting memory.
Distributed Practice:
Distributed practice involves breaking study sessions into smaller chunks with breaks in
between. This approach allows the brain to consolidate information over time, making it easier to
retain in the long run. Studies have shown that distributed practice significantly enhances
memory retention compared to massed practice. By spacing out learning, individuals can avoid
mental fatigue and improve their ability to recall information later.
There are several theories that explain why people forget things. Here are three significant ones:
1. Encoding Failure
One simple reason for forgetting is encoding failure, where some information never gets encoded
into memory. For instance, if a friend speaks to you while you're distracted, you may not truly
process what they said. This means the information never passes beyond sensory memory. A
classic study by Nickerson and Adams (1979) demonstrated this by asking people to pick out an
accurate image of a penny, an object they see every day. Most people struggle with this task
because they have never paid enough attention to encode the penny's details into long-term
memory.
2. Interference Theory
Interference theory suggests that forgetting occurs when other information interferes with the
retrieval of stored memories. Long-term memories might be stored permanently but can become
inaccessible due to interference from newer or older information. There are two types of
interference:
Proactive Interference: This occurs when older information interferes with the learning
or retrieval of new information. For example, if you need to remember a new password
but keep recalling your old one, you are experiencing proactive interference. Another
common example is when someone changes their phone number but continues to
remember their old number instead of the new one.
Retroactive Interference: This happens when new information interferes with the
retrieval of older information. For instance, if you need to recall an old password but only
remember your new one, the new information is retroactively interfering with your
memory of the old password.
A third explanation ties forgetting to the fate of the memory trace itself: the physical trace of a
memory may simply fade, or decay, when it goes unused. Engrams are theoretical constructs in
the study of memory, representing the physical traces or changes in the brain that correspond to
the storage of memories. The concept of engrams was
significantly developed by psychologist Karl Lashley in the early to mid-20th century. His
research aimed to understand how memories are stored in the brain and what happens to them
over time.
Lashley’s Research
Lashley conducted a series of experiments with rats in the 1920s and 1930s, where he trained
them to navigate mazes. After they learned the maze, he would perform lesions on various parts
of their brains to determine where memories were stored. His findings were surprising; he
discovered that no single area of the brain was solely responsible for the storage of memories.
Instead, he found that memory traces (engrams) seemed to be distributed across different
regions of the brain. This led him to propose the idea of mass action, suggesting that the amount
of memory loss correlated with the amount of brain tissue removed, rather than the specific
location of the lesion.
MOTIVATED FORGETTING
Motivated forgetting is a psychological phenomenon where individuals consciously or
unconsciously forget information, memories, or experiences that are emotionally distressing,
threatening, or uncomfortable. This process can be understood through two primary mechanisms:
repression and suppression.
1. Repression:
o Definition: Repression is an unconscious defense mechanism proposed by Sigmund
Freud. It involves the automatic forgetting of distressing memories or thoughts that are
deemed too painful or anxiety-provoking to recall.
o Function: This mechanism serves to protect the individual from emotional pain or
psychological distress. For example, a person who has experienced trauma may
unconsciously block out memories of the event to avoid the associated pain.
o Example: A child who experiences abuse might repress those memories, making it
difficult for them to recall the details of the abuse later in life.
2. Suppression:
o Definition: Suppression is a conscious effort to forget unwanted memories or thoughts.
Unlike repression, suppression involves a deliberate attempt to avoid thinking about
specific experiences or emotions.
o Function: This mechanism can be useful for managing anxiety or stress in the short
term, allowing individuals to focus on more immediate tasks or responsibilities without
being distracted by distressing thoughts.
o Example: A student who receives a poor grade might consciously choose to suppress
thoughts about it in order to concentrate on studying for an upcoming exam.
Motivated forgetting is often linked to emotional experiences. Memories associated with strong
negative emotions, such as fear, shame, or grief, are more likely to be subjected to motivated
forgetting. This is particularly evident in cases of trauma, where individuals may find it
challenging to confront painful memories directly.
Additionally, neuroscientific research has indicated that different brain regions are involved in
the processes of suppression and the retrieval of memories, highlighting the complexity of how
memories are managed in the mind.
Implications
Understanding motivated forgetting has significant implications for therapy and mental health.
For individuals struggling with traumatic memories, therapeutic approaches, such as cognitive-
behavioral therapy (CBT) or trauma-focused therapy, may help them process and confront
repressed or suppressed memories, leading to healing and recovery. Conversely, motivated
forgetting can also hinder emotional healing if it leads to the avoidance of addressing important
issues.
Distorted Memories
Distorted memories occur when the details of a memory are altered, leading to a recall that does
not accurately reflect what actually happened. This can happen due to several factors:
Misattributed Memories
Misattributed memories involve recalling a memory but assigning it to the wrong source or
context. This can happen for several reasons:
1. Source Amnesia: This refers to the inability to remember where, when, or how one
learned something, leading to confusion about the origin of a memory. Individuals may
remember the content of a memory but fail to accurately attribute it to the correct source.
o Example: Someone might hear a piece of information from a friend and later recall it as
something they read in a book or saw in a movie, leading to misattribution.
2. False Memories: These are recollections of events that did not actually occur or are
significantly altered from reality. False memories can be created through suggestive
questioning, leading individuals to believe they experienced events that never happened.
o Example: In research by Loftus and Pickrell (1995), participants were asked to recall
childhood events, including a fictitious event of getting lost in a shopping mall. Many
participants later recalled details of the fabricated event, demonstrating how easily false
memories can be formed.
3. Confusing Familiarity: Sometimes, individuals might confuse the familiarity of a
memory with its accuracy. For instance, a person may feel that they’ve seen a face before
and mistakenly believe they know the person, even if they’ve never met them.
4. Cross-Race Effect: This phenomenon occurs when individuals have difficulty accurately
identifying people of races other than their own. This can lead to misattributions in
eyewitness testimony, where witnesses may mistakenly identify an individual based on
limited exposure to or familiarity with a particular racial group.
TIP-OF-THE-TONGUE PHENOMENON
The tip-of-the-tongue (TOT) phenomenon is the experience of knowing a word or piece of
information but being temporarily unable to retrieve it.
1. Characteristics: During a TOT experience, individuals may recall certain details related
to the target word or information, such as its initial sound, the number of syllables, or
similar words. Despite these partial recollections, they are unable to produce the full
word or information.
2. Frequency: The TOT phenomenon is quite common and can occur at any age, though it
may become more frequent with aging. Many people report experiencing this sensation
multiple times a week or even daily.
3. Causes:
o Interference: TOT occurrences can arise due to interference from similar words or
concepts that compete for retrieval, making it difficult to access the specific target.
o Inadequate Retrieval Cues: Sometimes, the cues available for retrieval may not be
strong enough to trigger the full memory, leading to a sense of being "stuck."
o Brain Function: Research suggests that the TOT phenomenon may be related to how the
brain encodes and retrieves information. Studies have shown that TOT experiences are
often linked to difficulties in the retrieval process rather than a complete loss of the
information.
4. Resolution: The TOT phenomenon typically resolves on its own, and individuals often
find that the word or information comes to mind after a short period of time. Engaging in
a different activity or discussing related topics can sometimes facilitate the retrieval
process.
5. Psychological and Social Aspects: The TOT experience can be frustrating and can lead
to social embarrassment, especially in conversations. However, it also highlights the
complexities of memory retrieval and the cognitive processes involved in accessing
stored information.
ORGANIC AMNESIA
There are two forms of severe memory loss caused by problems in the functioning of the
memory areas of the brain. These problems can result from concussions, traumatic brain
injuries, alcoholism (Korsakoff’s syndrome), or disorders of the aging brain.
RETROGRADE AMNESIA
If the hippocampus is that important to the formation of declarative memories, what would
happen if it got temporarily “disconnected”? People who are in accidents in which they’ve
received a head injury often are unable to recall the accident itself. Sometimes they cannot
remember the last several hours or even days before the accident. This type of amnesia (literally,
“without memory”) is called retrograde amnesia, which is loss of memory from the point of
injury backward (Hodges, 1994). What apparently happens in this kind of memory loss is that
the consolidation process, which was busy making the physical changes to allow new memories
to be stored, gets disrupted and loses everything that was not already nearly “finished.”
Think about this: You are working on your computer, trying to finish a history paper that is due
tomorrow. Your computer saves the document every 10 minutes, but you are working so
furiously that you’ve written a lot in the last 10 minutes. Then the power goes out—horrors!
When the power comes back on, you find that while all the files you had already saved are still
intact, your history paper is missing that last 10 minutes’ worth of work. This is similar to what
happens when someone’s consolidation process is disrupted. All memories that were in the
process of being stored—but are not yet permanent—are lost.
One of the therapies for severe depression is ECT, or electroconvulsive therapy, in use for this
purpose for many decades. One of the common side effects of this therapy is the loss of memory,
specifically retrograde amnesia (Meeter et al., 2011; Sackeim et al., 2007; Squire & Alvarez,
1995; Squire et al., 1975). While the effects of the induced seizure seem to significantly ease the
depression, the shock also seems to disrupt the memory consolidation process for memories
formed prior to the treatment. While some researchers in the past found that the memory loss can
go back as far as three years for certain kinds of information (Squire et al., 1975), later research
suggests that the loss may not be a permanent one (Meeter et al., 2011; Ziegelmayer et al., 2017).
ANTEROGRADE AMNESIA
Concussions can also cause a more temporary version of the kind of amnesia experienced by
H.M. This kind of amnesia is called anterograde amnesia, or the loss of memories from the point
of injury or illness forward (Squire & Slater, 1978). People with this kind of amnesia, like H.M.,
have difficulty remembering anything new. One of your authors knows a young man who was
struck by lightning in the summer of 2018. He remembers walking behind his brother, pulling a
cart they had their tools in. His next memories start about two months later. He cannot remember
the lightning strike nor his time in the hospital or other experiences that occurred in the two
months following the strike. This is also the kind of amnesia most often seen in people with
dementia, a neurocognitive disorder, or decline in cognitive functioning, in which severe
forgetfulness, mental confusion, and mood swings are the primary symptoms. (Dementia patients
also may suffer from retrograde amnesia in addition to anterograde amnesia.)
If retrograde amnesia is like losing a document in the computer because of a power loss,
anterograde amnesia is like discovering that your hard drive has become defective—you can read
data that are already on the hard drive, but you can’t store any new information. As long as you
are looking at the data in your open computer window (i.e., attending to it), you can access it, but
as soon as you close that window (stop thinking about it), the information is lost because it was
never transferred to the hard drive (long-term memory). This makes for some very repetitive
conversations, such as being told the same story or being asked the same question numerous
times in the space of a 20-minute conversation.
ALZHEIMER’S DISEASE
Nearly 5.7 million Americans have Alzheimer's disease, the most common type of dementia,
accounting for 60 to 80 percent of all dementia cases. Approximately 1 in 10 people over 65 has
Alzheimer's, which is the sixth-leading cause of death in the U.S. and the fifth for those aged 65
and older.
In the early stages of Alzheimer's, the primary memory issue is anterograde amnesia, where
individuals struggle to form new memories. Memory loss starts mild but becomes severe, leading
to dangerous forgetfulness, such as taking extra medication or leaving food unattended. As the
disease progresses, retrograde amnesia can also occur, erasing past memories.
The causes of Alzheimer's are not fully understood. While the formation of beta-amyloid plaques
and tau tangles is normal with aging, individuals with Alzheimer's have significantly more of
these. The neurotransmitter acetylcholine, crucial for memory formation in the hippocampus,
breaks down early in the disease. Some forms of Alzheimer's have a genetic basis, but this
accounts for fewer than 5 percent of cases. Other forms of dementia can arise from strokes,
dehydration, and medications.
Treatments can slow but not halt or reverse the disease. Currently, six drugs are approved for
treatment, which only extend symptom relief for an average of 6 to 12 months. However,
manageable risk factors include high cholesterol, high blood pressure, smoking, obesity, Type II
diabetes, and lack of exercise. Mental stimulation through continued learning can also support
cognitive health. Research suggests that certain drugs may help restore memory in Alzheimer's-
affected brain cells.
Myths about Alzheimer's causation include concerns over aluminum cookware, artificial
sweeteners, silver dental fillings, and flu shots—all unfounded.
INFANTILE AMNESIA
Most people cannot recall events from their infancy, typically before age 3. Claims of such early
memories usually stem from family members' retellings rather than genuine recollection, and
these "memories" often feel like watching a movie of the event rather than re-experiencing it
firsthand.
Infantile amnesia may occur because early memories are implicit and difficult to bring to
consciousness. Explicit memory, which develops after age 2 with the maturation of the
hippocampus and language skills, enables the formation of autobiographical memories through
social interactions with caregivers.
Autobiographical Memory
Development
Autobiographical memory begins to develop in early childhood, usually around age 2 to 3, when
children start forming a sense of self and can recount personal experiences. This development is
influenced by language acquisition and social interactions, particularly discussions about past
events with caregivers, which help children organize their memories. The ability to narrate
personal experiences contributes to a coherent sense of identity and self-concept.
Components
1. Episodic Memories: These are vivid recollections of specific events, including details
about the context (time, place, people involved). For example, a birthday party or a
family vacation.
2. Semantic Memories: These include facts and general knowledge about oneself, such as
one’s name, age, or the names of family members. They provide context for episodic
memories and contribute to self-identity.
3. Emotional Significance: Autobiographical memories are often emotionally charged and
can significantly impact a person’s mood and behavior. Emotional experiences tend to be
remembered more vividly, contributing to their lasting presence in memory.
Functions
Autobiographical memory serves several critical functions:
Self-Identity: It helps individuals understand who they are by connecting past experiences to
present identities.
Cognitive Organization: It allows for the organization of knowledge and experiences, aiding in
decision-making and problem-solving.
Social Connection: Sharing autobiographical memories strengthens social bonds and facilitates
communication with others, allowing individuals to relate their experiences.
Influence of Culture
Cultural factors can shape how autobiographical memories are formed and recalled. Different
cultures emphasize various aspects of memory, such as the importance of family narratives or
individual achievements, which can influence the content and emotional tone of memories. For
instance, collectivist cultures may focus more on shared experiences, while individualist cultures
may highlight personal achievements.