Cognitive Psychology Notes

Cognitive psychology is the study of mental processes like thinking, memory, and problem-solving. The field has its roots in philosophers like Aristotle who studied topics like perception and memory. Early researchers used introspection to study cognition, while later behaviorism focused on observable behaviors. Contemporary cognitive psychology applies methods like brain imaging to understand how humans acquire, represent, and use information.
MODULE 1
Historical Background

Cognitive psychology is the study of thinking. It can also be viewed as the study
of the processes underlying mental events.
What is Cognitive Psychology?
• It is a branch of psychology concerned with how people acquire, store,
transform, use, and communicate information (Neisser, 1967).
• It is the branch of psychology that explores the operation of mental
processes related to perceiving, attending, thinking, language, and
memory, mainly through inferences from behavior (APA, 2015).

History of Cognitive Psychology

Aristotle
• The Greek philosopher Aristotle (384–322 BCE) examined topics such as
perception, memory, and mental imagery.
• He also discussed how humans acquire knowledge through experience
and observation (Barnes, 2004; Sternberg, 1999).
• Aristotle emphasized the importance of empirical evidence, or scientific
evidence obtained by careful observation and experimentation.
Wilhelm Wundt (1832–1920)
• Wundt proposed that psychology should study mental processes, using a
technique called introspection.
• Introspection meant that carefully trained observers would
systematically analyze their own sensations and report them as
objectively as possible, under standardized conditions.
Hermann Ebbinghaus (1850–1909)
• Was the first person to scientifically study human memory.
• He examined a variety of factors that might influence performance, such
as the amount of time between two presentations of a list of items.
• He invented nonsense syllables (e.g., TVX, DRG, UAJ) to study memory
free of the influence of prior associations.

Mary Whiton Calkins (1863–1930)
(First woman president of the American Psychological Association)
• Calkins reported a memory phenomenon called the recency effect.
• The recency effect refers to the observation that our recall is
especially accurate for the final items in a series of stimuli.
• She emphasized that psychologists should study how real people use
their cognitive processes in the real world, as opposed to the
psychology laboratory.
William James (1842–1910)
• James was not impressed with Wundt’s introspection technique or
Ebbinghaus’s research with nonsense syllables.
• Instead, James preferred to theorize about our everyday psychological
experiences.
• His book Principles of Psychology provides clear, detailed descriptions of
people’s everyday experiences (Benjamin, 2009). It also emphasizes that
the human mind is active and inquiring.
• It covered topics such as perception, attention, memory, understanding,
reasoning, and the tip-of-the-tongue phenomenon.
Behaviorism
• During the first half of the 20th century, behaviorism was the most
prominent theoretical perspective in the United States.
• According to the principles of behaviorism, psychology must focus on
objective, observable reactions to stimuli in the environment, rather than
introspection.
• Behaviorists also argued that researchers could not objectively study
mental representations, such as an image, idea, or thought.
• Accordingly, behaviorists emphasized the importance of the operational
definition, a precise definition that specifies exactly how a concept is to
be measured.
The Gestalt Approach
• Gestalt psychology emphasizes that we humans have basic tendencies to
actively organize what we see; furthermore, the whole is greater than the
sum of its parts.

Frederic Bartlett (1886–1969)
• Conducted research on human memory, reported in Remembering: An
Experimental and Social Study (Bartlett, 1932).
• Bartlett advanced the concept that memories of past events and
experiences are actually mental reconstructions, colored by
cultural attitudes and personal habits, rather than direct
recollections of observations made at the time.

Information Processing Approach

Information Processing Theory is a cognitive theory that focuses on how
information is encoded into our memory. The theory describes how our brains
filter information, from what we’re paying attention to in the present moment,
to what gets stored in our short-term or working memory and ultimately into our
long-term memory.

The premise of Information Processing Theory is that creating a long-term
memory is something that happens in stages: first we perceive something
through our sensory memory, which is everything we can see, hear, feel, or taste
in a given moment; our short-term memory is what we use to remember things
for very short periods, like a phone number; and long-term memory is stored
permanently in our brains.

History of Information Processing Theory

Developed by American psychologists including George Miller in the 1950s,
Information Processing Theory compares the human brain to a computer. The
‘input’ is the information we give to the computer – or to our brains – while
the CPU is likened to our short-term memory, and the hard drive is our
long-term memory.

Our cognitive processes filter information, deciding what is important enough to
‘save’ from our sensory memory to our short-term memory, and ultimately to
encode into our long-term memory. Our cognitive processes include thinking,
perception, remembering, recognition, logical reasoning, imagining,
problem-solving, our sense of judgment, and planning.

Stages of Information Processing


According to the information processing theory, there are four main stages of
information processing which include attending, encoding, storing, and
retrieving. These four stages are used to describe how the brain gathers
information, processes this information, creates memories, and uses this
information when it is needed.
Attending

Attending is the first stage of information processing, and it refers to when a
person is gathering information from their environment. For example, when a
student is listening to their professor giving a lecture, they are in the attending
stage of information processing. People can also gather information using their
other senses, such as sight and smell.

Encoding

Encoding is the second stage of information processing, and it refers to a
person focusing on and trying to truly understand something. Encoding is more
involved than attending. For example, a student can simply listen to their
professor (i.e., attending), but if they are not focusing on what the professor has
to say and trying to understand the information, they will not likely learn the
information. However, if a student really focuses on the professor during the
lecture and tries to truly understand the information, they are in the encoding
phase of information processing.

Storing

Storing is the third stage of information processing, and it refers to keeping or
maintaining information in the brain for an extended period. The storing phase
can be thought of as keeping information in a person's ''memory bank''. For
example, a student who can attend to and encode information they receive
during a lecture should be able to store this information in their memory for an
extended period.

Retrieving

Retrieving is the fourth stage of information processing, and it refers to when a
person remembers information they had stored in their memory bank. For
example, when a student can remember information from a lecture while taking
a final exam, they are retrieving this information from their memory bank.
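The four stages described above can be sketched as a toy pipeline. This is only a metaphor for illustration, not a cognitive model; the class and method names are invented, and the "encoding" step is deliberately trivial.

```python
# Toy sketch of the four information-processing stages:
# attending -> encoding -> storing -> retrieving.
# All names here are illustrative, not from any psychology library.

class MemorySystem:
    def __init__(self):
        self.long_term = {}          # the "memory bank"

    def attend(self, stimuli, focus):
        # Attending: gather only the stimuli we are oriented toward.
        return [s for s in stimuli if focus in s]

    def encode(self, attended):
        # Encoding: transform raw input into a (here, trivially
        # "understood") representation.
        return {item: item.upper() for item in attended}

    def store(self, encoded):
        # Storing: keep the representation for an extended period.
        self.long_term.update(encoded)

    def retrieve(self, cue):
        # Retrieving: bring stored information back when needed.
        return self.long_term.get(cue)

mem = MemorySystem()
attended = mem.attend(["lecture: encoding", "hallway noise"], focus="lecture")
mem.store(mem.encode(attended))
print(mem.retrieve("lecture: encoding"))  # -> LECTURE: ENCODING
```

Note that "hallway noise" never reaches the memory bank: information that is not attended to is never encoded or stored, which mirrors why the distracted student cannot later retrieve the lecture content.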
The Ecological Approach

Ecology

• the branch of biology that deals with the relations of organisms to one
another and to their physical surroundings.

• the study of relationships between organisms and their physical and social
environments

Ecological Psychology
Ecological psychology stresses the ways in which the environment and the
context shape the way cognitive processing occurs.
• It involves the analysis of behavior settings, with the aim of predicting
patterns of behavior that occur within certain settings.
• The focus is on the role of the physical and social elements of the setting
in producing the behavior.
• According to behavior-setting theory, the behavior that will occur in a
particular setting is largely prescribed by the roles that exist in that setting
and the actions of those in such roles, irrespective of the personalities,
age, gender, and other characteristics of the individuals present.
• In a place of worship, for example, one or more individuals have the role
of leaders (the clergy), whereas a larger number of participants function
as an audience (the congregation).
• Note the influences of both the functionalist and the Gestalt schools on the
ecological approach. The functionalists focused on the purposes served
by cognitive processes, certainly an ecological question. Gestalt
psychology’s emphasis on the context surrounding any experience is
likewise compatible with the ecological approach.
Contemporary Cognitive Psychology
• In 1879, the first psychology lab was opened, and this date is generally
thought of as the beginning of psychology as a separate science.
• The early work in psychology paved the way for the study of
contemporary cognitive psychology today.
• Contemporary cognitive psychology is an outgrowth of historical
cognitive psychology, with influences from Jean Piaget and Information
Processing theorists.
• Contemporary cognitive psychology can apply the methods of today,
such as brain imaging, and can also examine how the tenets of cognitive
psychology are applied to contemporary issues.
Cognitive Revolution
• Intellectual shift in psychology in the 1950s focusing on the internal
mental processes driving human behavior.
• The study of human thought became interdisciplinary by directing
attention to processing skills including language acquisition, memory,
problem-solving, and learning.
• This scientific approach to understanding how the brain works moved
away from behavioral psychology and embraced understanding the
processes that drive behavior.
• The events of the 1950s and 1960s are known as the cognitive revolution.
• But it is important to realize that although the revolution made it
acceptable to study the mind, the field of cognitive psychology continued
to evolve in the decades that followed.
MODULE 2
ATTENTION

• Attention is a state in which cognitive resources are focused on certain
aspects of the environment rather than on others, and the central nervous
system is in a state of readiness to respond to stimuli.

• Attention is the means by which we actively process a limited amount of
information from the enormous amount of information available through our
senses, our stored memories, and our other cognitive processes.

• It is the ability to focus on specific stimuli or locations.

Process of Attention

The process through which certain stimuli are selected from a group of others is
generally referred to as attention. Besides selection, attention also refers to
several other properties like alertness, concentration, and search.

§ Alertness refers to an individual’s readiness to deal with stimuli that
appear before her/him. For example, a runner's alertness at the starting
point of a race.
§ Concentration refers to focusing of awareness on certain specific
objects while excluding others for the moment. For example, focusing on a
teacher's voice rather than other noises.
§ In search, an observer looks for some specified subset of objects among
a set of objects. For example, we search for a friend in a crowd.

Types of Attention
Selective attention
Have you ever been at a loud concert or a busy restaurant, and you are trying to
listen to the person you are with? While it can be hard to hear every word, you
can usually pick up most of the conversation if you're trying hard enough. This
is because you are choosing to focus on this one person's voice, as opposed to,
say, the people speaking around you. Selective attention takes place when we
block out certain features of our environment and focus on one particular
feature, like the conversation you are having with your friend.
Divided attention
Do you ever do two things at once? If you're like most people, you do that a lot.
Maybe you talk to a friend on the phone while you're straightening up the
house. Nowadays, there are people everywhere texting on their phones while
they're spending time with someone. When we are paying attention to two
things at once, we are using divided attention.
Some instances of divided attention are easier to manage than others. For
example, straightening up the home while talking on the phone may not be hard
if there's not much of a mess to focus on. Texting while you are trying to talk to
someone in front of you, however, is much more difficult. Both age and the
degree to which you are accustomed to dividing your attention make a
difference in how adept at it you are.
Sustained attention
Are you someone who can work at one task for a long time? If you are, you are
good at using sustained attention. This happens when we can concentrate on a
task, event, or feature in our environment for a prolonged period of time. Think
about people you have watched who spend a lot of time working on a project,
like painting or even listening intently to another share their story.
Sustained attention is also commonly referred to as one's attention span. It takes
place when we can continually focus on one thing happening, rather than losing
focus and having to keep bringing it back. People can get better at sustained
attention as they practice it.
Executive attention
Do you feel able to focus intently enough to create goals and monitor your
progress? If you are inclined to do these things, you are displaying executive
attention. Executive attention is particularly good at blocking out unimportant
features of the environment and attending to what really matters. It is the
attention we use when we are making steps toward a particular end.
For example, maybe you need to finish a research project by the end of the day.
You might start by making a plan, or you might jump into it and attack different
parts of it as they come. You keep track of what you've done, what more you
have to do, and how you are progressing. You are focusing on these things in
order to reach the goal of a finished research paper. That is using your executive
attention.
Theory of Attention

Cocktail party effect

The cocktail party effect is an example of selective attention: the
phenomenon of being able to focus one's auditory attention on a particular
stimulus while filtering out a range of other stimuli, much the same way that a
partygoer can focus on a single conversation in a noisy room. This effect is
what allows most people to "tune in" to a single voice and "tune out" all others.
It may also describe a similar phenomenon that occurs when one may
immediately detect words of importance originating from unattended stimuli,
for instance hearing one's name in another conversation.
The effect was first defined in 1953 by Colin Cherry, who conducted a series of
experiments to better understand auditory attention. One set of experiments had
participants wear a special headset that played different messages to each
ear. The participant was asked to focus on one message and repeat it aloud.
Cherry found that despite focusing on the message in one ear only,
participants heard and detected their name when it was said in the
unattended ear.
Later research by Neville Moray in 1959 built on these
findings; he concluded that only subjectively important
messages (such as names) would pass through this mental filter.
Broadbent’s Filter Model

Many researchers have investigated how selection
occurs and what happens to ignored information. Donald Broadbent was one of
the first to try to characterize the selection process. His Filter Model was based
on the dichotic listening tasks described above as well as other types of
experiments (Broadbent, 1958). He found that people select information on the
basis of physical features: the sensory channel (or ear) that a message was
coming in on, the pitch of the voice, the color or font of a visual message. People
seemed vaguely aware of the physical features of the unattended information,
but had no knowledge of the meaning. As a result, Broadbent argued that
selection occurs very early, with no additional processing for the unselected
information. In the model, information flows from the senses into a sensory
buffer, through a selective filter that passes a single channel based on its
physical features, and only that channel receives further, higher-level processing.

Selective Filter Model (Moray, 1959)

• Not long after Broadbent’s theory, evidence began to suggest that
Broadbent’s model must be wrong (e.g., Gray & Wedderburn, 1960).
• Although participants ignore most high-level (e.g., semantic) aspects of an
unattended message, they frequently still recognize their names in an
unattended ear (Moray, 1959; Wood & Cowan, 1995).
• Moray suggested that the reason for this effect is that messages that are of
high importance to a person may break through the filter of selective
attention (e.g., Koivisto & Revonsuo, 2007; Marsh et al., 2007).
• He proposed that the selective filter blocks out most information at
the sensory level, but some personally important messages are so
powerful that they burst through the filtering mechanism.
Treisman’s Attenuation Model

Anne Treisman proposed her selective attention theory in 1964. Her theory is
based on the earlier model by Broadbent. Treisman also believed that this
human filter selects sensory inputs on the basis of physical characteristics.
However, she argued that the unattended sensory inputs (the ones that were not
chosen by the filter and remain in the sensory buffer) are attenuated by the filter
rather than eliminated. Attenuation is a process in which the unselected sensory
inputs are processed at decreased intensity. For instance, if you selectively
attend to a ringing phone in a room where there's a TV, a crying baby, and people
talking, the latter three sound sources are attenuated, or decreased in volume.
However, when the baby's cry grows louder, you may turn your attention to the
baby because the sound input is still there, not lost.

Late Selection Models

Other selective attention models have been proposed as well. A late
selection (or response selection) model proposed by Deutsch and Deutsch (1963)
suggests that all information in the unattended ear is processed on the basis of
meaning, not just the selected or highly pertinent information. However, only
the information that is relevant for the task response gets into conscious
awareness. This model is consistent with ideas of subliminal perception; in
other words, you don’t have to be aware of or attending to a message for it to
be fully processed for meaning.
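The contrast among the three selection models above can be sketched as a toy simulation in which each model assigns a "processing gain" to a message. The function names and the specific gain values are invented for illustration; they are not parameters from the original papers.

```python
# Toy contrast of three selective-attention models. Each function
# returns how fully (0..1) a message is processed for meaning,
# given whether it is on the attended channel and whether it is
# personally important (e.g., one's own name).

def broadbent_filter(attended, important):
    # Early selection: unattended input is blocked entirely,
    # regardless of its importance.
    return 1.0 if attended else 0.0

def treisman_attenuation(attended, important):
    # Unattended input is attenuated, not eliminated; personally
    # important messages retain enough strength to break through.
    if attended:
        return 1.0
    return 0.8 if important else 0.2

def late_selection(attended, important):
    # Deutsch & Deutsch: everything is processed for meaning;
    # selection happens only at the response stage.
    return 1.0

# Your own name spoken in the unattended ear:
for model in (broadbent_filter, treisman_attenuation, late_selection):
    print(model.__name__, model(attended=False, important=True))
```

The printout makes the cocktail-party finding concrete: Broadbent's strict early filter predicts a gain of 0.0 for the unattended name, so it cannot explain why people hear their names, while the attenuation and late-selection sketches both leave room for the name to get through.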
Neuropsychological Architecture of Attention

Cognitive neuropsychology

§ the study of the structure and function of the brain as it relates to
perception, reasoning, remembering, and all other forms of knowing and
awareness.
§ Cognitive neuropsychology focuses on examining the effects of brain
damage on thought processes—typically through the use of single-case or
small-group designs—so as to construct models of normal cognitive
functioning.

According to Michael Posner, the attentional system in the brain “is neither a
property of a single brain area nor of the entire brain” (Posner & Dehaene,
1994, p. 75).

In 2007, Posner teamed up with Mary Rothbart and they conducted a review of
neuroimaging studies in the area of attention to investigate whether the many
diverse results of studies conducted pointed to a common direction.

They found that what at first seemed like an unclear pattern of activation could
be effectively organized into areas associated with the three subfunctions of
attention: alerting, orienting, and executive attention.
• The researchers organized the findings to describe each of these functions
in terms of
– the brain areas involved,
– the neurotransmitters that modulate the changes,
– and the results of dysfunction within this system.
MODULE 3

PERCEPTION

§ In cognitive psychology we refer to the physical (external) world as well as
the mental (internal) world. The interface between external reality and the
inner world is centered in the sensory system.
§ Sensation refers to the initial detection of energy from the physical world.
The study of sensation generally deals with the structure and processes of the
sensory mechanism and the stimuli that affect those mechanisms.
§ Perception on the other hand, involves higher-order cognition in the
interpretation of the sensory information.
§ Basically, sensation refers to the initial detection of stimuli; perception to an
interpretation of the things we sense.
§ When we read a book, listen to our iPod, get a massage, smell cologne, or
taste sushi, we experience far more than the immediate sensory stimulation.

Perception

• n. the process or result of becoming aware of objects, relationships, and
events by means of the senses, which includes such activities as
recognizing, observing, and discriminating.
• These activities enable organisms to organize and interpret the stimuli
received into meaningful knowledge and to act in a coordinated manner.

Top-down and Bottom-up processing

Top-down

• Top-down processing is perceiving the world around us by drawing from
what we already know in order to interpret new information (Gregory,
1970).
• Top-down processing emphasizes how a person’s concepts, expectations,
and memory can influence object recognition.
• In more detail, these higher-level mental processes all help in identifying
objects.
• You expect certain shapes to be found in certain locations, and you
expect to encounter these shapes because of your past experiences. These
expectations help you recognize objects very rapidly.
• In other words, your expectations at the higher (or top) level of visual
processing will work their way down and guide your early processing of
the visual stimulus.
• Think how your top-down processing helped you to quickly recognize the
specific nearby object that you selected a moment ago.
• Your top-down processing made use of your expectations and your
memory about objects that are typically nearby. This top-down process
then combined together with the specific physical information about the
stimulus from bottom-up processing.
• As a result, you could quickly and seamlessly identify the object
(Carlson, 2010). As we noted earlier, object recognition requires both
bottom-up and top-down processing.

Bottom-up processing

• Emphasizes that the stimulus characteristics are important when you
recognize an object.
• Specifically, the physical stimuli from the environment are registered on
the sensory receptors. This information is then passed on to higher, more
sophisticated levels in the perceptual system (Carlson, 2010; Gordon,
2004).
• For example, glance away from your textbook and focus on one specific
object that is nearby. Notice its shape, size, color, and other important
physical characteristics.
• When these characteristics are registered on your retina, the object-
recognition process begins. This information starts with the most basic
(or bottom) level of perception, and it works its way up until it reaches
the more ‘‘sophisticated’’ cognitive regions of the brain, beyond your
primary visual cortex. The combination of simple, bottom-level features
helps you recognize more complex, whole objects.

Bottom-up processing works like this:

1. We start with an analysis of sensory inputs, such as patterns of light.
2. This information is relayed to the retina, where the process of
transduction into electrical impulses begins.
3. These impulses are passed into the brain, where they trigger further
responses along the visual pathways until they arrive at the visual cortex
for final processing.

Bottom-up processing states that we begin to perceive new stimuli through the
process of sensation and the use of our schemas is not required. James J. Gibson
(1966) argued that no learning was required to perceive new stimuli.
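The interplay between the two directions of processing can be sketched as a toy recognizer for an ambiguous character that could be read as "B" or "13". The scoring scheme and all numbers below are invented for illustration; real object recognition is far more complex.

```python
# Toy sketch: an ambiguous shape that could be "B" or "13".
# Bottom-up feature evidence alone is ambiguous; top-down
# expectations from context settle the interpretation.

def recognize(feature_score, context):
    # feature_score: bottom-up evidence (0..1) that the shape is "B".
    # context: top-down expectation from surrounding characters.
    prior = {"letters": 0.9, "numbers": 0.1}[context]
    # Combine stimulus evidence with expectation (a crude product rule).
    belief_B = feature_score * prior
    belief_13 = (1 - feature_score) * (1 - prior)
    return "B" if belief_B > belief_13 else "13"

# The same ambiguous shape (feature_score = 0.5) is read differently
# depending on context, e.g. "A _ C" versus "12 _ 14":
print(recognize(0.5, "letters"))  # -> B
print(recognize(0.5, "numbers"))  # -> 13
```

With perfectly ambiguous bottom-up evidence, context alone decides the percept; with very strong feature evidence (say 0.95), the bottom-up signal can override the contextual expectation. This is one way to see why object recognition requires both directions of processing.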
Perceptual Learning

• Perceptual learning is defined as a rather permanent and specific change
in the perception of and reaction to external stimuli based on preceding
training or experience with these stimuli.
• Perceptual learning leads to relatively permanent and often very specific
improvements in solving perceptual tasks as a result of preceding
experience and training.
• E.g., ornithologists, who learn to distinguish birds that look alike to
novices, or blind people, who become unusually skilled at discriminating
sounds and textures.
• Perceptual learning is experience-dependent enhancement of our ability
to make sense of what we see, hear, feel, taste or smell.
• These changes are permanent or semi-permanent, as distinct from shorter-
term mechanisms like sensory adaptation or habituation.
• Moreover, these changes are not merely incidental but rather adaptive
and therefore confer benefits, like improved sensitivity to weak or
ambiguous stimuli.

Three aspects of perceptual learning make it of general interest.

• First, perceptual learning reflects an inherent property of our perceptual
systems and thus must be studied to understand perception.
• Second, perceptual learning is robust even in adults and thus represents
an important substrate for studying mechanisms of learning and memory
that persist beyond development.
• Third, perceptual learning is readily studied in a laboratory using simple
perceptual tasks and thus researchers can exploit well-established
psychophysical, physiological and computational methods to investigate
the underlying mechanisms.

Perceptual Development

• the acquisition of skills that enable a person to organize sensory stimuli
into meaningful entities during the course of physical and psychological
development.
• The most active investigations about perceptual development focus on
infants’ visual and auditory capabilities. Researchers who study these
topics have demonstrated impressive creativity and persistence in
designing research techniques for assessing early abilities.

Two representative methods are the following:

(1) the habituation/dishabituation method, in which infants decrease their
attention to an object that has been presented many times, and then
increase their attention when a new object is presented; and

(2) the preference method, in which infants spend consistently longer
responding to one object than to a second object.

Sensation

To understand what an infant can sense, researchers often present two stimuli
and record the baby’s response.

• For example, a baby is given a sweet-tasting substance and a sour-tasting
substance.
• If the baby consistently responds differently to the two stimuli, then the
infant must be able to distinguish between them.

Smell

Infants have a keen sense of smell and respond positively to pleasant smells and
negatively to unpleasant smells (Mennella, 1997).

• Honey, vanilla, strawberry, or chocolate: the infant relaxes and produces a
contented-looking facial expression.
• Rotten eggs, fish, or ammonia produce exactly what you might
expect: infants frown, grimace, or turn away.

Taste

• Newborns also have a highly developed sense of taste. They can
differentiate salty, sour, bitter, and sweet tastes (Rosenstein, 1997).
• Most infants seem to have a “sweet tooth”.
• Infants will nurse more after their mother has consumed a sweet-
tasting substance like vanilla (Mennella, 1997).
• Newborns prefer sweet; however, at about 4 months, infants develop a
salty preference.
• They start liking salt, which was aversive to them as newborns.

Vision

• Vision is the least mature of all the senses at birth because the fetus has
nothing to look at, so visual connections in the brain can’t form until
birth.
• Visual acuity is defined as the smallest pattern that can be distinguished
dependably.
• Infants prefer to look at patterned stimuli instead of plain, non-
patterned stimuli.
• To estimate an infant’s visual acuity, we pair gray squares with squares
that differ in the width of their stripes.
• Newborns can perceive few colors, but by 3-4 months infants are able
to see the full range of colors (Kellman, 1998).
• In fact, by 3-4 months infants have color perception similar to
adults’ (Adams, 1995).

Hearing

• Hearing is the most mature sense at birth. In fact, some sounds trigger
reflexes even without conscious perception.
– The fetus most likely heard these sounds in the womb during last
trimester
– Sudden sounds startle babies-making them cry, some rhythmic
sounds, like a heartbeat/lullaby put a baby to sleep.
• Infants in the first days of life turn their heads toward the source of
sounds, and they can distinguish voices, language, and rhythm.

Auditory Threshold

• The fetus can hear in utero at 7-8 months, so it is no surprise that
newborns respond to auditory stimuli. But do infants hear as well as
adults?
• No, they cannot. The auditory threshold refers to the quietest sound that a
person can hear.
• The quietest sound a newborn responds to is about 4 times louder than
the quietest sound an adult responds to.
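The "about four times louder" comparison can be put in decibel terms. If it is read as a fourfold sound-pressure ratio (an assumption, since the notes do not specify the scale), the newborn's threshold sits roughly 12 dB above the adult's:

```python
import math

def db_difference(pressure_ratio):
    # Sound pressure level difference in decibels for a given
    # amplitude (pressure) ratio: dB = 20 * log10(ratio).
    return 20 * math.log10(pressure_ratio)

# A threshold four times the adult's sound pressure:
print(round(db_difference(4), 1))  # -> 12.0
```

If "four times louder" were instead meant as a fourfold intensity (power) ratio, the formula would be 10 * log10(ratio), giving about 6 dB; the original statement is too informal to settle which reading was intended.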

Perceptual Constancies

• An important part of perceiving objects is that the same object can look
very different depending on distance, angle, and lighting.
• Infants master size constancy very early on
– They recognize that an object remains the same size despite its
distance from the observer

Depth Perception

• Infants are not born with depth perception, it must develop. The images
on the back of our eyes are flat and 2-dimensional
• To create a 3-D view of the world, the brain combines information from
the separate images of the two eyes; the difference between these images
is called retinal disparity.
• Visual experience along with development in the brain lead to the
emergence of binocular depth perception around 3-5 months of age.

Face Recognition

• Infants enjoy looking at faces, a preference that may reflect an innate
attraction to faces, or the fact that faces attract infants’ attention.
• At birth, infants are attracted to the borders of objects. When looking at a
human face:
– a newborn will pay more attention to the hairline or the edge of the
face (even though the newborn can see the features of the face)
– By 2 months of age, infants begin to attend to the internal features
of the face – such as the nose and mouth
– By 3 months of age, infants focus almost entirely on the interior of
the face, particularly on the eyes and lips. At this age, infants can
tell the difference between mother’s face and a stranger’s face.
– Theorists believe that infants are attracted to human faces because
faces have stimuli that move (eyes and lips) and stimuli with dark
and light contrast (the eyes, lips, and teeth).

Handedness

• Young babies reach for objects without a preference for one hand over
the other
• The preference for one hand over the other becomes stronger and more
consistent during preschool years
– By the time children are ready to enter kindergarten, handedness is
well established and very difficult to reverse
– Handedness is determined by heredity and environmental factors
– Approximately 10% of children write left-handed
Head Control

• At birth infants can turn their heads from side to side while lying on their
backs
– By 2-3 months they can lift their heads while lying on their
stomachs
– By 4 months infants can keep heads erect while being held or
supported in a sitting position

Crawling

• Begins as belly-crawling
– The “inchworm belly-flop” style
– Most belly crawlers then shift to hands-and-knees, or in some
cases, hands-and-feet
• Some infants will adopt a different style of locomotion in place of
crawling such as bottom-shuffling while some infants skip crawling
altogether
• Due to the “back-to-sleep” movement, infants spend less time on their
tummies which may limit their opportunity to learn how to propel
themselves

Walking – Stepping

• Children do not step spontaneously until approximately 10 months
because they must be able to stand in order to step
• Maintaining balance when transferring weight from foot to foot seems to
be key
• Thelen and Ulrich (1991) found that 6- and 7-month-olds, if held
upright by an adult, could demonstrate the mature pattern of
walking of alternating steps on a treadmill

Gross motor skills

• Emerge directly from reflexes.
• These are physical abilities involving large body movements and large
muscle groups such as walking and jumping.
• Involve the movement of the entire body-
– Rolling over, standing, walking, climbing, running

Fine Motor Skills

• After infancy fine motor skills progress rapidly and older children become
more dexterous because these movements involve the use of small muscle
groups
• These consist of small body movements, especially of the hands and
fingers.
– such as drawing, writing your name, picking up a coin, buttoning
or zipping a coat.

Perception of Shape, Space and Movement

In a viewer-centered representation, the individual stores the way the object
looks to him or her.
Thus, what matters is the appearance of the object to the viewer (in this case,
the appearance of the computer to the author), not the actual structure of the
object.

The shape of the object changes, depending on the angle from which we look at
it.

A number of views of the object are stored, and when we try to recognize an
object, we have to rotate that object in our mind until it fits one of the stored
images.

Consider, for example, the computer on which this text is being written. It has
different parts: a screen, a keyboard, a mouse, and so forth.

Suppose the author represents the computer in terms of viewer-centered
representation. Then its various parts are stored in terms of their relation to him.
He sees the screen as facing him at perhaps a 20-degree angle. He sees the
keyboard facing him horizontally. He sees the mouse off to the right side and in
front of him.

Suppose, instead, that he uses an object-centered representation. Then he would
see the screen at a 70-degree angle relative to the keyboard. And the mouse is
directly to the right side of the keyboard, neither in front of it nor in back of it.

Perception of Depth and Distance

Depth cue

• any of a variety of means used to inform the visual system about the
depth of a target or its distance from the observer.
• Monocular cues require only one eye and include signals about the state
of the ciliary muscles, atmospheric perspective, linear perspective, and
occlusion of distant objects by near objects.
• Binocular cues require integration of information from the two eyes and
include signals about the convergence of the eyes and binocular disparity.

Monocular Depth Cues

• One such cue is patterns of light and shadow.
• Linear perspective refers to the perception that parallel lines converge, or
angle toward one another, as they recede into the distance.
• Interposition, in which objects closer to us may cut off part of our view of
more distant objects, provides another cue for distance and depth.
• An object’s height in the horizontal plane provides another source of
information.
• Likewise, clarity can be an important cue for judging distance; we can see
nearby hills more clearly than ones that are far away, especially on hazy
days.
• Relative size is yet another basis for distance judgments. If we see two
objects that we know to be of similar size, then the one that looks smaller
will be judged to be farther away.

Binocular Depth Cues

Binocular cues require integration of information from the two eyes and
include signals about the convergence of the eyes and binocular disparity.

Convergence

Convergence is the rotation of the two eyes inward toward a light source so
that the image falls on corresponding points on the foveas. Convergence enables the
slightly different images of an object seen by each eye to come together and
form a single image. The muscular tension exerted is also a cue to the
distance of the object from the eyes.

Binocular Disparity

the slight difference between the right and left retinal images. When both
eyes focus on an object, the different position of the eyes produces a
disparity of visual angle, and a slightly different image is received by each
retina. The two images are automatically compared and, if sufficiently
similar, are fused, providing an important cue to depth perception. Also
called retinal disparity.
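The geometric relation behind binocular (retinal) disparity can be illustrated numerically, as it is used in machine stereo vision: distance is inversely proportional to disparity, given the separation between the two viewpoints and a focal length. The sketch below is an illustration only; the baseline, focal length, and disparity values are hypothetical and are not taken from these notes.

```python
# Depth from disparity, as in machine stereo vision: an object's
# distance Z is inversely proportional to the disparity d between
# the two images, given the viewpoint separation B and focal length f.
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Z = f * B / d; a larger disparity means a nearer object."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: a 6.5 cm inter-ocular baseline and a
# nominal focal length of 800 pixels.
near = depth_from_disparity(0.065, 800, 40)  # large disparity -> near
far = depth_from_disparity(0.065, 800, 4)    # small disparity -> far
print(round(near, 2), round(far, 2))  # prints: 1.3 13.0
```

The point of the sketch is the inverse relation itself: a tenfold drop in disparity corresponds to a tenfold increase in judged distance.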

Sensory Integration Theory

• Sensory Integration is a theory developed by A. Jean Ayres, an
occupational therapist with advanced training in neuroscience and
educational psychology (Bundy & Murray, 2002).
• Ayres (1972) defines sensory integration as "the neurological process that
organizes sensation from one's own body and from the environment and
makes it possible to use the body effectively within the environment" (p.
11).
• The theory is used to explain the relationship between the brain and
behavior and explains why individuals respond in a certain way to
sensory input and how it affects behavior.

The five main senses are:

• Touch - tactile
• Sound - auditory
• Sight - visual
• Taste - gustatory
• Smell – olfactory

In addition, there are two other powerful senses:

• a) vestibular (movement and balance sense) - provides information about
where the head and body are in space and in relation to the earth's
surface.
• b) proprioception (joint/muscle sense) - provides information about where
body parts are and what they are doing.

How efficiently we process sensory information affects our ability to:

• a) discriminate sensory information to obtain precise information from
the body and the environment in order to physically interact with people
and objects.
• An accurate body scheme is necessary for motor planning, i.e., being able
to plan unfamiliar movements.
• It involves having the idea of what to do, sequencing the required
movements, and executing the movements in a well-timed, coordinated
manner.

b) modulate sensory information to adjust to the circumstances and maintain
optimum arousal for the task at hand.

• Sensory modulation is the "capacity to regulate and organize the degree,
intensity and nature of responses to sensory input in a graded and
adaptive manner" (Miller & Lane, 2000).
• Sensory defensiveness, a type of sensory modulation problem, is defined
by Wilbarger and Wilbarger (1991) as "a constellation of symptoms
related to aversive or defensive reactions to non-noxious stimuli across
one or more sensory systems" (Wilbarger & Wilbarger, 2002a, p. 335)
• It can produce changes in the state of alertness, emotional tone, and stress
(Wilbarger & Wilbarger, 2002a).

Implicit Perception

Just as there are palpable effects on experience, thought, and action of past
events that cannot be consciously remembered, there also appear to be similar
effects of events in the current stimulus environment, which cannot be
consciously perceived.

Explicit perception may be identified with the subject’s conscious perception of
some object in the current environment, or the environment of the very recent
past, as reflected in his or her ability to report the presence, location, form,
identity, and/or activity of that object.

By the same token, implicit perception entails any change in the person’s
experience, thought, or action which is attributable to such an event, in the
absence of (or independent of) conscious perception of that event.

Implicit perception is exemplified by what has been called ‘subliminal’
perception, involving stimuli that are in some sense too weak to cross the
threshold (limen) of conscious perception.

In the earliest demonstrations of subliminal perception, the stimuli in question
were of very low intensity – ‘subliminal’ in the very strict sense of the term.

Later, stimuli of higher, supraliminal levels of intensity were rendered
‘subliminal’ by virtue of extremely brief presentations, or by the addition of a
‘masking’ stimulus which for all practical purposes rendered the stimuli
invisible.

Subliminal perception quickly entered everyday vocabulary, as in the
controversy over subliminal advertising and the marketing of subliminal self-
help tapes.

The adjective ‘implicit’ is preferable to ‘subliminal,’ because there are many
cases of unconscious perception in which the stimulus is not subliminal in the
technical sense of the term. The most dramatic example is that of ‘blindsight’ in
patients with lesions in striate cortex.
Module IV Learning, Memory, & Forgetting

Learning

• the acquisition of novel information, behaviors, or abilities after practice,
observation, or other experiences, as evidenced by change in behavior,
knowledge, or brain function.
• Learning involves consciously or nonconsciously attending to relevant
aspects of incoming information, mentally organizing the information
into a coherent cognitive representation, and integrating it with relevant
existing knowledge activated from long-term memory.

Definitions of Learning:

1. Gardner Murphy: “The term learning covers every modification in
behaviour to meet environmental requirements.”

2. Henry P. Smith: “Learning is the acquisition of new behaviour or the
strengthening or weakening of old behaviour as the result of experience.”

3. Crow & Crow: “Learning is the acquisition of habits, knowledge & attitudes.
It involves new ways of doing things and it operates in an individual’s attempts
to overcome obstacles or to adjust to new situations. It represents progressive
changes in behaviour. It enables him to satisfy interests to attain goals.”

NATURE OF LEARNING

1. Learning is Universal. Every creature that lives learns. Man learns most.
The human nervous system is very complex, so are human reactions and so are
human acquisitions. Positive learning is vital for children’s growth and
development.

2. Learning is through Experience. Learning always involves some kind of
experience, direct or indirect (vicarious).

3. Learning is from all Sides: Today learning is from all sides. Children learn
from parents, teachers, environment, nature, media etc.

4. Learning is Continuous. It denotes the lifelong nature of learning. Every
day new situations are faced and the individual has to bring essential changes in
his style of behaviour adopted to tackle them. Learning continues from birth to
death.
5. It results in Change in Behaviour. It is a change of behaviour influenced by
previous behaviour. It is any activity that leaves a more or less permanent effect
on later activity.

6. Learning is an Adjustment. Learning helps the individual to adjust himself
adequately to the new situations. Most learning in children consists in
modifying, adapting, and developing their original nature. In later life the
individuals acquire new forms of behaviour.
7. It comes about as a result of practice. It is the basis of drill and practice. It
has been proven that students learn best and retain information longer when
they have meaningful practice and repetition. Every time practice occurs,
learning continues.

8. Learning is a relatively Permanent Change. After a rat wakes up from its
nap, it still remembers the path to the food. Even if you have not been on a
bicycle for years, with just a few minutes’ practice you can be quite proficient
again.

9. Learning as Growth and Development. It is never-ending growth and
development. At each stage the learner acquires new visions of his future
growth and new ideals of achievement in the direction of his effort. According
to Woodworth, “All activity can be called learning so far as it develops the
individual.”

10. Learning is not directly observable. The only way to study learning is
through some observable behaviour. Actually, we cannot observe learning; we
see only what precedes performance, the performance itself, and the
consequences of performance.

Process of Learning
4 Theories of learning

1. Classical Conditioning
2. Operant Conditioning
3. Cognitive Theory.
4. Social Learning Theory.

These are explained below:-

Classical Conditioning

Classical conditioning can be defined as a process in which a formerly neutral
stimulus, when paired with an unconditioned stimulus, becomes a conditioned
stimulus that elicits a conditioned response (Luthans, 1995).

Ivan Pavlov, a Russian physiologist and Nobel laureate in Physiology or
Medicine, developed the classical conditioning theory of learning based on his
experiments teaching a dog to salivate in response to the ringing of a bell.

• When Pavlov presented meat (unconditioned stimulus) to the dog, he
noticed a great deal of salivation (unconditioned response). But when
merely the bell was rung, no salivation was noticed in the dog.

• What Pavlov did next was to accompany the offering of meat to the
dog with the ringing of the bell.

• He did this several times. Afterwards, he merely rang the bell without
presenting the meat. Now, the dog began to salivate as soon as the bell
rang.

• After a while, the dog would salivate merely at the sound of the bell,
even if no meat were presented. In effect, the dog had learned to
respond i.e. to salivate to the bell.

Pavlov concluded that the dog had become classically conditioned to salivate
(response) to the sound of the bell (stimulus). It will be seen that, in classical
conditioning, learning can take place amongst animals based on stimulus-
response (S-R) connections.

Classical Conditioning Examples

This stimulus-response (S-R) connection can be applied in management to
assess organizational behavior. Historically, when a CEO visits an
organization, production charts are updated, individuals put on a good dress,
window panes are cleaned and floors are washed. All one has to do is
just say that the Top Boss is visiting.
You will find that all the above work is undertaken (response) without any
instructions, because the people in the organization have learned the behaviour
(conditioned). It has caused a permanent change in the organization (S-R
connections).

Factors Influencing Classical Conditioning

There are four major factors that affect the strength of a classically
conditioned response and the length of time required for classical conditioning.

1. The number of pairings of the conditioned stimulus and the
unconditioned stimulus.
In general, the greater the number of pairings, the stronger the
conditioned response.

2. The intensity of the unconditioned stimulus.
If a conditioned stimulus is paired with a very strong unconditioned
stimulus, the conditioned response will be stronger.

3. The most important factor is how reliably the conditioned stimulus
predicts the unconditioned stimulus.
For example, a tone that is always followed by food will elicit more
salivation than one that is followed by food only some of the time.

4. The temporal relationship between the conditioned stimulus and
the unconditioned stimulus.
Conditioning takes place faster if the conditioned stimulus occurs
shortly before the unconditioned stimulus.
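Factors 1 and 2 above can be made concrete with a small simulation. The update rule below is the Rescorla-Wagner model of conditioning, which these notes do not cover; the learning-rate value and the use of US intensity as the asymptote are illustrative assumptions, not claims from the text.

```python
# A minimal sketch of the Rescorla-Wagner learning rule, illustrating
# that conditioned-response strength grows with the number of CS-US
# pairings and with the intensity of the unconditioned stimulus.
def condition(n_pairings, us_intensity, learning_rate=0.3):
    """Return associative strength V after n CS-US pairings.

    On each pairing: V <- V + learning_rate * (us_intensity - V),
    so V approaches us_intensity as pairings accumulate.
    """
    v = 0.0
    for _ in range(n_pairings):
        v += learning_rate * (us_intensity - v)
    return v

# More pairings -> stronger conditioned response (factor 1)
assert condition(10, 1.0) > condition(3, 1.0)
# Stronger US -> stronger conditioned response (factor 2)
assert condition(5, 2.0) > condition(5, 1.0)
```

The negatively accelerated growth curve this produces (rapid early gains, then diminishing returns per pairing) matches the general shape of acquisition curves described in conditioning research.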

Limitations of Classical Conditioning

Classical conditioning has real limitations in its applicability to human
behaviour in organisations, for at least three reasons:

1. Human beings are more complex than dogs and less amenable to
simple cause-and-effect conditioning.

2. The behavioural environment in organisations is also complex.


3. The human decision-making process being complex in nature makes it
possible to override simple conditioning.

An alternative approach to classical conditioning was proposed by B.F.
Skinner, known as Operant Conditioning, in order to explain the more complex
behaviour of humans, especially in an organisational setting.

Operant Conditioning

Operant Conditioning is concerned primarily with learning as a consequence
of behaviour: Response-Stimulus (R-S). In Operant Conditioning, a particular
response occurs as a consequence of many stimulus situations.
• Operant conditioning argues that behaviour is a function of its
consequences.

• People learn to behave to get something they want or avoid something
they don’t want.

• Operant behavior means voluntary or learned behavior.

• The tendency to repeat such behaviour is influenced by the
reinforcement or lack of reinforcement brought about by the
consequences of the behaviour.

• Reinforcement therefore strengthens behaviour and increases the
likelihood it will be repeated.

Operant Conditioning Examples

This Response-Stimulus (R-S) connection can be applied in management to
assess organizational behavior. From an organisational point of view, any
stimulus from the work environment will elicit a response. The consequence of
such a response will determine the nature of the future response.

For example, working hard and getting the promotion will probably cause the
person to keep working hard in the future.

Factors Influencing Operant Conditioning

In operant conditioning, several factors affect response rate, resistance to
extinction and how quickly a response is acquired.
1. Magnitude of reinforcement
In general, as magnitude of reinforcement increases, acquisition of a
response is greater. For example, workers would be motivated to work
harder and faster, if they were paid a higher salary.
2. Immediacy of reinforcement
Responses are conditioned more effectively when reinforcement is
immediate. As a rule, the longer the delay in reinforcement, the more
slowly a response is acquired.

3. Level of motivation of the learner
If you are highly motivated to learn to play football, you will learn
faster and practice more than if you have no interest in the game.

Cognitive Learning Theory

Behaviourists such as Skinner and Watson believed that learning through
operant and classical conditioning could be explained without reference to
internal mental processes.
Today, however, a growing number of psychologists stress the role of mental
processes. They choose to broaden the study of learning theories to include
such cognitive processes as thinking, knowing, problem-solving, remembering
and forming mental representations.
According to cognitive theorists, these processes are critically important in a
more complete, more comprehensive view of learning.

Learning by insight

Wolfgang Köhler (1887 – 1967): a German psychologist who studied anthropoid
apes and became convinced that they behave intelligently and were capable of
problem solving.
• In one experiment Kohler hung a bunch of bananas inside the caged
area but overhead, out of reach of the apes; boxes and sticks were left
around the cage.
• Kohler observed the chimp’s unsuccessful attempts to reach the
bananas by jumping or swinging sticks at them.
• Eventually the chimps solved the problem by piling the boxes one on
top of the other until they could reach the bananas.
Kohler’s major contribution is his notion of learning by insight. In human terms,
a solution gained through insight is more easily learned, less likely to be
forgotten, and more readily transferred to new problems than solution learned
through rote memorization.
Latent Learning and Cognitive Maps

Edward Tolman (1886 – 1959) differed with the prevailing ideas on learning

(a) He believed that learning could take place without reinforcement.

(b) He differentiated between learning and performance. He maintained that
latent learning could occur. That is, learning could occur without apparent
reinforcement but not be demonstrated until the organism was motivated to
do so.

Social Learning Theory

Albert Bandura contends that many behaviours or responses are acquired
through observational learning. Observational learning, sometimes
called modelling, results when we observe the behaviours of others and note the
consequences of that behaviour.

Social learning theory is a behavioural approach. The approach basically deals
with the learning process based on direct observation and experience.
Social learning theory integrates the cognitive and operant approaches to
learning. It recognises that learning does not take place only because of
environmental stimuli (classical and operant conditioning) or of individual
determinism (cognitive approach) but is a blend of both views.
Usually, the following four processes determine the influence that a model will
have on an individual:

1. Attention Process
2. Retention Process
3. Motor Reproduction Process
4. Reinforcement Process

1. Attention Process: People can learn from their models provided they
recognise and pay attention to the critical features. In practice, the
models that are attractive, repeatedly available or important to us tend
to influence us the most.

2. Retention Process: A model’s influence depends on how well the
individual can remember or retain in memory the behaviour/action
displayed by him when the model is no longer readily available.
3. Motor Reproduction Process: Now, the individual needs to convert
the model’s action into his action. This process evinces how well an
individual can perform the modelled action.

4. Reinforcement Process: Individuals become motivated to display the
modelled action if incentives and rewards are provided to them.

Self-efficacy

Central to Bandura’s social learning theory is the notion of self-efficacy.

Self-efficacy is an individual’s belief and expectancies about his or her ability
to accomplish a specific task effectively.
According to Bandura, self-efficacy expectations may be enhanced through four
means as follows:

1. Performance accomplishments (just do it)
2. Vicarious experiences (watch someone else do it)
3. Verbal persuasion (be convinced by someone else to do it)
4. Emotional arousal (get excited about doing it)

MEMORY

The multi-store model of memory (also known as the modal model) was
proposed by Richard Atkinson and Richard Shiffrin (1968) and is a structural
model. They proposed that memory consisted of three stores: a sensory
register, short-term memory (STM) and long-term memory (LTM).
SENSORY MEMORY

Sensory memory is the first stage of memory, the point at which information
enters the nervous system through the sensory systems—eyes, ears, nose,
tongue, and skin. The sensory register is a memory system that works for a
very brief period of time that stores a record of information received by
receptor cells until the information is selected for further processing or
discarded. Information is encoded into sensory memory as neural messages in
the nervous system. As long as those neural messages are traveling through
the system, it can be said that people have a “memory” for that information
that can be accessed if needed.
A. The sensory memory register is specific to individual senses:
1. Iconic memory for visual information

2. Echoic memory for auditory information

B. Duration is very brief:

1. 150-500 msec for visual information

2. 1-2 sec for auditory information

C. The capacity of the sensory register is believed to be large.

D. Information in store is meaningless unless it is selected for further
processing by being attended to in an effortful way.
E. The general purpose of the sensory information stores seems to be to keep
information around, albeit briefly, for further processing. Processing
information takes time, and it’s helpful to have an initial store that maintains
the presented information beyond its physical duration.
SHORT TERM MEMORY (STM)

INTRODUCTION:-
Short-term memory (STM) is the place where small amounts of
information can be temporarily kept for more than a few seconds but
usually for less than one minute (Baddeley, Vallar, & Shallice, 1990).
Information in short-term memory is not stored permanently but rather
becomes available for us to process: to make sense of, modify, interpret,
and store it.
Psychologists distinguish STM from LTM on the basis of characteristics,
such as,

1. How much information can be stored (capacity),
2. The form in which the information is stored (coding),
3. The ways in which information is retained or forgotten, and
4. The ways in which information is retrieved.
CAPACITY -

The cognitive psychologist George Miller (1956) referred to “seven plus or
minus two” pieces of information as the “magic number” in short-term
memory. It seems to be the maximum number of independent units one can
hold in STM.
Miller (1956) demonstrated that if one is presented with a string of random
units, they will be able to recall them only if the string contains about seven
or fewer units. The only way to overcome this limitation is by chunking the
individual units into larger units.
Chunking is the process of organizing information into smaller
groupings (chunks), thereby increasing the number of items that can be
held in STM.
For example, consider N F L C B S F B I M T V. On looking closely, this
12-letter string forms four sets of abbreviations for well-known entities: NFL
(the National Football League), CBS (one of the three major television networks
currently operating in the United States), FBI (the Federal Bureau of
Investigation), and MTV (the rock video cable television station).
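Chunking can be sketched in code. The fixed three-letter grouping below is an illustrative stand-in for the meaningful groupings a person would form; real chunks need not be equal in size.

```python
# Illustrative sketch of chunking: regrouping 12 individual letters
# into 4 familiar units, reducing the load on STM from 12 items
# (beyond Miller's 7 +/- 2) to 4 (comfortably within it).
def chunk(items, size):
    """Group a sequence of items into fixed-size chunks."""
    return [items[i:i + size] for i in range(0, len(items), size)]

letters = list("NFLCBSFBIMTV")
chunks = ["".join(c) for c in chunk(letters, 3)]
print(len(letters), "items vs", len(chunks), "chunks:", chunks)
# prints: 12 items vs 4 chunks: ['NFL', 'CBS', 'FBI', 'MTV']
```

The capacity limit applies to chunks, not raw items, which is why recoding the string this way lets it fit in STM.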
CODING -

The term coding refers to the way in which information is mentally
represented, that is, the form in which the information is held.
∙ A study by R. Conrad (1964) presented participants with lists of
consonants for later recall. Although the letters were presented visually, the
participants’ confusions were based on sound. Participants were forming a
mental representation of the stimuli that involved acoustic rather than visual
properties.
∙ Later work by Baddeley (1966a, 1966b) confirmed this effect even when
the stimuli were words rather than letters.

Researchers regard acoustic code as the dominant code used in STM (Neath
& Surprenant, 2003).
RETENTION DURATION AND FORGETTING -

∙ John Brown (1958) and Peterson and Peterson (1959) concluded from
their three-consonant trigram counting studies that, if not rehearsed,
information is lost, decays, or breaks apart within about 20 seconds in
STM. That length of time is called the retention duration of the memory.
∙ However, other cognitive psychologists proposed a different mechanism,
called interference.
Some information can “displace” other information, making the former
hard to retrieve.
∙ Waugh and Norman (1965) in their study of probe digit task showed that
interference is the reason for the loss of information in STM rather than
decay.
∙ Keppel and Underwood (1962), found that forgetting in the Brown–
Peterson task doesn’t happen until after a few trials. They suggested that over
time, proactive interference builds up.
This term refers to the fact that material learned first can disrupt
retention of subsequently learned material.
∙ Wickens, Born, and Allen (1963) reasoned that the greater the similarity
among the pieces of information, the greater the interference.
∙ All the evidence might suggest that all cognitive psychologists agree that
only interference causes forgetting in STM, but it is not so. Reitman (1971,
1974) concluded that information really could decay if not rehearsed in STM.

RETRIEVAL OF INFORMATION -
∙ Saul Sternberg (1966, 1969), in a series of experiments, found that a serial,
exhaustive search is the way one retrieves information from STM. He
explained that the search process itself may be so rapid and have such
momentum that it is hard to stop once it starts.
∙ An intriguing twist on the Sternberg study comes from DeRosa and Tkacz
(1976), who demonstrated that with certain kinds of stimuli, people
apparently search STM in a parallel way. This work suggests that STM treats
ordered, organized material differently from unorganized material.
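Sternberg's serial, exhaustive search can be sketched as follows: the probe is compared against every item in the memory set, even after a match is found, so the number of comparisons depends on set size but not on the probe's position. The function name is an illustrative choice, not terminology from the studies.

```python
# Sketch of Sternberg's serial, exhaustive STM search: the probe is
# compared with EVERY item in the memory set, even after a match,
# so comparison count depends only on set size (hence the linear
# increase of reaction time with set size that Sternberg observed).
def exhaustive_search(memory_set, probe):
    comparisons = 0
    found = False
    for item in memory_set:   # scan the whole set; never stop early
        comparisons += 1
        if item == probe:
            found = True
    return found, comparisons

found, n = exhaustive_search([3, 8, 1, 5], 8)
print(found, n)  # prints: True 4 -- four comparisons even though 8 was the 2nd item
```

A self-terminating search, by contrast, would stop at the match, and "no" responses would then take longer than "yes" responses on average; Sternberg's data showed equal slopes for both, which is what motivated the exhaustive account.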

STM is a short-term, limited-capacity storehouse where information is coded
acoustically and maintained through rehearsal. Information can be retrieved
from this storage using a high-speed, serial, exhaustive search. The nature of
the information in STM, however, can help change the capacity and
processing of stored information.
LONG TERM MEMORY

Long-term memory is the storage of information for a long time; it is the
final stage in the processing of memory. Information stored in long-term
memory lasts longer than that in short-term memory. Long-term memory
decays very little with time and is easier to recall. Our conscious mind may
not be aware of the information stored in long-term memory, but this
information can be recalled with ease and accuracy.
Examples of long-term memory are the recollection of an important event in
the distant past or bicycle-riding skills someone learned in childhood. Some
things easily become part of long-term memory while others may need
continuous practice to be stored for a long time. It also varies from person
to person. Some people can remember complex things with little or no
difficulty while others may struggle in remembering easier and daily life
information. Long-term memory is usually defined in contrast to short-term
memory. Short-term memories last only for about 18-30 seconds while long-
term memories may last for months or years, or even decades. The capacity of
long-term memory is unlimited in contrast to short-term and working memory.
A lot of research has shown that different types of long-term memories are
stored in different parts of the brain.

TYPES OF LONG TERM MEMORY

Explicit Memory
Explicit memory usually refers to all the memories and information that can
be evoked consciously. The encoding of explicit memories is done in the
hippocampus, but they are stored somewhere in the temporal lobe of the
brain. The medial temporal lobe (MTL) is also involved in this type of
memory, and damage to the MTL is linked to poor explicit memory.

The other name used for explicit memory is declarative memory. Explicit or
declarative memory is divided into two types: episodic and semantic
memory.

∙ Semantic memory: It is a part of the explicit long-term memory
responsible for storing information about the world. This includes knowledge
about the meaning of words, as well as general knowledge.
For example, London is the capital of England. It involves conscious thought
and is declarative.

The knowledge that we hold in semantic memory focuses on “knowing that”
something is the case (i.e., declarative). For example, we might have a
semantic memory for knowing that Paris is the capital of France.
∙ Episodic memory is a part of the explicit long-term memory responsible
for storing information about events (i.e., episodes) that we have
experienced in our lives.
It involves conscious thought and is declarative. An example would be a
memory of our 1st day at school.
The knowledge that we hold in episodic memory focuses on “knowing that”
something is the case (i.e. declarative). For example, we might have an
episodic memory for knowing that we caught the bus to college today.
Implicit Memory
Implicit memory is the opposite of declarative memory. It refers to knowledge
that guides behaviour without conscious awareness, such as the movement of
the body in using objects. An example of implicit memory would be how to
ride a bicycle. Several brain areas, including the basal ganglia and the parietal
and occipital regions, are involved in implicit memory. This type of memory
is largely independent of the hippocampus. Writing, riding, driving, and
swimming are all examples of implicit memory because they are
non-declarative.

∙ Procedural Memory: Procedural memory is the memory of motor skills,
and it is responsible for knowing how to do things. This memory is automatic,
i.e., it works at an unconscious level. Procedural memories are non-declarative
and retrieved automatically in procedures that involve motor skills. For
example, riding a bicycle is a type of procedural memory.
∙ Associative Memory: Associative memory usually refers to the storage and
retrieval of specific information through association. The acquisition of this
type of memory is carried out with two types of conditioning. One is classical
conditioning and the other is operant conditioning. Classical conditioning
refers to the learning process in which stimuli and behaviour are associated.
On the other hand, operant conditioning is a learning process in which new
behaviours develop according to their consequences.
∙ Non-associative: Non-associative memory refers to the learning of new
behaviours mainly through repeated exposure to a single type of stimulus.
The new behaviour is classified into habituation and sensitization. Habituation
is a decrease in response to repeated stimuli, while sensitization is an increased
response to repeated stimuli.
∙ Priming: Studies have shown that exposure to certain stimuli influences the
response of a person to stimuli that are presented later. This effect of previous
memory on new information is what we call priming.

WORKING MEMORY
In 1690, John Locke distinguished between contemplation, or holding an idea
in mind, and memory, or the power to revive an idea after it has disappeared
from the mind (Logie, 1996). The holding in mind is limited to a few concepts
at once and reflects what is now called working memory, as opposed to the
possibly unlimited store of knowledge from a lifetime that is now called long-
term memory. The term “working memory” became much more dominant in
the field after Baddeley and Hitch (1974) demonstrated that a single module
could not account for all kinds of temporary memory. Their thinking led to
an influential model (Baddeley, 1986) in which verbal-phonological and
visual-spatial representations were held separately, and were managed and
manipulated with the help of attention-related processes, termed the
central executive.
Working memory can be defined as the retention of a small amount of
information in a readily accessible form. It facilitates planning,
comprehension, reasoning, and problem-solving.
Working memory holds new information in place so the brain can work with
it briefly and connect it with other information. For example, in math class,
working memory lets children “see” in their head the numbers the teacher is
saying. They might not remember any of these
numbers by the next class or even 10 minutes later. But that’s fine since the
working memory has done its short-term job by helping them tackle the task
at hand.
Working memory capacity is the ability to hold onto and use specific
information for a certain amount of time. Working memory capacity has been
shown to increase with age; however, there are key differences across
individuals. It is also important to understand that working memory capacity
is quite low and limited compared to other kinds of memory, since only a
small amount of information can be held for a short time.
Research has shown that the number of items we can hold at once in our
working memory is seven, plus or minus two items. According to Miller, our
ability to put new information into categories via the process of chunking
increases the number of items we can retain at once. This involves grouping
the information into meaningful pieces that can be remembered better. An
example of this would be if you were given a list of unrelated words to retain,
you would probably categorize them into respective chunks: for example, the
words "bird," "mouse," and even "bug" could be classified into the 'animals'
chunk, and "book," "pen," "house" into the 'inanimate items' chunk. Other
theories, such as those of Cowan, claim that we can only retain three or four
items at once.
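The chunking idea above can be sketched in a few lines of code: grouping unrelated words into meaningful categories reduces the number of "items" competing for working-memory slots. The category mapping below is purely illustrative, not a psychological model.

```python
# A minimal sketch of Miller-style chunking. The word-to-category mapping
# is a hypothetical example, matching the one in the text.
CATEGORY = {
    "bird": "animals", "mouse": "animals", "bug": "animals",
    "book": "inanimate items", "pen": "inanimate items",
    "house": "inanimate items",
}

def chunk(words):
    # Group individual words into their category "chunks".
    chunks = {}
    for w in words:
        chunks.setdefault(CATEGORY.get(w, "other"), []).append(w)
    return chunks

items = ["bird", "book", "mouse", "pen", "bug", "house"]
print(len(items))         # 6 separate items to hold in mind
print(len(chunk(items)))  # only 2 chunks to hold
```

Six unrelated words exceed Cowan's three-to-four-item estimate, but two chunks fit comfortably.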
Maintenance describes the ability to 'hold' information in the mind for a given
amount of time. This is a very important feature about working memory to
understand because, without it, we lose our ability to use and manipulate this
kind of short-lived memory and information. Biologically, maintenance
requires the activation of key brain regions associated with problem solving
and memory, in addition to many other areas that may aid in our ability to use
the information.
Rehearsal refers to the process of consistently repeating learned information
in a meaningful way, such as going through a list of groceries not just seeing
unrelated words, but trying to think of how they could contribute to future
meals. Rehearsal could also be giving meaning to recently learned or encoded
information, such as making mnemonics in order to remember a list of given
words in order. For example, in order to remember the planets in our solar
system in order, we can remember the mnemonic "My Very Educated Mother
Just Served Us Noodles", corresponding to Mercury, Venus, Earth, Mars,
Jupiter, Saturn, Uranus, and Neptune.
Maintenance and rehearsal are ways to increase the likelihood of
remembering something. This is because memories that are not rehearsed or
maintained degrade and become inaccurate over time.
Working memory and short-term memory are often confused as the same, but
working memory differs from short-term memory in that it assumes both the
storage and manipulation of information, with an emphasis on its functional
role in complex cognition.

Metamemory

Metamemory refers to the processes and structures whereby people are able
to examine the content of their memories, either prospectively or
retrospectively, and make judgments or commentaries about them.
The simplest form of metamemory is knowing you have a memory without
remembering the details. If you’ve ever experienced the tip-of-the-tongue
phenomenon (you can’t recall a memory but know you have it, like a movie’s
name), you’ve concluded you have a memory via your metamemory, not
actual memory. From a computer programming point of view, it’s knowing
there is (or isn’t) an item in a database without actually looking it up, like
looking at only the key or value in a pair.

Metamemory is also similar to a file’s metadata on the internet.
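The database analogy can be made concrete: metamemory is like testing whether a key exists in a dictionary without retrieving its value. The store and key names below are hypothetical.

```python
# A minimal sketch of the key/value analogy from the text. The memory
# store and its contents are invented for illustration.
memory = {"movie_title": "the full details of the film"}

def feeling_of_knowing(cue):
    # "Do I have this memory?" is a membership test only; no retrieval.
    return cue in memory

def recall(cue):
    # Actually retrieving the target details.
    return memory.get(cue)

print(feeling_of_knowing("movie_title"))   # True: we know that we know it
print(feeling_of_knowing("unseen_movie"))  # False: we know that we don't
```

The tip-of-the-tongue state corresponds to `feeling_of_knowing` returning `True` while `recall` momentarily fails.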

All humans have the capacity to quickly figure out what they know and don’t
know, and their judgments are quite accurate. For example, without searching
the entire mind for all memory units, we can easily tell if we know the answer
to a question in an exam. We can look at the question paper and quickly assess
what we know without going through the details of what we know. This may
seem paradoxical at first, but it isn’t – how can we know what’s in a drawer
without checking it? Humans have multiple representations for memory that
indicate whether a memory exists or not and to what extent. For example,
artificially triggering a small set of neurons that are attached to a memory can
activate a larger network of neurons that hold that memory.

Metamemory comprises eight types of knowledge about memory:

1. Strategy: Knowledge of memory techniques and tricks to aid memory.
This would prompt you to use a strategy or learn a strategy for specific
goals like studying, such as mnemonics to learn biology.

2. Task: Knowledge that a task demands memory and recalling ability vs.
familiarity or on-the-go analysis. This would prompt you to know how
you can do a task if it needs memory or analysis or both.

3. Capacity: Knowledge about one’s overall/general memory capacity.
This would reflect an accurate understanding of what you are good at
remembering, what you are bad at, and where you have gaps in memory.

4. Change: Knowledge about how one’s memory changes over time or in
different circumstances. This would reflect an accurate understanding
of what you could remember well as a child, or what you got good at
as an adult, and how that has changed over time.

5. Anxiety: Knowledge about how anxiety (and stress) affect memory
performance. This would prompt you to judge that you might not remember
something important during an interview because of nervousness, and then
decide to focus more on the details that are prone to slip your mind
during stress.

6. Activities: Knowledge of what activities support memory. This would
prompt self-reflection about how your memory might be affected by lack
of sleep or diet.

7. Achievement motivation: The motivation and intention to maximize
memory potential. This would reflect the motivation and confidence
to perform memory tasks better when taken seriously vs. casually.

8. Locus: Knowledge about how much control one has over their memory
(little control vs. high control): This would contain beliefs about how
well you can improve your memory or how easily you can lose it with
your own actions (such as drinking or no exercise) vs. how easily you
believe genetics play a role or you can’t change your memory abilities
beyond a small limit.

Metamemory is an important factor in age-related memory decline. Older
people often start with subjective memory concerns (SMC), which are then
screened by a doctor. Two aspects of metamemory, capacity and change,
usually indicate the need for medical guidance.

Theories of metamemory

Multiple theories explain different aspects of metamemory.

Cue-familiarity hypothesis: Information is stored as a pair of a Cue (a heading,
label, trigger, question, concept, etc.) and a Target (the details). For example,
the concept of a car is stored as a cue with the label “car” and a target, which
contains details like 4 wheels, driver, steering, etc. Humans might recall only
the cue and use it to judge whether they have a memory of the details: if the
cue is familiar, they know they have the target information. Remembering a
familiar concept but being unable to recall the details indicates that the cue
was recognized while the target was not retrieved.

The accessibility hypothesis: Information is judged based on how easily it is
accessed and processed. Easy processing and access lead to more accurate
judgments about knowing the actual information. For example, if one is
asked, “do you know what gravity is?” they would rely on the most accessible
“portion of information,” like how planets move around the sun, and conclude
they know the concept. This theory suggests that retrieving a small portion
of information because it is easy and accessible creates metamemory
judgments of knowing, as opposed to just recognizing if “gravity” is a
familiar term (the cue-familiarity hypothesis).

The competition hypothesis: The brain gets constant inputs from various
senses and from within itself. These inputs compete with one another and fight
for processing power in the brain. When the competition is high, metamemory
judgments about knowing something are inaccurate or low. This might
happen because high competition would lead to multiple cue
and target activations which create confusion. So low competition would
increase confidence in memory. It would be easy to say, “I know what love is,”
but when asked, “how is it different from compassion or lust?” one would find
it difficult to differentiate the three. This extra activation of competing inputs
fighting for the brain’s resources would make recall worse.
The interaction hypothesis: This theory suggests that the cue-familiarity
hypothesis is the preferred mode for metamemory and that the accessibility
hypothesis comes into play only when cue familiarity fails. In the car example,
one would instantly know what a car is. But when asked “what is a 4-wheel
drive,” they would take time to retrieve partial information about a 4-wheel
drive, if it exists, and then judge whether they know or don’t know.

How metamemory affects daily thinking

Metamemory creates important judgments about our awareness of knowledge
that affect everyday experiences and overall metacognition.

1. Feeling of knowing: We often know what we know in an instant.

2. Judgments of learning: When we learn in a classroom or in training,
we often accurately judge if we did or didn’t learn.

3. Knowing what you don’t know: When asked about an unknown
concept, we can instantly judge if we don’t know it, even if the words
are familiar.

4. Knowing what you know: We are typically confident about what we
know for sure, like our name and birthdate.

5. Feelings of knowing vs. actually knowing: Sometimes, we judge
information based on familiarity (the cue) but not the proof of memory
(successful recall of details). These judgments can create a false
metamemory of knowing something because the actual information is
unknown or unrecallable.

6. Tip-of-the-tongue (presque vu): When trying to recall a specific name
or a TV episode, you might forget it even when you have seen it and
know you know it. One reason is that the memory may be blocked through
competition from all the other names surfacing in the mind, even while its
metamemory remains active. Generally, waiting for the other memory
units to fade from the mind brings the “target” memory into awareness.
7. Prospective memory: Prospective memory describes how we can make a
mental note to remember something and then actually remember it later,
without repeating the information. This skill is important in planning.

8. Dunning-Kruger effect: Judgments of knowledge are less accurate for
those who know a little compared to those who know more. For those
who know very little, confidence is disproportionately high. This may
be due to the competition hypothesis (no knowledge = few cues = low
competition).
These judgments can be improved with practice and by confirming their
accuracy through feedback or verification of knowledge. Eventually, the
verification reinforces the memory itself (sometimes called the testing effect:
information you test and verify is remembered better; at other times called the
production effect: information you produce yourself is remembered better).

SEMANTIC MEMORY AND MODELS


How is it that we know what a dog and a tree are, or, for that matter, what
knowledge is? Our semantic memory consists of knowledge about the world,
including concepts, facts and beliefs. This knowledge is essential for recognizing
entities and objects, and for making inferences and predictions about the world.
In essence, our semantic knowledge determines how we understand and interact
with the world around us.
Tulving (1972, 1983, 1989) proposed a classification of long-term memories
into two kinds: episodic and semantic.
Semantic memory is thought to store general information about language and
world knowledge. When you recall arithmetic facts (for example, “2 + 2 = 4”),
historical dates (“In fourteen hundred and ninety-two, / Columbus sailed the
ocean blue”), or the past tense forms of various verbs (run, ran; walk, walked;
am, was), you are calling on semantic memory. Notice in these examples that in
recalling “2 + 2 = 4,” you aren’t tracing back to a particular moment when you
learned the fact, as you might do with the 9/11 attacks. Instead of
“remembering” that 2 + 2 = 4, most people speak of “knowing” that 2 + 2 = 4.
• In semantic memory, we store knowledge: facts, concepts, and ideas.
• With semantic memory, the information is encoded as general knowledge,
context effects are less pronounced, and retrieval of information consists of
answering questions from our general knowledge base in the form of
“Remember what . . . .”
• Organization of semantic memory is arranged more on the basis of
meanings and meaning relationships among different pieces of information.
SEMANTIC MEMORY MODELS:
Many of the semantic memory models developed because psychologists and
computer scientists interested in the field of artificial intelligence wanted to
build a system having what most people refer to as “common sense
knowledge.” Many of the models described were developed in the 1970s,
when information-processing analogies between humans and computers were
dominating the field of cognitive psychology.
• The Hierarchical Semantic Network Model
A landmark study on semantic memory was performed by Collins and
Quillian (1969). They tested the idea that semantic memory is analogous
to a network of connected ideas. As in later connectionist networks, this
one consists of nodes, which in this case correspond roughly to words or
concepts. Each node is connected to related nodes by means of pointers,
or links that go from one node to another. Thus the node that corresponds
to a given word or concept, together with the pointers to other nodes
to which the first node is connected, constitutes the semantic memory for
that word or concept. The collection of nodes associated with all the words
and concepts one knows about is called a semantic network. Figure 7-1
depicts a portion of such a network

The model was called a hierarchical semantic network model of semantic
memory, because researchers thought the nodes were organized in
hierarchies. Most nodes in the network have superordinate and
subordinate nodes. A superordinate node corresponds to the name of the
category of which the thing corresponding to the subordinate node is a
member. So, for example, a node for “cat” would have the superordinate
node of “animal” and perhaps several subordinate nodes, such as
“Persian,” “tabby,” and “calico.”
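A hierarchical network with cognitive economy can be sketched as a small data structure: each node stores only its own properties plus a link to its superordinate, and verification walks up the hierarchy. The node names, properties, and step-counting scheme below are illustrative assumptions, not Collins and Quillian's actual program.

```python
# A minimal sketch of a hierarchical semantic network. Properties are
# stored at the most general node possible (cognitive economy) and
# found by walking up superordinate links. All entries are hypothetical.
network = {
    "animal": {"superordinate": None,     "properties": {"can move"}},
    "fish":   {"superordinate": "animal", "properties": {"has gills"}},
    "shark":  {"superordinate": "fish",   "properties": {"can bite"}},
}

def has_property(node, prop):
    steps = 0  # more steps up the hierarchy predicts slower verification
    while node is not None:
        if prop in network[node]["properties"]:
            return True, steps
        node = network[node]["superordinate"]
        steps += 1
    return False, steps

print(has_property("shark", "can move"))  # (True, 2): stored two levels up
print(has_property("fish", "can move"))   # (True, 1)
```

The step count mirrors the model's reaction-time prediction that Conrad's data contradicted: "A shark can move" should take longer to verify than "An animal can move."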
After Collins and Quillian (1969) presented their model, others found evidence
that contradicted the model’s predictions. One line of evidence related
to the prediction of cognitive economy, the principle that properties and
facts would be stored with the
highest and most general node possible. Carol Conrad (1972) found
evidence to contradict this assumption. Participants in her sentence
verification experiments were no slower to respond to sentences such as
“A shark can move” than to “A fish can move” or “An animal can move.”
However, the principle of cognitive economy would predict that the
property “can move” would be stored closest to the node for “animal” and
thus that the three sentences would require decreasing amounts of time to
verify. Conrad argued that the property “can move” is one frequently
associated with “animal,” “shark,” and “fish” and that frequency of
association rather than cognitive economy predicts reaction time. A
second prediction of Collins and Quillian’s (1969) model had to do with
hierarchical structure. Presumably, if the network represents such words
(that in turn represent concepts) as animals, mammals, and pigs, then it
should do so by storing the node for “mammal” under the node for
“animal,” and the node for “pig” under the node for “mammal.” However,
Rips, Shoben, and Smith (1973) showed that participants were faster to
verify “A pig is an animal” than to verify “A pig is a mammal,” thus
demonstrating a violation of predicted hierarchical structure. A
third problem for the hierarchical network model was that it failed to
explain why certain other findings kept appearing. One such finding is
called a typicality effect. Rips et al. (1973) found that responses to
sentences such as “A robin is a bird” were faster than responses to “A
turkey is a bird,” even though these sentences should have taken
an equivalent amount of time to verify. In general, typical instances of a
concept are responded to more quickly than atypical instances; robins are
typical birds, and turkeys are not. The hierarchical network model did not
predict typicality effects; instead, it predicted that all instances of a
concept should be processed similarly. These, among other problems, led
to some reformulations as well as to other proposals regarding
the structure of semantic memory. Some investigators abandoned the idea
of networks altogether; others tried to extend and revise the network models.

• The Feature Comparison Model


Smith, Shoben, and Rips (1974) proposed one alternative to the hierarchical
semantic network model, called a feature comparison model of semantic
memory. The assumption behind this model is that the meaning of any
word or concept consists of a set of elements called features. Features
come in two types:
Defining, meaning that the feature must be present in every example of the
concept,
Characteristic: the feature is usually, but not necessarily, present.

For instance, think about the concept “bachelor.” The defining features
here include “male,” “unmarried,” and “adult.” It is not possible for a 2-
year-old to be a bachelor (in our common use of the term), nor for a
woman, nor for a married man. Features such as “is young” or “lives in
own apartment” are also typically associated with bachelors, though not
necessarily in the way that “male” and “unmarried” are—these are the
characteristic features.
The feature comparison model can explain many findings that the
hierarchical network model could not. One finding it explains is the
typicality effect: Sentences such as “A robin is a bird” are verified more
quickly than sentences such as “A turkey is a bird” because robins, being
more typical examples of birds, are thought to share more characteristic
features with “bird” than do turkeys.
The feature comparison model also explains fast rejections of false
sentences, such as “A table is a fruit.” In this case, the list of features for
“table” and the list for “fruit” presumably share very few entries.
The feature comparison model also provides an explanation for a finding
known as the category size effect (Landauer & Meyer, 1972). This term
refers to the fact that if one term is a subcategory of another term, people
will generally be faster to verify the sentence with the smaller category.
That is, people are faster to verify the sentence “A collie is a dog” than to
verify “A collie is an animal,” because the set of dogs is part of the set of
animals. The feature comparison model explains this effect as follows.
It assumes that as categories grow larger (for example, from robin, to bird,
to animal, to living thing), they also become more abstract. With increased
abstractness, there are fewer defining features. Thus in the first stage of
processing there is less overlap between the feature list of a term and the
feature list of an abstract category.
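The first-stage comparison of the feature model can be sketched as set overlap between two feature lists. The feature sets below are invented for illustration; only the overlap logic reflects the model's idea.

```python
# A minimal sketch of the feature-comparison idea: verification begins
# with overall overlap between two feature lists. All feature sets here
# are hypothetical toy examples.
features = {
    "robin":  {"has feathers", "flies", "sings", "small"},
    "turkey": {"has feathers", "large"},
    "bird":   {"has feathers", "flies", "sings", "lays eggs"},
    "table":  {"has legs", "flat top", "furniture"},
    "fruit":  {"edible", "grows on plants", "has seeds"},
}

def overlap(a, b):
    # Proportion of shared features: a stage-one similarity measure.
    shared = features[a] & features[b]
    return len(shared) / len(features[a] | features[b])

# Typicality effect: robins share more features with "bird" than turkeys do.
print(overlap("robin", "bird") > overlap("turkey", "bird"))  # True
# Fast rejection: "A table is a fruit" has essentially no overlap.
print(overlap("table", "fruit"))  # 0.0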

• Other Network Models


Collins and Loftus (1975) presented an elaboration of the Collins and
Quillian (1969) hierarchical network model that they called spreading
activation theory. these authors sought both to clarify and to extend the
assumptions made about the manner in which people process semantic
information.
They again conceived of semantic memory as a network, with nodes in
the network corresponding to concepts. They also saw related concepts as
connected by paths in the network. They further asserted that when
one node is activated, the excitation of that node spreads down the paths
or links to related nodes. They believed that as activation spreads outward,
it decreases in strength,
activating very related concepts a great deal but activating distantly
related nodes only a little bit. Collins and Loftus (1975) described a
number of other assumptions this model makes, together with
explanations of how the model accounts for data from many other
experiments. They dispensed with the assumptions of cognitive
economy and hierarchical organization, helping their model avoid the
problems of the Collins and Quillian (1969) model.

• The ACT Models


John Anderson (1976, 1983, 1993, 2005; Anderson, Budiu, & Reder,
2001). Called the adaptive control of thought (ACT) model of memory.
Based on analogies to computers, ACT has given rise to several computer
simulations of cognitive processing of different tasks. ACT models do not
make the semantic/episodic distinction described earlier, but distinguish
among three kinds of memory systems. The first is working
memory, thought to contain information the system is currently using. The
other two kinds are declarative memory and procedural memory.
Based on analogies to computers, ACT has given rise to several computer
simulations of cognitive processing of different tasks. ACT models do not
make the semantic/episodic distinction described earlier, but distinguish
among three kinds of memory systems. The first is working memory,
thought to contain information the system is currently using. The other
two kinds are declarative memory and procedural memory. For example,
when you ride a bicycle, swim, or swing a golf club, you are thought to be
drawing on your procedural memory.
Anderson (1983) believed that declarative memory stores information in
networks that contain nodes. There are different types of nodes, including
those corresponding to spatial images or to abstract propositions. As with
other network models, ACT models allow both for activation of any node
and for spreading activation to connected nodes. Anderson also posited
the existence of a procedural memory. This memory store represents
information in production rules. Production rules specify a goal to
achieve, one or more conditions that must be true for the rule to apply, and
one or more actions that result from applying the rule. For example, a
typical college student could use this production rule: “If the goal is to
study actively and attentively (goal) and the noise level in the dormitory
is high (condition) and the campus library is open (condition),
then gather your study materials (action) and take them to the library
(action) and work there (action).”
In the ACT models, working memory is actually that part of declarative
memory that is very highly activated at any particular moment. The
production rules also become activated when the nodes in the declarative
memory that correspond to the conditions of the relevant production rules
are activated. When production rules are executed, they can create new
nodes within declarative memory. Thus ACT models have been described
as very “activationbased” models of human cognition (Luger, 1994).

• Connectionist Models
Connectionist models, also known as Parallel Distributed Processing
(PDP) models, are a class of computational models often used to model
aspects of human perception, cognition, and behaviour, the learning
processes underlying such behaviour, and the storage and retrieval of
information from memory. The approach embodies a
particular perspective in cognitive science, one that is based on the idea
that our understanding of behaviour and of mental states should be
informed and constrained by our knowledge of the neural processes that
underpin cognition. While neural network modelling has a history dating
back to the 1950s, it was only at the beginning of the 1980s that
the approach gained widespread recognition, with the publication of two
books edited by D.E. Rumelhart & J.L. McClelland (McClelland &
Rumelhart, 1986; Rumelhart & McClelland, 1986), in which the basic
principles of the approach were laid out, and its application to a number
of psychological topics were developed. Connectionist models of
cognitive processes have now been proposed in many different domains,
ranging from different aspects of language processing to cognitive control,
from perception to memory. Whereas the specific architecture of such
models often differs substantially from one application to another, all
models share a number of central assumptions that collectively
characterize the “connectionist ” approach in cognitive science. One of
the central features of the approach is the emphasis it has placed on
mechanisms of change. In contrast to traditional computational modelling
methods in cognitive science, connectionism takes it that understanding
the mechanisms involved in some cognitive process should be informed
by the manner in which the system changed over time as it developed and
learned.
Connectionist models take inspiration from the manner in which
information processing occurs in the brain. Processing involves the
propagation of activation among simple units (artificial neurons)
organized in networks, that is, linked to each other through weighted
connections representing synapses or groups thereof. Each unit
then transmits its activation level to other units in the network by means
of its connections to those units. The activation function, that is, the
function that describes how each unit computes its activation based on its
inputs, may be a simple linear function, but is more typically non-linear
(for instance, a sigmoid function).

Episodic memory
Episodic memory, holds memories of specific events in which you yourself
somehow participated. Episodic memory is memory for information about
one’s personal experiences. With episodic memory, the memories are encoded
in terms of personal experience. Recalling memories from the episodic system
takes the form of “Remember when . . ” In episodic memory, we hold onto
information about events and episodes that have happened to us directly. As
Tulving (1989) put it, episodic memory “enables people to travel back in time,
as it were, into their personal past, and to become consciously aware of
having witnessed or participated in events and happenings at earlier times”.
Organization of episodic memory is temporal; that is, one event will be
recorded as having occurred before, after, or at the same time as another.
Episodic memory has also been described as containing memories that are
temporally dated; the information stored has some sort of marker for when it
was originally encountered. Any of your memories that you can trace to a
single time are considered to be in episodic memory. If you recall your high
school graduation, or your first meeting with your first-year roommate, or the
time you first learned of an important event, you are recalling
episodic memories. Even if you don’t recall the exact date or even the year,
you know the information was first presented at a particular time and place,
and you have a memory of that presentation.
Explanation with an example
Gene, for example, survived a motor-cycle accident in 1981 (when he was 30
years old) that seriously damaged his frontal and temporal lobes, including
the left hippocampus. Gene shows anterograde amnesia and retrograde
amnesia. In particular, Gene cannot recall any specific past events, even with
extensive, detailed cues. That is, Gene cannot recall any birthday parties,
school days, or conversations. Schacter noted further that “even when detailed
descriptions of dramatic events in his life are given to him—the tragic
drowning of his brother, the derailment near his house, of a train carrying
lethal chemicals that required 240,000 people to evacuate their homes for a
week—Gene does not generate any episodic memories”
Gene recalls many facts (as opposed to episodes) about his past life. He knows
where he went to school; he knows where he worked. He can name former co-
worker’s; he can define technical terms he used at the manufacturing plant
where he worked before the accident. Gene’s memories, Schacter argued, are
akin to the knowledge we have of other people’s lives. You may know, for
example, about incidents in your mother’s or father’s lives that occurred before
your birth: where they met, perhaps, or some memorable childhood incidents.
You know about these events, although you do not have specific recall of
them. Similarly, according to Schacter, Gene has knowledge of some aspects
of his past (semantic memory), but no evidence of any recall of specific
happenings (episodic memory)

THEORIES OF FORGETTING

The five theories of forgetting include:

• Displacement theory
• Trace decay theory
• Interference theory
• Retrieval failure theory
• Consolidation theory

Theory #1: Displacement Theory of Forgetting

The displacement theory describes how forgetting works in short-term memory.


Short-term memory has a limited capacity and can only hold a small amount of
information—up to about seven items—at one time. Once the memory is full,
new information will replace the old one. There seems to be no one figurehead
of this theory, but many psychologists have contributed to experiments and
studies that support it. Free recall method studies often support the idea of the
displacement theory of forgetting. Displacement theory plays neatly into
the Multi-Store Model of Memory. This model shows that while some
information reaches long-term memory, other pieces of information in short-
term memory storage are simply forgotten.
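As a rough sketch, the limited-capacity store described above behaves like a fixed-size queue: once full, each new item displaces the oldest. The seven-item capacity and the word labels below are illustrative assumptions, not materials from any specific experiment.

```python
from collections import deque

def rehearse(items, capacity=7):
    """Simulate a limited-capacity short-term store.

    Items enter one at a time; once the store is full, each new item
    displaces the oldest one (first in, first out).
    """
    store = deque(maxlen=capacity)  # deque drops the oldest item when full
    for item in items:
        store.append(item)
    return list(store)

# Ten words overflow a seven-item store: the first three are displaced.
words = ["word%d" % i for i in range(1, 11)]
print(rehearse(words))  # only 'word4' through 'word10' remain
```

With the grocery-list example, "milk" surviving while middle items vanish would also need the recency effect; this sketch captures only the displacement component.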

Serial Position Effect- In studies based on the free-recall method, participants
are asked to listen to a list of words and then try to remember them. The free
recall method, unlike the serial recall method, allows participants to remember
words in no particular order. Through these studies, psychologists have
discovered that the first and the last items on a list are the easiest ones to
remember. They named this phenomenon the Serial Position Effect. Two other
effects, the primacy and recency effects, explain why the first and last items are
so crucial in memory. The primacy effect suggests that recalling the first item
on the list is simple. At the time they are presented, these initial words don’t yet
compete with the subsequent ones for a place in the short-term memory. The
recency effect explains why the participants remember items at the end of the
list. These words have not yet been suppressed from short-term memory. The
words in the middle of the list, pushed out from the short-term memory by the
last words, are much less likely to be recalled.

Examples of the Displacement Theory of Forgetting

• Suppose you have just learned a seven-digit phone number when you are
given another number to memorize. Your short-term memory doesn’t have
the capacity to store both pieces of information. In order to recall the new phone
number, you’ll have to forget the first one. This isn’t always a conscious
process, but it can be. The moment you begin to focus on the new set of
numbers, the first one seems to “go away” or get confused with the new
numbers you are learning.

• Another example of the displacement theory of forgetting involves grocery
lists. As you walk out the door to the grocery store, your partner tells you
that you need to buy “milk, eggs, cheese, flour, and sugar.” You try to
memorize the whole list, but a lot of it goes away while you are driving to
the store. When you arrive, all you can remember is “milk” and “sugar.”

Theory #2: Trace Decay Theory of Forgetting


The trace decay theory was formed by American psychologist Edward
Thorndike in 1914, based on the early memory work by Hermann Ebbinghaus.
The theory states that if we don’t access memories, they will fade over
time. When we learn something new, the brain undergoes neurochemical
changes called memory traces. Memory retrieval requires us to revisit those
traces that the brain formed when encoding the memory. The trace decay theory
implies that the length of time between the memory and recalling determines
whether we will retain or forget a piece of information. The shorter the time
interval, the more we will remember, and vice versa.
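One common way to formalize trace decay is an Ebbinghaus-style exponential forgetting curve, R = e^(-t/S), where retention R falls as the interval t grows. The strength parameter S below is an arbitrary illustrative value, not a measured constant.

```python
import math

def retention(t, strength=5.0):
    """Ebbinghaus-style forgetting curve: R = exp(-t / S).

    t: time elapsed since learning (arbitrary units)
    strength: S, the stability of the memory trace (illustrative value)
    """
    return math.exp(-t / strength)

# The shorter the retention interval, the more is remembered.
for t in (0, 1, 5, 10):
    print(t, round(retention(t), 2))
```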

Serial Probe Task- In 1965, Waugh and Norman put both the displacement
and trace decay theories to the test. They put participants through a “serial probe
task.” Each participant listened to a long list of letters. Later, the researchers
would yell out one of the letters from the list, and the participants would have to
name the letter listed after. What they found showed that displacement theory
could explain some instances of forgetting, but not all of them. The interesting
part of the results was that when the list was read at a faster pace, participants
completed the task more successfully.

Criticisms and Need for Other Theories of Forgetting

The trace decay theory, however, doesn’t explain why many people can clearly
remember past events, even if they haven’t given them much thought before.
Neither does it take into account the role of all the events that have taken place
between the learning and the recall of the memory. Just like the serial probe task
suggests that displacement theory is not “enough,” decay theory fails to cover
all instances of forgetting and remembering.

Theory #3: Interference Theory of Forgetting

The interference theory was the dominant theory of forgetting throughout the
20th century. It asserts that the ability to remember can be disrupted both by our
previous learning and by new information. In essence, we forget because
memories interfere with and disrupt one another. For example, by the end of the
week, we won’t remember what we ate for breakfast on Monday because we
had many other similar meals since then. The first study on interference was
conducted by German psychologist John A. Bergstrom in 1892. He asked
participants to sort two decks of word cards into two piles. When the location of
one of the piles changed, the first set of sorting rules interfered with learning the
new ones, and sorting became slower.

• Proactive interference (Example)

Proactive interferences take place when old memories prevent making new
ones. This often occurs when memories are created in a similar context or
include near-identical items. Remembering a new code for the combination lock
might be more difficult than we expect. Our memories of the old code interfere
with the new details and make them harder to retain.

• Retroactive interference (Example)

Retroactive interferences occur when old memories are altered by new ones.
Just like with proactive interference, they often happen with two similar sets of
memories. Let’s say you used to study Spanish and are now learning French.
When you try to speak Spanish, the newly acquired French words may interfere
with your previous knowledge.

Theory #4: Retrieval Failure Theory of Forgetting

The retrieval failure theory was developed by the Canadian psychologist and
cognitive neuroscientist Endel Tulving in 1974. According to this theory,
forgetting often involves a failure in memory retrieval. Although the
information stored in the long-term memory is not lost, we are unable to retrieve
it at a particular moment. A classic example is the tip of the tongue effect when
we are unable to remember a familiar name or word.

There are two main reasons for failure in memory retrieval. Encoding failure
prevents us from remembering information because it never made it into long-
term memory in the first place. Or the information may be stored in long-term
memory, but we can’t access it because we lack retrieval cues.

Retrieval cues- A retrieval cue is a trigger that helps us remember something.
When we create a new memory, we also retain elements of the situation in
which the event occurred. These elements will later serve as retrieval cues.
Information is more likely to be retrieved from long-term memory with the help
of relevant retrieval cues. Conversely, retrieval failure or cue-dependent
forgetting may occur when we can’t access memory cues.

Semantic cues - Semantic cues are associations with other memories. For
example, we might have forgotten everything about a trip we took years ago
until we remember visiting a friend in that place. This cue will allow
recollecting further details about the trip.

State-dependent cues - State-dependent cues are related to our psychological
state at the time of the experience, like being very anxious or extremely happy.
Finding ourselves in a similar state of mind may help us retrieve some old
memories.

Context-dependent cues - Context-dependent cues are environmental factors
such as sounds, sight, and smell. For instance, witnesses are often taken back to
the crime scene that contains environmental cues from when the memory was
formed. These cues can help recollect the details of the crime.

Theory #5: Consolidation Theory of Forgetting

While the above theories of forgetting concentrate principally on psychological
evidence, the consolidation theory is based on the physiological aspects of
forgetting. Memory consolidation is the critical process of stabilizing a memory
and making it less susceptible to disruptions. Once it is consolidated, memory is
moved from short term to a more permanent long-term storage, becoming much
more resistant to forgetting. The term consolidation was coined by Georg Elias
Müller and Alfons Pilzecker in 1900. The two German psychologists were also
the first to explain the theory of retroactive interference, newly learned material
interfering with the retrieval of the old one, in terms of consolidation.

Other Theories of Forgetting

These five theories are most frequently mentioned when discussing forgetting,
memory, and recall. Psychologists have devised other theories that may be
worth looking into. No one theory of forgetting covers all incidences of memory
loss and recall, so these theories are valid, too!
Motivated Theory of Forgetting

The philosopher Friedrich Nietzsche was one of the first thinkers to suggest that people
intentionally forgot their memories. Typically, these memories are traumatic or
shameful. This theory really took off when Freud expanded upon it.
Freud spoke more about the idea that people unintentionally forgot their
memories. This process is called repression and is considered a defense
mechanism. Freud, however, believed he could recover repressed memories.
Today, psychologists mostly discredit this idea, as the mind can “change”
memories with leading questions and other methods. Still, this motivated theory
of forgetting is an important idea in the history of psychology.

Gestalt Theory of Forgetting

The Gestalt Theory of Forgetting attempts to explain how memories can be
forgotten through a process called distortion. This does not have to be an
intentional process, either. If memory is hazy or missing pieces of information,
the brain will fill in those pieces. The memory may feel accurate but is actually
distorted.

Mnemonics

Mnemonics are memory tricks that can help you remember long strings of
information, often in a particular order. You have been using mnemonic
devices since…well, before you can remember!

Mnemonics are strategies used to improve memory. They are often taught in
school to help students learn and recall information.

Examples of mnemonics include:

• Setting the ABCs to music to memorize the alphabet


• Using rhymes to remember rules of spelling like "i before e except after
c"
• Forming sentences out of the first letter of words in order (acrostics),
such as "Please Excuse My Dear Aunt Sally," to remember the order of
operations in algebra

You can also use mnemonic strategies to remember names, number sequences,
and even a grocery list. People learn in different ways. Tools that work for one
person may not be helpful for another. Fortunately, there are several ways to use
mnemonics.

Keyword Mnemonics

Studying a second (or third or fourth) language? Using the keyword mnemonic
method improves learning and recall, especially in the area of foreign language.

Here's how the keyword method works:

• First, you choose a keyword that somehow cues you to think of the
foreign word.
• Then, you imagine that keyword connected with the meaning of the word
you're trying to learn.
• The visualization and association should trigger the recall of the correct
word.

For example, if you're trying to learn the Spanish word for cat, which
is gato, first think of a gate and then imagine the cat sitting on top of the gate.
Even though the "a" sound in gato is short and the "a" sound in gate is long, the
beginnings are similar enough to help you remember the association between
gate and cat and to recall the meaning of gato.

Chunking as a Mnemonic Strategy

Chunking or grouping information is a mnemonic strategy that works by
organizing information into more easily learned groups, phrases, words, or
numbers. Phone numbers, Social Security numbers, and credit card numbers
are organized using chunking.

For example, memorizing the following number: 47895328463 will likely take
a fair amount of effort. However, if it is chunked like this: 4789 532 8463, it
becomes easier to remember.
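The phone-number example above can be expressed as a tiny helper; the chunk sizes are simply the 4-3-4 grouping used in the text:

```python
def chunk(digits, sizes=(4, 3, 4)):
    """Split a digit string into chunks of the given sizes."""
    chunks, i = [], 0
    for size in sizes:
        chunks.append(digits[i:i + size])
        i += size
    return " ".join(chunks)

print(chunk("47895328463"))  # 4789 532 8463
```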

Interestingly, chunking is one of several mnemonic strategies that have been
studied in people with mild Alzheimer's disease. Results from these studies
concluded that chunking can be helpful in improving verbal working memory in
the early stages of dementia.

Musical Mnemonics
One way to successfully encode the information into your brain is to use music.
A well-known example is the "A-B-C" song, but there's no end to what you can
learn when it's set to music. You can learn the names of the countries of Africa,
science cycles, memory verses, math equations, and more.

If you search online, you'll find that there are some songs already created
specifically to help teach certain information, and for others, you'll have to
make up your own. And no, you don't have to be able to carry a tune or write
the music out correctly for this mnemonic method to work.

Music can be a helpful memory tool for people with mild cognitive
impairment.

Letter and Word Mnemonic Strategies

Acronyms and acrostics are typically the most familiar type of mnemonic
strategies.

Acronyms use a simple formula of a letter to represent each word or phrase that
needs to be remembered.

For example, think of the NBA, which stands for the National Basketball
Association.

Or, if you're trying to memorize four different types of dementia, you might use
this acronym: FLAV, which would represent frontotemporal, Lewy body,
Alzheimer's, and vascular. Notice that I ordered the list in such a way as to more
easily form a "word," which you would not do if the list you need to memorize
is ordered.

An acrostic uses the same concept as the acronym except that instead of
forming a new "word," it generates a sentence that helps you remember the
information.

An often-used acrostic in math class is: Please Excuse My Dear Aunt Sally.
This acrostic mnemonic represents the order of operations in algebra and stands
for parentheses, exponents, multiplication, division, addition, and subtraction.
Module 5 - Thinking and Concept Formation
What is a Concept?

Concepts represent how a person perceives or approaches reality. To explain
why people act the way they do and to forecast what they might do next, everyday
reasoning about people's thoughts and behavior frequently refers to concepts.
Concepts can be either abstract (e.g., air) or physical, like a balloon. One of the
most crucial aspects of concepts is that they serve as the basis for behavior rather
than reality itself. Both abstraction and generalization are necessary for a
concept; the former isolates the trait, and the latter acknowledges that it may be
ascribed to different things.

What is Concept Formation?

Concept formation is a higher-order mental activity that operates on information
processed and stored in memory after being experienced by our sensory organs.
This process involves categorizing the data conceptually and applying that
knowledge to planning, goal-setting, problem-solving, and reasoning.
According to the task demands, these categories are recalled from long-term
memory. It is believed that such information is processed by ventral neural
pathways that connect the temporal cortex to the visual cortex. Understanding
the requirements of task and goal fulfillment is greatly aided by developing
concepts and knowledge. Through categorizing, infants and young children
develop concepts about things, people, and actions. Infants, for instance, learn
to classify faces as familiar and unfamiliar quite early in their development.

Types of concepts

Three types of concepts are differentiated: conjunctive, relational, and
disjunctive.

Conjunctive concepts are defined by the presence of at least two features,
which means that a conjunctive concept is a class of objects that have two or
more common features.

Relational concepts are defined by the relationship between the features of an
object or between an object and its surroundings. This means that relational
concepts are based on how an object relates to something else, or how its
features relate to one another.

Disjunctive concepts are either/or: they are defined by the presence of at
least one of several possible features.

Elements of Concept Formation

The processes of grouping and differentiation are the major elements of
concept formation. They are defined as follows:
• Grouping entails the "chunking" of information into larger chunks.
Because the performer needs to focus on groupings of information
rather than each item separately, chunking makes the system work
more effectively. Grouping lessens the task's attentional demands
and enables people to focus their attention on other, more crucial
stimuli.
• Differentiation, on the other hand, describes the process through
which performers take in more detail from various stimuli as they get
used to them.

Process of Concept Formation

When two people first meet, it is their first chance to form initial impressions and
judgments about the other. The observer goes through a concept-formation
process in this situation. There are two opposing cognitive theories for concept
formation.

Attribute List Theory


It suggests that concepts are kept in semantic memory, the memory required for
language usage, in terms of their symbolic characteristics (attributes). Therefore,
when a person comes across something, such as a new mobile phone, they try to
categorize it in terms of previously held ideas (such as existing phones) with
which it has characteristics in common. In this classification procedure, the
features of the product stimuli are compared to those of other concepts stored in
semantic memory. The majority of applications of the multi-attribute attitude
paradigm implicitly make use of this concept structure model.
Prototype Theory
Rosch (1975) proposes a different paradigm based on the idea that concepts are
remembered by way of their "most typical" illustrations, or "prototypes."
According to research, people are quickest to recognize inputs similar to the
concept prototype. The concepts of an internalized product prototype that Rosch
discusses originate in Bartlett's preliminary studies on memory schemata. To put
it simply, the schema is an abstract analogical representation of concepts made
up of a fundamental collection of details that capture the essence or gist of the
stimuli. According to findings from their experiments, "during learning, the
subject gets knowledge not just about the prototype stimulus, but also about the
variability among instances of a particular class; individuals learn to identify the
best instance of the category, as well as something about the permissible
variability or distance among admissible stimuli." A Ford Fairmont sedan, for
example, may more closely resemble the prototype of the product concept
"automobile," because this group of products shares its attributes.

Categorization

It is the process in which ideas and objects
are recognized, differentiated and understood. Categorization implies that
objects are grouped into categories, usually for some specific purpose. Ideally, a
category illuminates a relationship between
the subjects and objects of knowledge. Categorization is fundamental
in language, prediction, inference, decision making and in all kinds of
interaction with the environment.

There are many categorization theories and techniques. In a broader historical
view, however, three general approaches to categorization may be identified:

• Classical categorization
• Conceptual clustering
• Prototype theory

Classical categorization

Comes to us first from Plato, who, in his Statesman dialogue, introduces the
approach of grouping objects based on their similar properties. This approach
was further explored and systematized by Aristotle in his Categories treatise,
where he analyzes the differences between classes and objects. Aristotle also
applied intensively the classical categorization scheme in his approach to the
classification of living beings (which uses the technique of applying successive
narrowing questions such as "Is it an animal or vegetable?", "How many feet
does it have?", "Does it have fur or feathers?", "Can it fly?"...), establishing this
way the basis for natural taxonomy.

Conceptual clustering

It is a modern variation of the classical approach, and derives from attempts to
explain how knowledge is represented. In this
approach, classes (clusters or entities) are generated by first formulating their
conceptual descriptions and then classifying the entities according to the
descriptions.
Conceptual clustering developed mainly during the 1980s, as a machine
paradigm for unsupervised learning. It is distinguished from ordinary data
clustering by generating a concept description for each generated category.
Categorization tasks in which category labels are provided to the learner for
certain objects are referred to as supervised classification, supervised learning,
or concept learning. Categorization tasks in which no labels are supplied are
referred to as unsupervised classification, unsupervised learning, or data
clustering. The task of supervised classification involves extracting information
from the labeled examples that allows accurate prediction of class labels of
future examples. This may involve the abstraction of a rule or concept relating
observed object features to category labels, or it may not involve abstraction
(e.g., exemplar models). The task of clustering involves recognizing inherent
structure in a data set and grouping objects together by similarity into classes. It
is thus a process of generating a classification structure.

Prototype theory

This theoretical view of the nature of concepts, known as the prototype view,
was proposed in the 1970s stemming from the work of Eleanor Rosch and
colleagues. The prototype view of
concepts denies the existence of necessary-and-sufficient feature lists (except
for a limited number of concepts such as mathematical ones), instead regarding
concepts as a different sort of abstraction (Medin & Smith, 1984). The
prototype view of concepts holds that prototypes of concepts include features
or aspects that are characteristic—that is, typical—of members of the category
rather than necessary and sufficient. No individual feature or aspects need to be
present in the instance for it to count as a member of the category, but the more
characteristic features or aspects an instance has, the more likely it is to be
regarded as a member of the category.
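A toy way to express graded category membership under the prototype view is to count how many characteristic features an instance shares with the prototype. The bird features below are invented for illustration, not Rosch's actual stimuli:

```python
def typicality(instance, prototype):
    """Count the characteristic features an instance shares with the prototype.

    No single feature is necessary for membership; instances sharing more
    characteristic features are judged more typical of the category.
    """
    return len(instance & prototype)

bird_prototype = {"flies", "sings", "small", "builds_nest", "has_feathers"}
robin = {"flies", "sings", "small", "builds_nest", "has_feathers"}
penguin = {"swims", "has_feathers", "large"}

print(typicality(robin, bird_prototype))    # 5 shared features: highly typical
print(typicality(penguin, bird_prototype))  # 1 shared feature: atypical member
```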

JUDGEMENT AND DECISION MAKING

JUDGEMENT

Judgement typically forms an important initial part of the decision-making
process. For example, someone deciding which car to buy might make
judgements about how much various cars would cost to run, how reliable they
would be and how much he/she would enjoy owning one.
Judgement researchers address the question “How do people integrate multiple,
incomplete, and sometimes conflicting cues to infer what is happening in the
external world?” (Hastie, 2001). In contrast, decision making involves choosing
among various options. Decision-making researchers address the question “How
do people choose what action to take to achieve labile [changeable], sometimes
conflicting goals in an uncertain world?” (Hastie, 2001).
There are close relationships between the areas of judgement and decision
making. More specifically, decision-making research covers all of the processes
involved in deciding on a course of action. In contrast, judgement research
focuses mainly on those aspects of decision making concerned with estimating
the likelihood of various events. In addition, judgements are evaluated in terms
of their accuracy, whereas decisions are evaluated on the basis of their
consequences.

Judgement theories

The support theory was proposed by Tversky and Koehler (1994) based in part
on the availability heuristic. The key assumption is that any given event will
appear more or less likely depending on how it is described. A more explicit
description of an event will typically be regarded as having a greater subjective
probability because it: draws attention to less obvious aspects; and overcomes
memory limitations.

Mandel (2005) found the overall estimated probability of a terrorist attack was
greater when participants were presented with explicit possibilities than when
they were not. Redelmeier et al. (1995) found this phenomenon in experts as well
as non-experts.
Sloman et al. (2004) obtained findings directly opposite to those predicted by
support theory. Thus, an explicit description can reduce subjective probability if
it leads us to focus on low-probability causes. Redden and Frederick (2011)
argued that providing an explicit description can reduce subjective probability by
making it more effortful to comprehend an event. Support theory, in its simple
form, does not account for these findings.

Gigerenzer and Gaissmaier (2011) argued that heuristics are often very valuable.
They focused on fast and frugal heuristics that involve rapid processing of
relatively little information. One of the key fast and frugal heuristics is the take-
the-best strategy,
which has three components:
- Search rule – search cues in order of validity.
- Stopping rule – stop when a discriminatory cue is found.
- Decision rule.
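The three rules above can be sketched in code. The cues, validities, and the city-size task below are hypothetical stand-ins for the materials used in the actual studies:

```python
def take_the_best(obj_a, obj_b, cues):
    """Take-the-best: compare two objects on cues ordered by validity.

    cues: list of (cue_name, validity) pairs; obj_a and obj_b map cue
    names to 1 (cue present) or 0 (absent). Returns 'A', 'B', or
    'guess' if no cue discriminates between the objects.
    """
    # Search rule: examine cues in descending order of validity.
    for cue, _validity in sorted(cues, key=lambda c: c[1], reverse=True):
        a, b = obj_a[cue], obj_b[cue]
        # Stopping rule: stop at the first cue that discriminates.
        if a != b:
            # Decision rule: choose the object the discriminating cue favours.
            return "A" if a > b else "B"
    return "guess"

# Hypothetical task: which of two cities is larger?
cues = [("has_airport", 0.9), ("is_capital", 0.8), ("has_university", 0.6)]
city_a = {"has_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_airport": 1, "is_capital": 1, "has_university": 1}
print(take_the_best(city_a, city_b, cues))  # B ('is_capital' discriminates)
```

Note how frugal the strategy is: once "is_capital" discriminates, the remaining cue is never examined.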
The most researched example is the recognition heuristic. If one of two objects
is recognised and the other is not, then we infer that the recognised object has the
higher value with respect to the criterion (Goldstein & Gigerenzer,
2002). Kruglanski and Gigerenzer (2011) argued that there is a two-step process
in deciding which heuristic to use:

• First, the nature of the task and individual memory limit the number of available
heuristics.
• Second, people select one of them based on the likely outcome of using it and
its processing demands.

There is good evidence that people often use fast and frugal heuristics. These
heuristics are fast and effective, and used particularly when individuals are under
time or cognitive pressure. The approach has several limitations. Too much
emphasis has been placed on using intuitions when humans have such a large
capacity for logical reasoning (Evans & Over, 2010). The use of the recognition
heuristic is more complex than assumed: people generally also consider why
they recognise an object and only then decide whether to use the recognition
heuristic (Newell, 2011). Other heuristic use is also more complex than claimed.
Far too little attention has been paid to the issue of the importance of the decision
that has to be made.

DECISION MAKING

Cognitive psychologists use the term decision making to refer to the mental
activities that take place in choosing among alternatives. Decisions are made in
the face of some amount of uncertainty.
Winterfeldt and Edwards (1986a) define rational decision making as that which
“has to do with selecting ways of thinking and acting to serve your ends or goals
or moral imperatives, whatever they may be, as well as the environment permits”.

Phases of decision making :

We can divide decision making tasks into five different categories/phases:

1 Setting Goals
When we try to understand why a person makes one decision rather than
another, it often turns out that the reasons have to do with the decision maker’s
goals for the decision (Bandura, 2001; Galotti, 2005). The idea in setting goals
is that the decision maker takes stock of his or her plans for the future, his or her
principles and values, and his or her priorities.

2 Gathering Information
Before making a decision, the decision maker needs information. Specifically,
she or he needs to know what the various options are. The decision maker needs
to gather some information about at least some of the options. In addition to
information about options, decision makers may need or want to gather
information about possible criteria to use in making their choice. If you’ve
never bought a computer before, you might talk with computer-savvy friends, or
people in your company’s IT department, to get information about what features
they consider important.

3 Structuring the Decision

For complex decisions, decision makers need a way of organizing all their
information. This is especially true when there are a great number of options,
and when there are lots of considerations to be used in making the decision. The
decision maker needs to determine or invent a way of managing this
information. The way she or he does this is called decision structuring.

4 Making a Final Choice

After gathering all the information he or she is going to gather, the decision
maker needs to select from among the final set of options. This process may
involve other decisions—such as deciding when to cease the “information-
gathering” phase of the main decision or deciding which information is more
relevant or reliable.

5 Evaluating
The aim here is to reflect on the process and identify those aspects that could be
improved, as well as those that ought to be used again for similar decisions in
the future.
Improving Decision Making

One of the major obstacles to improving the ways in which people gather and
integrate information is overconfidence. People who believe their decision
making is already close to optimal simply will not see the need for
any assistance, even if it is available and offered. A second obstacle to
improving decision making has to do with people’s feelings and expectations
about how decisions ought to be made. Cultural expectations lead many of us
to trust our intuitions (or at least the intuitions of experts) over any kind of
judgment made with equations, computer programs, mathematical models, or
the like. Real improvement in reducing bias seems to require extensive practice
with the task, individual feedback about one’s performance, and some means of
making the statistical and/or probabilistic aspects of the decisions
clearer. Contrary to our (strong) intuitions, it is often better, fairer, more rational,
and in the long run more humane to use decision aids than to rely exclusively on
human impressions or intuitions.
Decision analysis (Keeney, 1982; von Winterfeldt & Edwards, 1986b) is an
emerging technology that helps people gather and integrate information in a
way similar to that used earlier in the MAUT analysis of choosing a major. It
uses human judges’ feelings, beliefs, and judgments of relevance but helps
ensure that integration of information is carried out in an unbiased way.
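A MAUT-style integration of the kind mentioned above is, at its simplest, a weighted sum of attribute scores. The attributes, weights, and scores below are made up purely for illustration of choosing a major:

```python
def maut_score(scores, weights):
    """Multi-attribute utility: the weighted sum of an option's attribute scores."""
    return sum(scores[attr] * w for attr, w in weights.items())

# Hypothetical choice of a college major, scored 0-10 on three attributes.
weights = {"interest": 0.5, "job_prospects": 0.3, "difficulty_fit": 0.2}
psychology = {"interest": 9, "job_prospects": 6, "difficulty_fit": 8}
economics = {"interest": 6, "job_prospects": 9, "difficulty_fit": 7}

print(round(maut_score(psychology, weights), 2))  # 7.9
print(round(maut_score(economics, weights), 2))   # 7.1
```

The judge supplies the weights and scores; the unbiased part is simply that every option is integrated by the same arithmetic rule.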

Understanding Problem Solving


On a daily basis, human beings encounter a variety of problems. The nature,
intensity and dimensions of these problems can vary greatly, but problems are
an inevitable part of human life. Thus, in order to carry on with the smooth
functioning of one’s life, it becomes important to master the art of problem
solving.

In cognitive psychology, problem solving can be understood as the thinking
that is directed towards solving a specific problem that involves both
formation of responses and selection among the possible responses. APA
formation of responses and selection among the possible responses. APA
defines problem solving as “the process by which individuals attempt to
overcome difficulties, achieve plans that move them from a starting situation
to a desired goal, or reach conclusions through the use of higher mental
functions such as reasoning and creative thinking” (American Psychological
Association, 2022).

Even though the nature of problems is diverse and may vary, every problem
includes three major components. These are:
a. Initial stage: which describes the situation at the beginning
of the problem,
b. Goal stage: which describes the situation when the problem is solved, and
c. Obstacles: which describes the roadblocks or restrictions which make it
difficult for an individual to move from the initial stage to the goal stage.

One core aspect of the process of problem solving is understanding the
problem. Here, the term understanding in problem-solving research refers to
constructing a well-organised and systematic mental representation of the
problem. This could be based on the information that came along with the
problem or previous experience or both. To completely understand a problem,
it is important to understand three important aspects of problem solving. These
are:

a. Paying attention to the relevant information
b. Using the appropriate method of representation
c. Situated cognition and embodied cognition

The following section will deal with all of these processes in detail.

A. Paying attention to relevant information


Problem solving is a complex task which involves the inputs from
multiple cognitive processes such as memory, attention, thinking etc.
This leads to a pool of information being created at the beginning and
during the process of problem solving. Thus, it is important to filter
out the unnecessary information and pay attention to the pieces that are
relevant and important to solving the problem at hand. Another
challenge faced in the process of problem solving is focussing on the
important part of the problem. Researchers have found that effective
problem solvers read the description of a problem very carefully. They
pay particular attention to inconsistencies (Mayer & Hegarty, 1996).
Effective problem solvers also scan strategically, deciding which
information is most important (Nievelstein et al., 2011).

B. Using the appropriate method of representation


Problem representation refers to the way you translate the elements of the
problem into a different format. The different ways of representing a
problem include symbols, matrices, diagrams, and visual images. The
appropriate method of problem representation will depend on the nature
and purpose of the problem solving required.
C. Situated cognition and embodied cognition
The situated cognition approach explains that we use supportive
information from our immediate environment in order to create spatial
representations. For instance, while solving a problem, we tend to start
with the dimensions that are placed right above and right below and
then go for the ones that are situated on either side.
The embodied cognition approach explains that we use different parts of our
body and our own motor activities in order to represent abstract concepts.
For example, suppose we want to talk about a windmill and for some
reason the word is at the tip of our tongue. We shall use our hand
movements to describe the image of the windmill, and then it is highly
likely that the word itself will pop up.

Problem Solving Strategies

Problem solving strategies are those methods employed to solve a
problem. Broadly, these strategies can be divided into two major
categories, namely, algorithms and heuristics.

Algorithms are systematic, yet long methods of problem solving which are sure
to yield a solution. However, the process of using an algorithm can be time
consuming and inefficient. One specific type of algorithm is known as
exhaustive search, wherein one searches for all the possible solutions to a
problem using a specific system.
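The guaranteed-but-slow character of exhaustive search can be sketched in a few lines. The three-digit "lock" below is a made-up example: the search systematically tries every candidate, so it is certain to find the answer, at the cost of checking up to every possibility.

```python
# Exhaustive-search sketch: enumerate every candidate in a fixed order
# until one satisfies the test. The "lock combination" is hypothetical;
# the point is the guarantee of success, not the speed.
from itertools import product

def exhaustive_search(is_solution, digits=range(10), length=3):
    """Try every possible combination systematically."""
    for candidate in product(digits, repeat=length):
        if is_solution(candidate):
            return candidate
    return None  # no solution exists in the search space

secret = (4, 7, 2)
found = exhaustive_search(lambda c: c == secret)
print(found)  # (4, 7, 2), after checking up to 10**3 candidates
```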

Heuristics are generally known as rules of thumb. They are the quickly
available, seemingly appropriate solution to the problem. Though this method
is less time consuming, heuristics can sometimes lead to wrong solutions.
There is more research available in cognitive psychology on heuristics than on
algorithms. One reason is that we are wired to solve most of our everyday
problems, predominantly using heuristics rather than algorithms. Three of the
most widely used heuristics are the analogy, the means-ends heuristic, and the
hill-climbing heuristic.

a. Analogy
When someone uses a solution that was previously used to solve a
similar problem, it is referred to as analogy. Analogies are very widely
used by everyone in their day to day life. For instance, one study
reported that engineers created an average of 102
analogies during 9 hours spent on problem solving (Christensen &
Schunn, 2007). One interesting finding is that analogies are used when
people come up with breakthrough inventions. For instance, the Wright
brothers invented airplanes with the help of the analogy of birds flying
using their wings. Problem solvers must peel away the irrelevant,
superficial details in order to reach the core of the problem (Whitten &
Graesser, 2003). Researchers use the term problem isomorphs to refer to
a set of problems that have the same underlying structures and solutions,
but different specific details.

Emphasis in problem solving has to be shifted from the surface or
superficial features to the structural or core features. Research shows
that people are more likely to correctly solve a problem using analogy
when they tackle many structurally similar problems before tackling the
target problem at hand.

b. Means-end heuristics
This method requires problem solvers to first identify the
ends or solutions and then look for the means of achieving them. There
are two major components for the means-end heuristics. The first
component is when the problem solver divides the problem into sub-
problems. The second component is when the problem solver tries to
reduce the difference between the initial state and the goal state of each
of the sub problems. This strategy is seen as one of the most effective
and flexible methods of problem solving.
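The difference-reduction idea behind the means-ends heuristic can be sketched as follows. The numeric state and the three operators are hypothetical, chosen only to show how each step picks the move that most reduces the remaining distance to the goal.

```python
# Means-ends sketch: at each step, apply the operator whose result is
# closest to the goal (i.e., the one that most reduces the difference).
# The state (a number) and operators (+10, +1, -1) are invented for
# illustration; the sketch assumes some operator always makes progress.

def means_ends(start, goal, operators):
    state, path = start, []
    while state != goal:
        # choose the operator whose result minimizes the remaining difference
        name, result = min(
            ((n, op(state)) for n, op in operators.items()),
            key=lambda pair: abs(goal - pair[1]),
        )
        state = result
        path.append(name)
    return path

ops = {"+10": lambda s: s + 10, "+1": lambda s: s + 1, "-1": lambda s: s - 1}
print(means_ends(0, 22, ops))  # ['+10', '+10', '+1', '+1']
```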

c. Hill-climbing heuristics
Hill-climbing heuristics is one of the most straightforward
heuristics for problem solving. In this method, one repeatedly makes
choices to reach the goal, using the alternative that seems best at a
given point in time. This is useful when one does not have enough
information about the various alternatives, but is aware only of the
immediate step. The biggest drawback to this heuristic is that problem
solvers must consistently choose the alternative that appears to lead most
directly toward the goal. In doing so, they may fail to choose an indirect
alternative, which may have greater long-term benefits.
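A small sketch makes the drawback concrete: always taking the locally best step can strand the solver on a minor peak. The "landscape" of heights below is invented for illustration.

```python
# Hill-climbing sketch: move to the neighboring position that looks best
# right now; stop when no neighbor is higher. The landscape is chosen so
# that starting at position 1 gets stuck on a local peak (height 5) and
# never reaches the global peak (height 9) -- the heuristic's main drawback.

landscape = [1, 3, 5, 4, 2, 6, 9, 7]  # "height" of each position

def hill_climb(start):
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(landscape)]
        best = max(neighbors, key=lambda p: landscape[p])
        if landscape[best] <= landscape[pos]:
            return pos  # no neighbor improves: a (possibly local) peak
        pos = best

print(hill_climb(1))  # 2 -- stuck at height 5, missing the peak at index 6
```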

Factors that influence Problem Solving


Cognitive processes depend on both bottom-up and top-down processing.
Bottom-up processing focuses on the information about the stimulus, as
registered on our sensory receptors. In contrast, top-down processing
emphasizes our concepts, expectations, and memory, which we have
acquired from past experience.
A blend of these two processes helps us solve a problem.

The different factors which influence problem solving include:


1. Expertise - An individual with expertise demonstrates consistently
exceptional skill and performance on representative tasks for a particular
area (Ericsson, 2006; Ericsson & Towne, 2010; Ericsson et al., 2009).
Experts solve problems more efficiently than novices do. It is important to
look at the dimensions on which experts differ from the novices:
● Knowledge base - Experts may solve problems especially well if they
have had training in a variety of relevant settings and they know the
particular area well. Thus, their knowledge base influences the way
they solve a problem.
● Memory - Experts differ from novices with respect to their memory for
information related to their area of expertise (Bransford et al., 2000;
Chi, 2006; Robertson, 2001). Experts have a good and very specific
memory for the elements in the problem.
● Problem Solving strategies - Experts are more likely to use strategies such
as the means-ends heuristic effectively, and they emphasize structural
features when using the analogy approach.
● Speed and Accuracy - Experts may solve problems faster than novices
because they use parallel processing, rather than serial processing.
Parallel processing refers to handling two or more items at the same
time. In contrast, serial processing handles only one item at a time.
● Metacognitive Skills - Experts are better than novices at monitoring
their problem solving. They seem to be better at judging the difficulty
of a problem, and they are more skilled at allocating their time
appropriately when solving problems (Bransford et al., 2000).

2. Mental Set - It refers to the tendency to use the same solution from previous
problems, even though the problem could be solved by a different, easier
method. It reflects overactive top-down processing. The classic experiment on
mental set is Abraham Luchins’s (1942) water-jar problem. In the
experiment, participants had to measure a specific amount of water using
three jars of different capacities. Luchins found that subjects kept using
methods they had applied in previous trials, even if a more efficient solution
for the current trial was available.
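In the spirit of Luchins's task (though with hypothetical jar capacities, not his actual trials), a breadth-first search over jar states finds the shortest sequence of fill, empty, and pour moves; a solver under a mental set, by contrast, keeps replaying a longer recipe that worked on earlier trials.

```python
# Water-jar search sketch: breadth-first search over (jar contents) states.
# Capacities and goal are invented; the search returns the minimum number
# of fill/empty/pour moves until some jar holds exactly `goal` units.
from collections import deque

def min_steps(capacities, goal):
    start = tuple(0 for _ in capacities)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, steps = queue.popleft()
        if goal in state:
            return steps
        for i in range(len(state)):
            moves = [
                # fill jar i to its capacity
                tuple(capacities[i] if j == i else v for j, v in enumerate(state)),
                # empty jar i
                tuple(0 if j == i else v for j, v in enumerate(state)),
            ]
            for k in range(len(state)):  # pour jar i into jar k
                if k != i:
                    amount = min(state[i], capacities[k] - state[k])
                    new = list(state)
                    new[i] -= amount
                    new[k] += amount
                    moves.append(tuple(new))
            for nxt in moves:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + 1))
    return None  # the goal amount is unreachable

print(min_steps((8, 5), 2))  # 4: fill B, pour B->A, fill B, pour B->A
```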
Mental set also involves:
● Fixed mindset, in which an individual believes that he possesses a certain
amount of intelligence and other skills, and that no amount of effort can
help him perform better.
● Growth mindset, in which an individual believes that he can cultivate
intelligence and other skills, and so challenges himself to perform better.

3. Functional Fixedness - It refers to the way we think about physical objects.
In other words, it means that we tend to assign stable or ‘‘fixed’’ functions
to an object. As a result, we fail to think about the features of this object that
might be useful in helping us solve a problem (German & Barrett, 2005). It
hinders problem solving by thinking of an object in terms of its name or its
familiar function.
The classic study of functional fixedness is Duncker’s candle
problem (Duncker, 1945). He asked people to attach a candle to a wall,
so that its wax would not drip onto the table below, using only what was
in front of them: a box of thumbtacks and a book of matches. Most people
did not see the creative solution, which was to tack the box to the wall
and place the lit candle in the box. Duncker found that people solved the
problem much more easily when the tacks were presented outside the box,
because they could then view the box as an object of use, instead of just
a container to hold the tacks.

4. Insight Versus Noninsight Problems - Insight problems initially seem
impossible to solve; however, they are solved when the answer appears
suddenly. On the other hand, noninsight problems are solved gradually,
using top-down processing such as by using your memory, reasoning skills,
and a routine set of strategies.
There are three components of insight:
● The nature of insight - Top-down processing may prevent individuals
from solving an insight problem; however, noninsight problems—such
as straightforward algebra problems—typically do benefit from top-
down processing.
● Metacognition during problem solving - According to Metcalfe (1986)
the pattern of metacognitions differs for non insight and insight
problems. Specifically, people’s confidence builds gradually for
problems that do not require insight, such as standard high-school
algebra problems. In contrast, when people work on insight problems,
they experience a sudden leap in confidence when they are close to a
correct solution.
● Advice about problem solving - Top-down processing will be
especially useful when you approach a noninsight problem. You need a
different approach to solve an insight problem, because there are no
clear rules for these problems. In addition, it is helpful to draw
sketches, work with physical objects, or use gestures when you are
trying to solve an insight problem (Fioratou & Cowley, 2011;
Shapiro, 2011), as an insight problem forces you to search for the
answer ‘‘outside the box.”

Deductive reasoning

Deductive reasoning is the act of reaching a logically certain conclusion by
reasoning from one or more general statements about what is known. It
frequently entails applying what is known from one or more general statements
to a specific application of the general statement.

The foundation of deductive reasoning is logical propositions. A proposition is
essentially an assertion that can be true or false. "Cognitive psychology students
are bright," "Cognitive psychology students wear shoes," and "Cognitive
psychology students prefer peanut butter" are some examples. Premises are
propositions about which arguments are made in a logical argument. Cognitive
psychologists are particularly interested in propositions that can be linked in
ways that require people to develop reasoned conclusions. Deductive reasoning,
in other words, is useful because it allows people to connect multiple
propositions to derive conclusions. Cognitive psychologists are interested in
how people connect propositions in order to develop conclusions. Some of
these conclusions are sound, while others are not.

Much of the difficulty in reasoning stems from just comprehending the
language of problems. Some of the mental processes involved in language
comprehension, as well as the cerebral functioning that underpins them, are
also engaged in reasoning.

Types of deductive reasoning


Here are some types of deductive reasoning that people use.
· Conditional reasoning- Conditional reasoning is a sort of deductive
reasoning in which the reasoner must derive a conclusion based on an if-
then proposition. If antecedent condition p is met, then subsequent event
q occurs, according to the conditional if-then proposition. "Students that
study hard do well on their tests," for example. If you have established a
conditional proposition, you may reach a well-reasoned conclusion in
some cases. A common set of conditional propositions from which you can
build a well-reasoned conclusion is: "If p, then q. p. Therefore, q."
This inference (known as modus ponens) demonstrates the validity of
deductive reasoning. That is, it is logically derived from the propositions
upon which it is founded.
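The validity of the "if p, then q; p; therefore q" pattern can be checked mechanically by enumerating every truth assignment. The sketch below contrasts it with the invalid pattern of affirming the consequent; the helper names are invented for illustration.

```python
# Tiny truth-table checker for two-proposition arguments. Premises and
# conclusions are functions of the truth values (p, q).
from itertools import product

def implies(p, q):
    # truth-functional "if p then q"
    return (not p) or q

def always_valid(premises, conclusion):
    """True if the conclusion holds under every truth assignment
    that makes all the premises true."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# Modus ponens: "if p then q; p; therefore q" -- valid.
print(always_valid([implies, lambda p, q: p], lambda p, q: q))  # True
# Affirming the consequent: "if p then q; q; therefore p" -- invalid.
print(always_valid([implies, lambda p, q: q], lambda p, q: p))  # False
```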
· Syllogistic reasoning- In addition to conditional reasoning, syllogistic
reasoning, which is based on syllogisms, is a crucial component of
deductive reasoning. Syllogisms are deductive arguments in which
conclusions are drawn from two premises. Every syllogism has a major
premise, a minor premise, and a conclusion. Unfortunately, there are
situations when no logical conclusion can be drawn based on the two
stated premises. The categorical syllogism is probably the most well-
known type of syllogism. Categorical syllogisms, like other types of
syllogisms, have two
premises and a conclusion. The premises of a categorical syllogism
state something about the terms' category memberships. In fact, each
term represents all, none, or some members of a specific class or
category. Each premise, like other syllogisms, comprises two terms.
One of these must be the middle term, which is shared by both
premises. The categorical membership of the terms connects the first
and second terms in each premise. That is, one term is a member of
the class denoted by the other term. Regardless of how the premises
are phrased, they say that some (or all) of the members of the first
term's category are (or are not) members of the second term's
category. The reasoner must discover the category memberships of the
terms in order to assess whether the conclusion follows logically from
the premises. The following is an example of a categorical syllogism:
- All cognitive psychologists are pianists.
- All pianists are athletes.
- Therefore, all cognitive psychologists are athletes.
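The categorical syllogism above can be modeled with sets: treat each category as a set of (invented) individuals, read "all X are Y" as the subset relation, and the conclusion follows by the transitivity of that relation.

```python
# Set-based model of a categorical syllogism. The individuals are invented
# purely for illustration; "all X are Y" becomes "X is a subset of Y".

def all_are(xs, ys):
    return xs <= ys  # every member of xs is a member of ys

cognitive_psychologists = {"ana", "ben"}
pianists = {"ana", "ben", "carla"}
athletes = {"ana", "ben", "carla", "dev"}

# Premises: all cognitive psychologists are pianists; all pianists are athletes.
assert all_are(cognitive_psychologists, pianists)
assert all_are(pianists, athletes)

# Conclusion: all cognitive psychologists are athletes (subset transitivity).
print(all_are(cognitive_psychologists, athletes))  # True
```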

Inductive reasoning
Inductive reasoning is the act of making generalized conclusions based on
specific scenarios. It is a method of logical thinking that combines observations
with experiential information to reach a conclusion. When you use a specific set
of data or existing knowledge from past experiences to make decisions, you are
using inductive reasoning.
Inductive reasoning is a bottom-up approach.

Types of inductive reasoning


Here are some types of inductive reasoning that people use.
· Causal reasoning - Causal reasoning seeks to make cause-effect
connections. It is a form of inductive reasoning we use all
the time without ever thinking about it. For example, based on prior
experience, you can determine that it rained if the street is wet in the
morning. Your initial assumption would be rain, but there may be other
factors at play—the city chose to wash the streets that morning.
· Analogical reasoning - Analogical reasoning draws conclusions through
comparison. For the conclusion to be valid, the two entities compared
(schools, states, countries, corporations) must be fundamentally similar in many ways.
· Statistical generalization- Inductive reasoning of this kind relies on
statistical evidence to reach conclusions. A type of inductive
generalization is statistical induction or statistical generalization.
Although this kind of reasoning gives context to an assumption, it's
crucial to keep your mind open to fresh information that can refute your
theory.
· Sign reasoning - It is a type of inductive reasoning in which conclusions
about phenomena are made based on things that happen before or
correlate with (but do not cause) something else.

Creativity
Creativity is a mental and social process involving the generation of
new ideas or concepts, or new associations of the creative mind between
existing ideas or concepts. An alternative conception of creativity is that it is
simply the act of making something new.

From a scientific point of view, the products of creative thought (sometimes
referred to as divergent thought) are usually considered to have both
originality and appropriateness.

Although intuitively a simple phenomenon, creativity is in fact quite complex. It has
been studied from the perspectives of behavioural psychology, social
psychology, psychometrics, cognitive science, artificial
intelligence, philosophy, history, economics, business, and management, among
others. The studies have covered everyday creativity, exceptional creativity and
even artificial creativity. Unlike many phenomena in science, there is no single,
authoritative perspective or definition of creativity. And unlike many
phenomena in psychology, there is no standardized measurement technique.

Two of the primary components of creativity include:

1. Originality: The idea should be something new that is not simply an
extension of something else that already exists.
2. Functionality: The idea needs to actually work or possess some degree
of usefulness.

Types of Creativity

Experts also tend to distinguish between different types of creativity. The “four
c” model of creativity suggests that there are four different types:

1. “Mini-c” creativity involves personally meaningful ideas and insights
that are known only to the self.
2. “Little-c” creativity involves mostly everyday thinking and problem-
solving. This type of creativity helps people solve everyday problems
they face and adapt to changing environments.
3. “Pro-C” creativity takes place among professionals who are skilled and
creative in their respective fields. These individuals are creative in their
vocation or profession but do not achieve eminence for their works.
4. “Big-C” creativity involves creating works and ideas that are considered
great in a particular field. This type of creativity leads to eminence and
acclaim and often leads to world-changing creations such as medical
innovations, technological advances, and artistic achievements.
Module VI: Language Formation

By definition, communication is behavior that affects the behavior of others by
the transmission of information. Language is a series of codes made up of
words and rules that allow humans to communicate. The structure of human
language is complex and intricate and each language spoken in the world has
different phonological systems, which is, by definition, the sounds that are
used and how they are related to one another.

The basic rules of language are covered here, including phonology,
morphology, semantics, syntax, and how speech sounds are divided.

1. Phonology
Phonology is the use of sounds to encode messages within a spoken
human language. Babies are born with the capacity to speak any language,
because they can make sounds and hear differences in sounds that adults
are not able to. This is what parents hear as baby talk. The infant is
trying out all of the different sounds they can produce.
As the infant begins to learn the language from their parents, they begin to
narrow the range of sounds they produce to ones common to the language they
are learning, and by the age of 5, they no longer have the vocal range they
had previously.

For example, a common sound that is used in Indian languages is /dh/. To
any native Indian there is a huge difference between /dh/ and /d/, but for people
like me who cannot speak Hindi, not only can we not hear the difference, but it
is very difficult to even attempt to produce this sound. Another large variation
between languages for phonology is where in your mouth you speak from. In
English, we speak mostly from the front or middle of our mouths, but it is very
common in African languages to speak from the glottis, which is the deepest part of one's
throat. These sounds come out as deep growls, though they have great
significance in African culture.

2. Morphology
The definition of morphology is the study of the structure of words
formed together, or more simply put, the study of morphemes.
Morphemes are the smallest utterances with meaning.
Not all morphemes are words. Many languages use affixes, which carry
specific grammatical meanings and are therefore morphemes, but are not
words. For example, English-speakers do not think of the suffix “-ing” as a
word, but it is a morpheme. The creation of morphemes rather than words
also allowed anthropologists to more easily translate languages. For example,
in the English language, the prefix un- means "the opposite, not, or lacking,"
which distinguishes the word "unheard" from the word "heard."

3. Semantics
Semantics is the study of meaning. Some anthropologists have seen linguistics
as basic to a science of man because it provides a link between the biological
and sociocultural levels. Modern linguistics is diffusing widely in
anthropology itself among younger scholars, producing work of competence
that ranges from historical and descriptive studies to problems of semantic and
social variation. In the 1960s, Chomsky prompted a formal analysis of
semantics and argued that grammars needed to represent all of a speaker's
linguistic knowledge, including the meaning of words. Most semanticists
focused attention on how words are linked to each other within a language
through five different relations.
• synonymy - same meaning (ex: old and aged)
• homophony - same sound, different meaning (ex: would and wood)
• antonymy - opposite meaning (ex: tall and short)

• denotation - what words refer to in the "real" world (ex: having the word pig
refer to the actual animal, instead of pig meaning dirty, smelly, messy or
sloppy)
• connotation - additional meanings that derive from the typical contexts
in which they are used in everyday speech. (ex: calling a person a pig,
not meaning the animal but meaning that they are dirty, smelly, messy
or sloppy)

4. Syntax
The study of the arrangement and order of words, for example whether the
Subject or the Object comes first in a sentence.

Syntax is the study of rules and principles for constructing sentences in
natural languages. Syntax studies the patterns of forming sentences and phrases
as well. The term comes from ancient Greek (“syn-” means together and “taxis”
means arrangement). Outside of linguistics, syntax is also used to refer to the
rules of mathematical systems, such as logic, artificial formal languages, and
computer programming language.

There are many theoretical approaches to the study of syntax. Noam Chomsky,
a linguist, sees syntax as a branch of biology, since he views syntax as the
study of linguistic knowledge as it is represented in the human mind. Other
linguists take a Platonistic view, in that they regard syntax as the study of
an abstract formal system.

Some approaches involve:

• Generative Grammar: Noam Chomsky pioneered the generative approach to
syntax. The hypothesis is that syntax is a structure of the human mind. The
goal is to make a complete model of this inner language, and the model could
be used to describe all human language and to predict if any utterance would
sound correct to a native speaker of the language. It focuses mostly on the
form of the sentence rather than the communicative function of it. The
majority of generative theories assume that syntax is based on the constituent
structure of sentences.

• Categorial Grammar: An approach that attributes the syntactic structure to
the properties of the syntactic categories, rather than to the rules of grammar.

• Dependency Grammar: Structure is determined by the relations between
a word and its dependents rather than being based on constituent structure.

5. Speech sounds
Human Speech sounds are traditionally divided between vowels and
consonants, but scientific distinctions are much more precise. An important
distinction between sounds in many languages is the vibration of the glottis,
which is referred to as voicing.

Voicing distinguishes such sounds as /s/ (voiceless; not vibrating) and /z/
(voiced; vibrating). Pulmonic consonants are
produced by releasing air from the lungs and somehow obstructing it on its
way out of the mouth. The non-pulmonic consonants are clicks, implosives
(similar to the 'glug glug' sound sometimes made to imitate a liquid being
poured or being drunk), and ejectives. Co-articulation refers to sounds that
are produced in two areas at once (like /W/).

a. Phoneme
A phoneme is the smallest phonetic unit in a language that is capable
of conveying a distinction in meaning. For example, in English we can
tell that pail and tail are different words, so /p/ and /t/ are phonemes.
Two words differing in only one sound, like pail and tail are called a
minimal pair. The International Phonetic Association created the
International Phonetic Alphabet (IPA), a collection of standardized
representations of the sounds of spoken language.

When a native speaker does not recognize different sounds as being distinct,
they are called allophones. For example, in the English language we consider
the p in pin and the p in spin to have the same phoneme, which makes them
allophones. In Chinese, however, these two similar phones are treated
separately and both have a separate symbol in their alphabet. A language's
phonemes form a small set of units, usually about 20 to 60 in number and
different for each language, considered to be the basic distinctive units of
speech sound by which morphemes, words, and sentences are represented.

b. Morpheme

Morphemes are the smallest grammatical units of language that
have semantic meaning. In spoken language, morphemes are composed
of phonemes (the smallest unit of spoken language), but in written
language morphemes are composed of graphemes (the smallest unit of
written language).

A morpheme can stand alone, meaning it forms a word by itself, or be
a bound morpheme, where it has to attach to another bound morpheme or
a stand-alone morpheme in order to form a word. Prefixes and suffixes
are the simplest form of bound morphemes. For example, the
word "bookkeeper" has three morphemes: "book", "keep", and "-er".
This example illustrates the key difference between a word and a morpheme;
although a morpheme can be a standalone word, it can also need to be
associated with other units in order to make sense. That is,
one would not go around saying "-er" independently; it must be bound
to one or more other morphemes.

II) LANGUAGE ACQUISITION:

Language acquisition is one of the central topics in cognitive science.
Every theory of cognition has tried to explain it; probably no other topic has
aroused such controversy. Possessing a language is the quintessentially human
trait: all normal humans speak, no nonhuman animal does. Language is the
main vehicle by which we know about other people’s thoughts, and the two
must be intimately related.

Every time we speak, we are revealing something about language, so the
facts of language structure are easy to come by; these data hint at a system
of extraordinary complexity.

Nonetheless, learning a first language is something every child
does successfully, in a matter of a few years and without the need for
formal lessons. With language so close to the core of what it means to be
human, it is not surprising that children’s acquisition of language has received
so much attention.

In the past, debates about the acquisition of language centered on the
same theme as debates about the acquisition of any ability – the nature
versus nurture theme. However, current thinking about language
acquisition has incorporated the understanding that acquiring language really
involves a natural endowment modified by environment (Bates and Goodman,
1999; Dehaene-Lambertz, Hertz-Painter & Dubois, 2006; Lightfoot, 2003;
Maratos, 2003). For example, the social environment, in which infants use
their social capacities to interact with others, provides one source of
information for language acquisition (Snow, 1999; Tomasello, 1999).

Thus, the approach to studying language acquisition now revolves
around discovering what abilities are innate and how the child’s environment
tempers these abilities. This process is aptly termed innately guided learning
(see Elman & associates, 1996; Jusczyk, 1997).

Two theories to explain the acquisition of language are as follows:

A. Chomsky’s Innateness theory-

• According to Chomsky, children are born with an innate capacity
for learning human language. Humans are destined to speak.
Children discover the grammar of their language based on their own
inborn grammar. Certain aspects of language structure seem to be
preordained by the cognitive structure of the human mind. This accounts
for certain very basic universal features of language structure: every
language has nouns and verbs, consonants and vowels. It is assumed that
children are pre-programmed, hard-wired, to acquire such things.

• Every language is extremely complex, full of subtle distinctions
that speakers are not even aware of. Nevertheless, children master their
native language in 5 or 6 years regardless of their other talents and
general intellectual ability. Acquisition must certainly be more than
mere imitation; it also doesn’t seem to depend on levels of general
intelligence, since even a child with a severe intellectual disability will
acquire a native language without special training. Some innate feature of the mind must be
responsible for the universally rapid and natural acquisition of language by
any young child exposed to speech.

• Chomsky concluded that children must have an inborn faculty
for language acquisition. According to this theory, the process is
biologically determined - the human species has evolved a brain whose
neural circuits contain linguistic information at birth. The child’s
natural predisposition to learn language is triggered by hearing speech
and the child’s brain is able to interpret what s/he hears according to the
underlying principles or structures it already contains. This natural
faculty has become known as the Language Acquisition Device
(LAD).

• Chomsky did not suggest that an English child is born knowing
anything specific about English, of course. He stated that all human
languages share common principles. (For example, they all have words
for things and actions — nouns and verbs.) It is the child’s task to
establish how the specific language s/he hears expresses these
underlying principles.
B. Piaget’s Cognitive Theory

• The Swiss psychologist Jean Piaget (1896-1980) placed acquisition of language within the context of a child’s cognitive development. He argued that a child has to understand a concept before s/he can acquire the particular language form which expresses that concept. Cognitive theory views language acquisition within the context of the child’s broader intellectual development. Since the cognitive theory of language acquisition is based on Piaget’s theory of cognitive development, a brief description of this theory is necessary.

• Piaget suggested that children go through four separate stages in a fixed order
that is universal in all children. Piaget declared that these stages differ not only
in the quantity of information acquired at each, but also in the quality of
knowledge and understanding at that stage. He suggested that movement from
one stage to the next occurred when the child reached an appropriate level of
maturation and was exposed to relevant types of experiences.

• Without experience, children were assumed incapable of reaching their highest cognitive ability. Piaget’s four stages are known as the sensorimotor, preoperational, concrete operational, and formal operational stages.
➢ The sensorimotor stage lasts from birth to approximately two years. During this stage, a child has relatively little competence in representing the environment using images, language, or symbols.
➢ The preoperational stage lasts from the age of two to seven years. The most important development at this time is language. Children develop an internal representation of the world that allows them to describe people, events, and feelings. Children at this age use symbols; for example, when driving a toy car across the couch, they can pretend that the couch is actually a bridge.
➢ The concrete operational stage lasts from the age of seven to twelve years. The beginning of this stage is marked by mastery of the principle of conservation. Children develop the ability to think in a more logical manner, and they begin to overcome some of the egocentric characteristics of the preoperational period.
➢ The formal operational stage begins in most people at age twelve and
continues into adulthood. This stage produces a new kind of thinking that is
abstract, formal, and logical.
• The cognitive theory of language acquisition suggests that a child first becomes aware of a concept, such as relative size, and only afterward acquires the words and patterns to convey that concept. Simple ideas are expressed earlier than more complex ones, even if the latter are grammatically more complicated; the conditional mood, for instance, is one of the last to appear. Conceptual development might affect language development: if a child has not yet mastered a difficult semantic distinction, he or she may be unable to master the syntax of the construction dedicated to expressing it.

Chomsky’s Theory of Language Development

In contrast to the ideas proposed by operant conditioning and social cognitive theory, both of which focus on interactions in the environment, Noam Chomsky
(1968) developed a theory that proposes that the human brain is innately wired to
learn language, a theory known as nativism. He believed that children could not
learn something as complex as human language as quickly as they do unless there
is already a grammatical structure for language hardwired in their brains before
they ever hear human language. He called this universal grammar. According
to this theory, hearing spoken language triggers the activation of this structure
and does more than just promote imitation. Chomsky believed that the language
that we usually hear is not enough on its own to explain the construction of all of
the rules of language that children quickly learn.
Children are born with an innate capacity for learning human language. Humans
are destined to speak. Children discover the grammar of their language based on
their own inborn grammar. Certain aspects of language structure seem to be
preordained by the cognitive structure of the human mind. This accounts for
certain very basic universal features of language structure: every language has
nouns/verbs, consonants and vowels. It is assumed that children are
preprogrammed, hard-wired, to acquire such things.
Yet no one has been able to explain how quickly and perfectly all children acquire
their native language. Every language is extremely complex, full of subtle
distinctions that speakers are not even aware of. Nevertheless, children master
their native language in 5 or 6 years regardless of their other talents and general
intellectual ability. Acquisition must certainly be more than mere imitation; it
also doesn’t seem to depend on levels of general intelligence, since even a
severely retarded child will acquire a native language without special training.
Some innate feature of the mind must be responsible for the universally rapid and
natural acquisition of language by any young child exposed to speech.
Chomsky concluded that children must have an inborn faculty for language
acquisition. According to this theory, the process is biologically determined -
the human species has evolved a brain whose neural circuits contain linguistic
information at birth. The child’s natural predisposition to learn language is
triggered by hearing speech and the child’s brain is able to interpret what s/he
hears according to the underlying principles or structures it already

contains. This natural faculty has become known as the Language Acquisition
Device (LAD). Chomsky’s ground-breaking theory remains at the centre of the
debate about language acquisition. However, it has been modified, both by
Chomsky himself and by others. Chomsky’s original position was that the LAD
contained specific knowledge about language. Dan Isaac Slobin has proposed that
it may be more like a mechanism for working out the rules of language.
Chomsky (1975) also proposed that language is modular; people have a set of
specific linguistic abilities that is separated from our other cognitive processes,
such as memory and decision making. Because language is modular, Chomsky
(2002, 2006) argued that young children learn complex linguistic structures many
years before they master other, simpler tasks, such as mental arithmetic. In
addition, Chomsky (1957, 2006) pointed out the difference between the deep
structure and the surface structure of a sentence. The surface structure is
represented by the words that are actually spoken or written. In contrast, the deep
structure is the underlying, more abstract meaning of a sentence. People use
transformational rules to convert deep structure into a surface structure that
they can speak or write. Two sentences may have very different surface
structures, but very similar deep structures. For example, ‘‘Sara threw the ball’’
and ‘‘The ball was thrown by Sara.’’ These two surface structures differ: none of
the words occupies the same position in both sentences. In addition, three of the
words in the second sentence do not even appear in the first sentence. However,
‘‘deep down,’’ speakers of English feel that the sentences have identical core
meanings. Chomsky (1957, 2006) also pointed out that two sentences may have
identical surface structures but very different deep structures; these are called
ambiguous sentences.
Children are often heard making grammatical errors such as “I sawed,” and
“sheeps” which they would not have learned from hearing adults communicate.
This shows the child using the LAD to get to grips with the rules of language.
Once the child has mastered this skill, they are only in need of learning new words
as they can then apply the rules of grammar from the LAD to form sentences.
Chomsky proposed that native-speaking children would become fluent by the age
of ten. He also argued that if children learn two languages from birth, they are
more likely to be fluent in both.
Limitations of Chomsky’s Theory

● He did not study real children. The theory relies on children being exposed
to language but takes no account of the interaction between children and
their carers. Nor does it recognise the reasons why a child might want to
speak (the functions of language).
● He has made a number of strong claims about language: in particular, he
suggests that language is an innate faculty – that is to say that we are born
with a set of rules about language in our minds, which he refers to as the
‘Universal Grammar’. Universal grammar is the basis upon which all human
languages build. Chomsky’s ideas have profoundly affected linguistics and
mind-science in general.
● Critics attacked his theories from the outset and are still attacking,
paradoxically demonstrating his enduring dominance. For example, in his
book The Kingdom of Speech, Tom Wolfe asserts that both Darwin and
“Noam Charisma” were wrong.
● The theory does not take into consideration children with conditions such as
Down’s syndrome whose language development may be affected or delayed.
● He overemphasised grammar in sentence structure rather than how
children construct meaning from their sentences.

Language and Thought Psychology

Language is one of the systems through which we communicate. It typically involves communicating through sounds and through written symbols, but it can also involve our bodies: body language, such as how we smile, move, and approach people, is also open to interpretation.
Language is often closely related to the culture that uses it and reflects
culturally relevant ideas.

As we also tend to think using language, Sapir and Whorf theorised that the language we use will affect how we see and think of the world. However, Piaget highlights that children develop schemas before they are capable of speaking, suggesting that cognitive processes do not depend on language.

Relationship Between Language and Thought

Different theories propose different relationships between language and thought. Piaget's theory of cognitive development argues that children's ability to use language and the content of their speech depends on their stage of cognitive development.

In contrast, the Sapir-Whorf hypothesis proposes that the language we use to communicate determines how we think of the world around us, affecting cognitive processes like memory and perception.

Other Theories of Language and Thought

Other developmental conceptualisations of language include the theories of Chomsky and Vygotsky. Chomsky focuses on how children acquire linguistic abilities at such a young age. Vygotsky's theory highlights how language drives further cognitive development in children.

Language and Thought Chomsky

Chomsky proposed that language acquisition is an innate ability. Children are already born with the ability to acquire the rules that govern languages. Some grammatical principles are common to all languages, even though specific rules differ across them.

An innate ability to acquire the grammatical structures of a language allows children to learn the language quickly, even based on the limited linguistic input they receive in infancy.

Language and Thought Vygotsky

According to Vygotsky's sociocultural theory of cognitive development, in early development speech and thought are independent. The two processes merge when speech is internalised. In Vygotsky's theory, language is considered to be a cultural tool that plays a key role in development.

• Firstly, verbal guidance from adults supports children's learning and development. Language allows adults to share their knowledge and communicate with the child.
• Secondly, when language becomes internalised and develops into inner speech, it allows children to guide themselves when making decisions, problem-solving or regulating their behaviour.
