Cognitive Psychology Notes
MODULE 1
Historical Background
Aristotle
• The Greek philosopher Aristotle (384–322 BCE) examined topics such as
perception, memory, and mental imagery.
• He also discussed how humans acquire knowledge through experience
and observation (Barnes, 2004; Sternberg, 1999).
• Aristotle emphasized the importance of empirical evidence, or scientific
evidence obtained by careful observation and experimentation.
Wilhelm Wundt (1832 - 1920)
• Wundt proposed that psychology should study mental processes, using a
technique called introspection.
• Introspection meant that carefully trained observers would
systematically analyze their own sensations and report them as
objectively as possible, under standardized conditions.
Hermann Ebbinghaus (1850–1909)
• Was the first person to scientifically study human memory.
• He examined a variety of factors that might influence performance, such
as the amount of time between two presentations of a list of items.
• He used nonsense syllables as study materials, e.g., TVX, DRG, UAJ.
The three basic memory processes:
• Encoding
• Storing
• Retrieving
Ecology
• the branch of biology that deals with the relations of organisms to one
another and to their physical surroundings.
• the study of relationships between organisms and their physical and social
environments
Ecological Psychology
Ecological psychology stresses the ways in which the environment and the
context shape how cognitive processing occurs.
• the analysis of behavior settings with the aim of predicting patterns of
behavior that occur within certain settings.
• The focus is on the role of the physical and social elements of the setting
in producing the behavior.
• According to behavior-setting theory, the behavior that will occur in a
particular setting is largely prescribed by the roles that exist in that setting
and the actions of those in such roles, irrespective of the personalities,
age, gender, and other characteristics of the individuals present.
• In a place of worship, for example, one or more individuals have the role
of leaders (the clergy), whereas a larger number of participants function
as an audience (the congregation).
• Both the functionalist and the Gestalt schools influenced the ecological
approach. The functionalists focused on the purposes served by cognitive
processes, certainly an ecological question. Gestalt psychology’s emphasis
on the context surrounding any experience is likewise compatible with the
ecological approach.
Contemporary Cognitive Psychology
• In 1879, the first psychology lab was opened, and this date is generally
thought of as the beginning of psychology as a separate science.
• The early work in psychology paved the way for the study of
contemporary cognitive psychology today.
• Contemporary cognitive psychology is an outgrowth of historical
cognitive psychology, with influences from Jean Piaget and Information
Processing theorists.
• Contemporary cognitive psychology can apply the methods of today,
such as brain imaging, and can also examine how the tenets of cognitive
psychology are applied to contemporary issues.
Cognitive Revolution
• Intellectual shift in psychology in the 1950s focusing on the internal
mental processes driving human behavior.
• The study of human thought became interdisciplinary by directing
attention to processing skills including language acquisition, memory,
problem-solving, and learning.
• This scientific approach to understanding how the brain works moved
away from behavioral psychology and embraced understanding the
processes that drive behavior.
• The events of the 1950s and 1960s are collectively referred to as the cognitive revolution.
• But it is important to realize that although the revolution made it
acceptable to study the mind, the field of cognitive psychology continued
to evolve in the decades that followed.
MODULE 2
ATTENTION
Process of Attention
The process through which certain stimuli are selected from a group of others is
generally referred to as attention. Besides selection, attention also refers to
several other properties like alertness, concentration, and search.
Types of Attention
Selective attention
Have you ever been at a loud concert or a busy restaurant, and you are trying to
listen to the person you are with? While it can be hard to hear every word, you
can usually pick up most of the conversation if you're trying hard enough. This
is because you are choosing to focus on this one person's voice, as opposed to
say, the people speaking around you. Selective attention takes place when we
block out certain features of our environment and focus on one particular
feature, like the conversation you are having with your friend.
Divided attention
Do you ever do two things at once? If you're like most people, you do that a lot.
Maybe you talk to a friend on the phone while you're straightening up the
house. Nowadays, there are people everywhere texting on their phones while
they're spending time with someone. When we are paying attention to two
things at once, we are using divided attention.
Some instances of divided attention are easier to manage than others. For
example, straightening up the home while talking on the phone may not be hard
if there's not much of a mess to focus on. Texting while you are trying to talk to
someone in front of you, however, is much more difficult. Both age and the
degree to which you are accustomed to dividing your attention make a
difference in how adept at it you are.
Sustained attention
Are you someone who can work at one task for a long time? If you are, you are
good at using sustained attention. This happens when we can concentrate on a
task, event, or feature in our environment for a prolonged period of time. Think
about people you have watched who spend a lot of time working on a project,
like painting or even listening intently to another share their story.
Sustained attention is also commonly referred to as one's attention span. It takes
place when we can continually focus on one thing happening, rather than losing
focus and having to keep bringing it back. People can get better at sustained
attention as they practice it.
Executive attention
Do you feel able to focus intently enough to create goals and monitor your
progress? If you are inclined to do these things, you are displaying executive
attention. Executive attention is particularly good at blocking out unimportant
features of the environment and attending to what really matters. It is the
attention we use when we are making steps toward a particular end.
For example, maybe you need to finish a research project by the end of the day.
You might start by making a plan, or you might jump into it and attack different
parts of it as they come. You keep track of what you've done, what more you
have to do, and how you are progressing. You are focusing on these things in
order to reach the goal of a finished research paper. That is using your executive
attention.
Theory of Attention
Anne Treisman proposed her selective attention theory in 1964. Her theory is
based on the earlier filter model by Broadbent. Treisman also believed that this
human filter selects sensory inputs on the basis of physical characteristics.
However, she argued that the unattended sensory inputs (the ones that were not
chosen by the filter and remain in the sensory buffer) are attenuated by the filter
rather than eliminated. Attenuation is a process in which the unselected sensory
inputs are processed at a decreased intensity. For instance, if you selectively
attend to a ringing phone in a room where there is a TV, a crying baby, and people
talking, the latter three sound sources are attenuated, or decreased in volume.
However, when the baby's cry grows louder, you may turn your attention to the
baby because the sound input is still there, not lost.
Cognitive neuropsychology
According to Michael Posner, the attentional system in the brain “is neither a
property of a single brain area nor of the entire brain” (Posner & Dehaene,
1994, p. 75).
In 2007, Posner teamed up with Mary Rothbart and they conducted a review of
neuroimaging studies in the area of attention to investigate whether the many
diverse results of studies conducted pointed to a common direction.
They found that what at first seemed like an unclear pattern of activation could
be effectively organized into areas associated with the three subfunctions of
attention: alerting, orienting, and executive attention.
• The researchers organized the findings to describe each of these functions
in terms of
– the brain areas involved,
– the neurotransmitters that modulate the changes,
– and the results of dysfunction within this system.
MODULE 3
PERCEPTION
Perception
Top-down processing
Top-down processing holds that perception is guided by prior knowledge, expectations, and schemas, which we use to interpret incoming sensory information.
Bottom-up processing
Bottom-up processing states that we begin to perceive new stimuli through the
process of sensation and the use of our schemas is not required. James J. Gibson
(1966) argued that no learning was required to perceive new stimuli.
Perceptual Learning
Perceptual Development
Sensation
To understand what an infant can sense, researchers often present two stimuli
and record the baby’s response.
• For example, a baby is given a sweet-tasting substance and a sour-tasting
substance.
• If the baby consistently responds differently to the two stimuli, then the
infant must be able to distinguish between them.
Smell
Infants have a keen sense of smell and respond positively to pleasant smells and
negatively to unpleasant smells (Menella, 1997).
Taste
Vision
• Vision is the least mature of all the senses at birth because the fetus has
nothing to look at, so visual connections in the brain can’t form until
birth.
• Visual acuity is defined as the smallest pattern that can be distinguished
dependably.
• Infants prefer to look at patterned stimuli instead of plain, non-
patterned stimuli
• To estimate an infant’s visual acuity, we pair gray squares with squares
that differ in the width of their stripes.
• Newborns can perceive few colors, but by 3-4 months infants are able
to see the full range of colors (Kellman, 1998).
• In fact, by 3-4 months infants have color perception similar to
adults (Adams, 1995).
Hearing
• Hearing is the most mature sense at birth. In fact, some sounds trigger
reflexes even without conscious perception.
– The fetus most likely heard these sounds in the womb during last
trimester
– Sudden sounds startle babies, making them cry, while rhythmic
sounds, like a heartbeat or lullaby, can put a baby to sleep.
• Even in the first days of life, infants turn their heads toward the source of
sounds, and they can distinguish voices, language, and rhythm.
Auditory Threshold
Perceptual Constancies
• An important part of perceiving objects is recognizing that the same object
can look very different from moment to moment, yet still be perceived as the same object.
• Infants master size constancy very early on
– They recognize that an object remains the same size despite its
distance from the observer
Depth Perception
• Infants are not born with depth perception; it must develop. The images
on the back of our eyes are flat and two-dimensional.
• To create a 3-D view of the world, the brain combines information from
the separate images of the two eyes, a difference known as retinal disparity.
• Visual experience along with development in the brain lead to the
emergence of binocular depth perception around 3-5 months of age.
Face Recognition
Handedness
• Young babies reach for objects without a preference for one hand over
the other
• The preference for one hand over the other becomes stronger and more
consistent during preschool years
– By the time children are ready to enter kindergarten, handedness is
well established and very difficult to reverse
– Handedness is determined by heredity and environmental factors
– Approximately 10% of children write left-handed
• At birth infants can turn their heads from side to side while lying on their
backs
– By 2-3 months they can lift their heads while lying on their
stomachs
– By 4 months infants can keep heads erect while being held or
supported in a sitting position
Crawling
• Begins as belly-crawling
– The “inchworm belly-flop” style
– Most belly crawlers then shift to hands-and-knees, or in some
cases, hands-and-feet
• Some infants will adopt a different style of locomotion in place of
crawling such as bottom-shuffling while some infants skip crawling
altogether
• Due to the “back-to-sleep” movement, infants spend less time on their
tummies, which may limit their opportunity to learn how to propel
themselves.
Walking – Stepping
• After infancy fine motor skills progress rapidly and older children become
more dexterous because these movements involve the use of small muscle
groups
• These consist of small body movements, especially of the hands and
fingers.
– such as drawing, writing your name, picking up a coin, buttoning
or zipping a coat.
In a viewer-centered representation, the individual stores the way the object
looks to him or her.
Thus, what matters is the appearance of the object to the viewer (in this case,
the appearance of the computer to the author), not the actual structure of the
object.
The shape of the object changes, depending on the angle from which we look at
it.
A number of views of the object are stored, and when we try to recognize an
object, we have to rotate that object in our mind until it fits one of the stored
images.
Consider, for example, the computer on which this text is being written: its appearance to the author changes with the angle from which it is viewed.
Depth cue
• any of a variety of means used to inform the visual system about the
depth of a target or its distance from the observer.
• Monocular cues require only one eye and include signals about the state
of the ciliary muscles, atmospheric perspective, linear perspective, and
occlusion of distant objects by near objects.
• Binocular cues require integration of information from the two eyes and
include signals about the convergence of the eyes and binocular disparity.
Convergence
The rotation of the two eyes inward toward a light source so that the image
falls on corresponding points on the foveas. Convergence enables the
slightly different images of an object seen by each eye to come together and
form a single image. The muscular tension exerted is also a cue to the
distance of the object from the eyes.
Binocular Disparity
The slight difference between the right and left retinal images. When both
eyes focus on an object, the different position of the eyes produces a
disparity of visual angle, and a slightly different image is received by each
retina. The two images are automatically compared and, if sufficiently
similar, are fused, providing an important cue to depth perception. Also
called retinal disparity.
• Touch - tactile
• Sound - auditory
• Sight - visual
• Taste - gustatory
• Smell – olfactory
Just as there are palpable effects on experience, thought, and action of past
events that cannot be consciously remembered, there also appear to be similar
effects of events in the current stimulus environment, which cannot be
consciously perceived.
By the same token, implicit perception entails any change in the person’s
experience, thought, or action which is attributable to such an event, in the
absence of (or independent of) conscious perception of that event.
Learning
Definitions of Learning:
3. Crow & Crow: “Learning is the acquisition of habits, knowledge, and attitudes.
It involves new ways of doing things, and it operates in an individual’s attempts to
overcome obstacles or to adjust to new situations. It represents progressive
changes in behaviour. It enables him to satisfy interests and to attain goals.”
NATURE OF LEARNING
1. Learning is Universal: Every creature that lives learns, and man learns the most.
The human nervous system is very complex, and so are human reactions and
human acquisitions. Positive learning is vital for children’s growth and
development.
3. Learning is from all Sides: Today learning is from all sides. Children learn
from parents, teachers, environment, nature, media etc.
10. Learning is not directly observable. The only way to study learning is
through some observable behaviour. Actually, we cannot observe learning; we
see only what precedes performance, the performance itself, and the
consequences of performance.
Process of Learning
4 Theories of learning
1. Classical Conditioning
2. Operant Conditioning
3. Cognitive Theory.
4. Social Learning Theory.
Classical Conditioning
• What Pavlov did next was to accompany the offering of meat to
the dog with the ringing of a bell.
• He did this several times. Afterwards, he merely rang the bell without
presenting the meat. Now, the dog began to salivate as soon as the bell
rang.
• After a while, the dog would salivate merely at the sound of the bell,
even if no meat were presented. In effect, the dog had learned to
respond i.e. to salivate to the bell.
Pavlov concluded that the dog had become classically conditioned to salivate
(response) to the sound of the bell (stimulus). This shows that classical
conditioning can take place among animals based on stimulus-
response (S-R) connections.
There are four major factors that affect the strength of a classically
conditioned response and the length of time required for classical conditioning.
For example, a tone that is always followed by food will elicit more
salivation than one that is followed by food only some of the time.
1. Human beings are more complex than dogs but less amenable to
simple cause-and-effect conditioning.
Operant Conditioning
For example, working hard and getting the promotion will probably cause the
person to keep working hard in the future.
Learning by insight
Edward Tolman (1886 – 1959) differed with the prevailing ideas on learning
1. Attention Process
2. Retention Process
3. Motor Reproduction Process
4. Reinforcement Process
1. Attention Process: People can learn from their models provided they
recognise and pay attention to the critical features. In practice, the
models that are attractive, repeatedly available or important to us tend
to influence us the most.
Self-efficacy
MEMORY
The multi-store model of memory (also known as the modal model) was
proposed by Richard Atkinson and Richard Shiffrin (1968) and is a structural
model. They proposed that memory consisted of three stores: a sensory
register, short-term memory (STM) and long-term memory (LTM).
SENSORY MEMORY
Sensory memory is the first stage of memory, the point at which information
enters the nervous system through the sensory systems—eyes, ears, nose,
tongue, and skin. The sensory register is a memory system that works for a
very brief period of time that stores a record of information received by
receptor cells until the information is selected for further processing or
discarded. Information is encoded into sensory memory as neural messages in
the nervous system. As long as those neural messages are traveling through
the system, it can be said that people have a “memory” for that information
that can be accessed if needed.
A. The sensory memory register is specific to individual senses:
1. Iconic memory for visual information
INTRODUCTION :-
Short Term memory (STM) is the place where small amounts of
information can be temporarily kept for more than a few seconds but
usually for less than one minute (Baddeley, Vallar, & Shallice, 1990).
Information in short-term memory is not stored permanently; rather, it
becomes available for us to process as we make sense of, modify, interpret,
and store it.
Psychologists distinguish STM from LTM on the basis of characteristics
such as capacity, retention duration, and the type of code used.
Researchers regard the acoustic code as the dominant code used in STM (Neath
& Surprenant, 2003).
RETENTION DURATION AND FORGETTING -
∙ John Brown (1958) and Peterson and Peterson (1959) concluded from
their consonant-trigram counting studies that, if not rehearsed,
information in STM is lost (it decays or breaks apart) within about 20 seconds.
That length of time is called the retention duration of the memory.
∙ However, other cognitive psychologists proposed a different mechanism,
called interference.
Some information can “displace” other information, making the former
hard to retrieve.
∙ Waugh and Norman (1965) in their study of probe digit task showed that
interference is the reason for the loss of information in STM rather than
decay.
∙ Keppel and Underwood (1962), found that forgetting in the Brown–
Peterson task doesn’t happen until after a few trials. They suggested that over
time, proactive interference builds up.
This term refers to the fact that material learned first can disrupt
retention of subsequently learned material.
∙ Wickens, Born, and Allen (1963) reasoned that the greater the similarity
among the pieces of information, the greater the interference.
∙ All the evidence might suggest that all cognitive psychologists agree that
only interference causes forgetting in STM, but it is not so. Reitman (1971,
1974) concluded that information really could decay if not rehearsed in STM.
RETRIEVAL OF INFORMATION -
∙ Saul Sternberg (1966, 1969), in a series of experiments, found serial,
exhaustive search as the way one retrieves information from STM. He
explained that the search process itself may be so rapid and have such
momentum it is hard to stop once it starts.
∙ An intriguing twist on the Sternberg study comes from DeRosa and Tkacz
(1976), who demonstrated that with certain kinds of stimuli, people
apparently search STM in a parallel way. This work suggests that STM treats
ordered, organized material differently from unorganized material.
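As an illustration of the distinction Sternberg drew, here is a minimal sketch assuming a small memory set of digits; the function names are invented, and this shows only the two search schemes, not his experimental procedure:

```python
def serial_exhaustive_search(memory_set, probe):
    """Scan every item in the memory set, even after a match is found,
    then report whether the probe was present (Sternberg's proposal)."""
    found = False
    for item in memory_set:          # always visits every item
        if item == probe:
            found = True             # note the match but keep scanning
    return found

def serial_self_terminating_search(memory_set, probe):
    """Stop scanning as soon as the probe is found (the alternative
    that Sternberg's reaction-time data argued against)."""
    for item in memory_set:
        if item == probe:
            return True              # stop immediately on a match
    return False

print(serial_exhaustive_search([3, 7, 1, 9], 7))        # True
print(serial_self_terminating_search([3, 7, 1, 9], 7))  # True
```

On the exhaustive scheme, the time to respond depends on the size of the memory set but not on whether or where the probe appears, which is the pattern Sternberg's reaction-time data supported.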
Explicit Memory
Explicit memory usually refers to all the memories and information that can
be evoked consciously. The encoding of explicit memories is done in the
hippocampus, but they are stored
somewhere in the temporal lobe of the brain. The medial temporal lobe is
also involved in this type of memory and damage to MTL is linked to poor
explicit memory.
The other name used for explicit memory is declarative memory. Explicit or
declarative memory is divided into two types: episodic and semantic
memory.
WORKING MEMORY
In 1690, John Locke distinguished between contemplation, or holding an idea
in mind, and memory, or the power to revive an idea after it has disappeared
from the mind (Logie, 1996). The holding in mind is limited to a few concepts
at once and reflects what is now called working memory, as opposed to the
possibly unlimited store of knowledge from a lifetime that is now called long-
term memory. The term “working memory” became much more dominant in
the field after Baddeley and Hitch (1974) demonstrated that a single module
could not account for all kinds of temporary memory. Their thinking led to
an influential model (Baddeley, 1986) in which verbal-phonological and
visual-spatial representations were held separately, and were managed and
manipulated with the help of attention-related processes, termed the
central executive.
Working memory can be defined as the retention of a small amount of
information in a readily accessible form. It facilitates planning,
comprehension, reasoning, and problem-solving.
Working memory holds new information in place so the brain can work with
it briefly and connect it with other information. For example, in math class,
working memory lets children “see” in their head the numbers the teacher is
saying. They might not remember any of these
numbers by the next class or even 10 minutes later. But that’s fine since the
working memory has done its short-term job by helping them tackle the task
at hand.
Working memory capacity is the ability to hold onto and use specific
information for a certain amount of time. Working memory capacity has been
shown to increase with age; however there are key differences across
individuals. It's also important to understand that working memory capacity
is quite low and limited when compared to other kinds of memory, since a
certain amount of information can only be held for a certain amount of time.
Research has shown that the number of items we can hold at once in our
working memory is seven, plus or minus two items. According to Miller, our
ability to put new information into categories via the process of chunking
increases the number of items we can retain at once. This involves grouping
the information into meaningful pieces that can be remembered better. An
example of this would be if you were given a list of unrelated words to retain,
you would probably categorize them into respective chunks: for example, the
words "bird," "mouse," and even "bug" could be classified into the 'animals'
chunk, and "book," "pen," "house" into the ' inanimate items' chunk. Other
theories, such as those of Cowan, claim that we can only retain three or four
items at once.
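As a minimal sketch of the chunking idea just described, the following groups the word list from the example above under category labels; both the list and the labels are purely illustrative:

```python
# A minimal illustration of chunking: six separate words become two chunks.
words = ["bird", "mouse", "bug", "book", "pen", "house"]
categories = {
    "animals": {"bird", "mouse", "bug"},
    "inanimate items": {"book", "pen", "house"},
}

# Group each word under its (hypothetical) category label.
chunks = {label: [w for w in words if w in members]
          for label, members in categories.items()}

print(len(words))   # 6 individual items to hold in working memory
print(len(chunks))  # 2 chunks to hold instead
print(chunks)
```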
Maintenance describes the ability to 'hold' information in the mind for a given
amount of time. This is a very important feature of working memory to
understand because, without it, we lose our ability to use and manipulate this
kind of short-lived memory and information. Biologically, maintenance
requires the activation of key brain regions associated with problem solving
and memory, in addition to many other areas that may aid in our ability to use
the information.
Rehearsal refers to the process of consistently repeating learned information
in a meaningful way, such as going through a list of groceries not just seeing
unrelated words, but trying to think of how they could contribute to future
meals. Rehearsal could also be giving meaning to recently learned or encoded
information, such as making mnemonics in order to remember a list of given
words in order. For example, in order to remember the planets in our solar
system in order, we can remember the mnemonic "My Very Educated Mother
Just Served Us Noodles", corresponding to Mercury, Venus, Earth, Mars,
Jupiter, Saturn, Uranus, and Neptune.
Maintenance and rehearsal are ways to increase the likelihood of
remembering something. This is because memories that are not rehearsed or
maintained degrade and become inaccurate over time.
Working memory and short-term memory are often confused as the same thing, but
working memory differs from short-term memory in that it assumes both the
storage and manipulation of information, and in the emphasis on its functional
role in complex cognition.
Metamemory
Metamemory refers to the processes and structures whereby people are able
to examine the content of their memories, either prospectively or
retrospectively, and make judgments or commentaries about them.
The simplest form of metamemory is knowing you have a memory without
remembering the details. If you’ve ever experienced the tip-of-the-tongue
phenomenon (you can’t recall a memory but know you have it, like a movie’s
name), you’ve concluded you have a memory via your metamemory, not
actual memory. From a computer programming point of view, it’s knowing
there is (or isn’t) an item in a database without actually looking it up, like
looking at only the key or value in a pair.
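A minimal sketch of the programming analogy above, assuming a Python dictionary stands in for the "database"; the names and stored content are invented for illustration:

```python
# A toy "database" of memories: keys are what we know we know,
# values are the actual remembered details.
movie_facts = {"Inception": "2010 film directed by Christopher Nolan"}

def know_that_i_know(title):
    # Metamemory-style judgment: check only whether an entry exists,
    # without retrieving its content (like the tip-of-the-tongue state).
    return title in movie_facts

def actually_recall(title):
    # Actual recall: retrieve the stored content itself.
    return movie_facts.get(title)

print(know_that_i_know("Inception"))  # True -> "I know that I know this movie"
print(actually_recall("Inception"))   # the remembered detail itself
```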
All humans have the capacity to quickly figure out what they know and don’t
know, and their judgments are quite accurate. For example, without searching
the entire mind for all memory units, we can easily tell if we know the answer
to a question in an exam. We can look at the question paper and quickly assess
what we know without going through the details of what we know. This may
seem paradoxical at first, but it isn’t – how can we know what’s in a drawer
without checking it? Humans have multiple representations for memory that
indicate whether a memory exists or not and to what extent. For example,
artificially triggering a small set of neurons that are attached to a memory can
activate a larger network of neurons that hold that memory.
2. Task: Knowledge that a task demands memory and recall ability vs.
familiarity or on-the-go analysis. This tells you whether a task calls on
memory, on-the-spot analysis, or both.
8. Locus: Knowledge about how much control one has over one’s memory
(little control vs. high control). This would contain beliefs about how
well you can improve your memory, or how easily you can lose it through
your own actions (such as drinking or not exercising), vs. the belief that
genetics play the main role and that you can’t change your memory abilities
beyond a small limit.
Theories of metamemory
The competition hypothesis: The brain gets constant inputs from various
senses and from within itself. These inputs compete with one another for
processing power in the brain. When the competition is high, metamemory
judgments about knowing something are inaccurate or low in confidence. This might
happen because high competition would lead to multiple cue
and target activations, which create confusion. So low competition would
increase confidence in memory. It would be easy to say, “I know what love is,”
but when asked, “How is it different from compassion or lust?” one would find
it difficult to differentiate the three. This extra activation of competing inputs
fighting for the brain’s resources would make recall worse.
The interaction hypothesis: This theory suggests that the cue-familiarity hypothesis
is the preferred mode for metamemory, and that the accessibility hypothesis comes
into play only when cue familiarity fails. In the car example, one would instantly know
what a car is. But when asked “what is a 4-wheel drive,” they would take time
to retrieve partial information about a 4-wheel drive, if it exists, and then
judge if they know or don’t know.
For instance, think about the concept “bachelor.” The defining features
here include “male,” “unmarried,” and “adult.” It is not possible for a 2-
year-old to be a bachelor (in our common use of the term), nor for a
woman, nor for a married man. Features such as “is young” or “lives in
own apartment” are also typically associated with bachelors, though not
necessarily in the way that “male” and “unmarried” are—these are the
characteristic features.
The feature comparison model can explain many findings that the
hierarchical network model could not. One finding it explains is the
typicality effect: Sentences such as “A robin is a bird” are verified more
quickly than sentences such as “A turkey is a bird” because robins, being
more typical examples of birds, are thought to share more characteristic
features with “bird” than do turkeys.
The feature comparison model also explains fast rejections of false
sentences, such as “A table is a fruit.” In this case, the list of features for
“table” and the list for “fruit” presumably share very few entries.
The feature comparison model also provides an explanation for a finding
known as the category size effect (Landauer & Meyer, 1972). This term
refers to the fact that if one term is a subcategory of another term, people
will generally be faster to verify the sentence with the smaller category.
That is, people are faster to verify the sentence “A collie is a dog” than to
verify “A collie is an animal,” because the set of dogs is part of the set of
animals. The feature comparison model explains this effect as follows.
It assumes that as categories grow larger (for example, from robin, to bird,
to animal, to living thing), they also become more abstract. With increased
abstractness, there are fewer defining features. Thus in the first stage of
processing there is less overlap between the feature list of a term and the
feature list of an abstract category.
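The first stage of the comparison can be sketched as a simple overlap computation; the feature sets below are invented for illustration and are not taken from the original studies:

```python
def feature_overlap(instance_features, category_features):
    """Stage 1 of the comparison: proportion of features shared
    between an instance and a category (an overlap ratio)."""
    shared = instance_features & category_features
    return len(shared) / len(instance_features | category_features)

robin  = {"has feathers", "flies", "lays eggs", "sings", "small"}
turkey = {"has feathers", "lays eggs", "large", "ground-dwelling"}
bird   = {"has feathers", "flies", "lays eggs", "sings"}
fruit  = {"grows on plants", "edible", "contains seeds"}
table  = {"furniture", "flat top", "has legs"}

print(feature_overlap(robin, bird))   # high overlap -> fast "true" (typicality effect)
print(feature_overlap(turkey, bird))  # middling overlap -> slower; needs the second stage
print(feature_overlap(table, fruit))  # near-zero overlap -> fast "false"
```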
• Connectionist Models
Connectionist models, also known as Parallel Distributed Processing
(PDP) models, are a class of computational models often used to model
aspects of human perception, cognition, and behaviour, the learning
processes underlying such behaviour, and the storage and retrieval of
information from memory. The approach embodies a
particular perspective in cognitive science, one that is based on the idea
that our understanding of behaviour and of mental states should be
informed and constrained by our knowledge of the neural processes that
underpin cognition. While neural network modelling has a history dating
back to the 1950s, it was only at the beginning of the 1980s that
the approach gained widespread recognition, with the publication of two
books edited by D.E. Rumelhart & J.L. McClelland (McClelland &
Rumelhart, 1986; Rumelhart & McClelland, 1986), in which the basic
principles of the approach were laid out, and its application to a number
of psychological topics were developed. Connectionist models of
cognitive processes have now been proposed in many different domains,
ranging from different aspects of language processing to cognitive control,
from perception to memory. Whereas the specific architecture of such
models often differs substantially from one application to another, all
models share a number of central assumptions that collectively
characterize the “connectionist” approach in cognitive science. One of
the central features of the approach is the emphasis it has placed on
mechanisms of change. In contrast to traditional computational modelling
methods in cognitive science, connectionism takes it that understanding
the mechanisms involved in some cognitive process should be informed
by the manner in which the system changed over time as it developed and
learned.
Connectionist models take inspiration from the manner in which
information processing occurs in the brain. Processing involves the
propagation of activation among simple units (artificial neurons)
organized in networks, that is, linked to each other through weighted
connections representing synapses or groups thereof. Each unit
then transmits its activation level to other units in the network by means
of its connections to those units. The activation function, that is, the
function that describes how each unit computes its activation based on its
inputs, may be a simple linear function, but is more typically non-linear
(for instance, a sigmoid function).
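As a minimal sketch of the unit-level computation just described, the following computes one unit's activation as a sigmoid of the weighted sum of its inputs; the weights and input activations are arbitrary illustrative values rather than parameters of any published model:

```python
import math

def sigmoid(x):
    """A common non-linear activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def unit_activation(incoming_activations, connection_weights, bias=0.0):
    """One artificial neuron: sigmoid of the weighted sum of the
    activations arriving over its weighted connections ("synapses")."""
    net_input = sum(a * w for a, w in zip(incoming_activations, connection_weights)) + bias
    return sigmoid(net_input)

# Two upstream units send activation to one downstream unit.
activations = [0.9, 0.2]    # activation levels of the sending units
weights     = [1.5, -0.8]   # connection strengths
print(unit_activation(activations, weights))
```

In a full network, many such units would be organized into layers, and learning would consist of gradually adjusting the connection weights.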
Episodic memory
Episodic memory holds memories of specific events in which you yourself
somehow participated. Episodic memory is memory for information about
one’s personal experiences. With episodic memory, the memories are encoded
in terms of personal experience. Recalling memories from the episodic system
takes the form of “Remember when . . ” In episodic memory, we hold onto
information about events and episodes that have happened to us directly. As
Tulving (1989) put it, episodic memory “enables people to travel back in time,
as it were, into their personal past, and to become consciously aware of
having witnessed or participated in events and happenings at earlier times”.
Organization of episodic memory is temporal; that is, one event will be
recorded as having occurred before, after, or at the same time as another.
Episodic memory has also been described as containing memories that are
temporally dated; the information stored has some sort of marker for when it
was originally encountered. Any of your memories that you can trace to a
single time are considered to be in episodic memory. If you recall your high
school graduation, or your first meeting with your first-year roommate, or the
time you first learned of an important event, you are recalling
episodic memories. Even if you don’t recall the exact date or even the year,
you know the information was first presented at a particular time and place,
and you have a memory of that presentation.
Explanation with an example
Gene, for example, survived a motor-cycle accident in 1981 (when he was 30
years old) that seriously damaged his frontal and temporal lobes, including
the left hippocampus. Gene shows anterograde amnesia and retrograde
amnesia. In particular, Gene cannot recall any specific past events, even with
extensive, detailed cues. That is, Gene cannot recall any birthday parties,
school days, or conversations. Schacter noted further that “even when detailed
descriptions of dramatic events in his life are given to him—the tragic
drowning of his brother, the derailment near his house, of a train carrying
lethal chemicals that required 240,000 people to evacuate their homes for a
week—Gene does not generate any episodic memories”
Gene recalls many facts (as opposed to episodes) about his past life. He knows
where he went to school; he knows where he worked. He can name former
co-workers; he can define technical terms he used at the manufacturing plant
where he worked before the accident. Gene’s memories, Schacter argued, are
akin to the knowledge we have of other people’s lives. You may know, for
example, about incidents in your mother’s or father’s lives that occurred before
your birth: where they met, perhaps, or some memorable childhood incidents.
You know about these events, although you do not have specific recall of
them. Similarly, according to Schacter, Gene has knowledge of some aspects
of his past (semantic memory), but no evidence of any recall of specific
happenings (episodic memory)
THEORIES OF FORGETTING
• Displacement theory
• Trace decay theory
• Interference theory
• Retrieval failure theory
• Consolidation theory
• Suppose you have just learned a seven-digit phone number when you are
given another number to memorize. Your short-term memory doesn’t have
the capacity to store both pieces of information. In order to recall the new phone
number, you’ll have to forget the first one. This isn’t always a conscious
process, but it can be. The moment you begin to focus on the new set of
numbers, the first one seems to “go away” or get confused with the new
numbers you are learning.
Serial Probe Task- In 1965, Waugh and Norman put both displacement theory
and trace decay theories to the test. They put participants through a “serial probe
task.” Each participant listened to a long list of letters. Later, the researchers
would call out one of the letters from the list (the probe), and the participants
had to name the letter that came after it. What they found showed that displacement
theory could explain some instances of forgetting, but not all of them. The interesting
part of the results was that when the list was read at a faster pace, participants
completed the task more successfully.
The trace decay theory, however, doesn’t explain why many people can clearly
remember past events, even if they haven’t given them much thought before.
Neither does it take into account the role of all the events that have taken place
between the learning and the recall of the memory. Just like the serial probe task
suggests that displacement theory is not “enough,” decay theory fails to cover
all instances of forgetting and remembering.
The interference theory was the dominant theory of forgetting throughout the
20th century. It asserts that the ability to remember can be disrupted both by our
previous learning and by new information. In essence, we forget because
memories interfere with and disrupt one another. For example, by the end of the
week, we won’t remember what we ate for breakfast on Monday because we
had many other similar meals since then. The first study on interference was
conducted by German psychologist John A. Bergstrom in 1892. He asked
participants to sort two decks of word cards into two piles. When the location of
one of the piles changed, the first set of sorting rules interfered with learning the
new ones, and sorting became slower.
Proactive interferences take place when old memories prevent making new
ones. This often occurs when memories are created in a similar context or
include near-identical items. Remembering a new code for the combination lock
might be more difficult than we expect. Our memories of the old code interfere
with the new details and make them harder to retain.
Retroactive interferences occur when old memories are altered by new ones.
Just like with proactive interference, they often happen with two similar sets of
memories. Let’s say you used to study Spanish and are now learning French.
When you try to speak Spanish, the newly acquired French words may interfere
with your previous knowledge.
The retrieval failure theory was developed by the Canadian psychologist and
cognitive neuroscientist Endel Tulving in 1974. According to this theory,
forgetting often involves a failure in memory retrieval. Although the
information stored in the long-term memory is not lost, we are unable to retrieve
it at a particular moment. A classic example is the tip of the tongue effect when
we are unable to remember a familiar name or word.
There are two main reasons for failure in memory retrieval. Encoding failure
prevents us from remembering information because it never made it into long-
term memory in the first place. Or the information may be stored in long-term
memory, but we can’t access it because we lack retrieval cues.
Semantic cues - Semantic cues are associations with other memories. For
example, we might have forgotten everything about a trip we took years ago
until we remember visiting a friend in that place. This cue will allow
recollecting further details about the trip.
These five theories are most frequently mentioned when discussing forgetting,
memory, and recall. Psychologists have devised other theories that may be
worth looking into. No one theory of forgetting covers all incidences of memory
loss and recall, so these theories are valid, too!
Motivated Theory of Forgetting
Friedrich Nietzsche was one of the first thinkers to suggest that people
intentionally forget their memories. Typically, these memories are traumatic or
shameful. This idea really took off when Freud expanded upon it.
Freud spoke more about the idea that people unintentionally forget their
memories; this process is called repression and is considered a defense
mechanism. Freud, however, believed he could recover repressed memories.
Today, psychologists mostly discredit this idea, as the mind can “change”
memories with leading questions and other methods. Still, this motivated theory
of forgetting is an important idea in the history of psychology.
Mnemonics
Mnemonics are memory tricks that can help you remember long strings of
information, often in a particular order. You have been using mnemonic
devices since…well, before you can remember!
Mnemonics are strategies used to improve memory. They are often taught in
school to help students learn and recall information.
You can also use mnemonic strategies to remember names, number sequences,
and even a grocery list. People learn in different ways. Tools that work for one
person may not be helpful for another. Fortunately, there are several ways to use
mnemonics.
Keyword Mnemonics
Studying a second (or third or fourth) language? Using the keyword mnemonic
method improves learning and recall, especially in the area of foreign language.
• First, you choose a keyword that somehow cues you to think of the
foreign word.
• Then, you imagine that keyword connected with the meaning of the word
you're trying to learn.
• The visualization and association should trigger the recall of the correct
word.
For example, if you're trying to learn the Spanish word for cat, which
is gato, first think of a gate and then imagine the cat sitting on top of the gate.
Even though the "a" sound in gato is short and the "a" sound in gate is long, the
beginnings are similar enough to help you remember the association between
gate and cat and to recall the meaning of gato.
Chunking
Chunking involves breaking a long string of information into smaller pieces. For
example, memorizing the following number, 47895328463, will likely take
a fair amount of effort. However, if it is chunked like this: 4789 532 8463, it
becomes easier to remember.
Musical Mnemonics
One way to successfully encode the information into your brain is to use music.
A well-known example is the "A-B-C" song, but there's no end to what you can
learn when it's set to music. You can learn the names of the countries of Africa,
science cycles, memory verses, math equations, and more.
If you search online, you'll find that there are some songs already created
specifically to help teach certain information, and for others, you'll have to
make up your own. And no, you don't have to be able to carry a tune or write
the music out correctly for this mnemonic method to work.
Music can be a helpful memory tool for people with mild cognitive
impairment.
Letter and Word Mnemonic Strategies
Acronyms and acrostics are typically the most familiar types of mnemonic strategies.
Acronyms use a simple formula of a letter to represent each word or phrase that
needs to be remembered.
For example, think of the NBA, which stands for the National Basketball
Association.
Or, if you're trying to memorize four different types of dementia, you might use
this acronym: FLAV, which would represent frontotemporal, Lewy body,
Alzheimer's, and vascular. Notice that the list is ordered in such a way as to more
easily form a "word," which you could not do if the list you need to memorize
has a fixed order.
An acrostic uses the same concept as the acronym except that instead of
forming a new "word," it generates a sentence that helps you remember the
information.
An often-used acrostic in math class is: Please Excuse My Dear Aunt Sally.
This acrostic mnemonic represents the order of operations in algebra and stands
for parentheses, exponents, multiplication, division, addition, and subtraction.
Module 5 - Thinking and Concept Formation
What is a Concept?
Types of concepts
When two people first meet, it is their first chance to form initial impressions and
judgments about the other. The observer goes through a concept-formation
process in this situation. There are two opposing cognitive theories for concept
formation.
Categorization
• Classical categorization
• Conceptual clustering
• Prototype theory
Classical categorization
Comes to us first from Plato, who, in his Statesman dialogue, introduces the
approach of grouping objects based on their similar properties. This approach
was further explored and systematized by Aristotle in his Categories treatise,
where he analyzes the differences between classes and objects. Aristotle also
applied intensively the classical categorization scheme in his approach to the
classification of living beings (which uses the technique of applying successive
narrowing questions such as "Is it an animal or vegetable?", "How many feet
does it have?", "Does it have fur or feathers?", "Can it fly?"...), establishing this
way the basis for natural taxonomy.
Conceptual clustering
Prototype theory
This theoretical view of the nature of concepts, known as the prototype view,
was proposed in the 1970s stemming from the work of Eleanor Rosch and
colleagues. The prototype view of
concepts denies the existence of necessary-and-sufficient feature lists (except
for a limited number of concepts such as mathematical ones), instead regarding
concepts as a different sort of abstraction (Medin & Smith, 1984). The
prototype view of concepts holds that prototypes of concepts include features
or aspects that are characteristic—that is, typical—of members of the category
rather than necessary and sufficient. No individual feature or aspect needs to be
present in the instance for it to count as a member of the category, but the more
characteristic features or aspects an instance has, the more likely it is to be
regarded as a member of the category.
JUDGEMENT
Judgement theories
The support theory was proposed by Tversky and Koehler (1994) based in part
on the availability heuristic. The key assumption is that any given event will
appear more or less likely depending on how it is described. A more explicit
description of an event will typically be regarded as having a greater subjective
probability because it draws attention to less obvious aspects of the event and
overcomes memory limitations.
Mandel (2005) found the overall estimated probability of a terrorist attack was
greater when participants were presented with explicit possibilities than when
they were not. Redelmeier et al. (1995) found this phenomenon in experts as well
as non-experts.
Sloman et al. (2004) obtained findings directly opposite to those predicted by
support theory. Thus, an explicit description can reduce subjective probability if
it leads us to focus on low-probability causes. Redden and Frederick (2011)
argued that providing an explicit description can reduce subjective probability by
making it more effortful to comprehend an event. Support theory, in its original
form, does not account for these findings.
Gigerenzer and Gaissmaier (2011) argued that heuristics are often very valuable.
They focused on fast and frugal heuristics that involve rapid processing of
relatively little information. One of the key fast and frugal heuristics is the take-
the-best strategy,
which has three components:
- Search rule – search cues in order of validity.
- Stopping rule – stop when a discriminatory cue is found.
- Decision rule – infer that the object favoured by that discriminating cue has the higher criterion value (see the sketch below).
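A minimal sketch of how the three components fit together, assuming two objects described by binary cues that are already ordered by validity; the cue names and values are invented for illustration:

```python
def take_the_best(object_a, object_b, cues_by_validity):
    """Return which object is inferred to score higher on the criterion.

    cues_by_validity: cue names ordered from most to least valid.
    Each object is a dict mapping cue name -> True/False.
    """
    for cue in cues_by_validity:             # search rule: follow validity order
        a, b = object_a[cue], object_b[cue]
        if a != b:                           # stopping rule: first discriminating cue
            return "A" if a else "B"         # decision rule: that cue alone decides
    return "no decision"                     # no cue discriminates

# Hypothetical example: which of two cities is larger?
city_a = {"has_airport": True, "is_capital": False, "has_university": True}
city_b = {"has_airport": True, "is_capital": True,  "has_university": False}
print(take_the_best(city_a, city_b, ["has_airport", "is_capital", "has_university"]))
# -> "B": the first cue that discriminates is "is_capital", and only city B has it.
```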
The most researched example is the recognition heuristic. If one of two objects
is recognised and the other is not, then we infer that the recognised object has the
higher value with respect to the criterion (Goldstein & Gigerenzer,
2002). Kruglanski and Gigerenzer (2011) argued that there is a two-step process
in deciding which heuristic to use:
• First, the nature of the task and individual memory limit the number of available
heuristics.
• Second, people select one of them based on the likely outcome of using it and
its processing demands.
There is good evidence that people often use fast and frugal heuristics. These
heuristics are fast and effective, and used particularly when individuals are under
time or cognitive pressure. The approach has several limitations. Too much
emphasis has been placed on using intuitions when humans have such a large
capacity for logical reasoning (Evans & Over, 2010). The use of the recognition
heuristic is more complex than assumed: people generally also consider why
they recognise an object and only then decide whether to use the recognition
heuristic (Newell, 2011). Other heuristic use is also more complex than claimed.
Far too little attention has been paid to the issue of the importance of the decision
that has to be made.
DECISION MAKING
Cognitive psychologists use the term decision making to refer to the mental
activities that take place in choosing among alternatives. Decisions are made in
the face of some amount of uncertainty.
Von Winterfeldt and Edwards (1986a) define rational decision making as something that “has to do
with selecting ways of thinking and acting to serve your ends or goals or moral
imperatives, whatever they may be, as well as the environment permits”.
We can divide decision-making tasks into five different categories or phases:
1 Setting Goals
When we try to understand why a person makes one decision rather than
another, it often turns out that the reasons have to do with the decision maker’s
goals for the decision (Bandura, 2001; Galotti, 2005). The idea in setting goals
is that the decision maker takes stock of his or her plans for the future, his or her
principles and values, and his or her priorities.
2 Gathering Information
Before making a decision, the decision maker needs information. Specifically,
she or he needs to know what the various options are. The decision maker then needs
to gather some information about at least some of the options. In addition to
information about options, decision makers may need or want to gather
information about possible criteria to use in making their choice. If you’ve
never bought a computer before, you might talk with computer-savvy friends, or
people in your company’s IT department, to get information about what features
they consider important.
5 Evaluating
The aim here is to reflect on the process and identify those aspects that could be
improved, as well as those that ought to be used again for similar decisions in
the future.
Improving Decision Making
One of the major obstacles to improving the ways in which people gather and
integrate information is overconfidence. People who believe their decision
making is already close to optimal simply will not see the need for
any assistance, even if it is available and offered. A second obstacle to
improving decision making has to do with people’s feelings and expectations
about how decisions ought to be made. Cultural expectations lead many of us
to trust our intuitions (or at least the intuitions of experts) over any kind of
judgment made with equations, computer programs, mathematical models, or
the like. Real improvement in reducing bias seems to require extensive practice
with the task, individual feedback about one’s performance, and some means of
making the statistical and/or probabilistic aspects of the decisions
clearer. Contrary to our (strong) intuitions, it is often better, fairer, more rational,
and in the long run more humane to use decision aids than to rely exclusively on
human impressions or intuitions.
Decision analysis (Keeney, 1982; von Winterfeldt & Edwards, 1986b) is an
emerging technology that helps people gather and integrate information in a
way similar to that used earlier in the MAUT analysis of choosing a major. It
uses human judges’ feelings, beliefs, and judgments of relevance but helps
ensure that integration of information is carried out in an unbiased way.
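A minimal sketch of the weighted-sum logic behind a MAUT-style analysis such as the "choosing a major" example mentioned above; the attributes, weights, and ratings are invented for illustration:

```python
def maut_score(ratings, weights):
    """Multi-attribute utility: weighted sum of attribute ratings."""
    return sum(ratings[attr] * weight for attr, weight in weights.items())

# Hypothetical attribute weights (importance) and ratings (0-10) for two majors.
weights = {"interest": 0.5, "job prospects": 0.3, "difficulty fit": 0.2}
psychology = {"interest": 9, "job prospects": 6, "difficulty fit": 7}
accounting = {"interest": 5, "job prospects": 9, "difficulty fit": 6}

for name, ratings in [("psychology", psychology), ("accounting", accounting)]:
    print(name, maut_score(ratings, weights))
# The option with the higher weighted sum is the one favoured by the analysis.
```

The point of such an aid is not that the numbers are objective, but that once the judge has supplied weights and ratings, the integration step is carried out consistently rather than impressionistically.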
Even though the nature of problems is diverse and may vary, every problem
includes three major components. These are:
a. Initial stage: which describes the situation at the beginning
of the problem,
b. Goal stage: which describes the situation when the problem has been solved, and
c. Obstacles: which describe the roadblocks or restrictions that make it
difficult for an individual to move from the initial stage to the goal stage.
The following section will deal with all of these processes in detail.
Algorithms are systematic, though often lengthy, methods of problem solving which are sure
to yield a solution. However, the process of using an algorithm can be time
consuming and inefficient. One specific method of algorithm is known as
exhaustive search, wherein one searches for all the possible solutions to a
problem using a specific system.
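As an illustration of exhaustive search, the following sketch solves an anagram by trying every possible ordering of its letters; the tiny word list is invented for the example:

```python
from itertools import permutations

def solve_anagram_exhaustively(letters, dictionary):
    """Try every possible ordering of the letters (an exhaustive search)."""
    solutions = set()
    for perm in permutations(letters):        # all n! orderings of the letters
        candidate = "".join(perm)
        if candidate in dictionary:
            solutions.add(candidate)
    return solutions

tiny_dictionary = {"listen", "silent", "enlist"}
print(solve_anagram_exhaustively("tsilen", tiny_dictionary))
# Guaranteed to find every solution, but the number of orderings grows
# factorially with the number of letters, which is why exhaustive search
# can be so time consuming and inefficient.
```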
Heuristics are generally known as rules of thumb. They are the quickly
available, seemingly appropriate solution to the problem. Though this method
is less time consuming, heuristics can sometimes lead to wrong solutions.
There is more research available in cognitive psychology on heuristics than on
algorithms. One reason is that we are wired to solve most of our everyday
problems, predominantly using heuristics rather than algorithms. Three of the
most widely used heuristics are the analogy, the means-ends heuristic, and the
hill-climbing heuristic.
a. Analogy
When someone uses a solution that was previously used to solve a
similar problem, it is referred to as analogy. Analogies are very widely
used by everyone in their day to day life. For instance, one study
reported that engineers created an average of 102
analogies during 9 hours spent on problem solving (Christensen &
Schunn, 2007). One interesting finding is that analogies are used when
people come up with breakthrough inventions. For instance, the Wright
brothers invented airplanes with the help of the analogy of birds flying
using their wings. Problem solvers must peel away the irrelevant,
superficial details in order to reach the core of the problem (Whitten &
Graesser, 2003). Researchers use the term problem isomorphs to refer to
a set of problems that have the same underlying structures and solutions,
but different specific details.
b. Means-end heuristics
This method requires the problem solvers to initially identify the
ends or solutions and then look for the means to achieving them. There
are two major components for the means-end heuristics. The first
component is when the problem solver divides the problem into sub-
problems. The second component is when the problem solver tries to
reduce the difference between the initial state and the goal state of each
of the sub problems. This strategy is seen as one of the most effective
and flexible methods of problem solving.
c. Hill-climbing heuristics
Hill climbing heuristics is one of the most straightforward
heuristics for problem solving. In this method, one continuously makes
choices to reach the goal using alternatives that seem appropriate at a
point of time. This is useful when one does not have enough information
about the various alternatives at a point of time, but is aware only of the
immediate step. The biggest drawback to this heuristic is that problem
solvers must consistently choose the alternative that appears to lead most
directly toward the goal. In doing so, they may fail to choose an indirect
alternative, which may have greater long-term benefits.
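A minimal sketch of the hill-climbing idea, assuming a simple numeric "closeness to goal" score; the example task is invented, and the code shows exactly the limitation noted above, since it only ever takes the move that looks best right now:

```python
def hill_climb(start, neighbours, closeness_to_goal, max_steps=100):
    """Repeatedly move to the neighbouring state that looks closest to the goal.
    Stops when no immediate move looks better than the current state."""
    state = start
    for _ in range(max_steps):
        options = neighbours(state)
        if not options:
            break
        best = max(options, key=closeness_to_goal)
        if closeness_to_goal(best) <= closeness_to_goal(state):
            break                  # stuck: no immediate step looks like progress
        state = best
    return state

# Hypothetical example: reach the goal value 10, moving by +1 or -1 from 0.
print(hill_climb(0, lambda s: [s - 1, s + 1], lambda s: -abs(10 - s)))  # -> 10
```

Because the scorer looks only one step ahead, the search stalls whenever every immediate move seems worse, which mirrors the drawback of the heuristic described above.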
2. Mental Set - It refers to the tendency to use the same solution from previous
problems, even though the problem could be solved by a different, easier
method. It reflects an over-reliance on top-down processing. The classic experiment on
mental set is Abraham Luchins’s (1942) water-jar problem. In the
experiment, participants had to measure a specific amount of water using
three jars of different capacities. Luchins found that subjects kept using
methods they had applied in previous trials, even if a more efficient solution
for the current trial was available.
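A brief worked illustration of the water-jar effect. The two problems below use
capacities that are commonly quoted for Luchins's task; the Python comparison
itself is only a sketch:

    # Jar capacities A, B, C and the amount to be measured (goal).
    trials = [
        {"A": 21, "B": 127, "C": 3, "goal": 100},  # only B - A - 2C works
        {"A": 23, "B": 49,  "C": 3, "goal": 20},   # B - A - 2C works, but so does A - C
    ]

    for t in trials:
        set_formula = t["B"] - t["A"] - 2 * t["C"]  # the practised routine
        shortcut    = t["A"] - t["C"]               # the simpler alternative
        print(f"goal={t['goal']}: B-A-2C={set_formula}, A-C={shortcut}")

    # goal=100: B-A-2C=100, A-C=18
    # goal=20:  B-A-2C=20,  A-C=20   (mental set hides the easier route)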
Mental set also involves:
● Fixed mindset, in which an individual believes that they possess a fixed
amount of intelligence and other skills, and that no amount of effort can
help them perform better.
● Growth mindset, in which an individual believes that intelligence and
other skills can be cultivated, so they challenge themselves to perform
better.
Deductive reasoning
Deductive reasoning is the process of drawing a specific conclusion from a set
of general premises; if the premises are true and the logic is valid, the
conclusion must also be true. It is a top-down approach.
Inductive reasoning
Inductive reasoning is the act of making a generalized conclusion based on
specific scenarios. It is a method of logical thinking that combines observations
with experiential information to reach a conclusion. When you use a specific set
of data or existing knowledge from past experiences to make decisions, you are
using inductive reasoning.
Inductive reasoning is a bottom-up approach.
Creativity
Creativity is a mental and social process involving the generation of
new ideas or concepts, or new associations of the creative mind between
existing ideas or concepts. An alternative conception of creativity is that it is
simply the act of making something new.
Types of Creativity
Experts also tend to distinguish between different types of creativity. The “four
c” model of creativity suggests that there are four different types:
● Mini-c: personally meaningful, learning-related creativity.
● Little-c: everyday creative problem solving and expression.
● Pro-C: professional-level creativity within a field.
● Big-C: eminent creativity that leaves a lasting mark on a culture or
discipline.
Components of Language
1. Phonology
Phonology is the use of sounds to encode messages within a spoken
human language. Babies are born with the capacity to learn any language
because they can make, and hear differences between, sounds that adults
cannot. This is what parents hear as baby talk: the infant is
trying out all of the different sounds they can produce.
As the infant begins to learn the language from their parents, they begin to
narrow the range of sounds they produce to the ones common to the language they
are learning, and by the age of 5, they no longer have the vocal range they had
previously.
2. Morphology
Morphology is the study of how words are structured and formed, or more
simply put, the study of morphemes.
Morphemes are the smallest units of language that carry meaning.
Not all morphemes are words. Many languages use affixes, which carry
specific grammatical meanings and are therefore morphemes, but are not
words. For example, English speakers do not think of the suffix “-ing” as a
word, but it is a morpheme. Working with morphemes rather than whole words
also allows anthropologists to translate languages more easily. For example,
in English, the prefix “un-” means "the opposite, not, or lacking,"
which distinguishes the word "unheard" from "heard."
3. Semantics
Semantics is the study of meaning. Some anthropologists have seen linguistics
as basic to a science of man because it provides a link between the biological
and sociocultural levels. Modern linguistics is diffusing widely in
anthropology itself among younger scholars, producing work of competence
that ranges from historical and descriptive studies to problems of semantic and
social variation. In the 1960s, Chomsky prompted a formal analysis of
semantics and argued that grammars needed to represent all of a speaker's
linguistic knowledge, including the meaning of words. Most semanticists
focused attention on how words are linked to each other within a language
through five different relations.
• synonymy - same meaning (ex: old and aged)
• homophony - same sound, different meaning (ex: would and wood)
• denotation - what words refer to in the "real" world (ex: having the word pig
refer to the actual animal, instead of pig meaning dirty, smelly, messy or
sloppy)
• connotation - additional meanings that derive from the typical contexts
in which they are used in everyday speech (ex: calling a person a pig,
not meaning the animal but meaning that they are dirty, smelly, messy
or sloppy)
4. Syntax
Syntax is the study of the arrangement and order of words, for example whether
the subject or the object comes first in a sentence.
There are many theoretical approaches to the study of syntax. The linguist Noam
Chomsky sees syntax as a branch of biology, since he views syntax as the
study of linguistic knowledge as it is represented in the human mind. Other
linguists take a Platonistic view, in that they regard syntax as the study of an
abstract formal system.
5. Speech sounds
Human speech sounds are traditionally divided between vowels and
consonants, but scientific distinctions are much more precise. An important
distinction between sounds in many languages is the vibration of the glottis,
which is referred to as voicing.
a. Phoneme
A phoneme is the smallest phonetic unit in a language that is capable
of conveying a distinction in meaning. For example, in English we can
tell that pail and tail are different words, so /p/ and /t/ are phonemes.
Two words differing in only one sound, like pail and tail are called a
minimal pair. The International Phonetic Association created the
International Phonetic Alphabet (IPA), a collection of standardized
representations of the sounds of spoken language.
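A tiny sketch of the minimal-pair idea; the is_minimal_pair helper and the
phoneme lists below are made up for illustration (real phonemic transcription
is more involved):

    # Two words form a minimal pair if their phoneme sequences are the same
    # length and differ in exactly one position.
    def is_minimal_pair(word1, word2):
        if len(word1) != len(word2):
            return False
        differences = sum(a != b for a, b in zip(word1, word2))
        return differences == 1

    # Phonemes written as IPA-style symbols in simple lists.
    pail = ["p", "eɪ", "l"]
    tail = ["t", "eɪ", "l"]
    pale = ["p", "eɪ", "l"]   # same phonemes as "pail" despite different spelling

    print(is_minimal_pair(pail, tail))  # True:  /p/ vs /t/ distinguishes meaning
    print(is_minimal_pair(pail, pale))  # False: identical sounds, no difference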
When a native speaker does not recognize different sounds as being distinct,
those sounds are called allophones. For example, in English we consider
the p in pin and the p in spin to belong to the same phoneme, which makes them
allophones. In Mandarin Chinese, however, these two similar sounds are treated
as separate phonemes. The minimal units of sound that native speakers
recognize as distinct are the phonemes of their language. Each language has a
small set of phonemes, usually about 20 to 60 in number, which serve as the basic
distinctive units of speech sound by which morphemes, words,
and sentences are represented.
b. Morpheme
A morpheme is the smallest unit of language that carries meaning; it may be a
whole word or a part of a word, such as a prefix or suffix (see the discussion
of morphology above).
• Piaget suggested that children go through four separate stages in a fixed order
that is universal in all children. Piaget declared that these stages differ not only
in the quantity of information acquired at each, but also in the quality of
knowledge and understanding at that stage. He suggested that movement from
one stage to the next occurred when the child reached an appropriate level of
maturation and was exposed to relevant types of experiences.
Chomsky proposed that children are born with an innate capacity for language, a
natural faculty that already contains knowledge of the structure that every
human language shares. This natural faculty has become known as the Language Acquisition
Device (LAD). Chomsky’s ground-breaking theory remains at the centre of the
debate about language acquisition. However, it has been modified, both by
Chomsky himself and by others. Chomsky’s original position was that the LAD
contained specific knowledge about language. Dan Isaac Slobin has proposed that
it may be more like a mechanism for working out the rules of language.
Chomsky (1975) also proposed that language is modular; people have a set of
specific linguistic abilities that is separated from our other cognitive processes,
such as memory and decision making. Because language is modular, Chomsky
(2002, 2006) argued that young children learn complex linguistic structures many
years before they master other, simpler tasks, such as mental arithmetic. In
addition, Chomsky (1957, 2006) pointed out the difference between the deep
structure and the surface structure of a sentence. The surface structure is
represented by the words that are actually spoken or written. In contrast, the deep
structure is the underlying, more abstract meaning of a sentence. People use
transformational rules to convert deep structure into a surface structure that
they can speak or write. Two sentences may have very different surface
structures, but very similar deep structures. Consider, for example, ‘‘Sara threw the ball’’
and ‘‘The ball was thrown by Sara.’’ These two surface structures differ: none of
the words occupies the same position in both sentences. In addition, three of the
words in the second sentence do not even appear in the first sentence. However,
‘‘deep down,’’ speakers of English feel that the sentences have identical core
meanings. Chomsky (1957, 2006) also pointed out that two sentences may have
identical surface structures but very different deep structures; these are called
ambiguous sentences.
Children are often heard making grammatical errors such as “I sawed” and
“sheeps,” which they would not have learned from hearing adults communicate.
This shows the child using the LAD to get to grips with the rules of language.
Once the child has mastered this skill, they are only in need of learning new words
as they can then apply the rules of grammar from the LAD to form sentences.
Chomsky proposed that native-speaking children would become fluent by the age
of ten. He also argued that if children learn two languages from birth, they are
more likely to be fluent in both.
Limitations of Chomsky’s Theory
● He did not study real children. The theory relies on children being exposed
to language but takes no account of the interaction between children and
their carers. Nor does it recognise the reasons why a child might want to
speak, the functions of language.
● He has made a number of strong claims about language: in particular, he
suggests that language is an innate faculty – that is to say that we are born
with a set of rules about language in our minds, which he refers to as the
‘Universal Grammar’. Universal grammar is the basis upon which all human
languages build. Chomsky’s ideas have profoundly affected linguistics and
mind-science in general.
● Critics attacked his theories from the get-go and are still attacking,
paradoxically demonstrating his enduring dominance. … For example, in
his book The Kingdom of Speech, Tom Wolfe asserts that both Darwin and
“Noam Charisma” were wrong.
● The theory does not take into consideration children with conditions such as
Down syndrome that may affect or delay their language development.
● He overemphasised grammar and sentence structure rather than how
children construct meaning from their sentences.