
COGNITIVE PSYCHOLOGY

MODULE 1: INTRODUCTION TO COGNITIVE PSYCHOLOGY

UNIT 1: HISTORY AND EMERGENCE OF COGNITIVE PSYCHOLOGY

Cognitive psychology is that branch of psychology concerned with how people acquire,
store, transform, use, and communicate information (Neisser, 1967). It deals with mental
life. Cognition is the mental activity which describes the acquisition, storage, transformation
and use of knowledge.

The Greek philosopher Aristotle (384–322 BC) examined topics such as perception, memory,
and mental imagery. He also discussed how humans acquire knowledge through experience
and observation (Barnes, 2004; Sternberg, 1999). Aristotle emphasized the importance of
empirical evidence, or scientific evidence obtained by careful observation and
experimentation. His emphasis on empirical evidence and many of the topics he studied are
consistent with twenty-first-century cognitive psychology.

Aristotle’s concept was supported by a number of philosophers but opposed by others, including his teacher Plato. As a result, two views on the nature of mind and knowledge were formed: empiricism and nativism.

Empiricism is the view that knowledge comes from an individual’s own experience—i.e., from
the empirical information that people collect from their senses and experiences. Empiricists
recognize individual differences in genetics but emphasize human nature’s malleable, or
changeable, aspects. Empiricists believe people are the way they are, and have the
capabilities they have, largely because of previous learning. One mechanism by which such
learning is thought to take place is through the mental association of two ideas. Aristotle,
George Berkeley, David Hume, and later, James Mill supported this view.

Nativism, by contrast, emphasizes the role of constitutional factors—of native ability—over the role of learning in the acquisition of abilities and tendencies. Nativists attribute differences in individuals’ abilities less to differences in learning than to differences in original, biologically endowed capacities and abilities. Nativists often suggest that some cognitive functions come built in, as part of our legacy as human beings. Plato, Rene Descartes and Immanuel Kant supported this view.

A major advance in the rise of cognitive psychology came in 1879, when Wilhelm Wundt constructed the first laboratory of psychology at Leipzig, Germany, to study the science of the mind. He also contributed to the development of the structuralist school of psychology. He wanted to discover the laws and principles that explained our immediate conscious experience. In particular, Wundt thought any conscious thought or idea resulted from a combination of sensations that could be defined in terms of exactly four properties: mode (for example, visual, auditory, tactile, olfactory), quality (such as colour, shape, texture), intensity, and duration. Wundt proposed that psychology should study mental processes, using a technique called introspection. Wundt was a pioneer in the study of many cognitive phenomena: he was the first to approach cognitive questions scientifically and the first to design experiments to test cognitive theories.

Another important German psychologist, Hermann Ebbinghaus (1850–1909), focused on factors that influence human memory. He constructed more than 2,000 nonsense syllables (for instance, DAK) and tested his own ability to learn these stimuli. Meanwhile, in the United States, similar research was being conducted by psychologists such as Mary Whiton Calkins (1863–1930). For example, Calkins reported a memory phenomenon called the recency effect (Madigan & O’Hara, 1992). The recency effect refers to the observation that our recall is especially accurate for the final items in a series of stimuli.

Another crucial figure in the history of cognitive psychology was an American named
William James. William James considered the subject matter of psychology to be our
experience of external objects. He assumed that the way the mind works has a great deal to
do with its function—the purposes of its various operations. Hence the term functionalism
was applied to his approach. Fellow functionalists such as John Dewey and Edward L.
Thorndike, for example, shared James’s conviction that the most important thing the mind
did was to let the individual adapt to her or his environment.

Further, a radical turn occurred with the advent of behaviourism in the 20th century. According to the behaviourist approach, psychology must focus on objective, observable reactions to stimuli in the environment (Pear, 2001). Behaviourism contributed significantly to contemporary cognitive psychology’s research methods. During this era, the behaviourist Edward Tolman explained latent learning and the cognitive map, while the Gestalt psychologist Wolfgang Köhler elucidated the process of insight learning.

Another important event occurred in 1932, when Sir Frederic Bartlett rejected the then-popular view that memory and forgetting could be studied by means of nonsense syllables, as Ebbinghaus had advocated during the previous century. Bartlett instead explained reconstructive memory and the schema.

An additional school that facilitated the development of the field of cognitive psychology was Gestalt psychology. Gestalt psychologists, who studied mainly perception and problem solving, believed an observer did not construct a coherent perception from simple, elementary sensory aspects of an experience but instead apprehended the total structure of an experience as a whole. In other words, they believed that the whole is more than the sum of its parts. They valued the unity of psychological phenomena. Gestalt psychologists provided the framework for the concept of problem solving in psychology.

Later, the concept of genetic epistemology propounded by Jean Piaget helped to shape
modern cognitive psychology. Piaget sought to describe the intellectual structures
underlying cognitive experience at different developmental points through an approach he
called genetic epistemology. Piaget’s observations of infants and children convinced him
that a child’s intellectual structures differ qualitatively from those of a mature adult. He elaborated some of the most important terms in cognitive psychology, such as mental schema, accommodation, assimilation and organization. Piaget believed that children in
different stages of cognitive development used different mental structures to perceive,
remember, and think about the world.

The investigations into individual differences in human cognitive abilities by Sir Francis Galton and his followers are another milestone in the emergence of cognitive psychology.
Galton (1883/1907) studied a variety of cognitive abilities, in each case focusing on ways of
measuring the ability and then noting its variation among different individuals. Among the
abilities he studied (in both laboratory and naturalistic settings) was mental imagery. His
invention of tests and questionnaires to assess mental abilities inspired later cognitive
psychologists to develop similar measures.

Despite the early attempts to define and study mental life, psychology, especially American
psychology, came to embrace the behaviourist tradition in the first five decades of the
1900s. A number of historical trends, both within and outside academia, came together in
the years during and following World War II to produce what many psychologists think of as
a “revolution” in the field of cognitive psychology. This cognitive revolution, a new series of
psychological investigations, was mainly a rejection of the behaviourist assumption that
mental events and states were beyond the realm of scientific study or that mental
representations did not exist. The major by-product of this revolution is the person–
machine system. It is the idea that machinery operated by a person must be designed to
interact with the operator’s physical, cognitive, and motivational capacities and limitations.

At about the same time, developments in the field of linguistics, the study of language,
made clear that people routinely process enormously complex information. Work by linguist
Noam Chomsky revolutionized the field of linguistics, and both linguists and psychologists
began to see the central importance of studying how people acquire, understand, and
produce language.

Another strand of the cognitive revolution came from developments in neuroscience, the
study of the brain-based underpinnings of psychological and behavioural functions. A major
debate in the neuroscience community had been going on for centuries, all the way back to
Descartes, over the issue of localization of function.

Such is the history and emergence of cognitive psychology.

UNIT 2: COGNITIVE PSYCHOLOGY- AN INTERDISCIPLINARY FIELD

Cognitive psychology has had an enormous influence on the discipline of psychology. Most
cognitive psychologists prior to the 1980s did indeed conduct research in artificial
laboratory environments, often using tasks that differed from daily cognitive activities. A by-
product of the cognitive revolution is that scholars from categorically discrete areas—
including linguistics, computer science, developmental psychology, and cognitive
psychology—got together and focused on their common interests such as the structure and
process of cognitive abilities. Collectively, these individuals created a united front to defeat
behaviourism. The various interdisciplinary fields in cognitive psychology are elaborated
below.
Cognitive neuroscience

Cognitive neuroscience combines the research techniques of cognitive psychology with various methods of assessing the structure and function of the brain. Researchers have discovered which structures in the brain are activated when people perform a variety of cognitive tasks. Psychologists now use neuroscience techniques to explore the kinds of cognitive processes that we use in our interactions with other people; this new discipline is called social cognitive neuroscience. Cognitive psychologists use various neuroscience techniques such as brain-lesion studies, positron emission tomography (PET), functional magnetic resonance imaging (fMRI), the event-related potential (ERP) technique and the single-cell recording technique.

Artificial intelligence

Artificial intelligence (AI), a branch of computer science, seeks to explore human cognitive
processes by creating computer models that accomplish the same tasks that humans do
(Boden, 2004; Chrisley, 2004). Researchers in artificial intelligence have tried to explain how
you recognize a face, create a mental image, write a poem, as well as hundreds of additional
cognitive accomplishments (Boden, 2004; Farah, 2004; Thagard, 2005). Cognitive psychologists use various artificial intelligence techniques in their research, such as the computer metaphor, pure AI, computer simulation and the parallel distributed processing approach.

Cognitive science

Cognitive psychology is part of a broader field known as cognitive science. Cognitive science
is a contemporary field that tries to answer questions about the mind. As a result, cognitive
science includes three disciplines we’ve discussed so far—cognitive psychology,
neuroscience, and artificial intelligence. It also includes philosophy, linguistics,
anthropology, sociology, and economics (Sobel, 2001; Thagard, 2005). It is the field that has contributed most to cognitive psychology.

UNIT 3: CONTRIBUTIONS OF VARIOUS SCHOOLS OF PSYCHOLOGY TO COGNITIVE PSYCHOLOGY (brief)
Structuralism

Structuralists defined psychology as the science that studies conscious experience. The major contribution of structuralism to cognitive psychology is the technique of introspection. Introspection (objective introspection) is the process of examining and measuring one’s own thoughts and mental activities. Wundt was influenced by John Locke’s view (tabula rasa).

Functionalism

William James developed the first psychological laboratory in America at Harvard University
(1890) and the school of functionalism. James considered the subject matter of psychology
to be our experience of external objects. Perhaps James’s most direct link with modern cognitive psychology is his view of memory, which comprises the structure and process of memory.

Behaviourism

Skinner believed in the existence of images, thoughts and the like and agreed they were
proper objects of study, but he objected to treating mental events and activities as
fundamentally different from behavioural events and activities. Functional analysis was one technique introduced by Skinner to analyse the relationship between stimuli and behaviours. Behaviourism has made important contributions to the field of cognitive psychology. The concepts of the cognitive map and latent learning developed by Edward Tolman have widened the field of cognitive psychology.

Gestalts

Gestalt ideas are part of the study of cognitive psychology. Gestalt psychologists believe that the whole is more than the sum of its parts. They studied mainly perception and problem solving, which are among the main concepts in cognitive psychology. This school has also contributed to concepts of learning, memory and thought processes. The Gestalt psychologists rejected
structuralism, functionalism, and behaviourism as offering incomplete accounts of
psychological and, in particular, cognitive experiences. They believed that the mind imposes
its own structure and organization on stimuli and, in particular, organizes perceptions into
wholes rather than discrete parts.

Humanism

The school of humanism has contributed broadly to the field of cognitive psychology. The qualities of creativity, competence and problem-solving in human beings are elucidated in this school.

UNIT 4: INTRODUCTION TO MODELS OF COGNITIVE PSYCHOLOGY: INFORMATION PROCESSING, CONNECTIONISM.

Information Processing Approach

During the 1950s, communication science and computer science began to develop and gain
popularity. Researchers then began speculating that human thought processes could be
analysed from a similar perspective (Leahey, 2003; MacKay, 2004). Two important
components of the information-processing approach are that (a) a mental process can be
compared with the operations of a computer, and (b) a mental process can be interpreted
as information progressing through the system in a series of stages, one step at a time.

Central to the information-processing approach is the idea that cognition can be thought of as information passing through a system. Here, the brain is considered the hardware and cognitive processes the software. The information-processing approach has four major assumptions:

1. Humans as symbol manipulators.
Information-processing theorists assume that people are general-purpose symbol manipulators, i.e., they can perform astonishing cognitive acts by applying only a few mental operations to symbols. Information is then stored symbolically, and the way it is coded and stored greatly affects how easy it is to use later.
2. Data: information from the environment and processes.
This assumption concerns the memory stores, where information is held for possible later use, and the different processes that operate on the information at different points or transfer it from store to store. Recognition, detection, recoding and retrieval are some of the processes in human memory storage.
3. Human thought: a system of interrelated capabilities.
Different individuals have different cognitive capacities: different attention spans, memory capacities and language skills, to name a few. Information-processing theorists try to find the relationships between these capacities to explain how individuals go about performing specific cognitive tasks.
4. Humans as active information seekers and scanners.
Information is stored in human memory in three different ways. Sensory memory is a memory system that retains representations of sensory input for brief periods of time. Short-term memory is the memory system that holds information we are processing at the moment. Our third memory system, long-term memory, allows us to retain vast amounts of information for very long periods of time.

This approach is rooted in structuralism. Its proponents hold that information processing is sequential, and they rely on the computer metaphor. Finally, one of the major information-processing models is the Atkinson-Shiffrin model, or modal model of memory.
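To make the serial-stage idea concrete, here is a minimal, illustrative Python sketch of information flowing through modal-model-style stages. The function names and the flags for attention and rehearsal are assumptions made for demonstration; they are not part of the source text.

def sensory_memory(stimulus):
    # Hold a raw representation of the input very briefly.
    return stimulus

def short_term_memory(trace, attended):
    # Only attended information moves on for further processing.
    return trace if attended else None

def long_term_memory(item, rehearsed):
    # Rehearsed items are encoded for long-term storage.
    return item if (item is not None and rehearsed) else None

# Information progresses through the system one stage at a time.
trace = sensory_memory("DAK")
item = short_term_memory(trace, attended=True)
stored = long_term_memory(item, rehearsed=True)
print(stored)  # -> DAK

Each stage runs only after the previous one finishes, which is the sense in which this approach treats processing as sequential.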

Connectionist Approach

The connectionist approach, or parallel distributed processing (PDP), is another paradigm of cognitive psychology. Connectionism depicts cognition as a network of connections among simpler processing units (McClelland, 1988). Connectionism seeks to replace the computer metaphor of the information-processing framework with a brain metaphor. The connectionist approach assumes that cognitive processes occur in parallel, i.e., many at the same time. The connectionist approach is also called the neural network model because the processing units are sometimes compared to neurons.

Each unit is connected to other units in a large network. Each unit has some level of
activation at any particular moment in time. The exact level of activation depends on the
input to that unit from both the environment and other units to which it is connected.
Connections between two units have weights, which can be positive or negative. A
positively weighted connection causes one unit to excite, or raise the level of activation of
units to which it is connected; a negatively weighted connection has the opposite effect,
inhibiting or lowering the activation of connected units.

The connectionist framework allows for a wide variety of models that can vary in the
number of units hypothesized, number and pattern of connections among units, and
connection of units to the environment. All connectionist models share the assumption,
however, that there is no need to hypothesize a central processor that directs the flow of
information from one process or storage area to another. Instead, different patterns of
activation account for the various cognitive processes (Dawson, 1998). Knowledge is not
stored in various storehouses but within connections between units. Learning occurs when
new connective patterns are established that change the weights of connections between
units.
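A minimal illustrative sketch of this mechanism in Python follows. The unit activations, weights, and external inputs are invented for demonstration and are not values from any connectionist model in the source text.

import numpy as np

# Current activation of three interconnected units.
activation = np.array([0.8, 0.2, 0.0])

# weights[i, j] is the connection from unit j to unit i:
# positive weights excite, negative weights inhibit.
weights = np.array([
    [0.0,  0.5, -0.3],
    [0.5,  0.0,  0.7],
    [-0.3, 0.7,  0.0],
])

# Input arriving from the environment.
external_input = np.array([0.1, 0.0, 0.4])

# One parallel update step: every unit's new activation depends on the
# environment plus the weighted activations of its neighbours, squashed
# into the 0-1 range. "Knowledge" lives in the weights matrix; learning
# would mean adjusting those weights.
net_input = weights @ activation + external_input
activation = 1.0 / (1.0 + np.exp(-net_input))
print(activation)

Note that all units are updated in one step, with no central processor directing the flow; this is the parallel, distributed character of the approach.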

The connectionist approach has some major assumptions:

1. The cognitive system is made up of billions of interconnected nodes (neurons) that together form complex networks.
2. Nodes within a network can be activated, and the pattern of activation corresponds to conscious experience. The stronger the connection, the more likely the connected nodes are to activate together.
3. Networks operate in parallel.
4. Processing of a single task is distributed throughout the brain, not confined to one specific location.
5. Connections between two nodes are modelled on the way neurons interact. The effect of one neuron on another can be excitatory or inhibitory.
6. The connectionist approach hypothesizes that knowledge is not stored in various storehouses but within connections between units. Learning occurs when new connections are established.
The fundamental premise of connectionism is that individual neurons do not transmit large amounts of symbolic information. Instead, they compute by being appropriately connected to large numbers of similar units. This is in sharp contrast to the conventional computer model of intelligence. Feldman and Ballard (1982), in an early description of connectionism, argued that this approach is more consistent with the way the brain functions than an information-processing approach.

Comparison

Information-processing model:
• Computer metaphor
• Drawn from structuralism
• Serial processing
• Explains cognition at the symbolic level
• Based on computer science
• Assumes a central processor, like the brain
• Compared to a computer system
• Cognition is thought of as information passing through a system

Connectionist approach:
• Brain metaphor
• Drawn from functionalism
• Parallel processing
• Explains cognition at the subsymbolic level
• Based on cognitive neuropsychology and cognitive neuroscience
• No central processor, only weighted connections
• Compared to the nervous system
• Cognition is a network of connections among simpler processing units

UNIT 5: LIMITATIONS OF LABORATORY STUDIES AND IMPORTANCE OF ECOLOGICAL VALIDITY

Many experiments fail to fully capture real-world phenomena in the experimental task or research design. The laboratory setting, or the artificiality or formality of the task, may prevent research participants from behaving normally. The kinds of tasks amenable to experimental study may not be those most important or most common in everyday life. Experiments therefore sometimes risk studying phenomena that relate only distantly to people’s real-world experience.

Observational studies have the advantage that the things studied occur in the real world
and not just in an experimental laboratory. Psychologists call this property ecological
validity. The ecological approach propounded by J. J. Gibson (1979) holds that all cognitive
activities are shaped by the culture and by the context in which they occur. This tradition
relies less on laboratory experiments or computer simulations and more on naturalistic
observation and field studies to explore cognition. Gibson held that perception consists of
the direct acquisition of information from the environment. This makes it necessary to study
cognitive activities in a natural setting.
MODULE 2: ATTENTION

UNIT 1: MODEL OF ATTENTION: FUNCTIONS OF EXECUTIVE, PRECONSCIOUS AND CONSCIOUS PROCESSING, ALERTING MECHANISM (IPA MODEL)

Attention is a concentration of mental activity that allows you to take in a limited portion of the vast stream of information available from both your sensory world and your memory (Fernandez-Duque & Johnson, 2002; Styles, 2006; Ward, 2004). In other words, attention is the concentration of mental effort on sensory or mental events. Many contemporary ideas about attention are based on the premise that an information-processing system’s capacity to handle the flow of input is determined by the limitations of that system.

Alerting Mechanism

Several regions of the brain are responsible for attention, including some structures that are
below the surface of the cerebral cortex (Just et al., 2001; Posner & Rothbart, 2007b).
Michael Posner and Mary Rothbart (2007a, 2007b) propose that three systems in the cortex
manage different aspects of attention:

(1) the orienting attention network,

(2) the executive attention network, and

(3) the alerting attention network.

This third system, the alerting attention network, is responsible for making you sensitive
and alert to new stimuli; it also helps to keep you alert and vigilant for long periods of time
(Posner & Rothbart, 2007a, 2007b).

UNIT 2: SELECTIVE ATTENTION: FEATURES OF BOTTOM-UP AND TOP-DOWN PROCESSING


The term selective attention refers to the fact that we usually focus our attention on one or a few tasks or events rather than on many. In other words, selective attention is concerned with how we select a few stimuli, from the large number reaching us at any given time, for admission into our awareness. Selective attention has two main aspects, viz., top-down and bottom-up processing.

The term bottom-up (or data-driven) processing essentially means that the perceiver starts with small bits of information from the environment that he combines in various ways to form a percept. Here, attention is given only to the information in the distal stimulus. Bottom-up processing emphasizes the importance of the stimulus in object recognition. Specifically, the physical stimuli from the environment are registered on the sensory receptors. The combination of simple, bottom-level features allows us to recognize more complex, whole objects. In other words, bottom-up processing is perception that proceeds by recognizing and processing information from the individual components of a stimulus and moving to the perception of the whole. Context effects and expectation effects are the two limitations of bottom-up processing.

In top-down (also called theory-driven or conceptually driven) processing, the perceiver’s expectations, theories, or concepts guide the selection and combination of the information
in the pattern recognition process. They are directed by expectations derived from context
or past learning or both. Our expectations at the higher (or top) level of visual processing
will work their way down and guide our early processing of the visual stimulus in top-down
processing. Top-down processing is especially strong when stimuli are incomplete or
ambiguous. Top-down processing is also strong when a stimulus is registered for just a
fraction of a second. In simple words, we often proceed in accordance with what our past
experience tells us to expect, and therefore we don’t always analyse every feature of most
stimuli we encounter. This is known as top-down processing.

Top-down and bottom-up processing occur simultaneously and interact with each other.
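The interaction can be made concrete with a small, hypothetical Python sketch: a data-driven score from letter features is combined with an expectation-driven score from sentence context. The percept, the vocabulary, and the expectation values are all invented for illustration.

def bottom_up_score(percept, candidate):
    # Data-driven: how well the letters actually seen match the candidate word.
    matches = sum(p == c for p, c in zip(percept, candidate))
    return matches / max(len(percept), len(candidate))

def top_down_score(context, candidate):
    # Expectation-driven: how strongly prior context predicts the word.
    expectations = {"the cat chased the": {"mouse": 0.8, "house": 0.1}}
    return expectations.get(" ".join(context), {}).get(candidate, 0.0)

# An ambiguous first letter ("_") is resolved by combining both sources.
percept = "_ouse"
context = ["the", "cat", "chased", "the"]
for candidate in ("mouse", "house"):
    score = bottom_up_score(percept, candidate) + top_down_score(context, candidate)
    print(candidate, round(score, 2))  # "mouse" wins because context favours it

Bottom-up evidence alone cannot distinguish the two candidates here; the top-down expectation settles the ambiguity, which is exactly the case where top-down processing is said to be strongest.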

Bottom-up processing:
• processes the fundamental characteristics of a stimulus
• also known as data-driven processing

Top-down processing:
• processes stimuli on the basis of expectations and past experience
• also known as theory-driven or conceptually driven processing

UNIT 3: AUTOMATICITY, MULTI-TASKING AND DIVISION OF ATTENTION.

Automaticity, multi-tasking and division of attention are the phenomena that take place
through automatic processing.

Highly practiced activities become automatic and thereby require less attention to perform
than do new or slightly practiced activities. This is the basis of automatic processing. Posner
and Snyder (1975) offered three criteria for cognitive processing to be called automatic
processing:

(1) It must occur without intention;

(2) it must occur without involving conscious awareness; and

(3) it must not interfere with other mental activity.

For example, while driving, a person has to steer the car, watch out for traffic rules and talk to a companion. Schneider and Shiffrin (1977) examined automatic processing of information

under well-controlled laboratory conditions. They asked participants to search for certain
targets, either letters or numbers, in different arrays of letters or numbers, called frames.
For example, a participant might be asked to search for the target J in an array of letters: B
M J K T. Previous work had suggested that when people search for targets of one type (such
as numbers) in an array of a different type (such as letters), the task is easy. Numbers
against a background of letters seem to “pop out” automatically. In fact, the number of
nontarget characters in an array, called distractors, makes little difference if the distractors
are of a different type from the targets. So, finding a J among the stimuli 1, 6, 3, J, 2 should
be about as easy as finding a J among the stimuli 1, J, 3. Finding a specific letter against a
background of other letters seems much harder.

Schneider and Shiffrin (1977) had two conditions in their experiment. In the varied-mapping
condition, the set of target letters or numbers, called the memory set, consisted of one or
more letters or numbers, and the stimuli in each frame were also letters or numbers.
Targets in one trial could become distractors in subsequent trials. In the consistent-mapping
condition, the target memory set consisted of numbers and the frame consisted of letters,
or vice versa. Stimuli that were targets in one trial were never distractors in other trials. The
task in this condition was expected to require less capacity. In addition, Schneider and
Shiffrin (1977) varied three other factors to manipulate the attentional demands of the task.
They were frame size (the number of letters and numbers presented in each display), frame
time (the length of time each array was displayed) and memory set (the number of targets
the participant was asked to look for in each trial).

The results showed that in the consistent-mapping condition, participants’ performance varied only with the frame time, not with frame size or memory set. In the varied-mapping condition, participants’ performance in detecting the target depended on all three variables: frame size, frame time and memory set.

Schneider and Shiffrin (1977) concluded that automatic processing is used for easy tasks and with familiar items. It operates in parallel (meaning it can operate simultaneously with other processes) and does not strain capacity limitations.
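The contrast between the two conditions can be sketched with a hypothetical reaction-time model in Python. The base time and comparison cost below are invented constants for illustration, not Schneider and Shiffrin’s data.

def search_time_ms(frame_size, memory_set, consistent):
    BASE = 400.0        # assumed fixed cost of perceiving a frame
    COMPARISON = 40.0   # assumed cost of each controlled, serial comparison
    if consistent:
        # Automatic detection: targets "pop out", so time is roughly
        # independent of frame size and memory-set size.
        return BASE
    # Controlled search: each frame item is compared with each target in turn.
    return BASE + COMPARISON * frame_size * memory_set

for frame_size in (2, 4, 8):
    print(frame_size,
          search_time_ms(frame_size, memory_set=2, consistent=True),
          search_time_ms(frame_size, memory_set=2, consistent=False))

The flat times in the consistent-mapping column and the steeply rising times in the varied-mapping column mirror the qualitative pattern the experiment reported.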

Note: Attending to more than one act at a time is known as division of attention. Multi-
tasking is an example of division of attention.

[Refer controlled processing in Galotti]

Automatic Processing:
• parallel process
• not intentional
• occurs without conscious awareness
• does not interfere with other mental activity
• used for easy and familiar tasks

Controlled Processing:
• serial process
• requires attention
• under conscious control
• limited by mental capacity
• used for difficult and unfamiliar tasks

Similarity: both are involved in divided attention.

UNIT 4: MAJOR CONCEPTS IN ATTENTION: BOTTLENECK & SPOTLIGHT CONCEPTS, EARLY AND LATE SELECTION.

Bottleneck model

Early concepts in attention emphasized that people are extremely limited in the amount of
information that they can process at any given time. Bottleneck theories proposed a similar
narrow passageway in human information processing. In other words, this bottleneck limits
the quantity of information to which we can pay attention. Thus, when one message is
currently flowing through a bottleneck, the other messages must be left behind. Therefore,
if the amount of information available at any given time exceeds capacity, the person uses
an attentional filter to let some information through and block the rest. Only material that
gets past the filter can be analyzed later for meaning. Researchers proposed many variations of this bottleneck theory, the most prominent being Broadbent’s filter model (1958) and Treisman’s attenuation theory (1964). This model held that attention is a serial process.

Moray (1959) discovered one of the most famous phenomena bearing on this model, called the “cocktail party effect”.

Spotlight Approach
Attention is a spotlight that highlights whatever information the system is currently focused
on (Johnson & Dark, 1986). Accordingly, psychologists are concerned less with determining
what information can’t be processed (bottle neck metaphor) than with studying what kinds
of information people choose to focus on (spotlight metaphor).

The spotlight approach holds that attention can be moved from one area to another, i.e., it can be directed and redirected to various kinds of incoming information. Attention, like a spotlight, has fuzzy boundaries.

[Refer Galotti]

Early and Late Selection Theory

Deutsch and Deutsch (1963) proposed a theory called the late-selection theory. Later elaborated and extended by Norman (1968), this theory holds that all messages are routinely processed for at least some aspects of meaning—that selection of which message to respond to happens “late” in processing. In late-selection theory, recognition of familiar objects proceeds unselectively and without any capacity limitations. Note that
filter theory hypothesizes a bottleneck—a point at which the processes a person can bring
to bear on information are greatly limited—at the filter. Late-selection theory also describes
a bottleneck but locates it later in the processing, after certain aspects of the meaning have
been extracted. All material is processed up to this point, and information judged to be
most “important” is elaborated more fully. This elaborated material is more likely to be
retained; unelaborated material is forgotten. A message’s “importance” depends on many
factors like context, personal significance of certain kinds of content and level of alertness.
At low levels of alertness only very important messages capture attention. At higher levels
of alertness, less important messages can be processed.

Pashler (1998) argues that the bulk of the evidence suggests that information in the unattended channel sometimes receives some processing for meaning.
At the same time, it appears true that most results thought to demonstrate late selection
could be explained in terms of either attentional lapses (to the attended message) or special
cases of particularly salient or important stimuli. In any event, it seems unlikely that
unattended messages are processed for meaning to the same degree as are attended
messages.

UNIT 5: THEORIES OF ATTENTION: FILTER MODEL-BROADBENT, ATTENUATION THEORY-TREISMAN, MULTIMODE THEORY-JOHNSTON & HEINZ, RESOURCE & CAPACITY ALLOCATION MODEL-KAHNEMAN, SCHEMA THEORY-NEISSER.

Broadbent’s Filter Model

Donald Broadbent (1958) proposed the filter model of attention. He states that there are limits on how much information a person can attend to at any given time. Therefore, if the amount of information available at any given time exceeds capacity, the person uses an attentional filter to let some information through and block the rest. Only material that gets past the filter can be analysed later for meaning. It is a single-channel theory, based on the idea that information processing is restricted by channel capacity. Broadbent argued that messages traveling along a specific nerve can differ either:

(a) according to which of the nerve fibres they stimulate or

(b) according to the number of nerve impulses they produce.

Thus, when several nerve fibres fire at the same time, several sensory messages may arrive
at the brain simultaneously. In Broadbent’s model these would be processed through a
number of parallel sensory channels. Further processing of information would then occur
only after the signal was attended to and passed on through a selective filter into a limited-
capacity channel. Broadbent (1958) believed that what is limited is the amount of
information we can process at any given time. Two messages that contain little information,
or that present information slowly, can be processed simultaneously. Broadbent postulated
that, in order to avoid an overload in this system, the selective filter could be switched to
any of the sensory channels. In an early experiment, Broadbent (1954) used the dichotic
listening task to test his theory.
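The core claim, that selection happens early and on a physical basis, can be sketched in a few hypothetical lines of Python. The messages and channel labels are invented for illustration.

# Two simultaneous messages, distinguished by a physical property (the ear).
messages = {
    "left":  ["dear", "aunt", "jane"],
    "right": ["seven", "two", "nine"],
}

def selective_filter(channels, attend_to):
    # Let one sensory channel through; block all others entirely.
    return channels[attend_to]

def analyse_meaning(words):
    # Only filtered material reaches the limited-capacity channel.
    return " ".join(words)

print(analyse_meaning(selective_filter(messages, attend_to="left")))
# The right-ear message is never analysed for meaning at all.

On this account, nothing about the blocked channel should be recognized, which is exactly the prediction the findings below call into question.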

Other investigators soon reported results that contradicted filter theory. Moray (1959)
discovered one of the most famous, called the “cocktail party effect”: Shadowing
performance is disrupted when one’s own name is embedded in either the attended or the
unattended message.

Filter theory predicts that all unattended messages will be filtered out—that is, not processed for recognition or meaning—which is why participants in dichotic listening tasks can recall little information about such messages. The cocktail party effect shows something
completely different: People sometimes do hear their own name in an unattended message
or conversation, and hearing their name will cause them to switch their attention to the
previously unattended message. Moray (1959) concluded that only “important” material
can penetrate the filter set up to block unattended messages. Presumably, messages such
as those containing a person’s name are important enough to get through the filter and be
analyzed for meaning.

Treisman (1960) discovered a phenomenon that argues against this alternative interpretation of the cocktail party effect. She played participants two messages, each
presented to a different ear, and asked the participants to shadow one of them. At a certain
point in the middle of the messages, the content of the first message and the second
message was switched so that the second continued the first and vice versa. Immediately
after the two messages “switched ears,” many participants repeated one or two words from
the “unattended ear.” For instance, a participant shadowing message
1 might say, “At long last they came to a fork in the road but did not know which way to go.
The trees on the left side of refers to the relationships . . . ,” with the italicized words
following the meaning of the first part of message 1 but coming from the unattended
channel (because they come after the switch point). If participants processed the
unattended message only when their attentional filter “lapsed,” it would be very difficult to
explain why these lapses always occurred at the point when the messages switched ears. To
explain this result, Treisman reasoned that participants must be basing their selection of
which message to attend to at least in part on the meaning of the message— a possibility
that filter theory does not allow for.

The issue of whether information from the unattended channel can be recognized was
taken up by Wood and Cowan (1995). They showed that the attentional shift to the
unattended message was unintentional and completed without awareness. Indeed, A. R. A.
Conway, Cowan, and Bunting (2001) showed that a lower working-memory capacity means
less ability to actively block the unattended message. In other words, people with low
working-memory spans are less able to focus. Given her research findings, psychologist
Anne Treisman (1960) proposed a modified filter theory, one she called attenuation theory.

Treisman’s Attenuation Theory

Anne Treisman (1960) proposed a modified filter theory, called attenuation theory. She held
that some meaningful information in unattended messages might still be available, even if
hard to recover. Incoming messages are subjected to three kinds of analysis. In the first, the
message’s physical properties, such as pitch or loudness, are analyzed. The second analysis
is linguistic, a process of parsing the message into syllables and words. The third kind of
analysis is semantic, processing the meaning of the message.

Some meaningful units (such as words or phrases) tend to be processed quite easily. Words
that have subjective importance (such as your name) or that signal danger (“Fire!” “Watch
out!”) have permanently lowered thresholds; that is, they are recognizable even at low
volumes. You might have noticed yourself that it is hard to hear something whispered
behind you, although you might recognize your name in whatever is being whispered.
Words or phrases with permanently lowered thresholds require little mental effort by the
hearer to be recognized. Thus, according to Treisman’s theory, the participants in Moray’s
experiments heard their names because recognizing their names required little mental
effort.

Only a few words have permanently lowered thresholds. However, the context of a word in a message can temporarily lower its threshold. A primed stimulus, one that the context has made ready to be recognized, requires little effort to hear and process even if it occurs in the unattended channel.

MacKay (1973) showed that we might assume that at least some meaningful aspects of the
unattended message are processed. Pashler (1998), however, noted that the effect
reported by MacKay (1973) is greatly diminished if the message on the unattended channel
consists of a series of words instead of just one. This raises the possibility that if the
unattended message consists of one word only, the physical sound of that word temporarily
disrupts the attention being paid to the attended message, thus perhaps briefly “resetting”
the attentional filter.

According to Treisman (1964), people process only as much as is necessary to separate the attended from the unattended message. If the two messages differ in physical characteristics, then we process both messages only to this level (the attenuating-filter level) and easily reject the unattended message. If the two messages differ only semantically, we process both through the level of meaning (the hierarchy of analysers) and select which message to attend to based on this analysis. Processing for meaning takes more effort,
however, so we do this kind of analysis only when necessary. Messages not attended to are
not completely blocked but rather weakened in much the way that turning down the
volume weakens an audio signal from a stereo. Parts of the message with permanently
lowered thresholds (“significant” stimuli) can still be recovered, even from an unattended
message.
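A minimal Python sketch of attenuation follows. The attenuation factor, the word list, and the threshold values are invented for illustration; only the mechanism (weakening plus lowered thresholds) comes from the theory.

ATTENUATION = 0.3  # unattended messages are weakened, not blocked

# Lower threshold = easier to recognize. One's own name or danger words
# ("fire") have permanently lowered thresholds; ordinary words do not.
thresholds = {"fire": 0.1, "your_name": 0.1, "table": 0.7, "cloud": 0.7}

def recognized(word, signal, attended):
    strength = signal if attended else signal * ATTENUATION
    return strength >= thresholds.get(word, 0.7)

print(recognized("table", 0.8, attended=False))  # False: attenuated below threshold
print(recognized("fire", 0.8, attended=False))   # True: lowered threshold survives attenuation

Unlike the filter sketch above, the unattended signal here is merely turned down, so "significant" stimuli can still be recovered from it.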

Broadbent’s Filter Theory:
• Allows only one kind of analysis (based on physical properties).
• Holds that unattended messages are blocked and filtered out.
• Selection is determined by physical characteristics.

Treisman’s Attenuation Theory:
• Allows many different kinds of analyses of all messages.
• Holds that unattended messages are weakened but the information they contain is still available.
• Recognition is determined by thresholds reflecting meaning and importance.

The Schema Theory

Ulric Neisser (1976) offered a completely different conceptualization of attention, called schema theory. He argued that we don’t filter, attenuate, or forget unwanted material. Instead, we never acquire it in the first place. Neisser compared attention to apple picking. The material we attend to is like apples we pick off a tree—we grasp it. Unattended material is analogous to the apples we don’t pick.

This, Neisser believed, is what happens with unattended information: it is simply left out of our cognitive processing. Neisser and Becklen (1975) performed a relevant study of visual attention. They
created a “selective looking” task by having participants watch one of two visually
superimposed films. One film showed a “hand game,” two pairs of hands playing a familiar
hand-slapping game many of us played as children. The second film showed three people
passing or bouncing a basketball, or both. Participants in the study were asked to “shadow”
(attend to) one of the films and to press a key whenever a target event (such as a hand slap
in the first film or a pass in the second film) occurred.

Neisser and Becklen (1975) found, first, that participants could follow the correct film rather
easily, even when the target event occurred at a rate of 40 per minute in the attended film.
Participants ignored occurrences of the target event in the unattended film. Participants
also failed to notice unexpected events in the unattended film. For example, participants
monitoring the ballgame failed to notice that in the hand game film, one of the players
stopped hand slapping and began to throw a ball to the other player. Neisser (1976)
believed that skilled perceiving rather than filtered attention explains this pattern of
performance.

Neisser and Becklen (1975, pp. 491–492) argued that once picked up, the continuous and
coherent motions of the ballgame (or of the hand game) guide further pickup; what is seen
guides further seeing. It is implausible to suppose that special “filters” or “gates,” designed
on the spot for this novel situation, block the irrelevant material from penetrating deeply
into the “processing system.” The ordinary perceptual skills of following visually given
events “are simply applied to the attended episode and not to the other.”

Simply put, schema theory states that all knowledge is organized into units, and within these units of knowledge, or schemata (plural), information is stored. A schema, then, is a generalized description or a conceptual system for understanding knowledge—how knowledge is represented and how it is used. According to this theory, schemata represent knowledge about concepts: objects and the relationships they have with other objects, situations, events, sequences of events, actions and sequences of actions.

Resource and Capacity Allocation Model


Daniel Kahneman (1973) presented a slightly different model for what attention is. He
viewed attention as a set of cognitive processes for categorizing and recognizing stimuli. The
more complex the stimulus, the harder the processing, and therefore the more resources
are engaged. However, people have some control over where they direct their mental
resources: They can often choose what to focus on and devote their mental effort to.

Essentially, this model depicts the allocation of mental resources to various cognitive tasks.
[An analogy could be made to an investor depositing money in one or more of several
different bank accounts—here, the individual “deposits” mental capacity to one or more of
several different tasks. Many factors influence this allocation of capacity, which itself
depends on the extent and type of mental resources available.] The availability of mental
resources, in turn, is affected by the overall level of arousal, or state of alertness.

Kahneman (1973) argued that one effect of being aroused is that more cognitive resources
are available to devote to various tasks. Paradoxically, however, the level of arousal also
depends on a task’s difficulty. This means we are less aroused while performing easy tasks,
such as adding 2 and 2, than we are when performing more difficult tasks, such as
multiplying a Social Security number by pi. We therefore bring fewer cognitive resources to
easy tasks, which, fortunately, require fewer resources to complete.

Arousal thus affects our capacity (the sum total of our mental resources) for tasks. But the model still needs to specify how we allocate our resources to all the cognitive tasks that confront us. In Kahneman’s model, a region labeled “allocation policy” governs this. Note that this policy is affected by an individual’s enduring dispositions (for example, your preference for certain kinds of tasks over others), momentary intentions (your vow to find your meal card right now, before doing anything else!), and evaluation of the demands on one’s capacity (the knowledge that a task you need to do right now will require a certain amount of your attention).

This model predicts that we pay more attention to things we are interested in, are in the
mood for, or have judged important. In Kahneman’s (1973) view, attention is part of what
the layperson would call “mental effort.” The more effort expended, the more attention
we are using.
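The allocation idea can be sketched in Python. The arousal-to-capacity mapping and the policy weights below are hypothetical; Kahneman’s model is a verbal one and specifies no such numbers.

def available_capacity(arousal):
    # More arousal frees more cognitive resources, up to some ceiling.
    return min(1.0, arousal)

def allocate(capacity, policy):
    # Divide capacity across tasks in proportion to the allocation policy,
    # which reflects enduring dispositions, momentary intentions, and the
    # evaluation of each task's demands.
    total = sum(policy.values())
    return {task: capacity * weight / total for task, weight in policy.items()}

policy = {"drive": 0.6, "converse": 0.3, "monitor radio": 0.1}
print(allocate(available_capacity(0.8), policy))

Raising arousal enlarges the pool being divided; changing the policy weights shifts attention among tasks without enlarging the pool.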
A related factor is alertness as a function of time of day, hours of sleep obtained the night before, and so forth. Sometimes we can attend to more tasks with greater concentration. At other times, such as when we are tired and drowsy, focusing is hard. Effort is only one factor that influences performance on a task. Greater effort or concentration results in better performance of some tasks—those that require resource-limited processing, performance of which is constrained by the mental resources or capacity allocated to it (Norman & Bobrow, 1975). Taking a midterm is one such task. Performance on other tasks is said to be data limited, meaning that it depends entirely on the quality of the incoming data, not on mental effort or concentration.

The Multi-mode Theory

The multimode theory combines both physical and semantic inputs into one theory. Within this model, attention is assumed to be a flexible system that allows different depths of perceptual analysis. Which features reach awareness depends upon the person’s needs at the time. Switching between physical and semantic features as the basis for selection yields both costs and benefits. Stimulus information is first attended to via early selection through sensory analysis; then, as it increases in complexity, semantic analysis is involved, compensating for attention’s limited capacity. Shifting from early to late selection reduces the efficiency with which stimuli capture attention, though it increases the breadth of attention. Researchers found that semantic selection requires greater attentional resources than physical selection. Johnston and Heinz (1978) proposed that the multimode theory comprises three stages. Stage 1 is the initial stage, in which sensory representations of stimuli are constructed; this corresponds to Broadbent’s filter theory. Stage 2 is the stage in which semantic representations (meanings) are constructed; this corresponds to the Deutsch and Deutsch model of attention. The final stage is the stage in which both sensory and semantic representations enter consciousness; as a result, it is called the consciousness stage. It is also suggested that more processing requires more mental effort: when messages are selected on the basis of Stage 1 processing (early selection), less mental effort is required than when the selection is based on Stage 3 processing (late selection).

Note: Attention is a flexible system that allows the selection of one stimulus over another.

Stage 1: Sensory representations
Stage 2: Semantic representations
Stage 3: Consciousness

Yerkes-Dodson Law

The Yerkes-Dodson law suggests that the level of arousal beyond which performance begins to decline is a function of task difficulty. The law was developed by Robert M. Yerkes and John Dillingham Dodson in 1908. In other words, it states that when tasks are simple, a higher level of arousal leads to better performance; when tasks are difficult, a lower level of arousal leads to better performance. Optimal arousal leads to optimal performance, and strong anxiety leads to impaired performance. For example, students who experience test anxiety (a high level of arousal) may seek out ways to reduce that anxiety to improve test performance.
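An illustrative rendering of the inverted-U in Python follows. The Gaussian shape and the constants are assumptions made to show the qualitative pattern; the law itself specifies no equation.

import math

def performance(arousal, difficulty):
    # Assume harder tasks peak at lower arousal (difficulty in 0-1).
    optimal = 1.0 - 0.6 * difficulty
    return math.exp(-((arousal - optimal) ** 2) / 0.1)

for arousal in (0.2, 0.5, 0.8):
    print(arousal,
          round(performance(arousal, difficulty=0.2), 2),   # simple task
          round(performance(arousal, difficulty=0.9), 2))   # difficult task

At high arousal the simple task is performed well while the difficult task suffers, and at low arousal the pattern reverses, which is the crossover the law describes.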

There exist large individual differences with respect to preferred arousal level. So, although
arousal theory provides useful insights into the nature of motivation, the fact that we
cannot readily predict what will constitute an optimal level of arousal does limit its
usefulness to a degree.
MODULE 3: SENSATION AND PERCEPTION

UNIT 1: THEORIES OF PERCEPTION (TOP-DOWN & BOTTOM-UP VIEWS): GESTALT APPROACH, GIBSON- AFFORDANCE THEORY, MARR & NISHIHARA- COMPUTATIONAL APPROACH, GREGORY- INFERENTIAL THEORY, NEISSER- SCHEMA THEORY.

Gibson- Affordance Theory

James Gibson (1979) proposed the affordance theory, which is also known as the theory of direct perception. He propounded this theory in opposition to the constructivist approach to perception and to associationism. According to Gibson’s theory of direct
perception, the information in our sensory receptors, including the sensory context, is all we
need to perceive anything. As the environment supplies us with all the information we need
for perception, this view is sometimes also called ecological perception. In other words, we
do not need higher cognitive processes or anything else to mediate between our sensory
experiences and our perceptions. Existing beliefs or higher-level inferential thought
processes are not necessary for perception.

Gibson rejected the idea that perceivers construct mental representations from memories
of past encounters with similar objects and events. Instead, Gibson believed that the
perceiver does very little work, mainly because the world offers so much information,
leaving little need to construct representations and draw inferences. He proposed that
perception consists of the direct acquisition of information from the environment.
According to this view, called direct perception, the light hitting the retina contains highly
organized information that requires little or no interpretation. In the world we live in,
certain aspects of stimuli remain invariant (or unchanging), despite changes over time or in
our physical relationship to them.

Gibson (1979) believed that we use this contextual information directly. In essence, we are biologically tuned to respond to it. According to Gibson, we use texture gradients as cues for depth and distance. Those cues aid us in directly perceiving the relative proximity or distance of objects and of parts of objects. He called the information the environment offers the organism affordances. J. J. Gibson (1979) claimed that the affordances of an object are also directly perceived. J. J. Gibson (1950) became convinced that patterns of motion provide a great deal of information to the perceiver. His work selecting and training pilots in World War II led him to think about the information available to pilots as they land their planes. He developed the idea of optic flow. Thus, the information given by the environment that helps in direct perception is of four types, viz:

• optic flow patterns
• texture gradients
• the horizon ratio
• direct perception

Gibson’s model is sometimes referred to as an ecological model (Turvey, 2003). This reference is a result of Gibson’s concern with perception as it occurs in the everyday world (the ecological environment) rather than in laboratory situations, where less contextual information is available. Ecological constraints apply not only to initial perceptions but also to the ultimate internal representations (such as concepts) that are formed from those perceptions (Hubbard, 1995; Shepard, 1984). Continuing to wave the Gibsonian banner was Eleanor Gibson (1991, 1992), James’s wife. She conducted landmark research in infant perception.

Direct perception may also play a role in interpersonal situations when we try to make sense
of others’ emotions and intentions (Gallagher, 2008). Neuroscience also indicates that direct
perception may be involved in person perception. Fodor and Pylyshyn (1981) argued that
the theory is not helpful in explaining perception. They charged that Gibson failed to specify
just what kinds of things are invariant and what kinds are not.

Criticisms

• Individual differences in sensory capacity [every individual has a different sensory capacity, which leads to differences in perception].
• When the environment changes, perception also changes [not all perceptions are new, and it may become difficult for the perceiver to take appropriate actions].
• There are chances of differences in cognitive analysis, as in illusions.

Marr & Nishihara- Computational Approach

The computational approach to perception was put forward by David Marr and H. K. Nishihara (1978). The theory incorporates both top-down and bottom-up processing in its account of perception. The model is very technical and mathematical in nature. Marr proposed that perception proceeds in terms of several different, special-purpose computational mechanisms, such as a module to analyze color, another to analyze motion, and so on. Each operates autonomously, without regard to the input from or output to any other module, and without regard to real-world knowledge.

He held that the visual system works on three levels:

1. Computational Level
This is the top level of the visual system, at which the performance of the device is characterized as a mapping from one kind of information to another, the abstract properties of this mapping are defined precisely, and its appropriateness and adequacy for the task at hand are demonstrated. In other words, it is a form of task analysis of a cognitive system. At this level the perceiver identifies specific information and general constraints upon any solution to the problem.
2. Algorithm Level
This is the centre of the visual system, where the representations for the input and output are chosen, along with the algorithm used to transform one into the other. In other words, this level deals with the method of the information-processing task. The perceiver identifies the input and output information and the transformation of input into output. It involves the process of encoding information.
3. Implementation Level
At this extreme are the details of the way the algorithm and representation are realized physically. In other words, the implementation level finds the physical realization for the algorithm level. It helps identify the neural structures that realize the basic representational states to which the algorithm applies and that transform those representational states according to the algorithm.

In summary:
• Computational level: detects edges and boundaries and separates figure from background.
• Algorithmic level: edges are worked out by analysing the data, separating the image into contours and areas of similarity for object recognition.
• Implementation level: concerned with whether we have the relevant hardware, the neurons and connections, necessary for the processing.

Marr believed that visual perception proceeds by constructing four different mental
representations, or sketches.

a) Gray scale sketch
It represents the intensity value at each point in the image. At this stage colour information is not processed and the image is in grey scale.
b) Primal sketch
It depicts areas of relative brightness and darkness in a two-dimensional image as
well as localized geometric structure. This allows the viewer to detect boundaries
between areas but not to “know” what the visual information “means”. The primal
sketch consists of a set of blobs oriented in various directions.

• Raw primal sketch – detects edges and texture differences; it is made up of tiny dots and pixels.
• Full primal sketch – connects edges to form shapes, using textures of similarity to fill in the form (based on Gestalt principles).
(A minimal computational sketch of the edge-extraction stage appears after this list.)

c) 2.5-D Sketch
Once a primal sketch is created, the viewer uses it to create a more complex representation, called a 2½-D (two-and-a-half-dimensional) sketch. Using cues such as shading, texture, edges, and others, the viewer derives information about what the surfaces are and how they are positioned in depth relative to the viewer’s own vantage point at that moment. The 2.5-D sketch is in a viewer-centred coordinate frame.
d) 3-D Sketch
Marr believed that both the primal sketch and the 2½-D sketch rely almost exclusively on bottom-up processes. Information from real-world knowledge or specific expectations (that is, top-down knowledge) is incorporated when the viewer constructs the final, 3-D sketch of the visual scene. This sketch involves both recognition of what the objects are and understanding of the “meaning” of the visual scene. This representation describes shapes and their spatial organization in an object-centered coordinate frame.
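To make the bottom-up stage concrete, here is a minimal, hypothetical sketch of the raw-primal-sketch idea (a toy illustration, not Marr’s actual algorithm): intensity gradients are computed over a grey-scale array, and large gradient magnitudes mark candidate edges and boundaries.

```python
# Toy illustration of raw-primal-sketch edge extraction:
# mark pixels where the intensity gradient is large.
# (A minimal sketch, not Marr's actual algorithm.)
import numpy as np

# A small grey-scale "image": dark square (20) on a bright background (200).
image = np.full((8, 8), 200.0)
image[2:6, 2:6] = 20.0

# Finite-difference gradients along the two image axes.
gy, gx = np.gradient(image)
magnitude = np.hypot(gx, gy)

# Threshold the gradient magnitude to obtain an edge map
# (roughly, the blobs and boundaries of the primal sketch).
edges = magnitude > 30.0
print(edges.astype(int))   # 1s trace the square's boundary
```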

Merits

• It provides a basic account of visual processing across the life span.
• It helps us understand perception and information processing at a computational level.

Limitations

• The theory offers only a high-level explanation of perception.
• Unfamiliar objects cannot be easily processed according to the mechanisms this theory proposes.

Primitives [Refer]

Gregory- Inferential Theory

Sensory data → perceptual hypothesis (drawing on knowledge stored in the brain) → inferences about the sensory information or data

The inferential theory was developed by Richard Gregory and Irwin Rock (1970). They regarded perception as a constructive process based on top-down processing. According to this theory, people recognize objects by generating perceptual hypotheses, like a researcher who selects the hypothesis best substantiated by the evidence. Sensory information is incomplete, and in order to complete it we make use of stored knowledge. Inferential theory thus provides an explanation contradictory to Gibson’s theory of affordances.

In this way we actively construct our perception of reality from our environment and stored information. Stimulus information from our environment is frequently ambiguous, so to interpret it we require higher cognitive input, either from past experience or from stored knowledge, in order to make inferences about what we perceive. Helmholtz called this the ‘likelihood principle’. A great deal of information reaches the eye, but much is lost by the time it reaches the brain (Gregory estimates about 90% is lost).

Limitations

• Making inferences based on stored information can lead to mistakes, as inference can be biased by familiarity.
• Fails to explain the processes behind generating a perceptual hypothesis, deciding on the right hypothesis, ruling out wrong ones, and so on.
• Illusions show that sensory data can be wrongly perceived.

Neisser-Schema Theory

The schema theory of Ulric Neisser (1976) is also known as the interactive theory of perception. Neisser’s (1976) perceptual cycle model (PCM) structures the interaction between a person’s internal schemata (or mental templates) and the environment in which they operate. Neisser modelled perception as a cyclical process in which top-down and bottom-up processing drive each other in turn: to be purely data-driven we would need to be mindless automatons; to be purely theory-driven we would need to be disembodied dreamers. An active schema sets up relevant expectations for a particular context, and if the sensory data break these expectations, this may modify the schema or trigger a more relevant one. Neisser agrees with Gibson that sensory information (in context) is sufficient for perception.
Neisser held that perception proceeds cyclically as we come to understand the world. Three processes are involved in the perceptual cycle:

1) Schemata is modified
2) Attention is modified
3) Available information is modified

In this perceptual cycle, expectations direct our perceptual exploration. Perceptual exploration is carried out by our various sense organs and locomotor actions: the perceiver scans the perceptual field and takes in the necessary information, leaving out the rest. Perceptual exploration in turn provides information that modifies our schema.

There are two ways of modifying our schema to facilitate perceptual exploration. They are:

a) Corrective aspect
It refers to the fact that information from the world can indicate that the wrong anticipatory schema has been called up; the schema is then either modified or replaced.
b) Elaborative aspect
It refers to the fact that the coherence and depth of a schema build up with use.

This model is grounded in ecological validity.

UNIT 2: THEORIES OF PATTERN RECOGNITION: BIEDERMAN- GEON THEORY, NEISSER- VIEW BASED APPROACH, SELFRIDGE- PANDEMONIUM MODEL, ELEANOR GIBSON & LEWIN- DISTINCTIVE FEATURES

Biederman-Geon Theory

Irving Biederman (1987) proposed a theory of object perception that uses a type of featural
analysis that is also consistent with some of the Gestalt principles of perceptual
organization. This theory is also known as the recognition-by-components (RBC) theory. The recognition-by-components theory explains our ability to perceive 3-D objects with the
help of simple geometric shapes. It proposes that all complex forms are composed of geons.
For example, a cup is composed of two geons: a cylinder (for the container portion) and an
ellipse (for the handle). (See Figure 8 for examples of geons and objects.) Geon theory, as
espoused by Biederman proposes that the recognition of an object, such as a telephone, a
suitcase, or even more complex forms, consists of recognition by components (RBC) in which
complex forms are broken down into simple forms.

Biederman proposed that when people view objects, they segment them into simple geometric components, called geons; he proposed a total of 36 such primitive components (visual primitives). From this small set, he believed, we can construct mental representations of a very large set of common objects. We divide the whole into its parts, or geons (named for “geometrical ions”; Biederman, 1987, p. 118). We pay attention not just to which geons are present but also to the arrangement of geons, and the geons can be recomposed into alternative arrangements, just as a small set of letters can be manipulated to compose countless words and sentences. The geons are simple and viewpoint-invariant (i.e., recognizable from various viewpoints). One test of geon theory developed by Biederman is the use of degraded forms.

This recognition of components is carried out in two stages as elaborated below:

i. Edge extraction
In this stage we try to extract core information about the geons from the retinal image.
ii. Encoding non-accidental features
Here, the detected geons are matched with memory to form a meaningful perception.

Stimuli on the retina → edge information about minute features is extracted → each feature is detected → features are matched with memory

Limitations

• Does not adequately explain how we recognize particular features.
• Fails to explain the effects of prior expectations and environmental context on some phenomena of pattern perception.
• Not all perception researchers accept the notion of geons as fundamental units of object perception (Tarr & Bulthoff, 1995).

Neisser-View Based Approach

The view-based approach to object or pattern recognition by Ulric Neisser (1967) holds that objects are recognized holistically through comparison with a stored analogue. These are viewer-dependent approaches to perception. One major view-based approach is template-matching theory.

Template theories suggest that we have stored in our minds myriad sets of templates.
Templates are highly detailed models for patterns we potentially might recognize. We
recognize a pattern by comparing it with our set of templates. We then choose the exact
template that perfectly matches what we observe (Selfridge & Neisser, 1960). It holds that a
great number of templates have been created by our life experience, each template being
associated with a meaning. In other words, the process of perception thus involves
comparing incoming information to the templates we have stored, and looking for a match.
If a number of templates match or come close, we need to engage in further processing to
sort out which template is most appropriate. Notice that this model implies that somewhere
in our knowledge base we’ve stored millions of different templates—one for every distinct
object or pattern we can recognize. So, to achieve a match, either the stored template or the incoming stimulus must be modified (normalized) to suit the brain.
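As a hedged illustration (the patterns and the scoring rule below are invented for demonstration, not drawn from Selfridge & Neisser), template matching can be reduced to a pixel-wise comparison of an input against stored arrays, with the best-scoring template winning:

```python
# Minimal template-matching sketch: compare a binary input pattern
# against stored templates and pick the best pixel-wise match.
# (Patterns are invented; real template models face the problems noted below.)
import numpy as np

templates = {
    "I": np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]]),
}

def recognize(pattern):
    # Score = proportion of matching pixels; the highest score wins.
    scores = {name: np.mean(pattern == t) for name, t in templates.items()}
    return max(scores, key=scores.get), scores

stimulus = np.array([[0, 1, 0],
                     [0, 1, 0],
                     [0, 1, 1]])      # a slightly distorted "I"
print(recognize(stimulus))           # ('I', {'I': ~0.89, 'L': ~0.44})
```

Shifting the stimulus by even one pixel collapses both scores, which is exactly why either the template or the stimulus must be normalized before matching.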

Merits

It seems apparent that to recognize a shape, a letter, or some other visual form, some contact with a comparable internal form is necessary. Objects in external reality need to be recognized as matching a representation in long-term memory.

Limitations

• Storing, organizing, and retrieving so many templates in memory would be unwieldy.
• Does not explain the process of new template formation.
• People recognize many patterns as more or less the same thing, even when the stimulus patterns differ greatly.
• Fails to explain some aspects of the perception of letters.

Selfridge--Pandemonium Model

The pandemonium model by Oliver Selfridge (1959) is a type of feature-matching theory. It attempts to match the features of a pattern to features stored in memory, rather than matching a whole pattern to a template or a prototype. The word “pandemonium” refers to a very noisy, chaotic place—literally, the abode of all demons. In the model, metaphorical “demons” with specific duties receive and analyze the features of a stimulus (Selfridge, 1959). In Oliver Selfridge’s Pandemonium Model, there are four kinds of demons:

IMAGE DEMONS → FEATURE DEMONS → COGNITIVE DEMONS → DECISION DEMONS
The “image demons” receive a retinal image and pass it on to “feature demons.” Each feature demon (or sub-demon) calls out when there is a match between the stimulus and its given feature. These matches are yelled out to demons at the next level of the hierarchy, the “cognitive (thinking) demons,” which in turn shout out possible patterns stored in memory that conform to one or more of the features noticed by the feature demons. A “decision demon” listens to the pandemonium of the cognitive demons and decides on what has been seen, based on which cognitive demon is shouting the loudest (i.e., which has the most matching features).

This theory provides a hierarchical model of object recognition by incorporating the process
of feature detection and prototype matching.
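The demon hierarchy can be sketched in a few lines of Python (the letter feature lists here are simplified inventions, not Selfridge’s actual feature set): feature demons report which features are present, cognitive demons “shout” in proportion to how many of their letter’s features were detected, and the decision demon picks the loudest.

```python
# Toy Pandemonium: feature demons report detected features,
# cognitive demons shout in proportion to matched features,
# and the decision demon picks the loudest shout.
# (Feature lists are simplified for illustration.)
LETTER_FEATURES = {
    "A": {"horizontal bar", "left diagonal", "right diagonal"},
    "H": {"horizontal bar", "left vertical", "right vertical"},
    "T": {"horizontal bar", "center vertical"},
}

def decision_demon(detected_features):
    # Each cognitive demon's "shout" = number of its features detected.
    shouts = {letter: len(features & detected_features)
              for letter, features in LETTER_FEATURES.items()}
    return max(shouts, key=shouts.get), shouts

# Suppose the feature demons have detected these features in the image:
detected = {"horizontal bar", "left vertical", "right vertical"}
print(decision_demon(detected))  # ('H', {'A': 1, 'H': 3, 'T': 1})
```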

(Refer diagram in text)

Merit

• Flexible theory

Limitation

• Fails to explain the role of environment and experience in feature matching.

Other feature matching theories

Although Selfridge’s model is one of the most widely known, other feature models have
been proposed. Most also distinguish not only different features but also different kinds of
features, such as global versus local features. Local features constitute the small-scale or
detailed aspects of a given pattern. There is no consensus as to what exactly constitutes a
local feature. Nevertheless, we generally can distinguish such features from global features,
the features that give a form its overall shape.

Consider a large letter H composed of smaller letters. In one version, the local features (small Hs) correspond to the global form; in another, comprising many local letter Ss, they do not. In one study, participants were asked to identify such stimuli at either the global or the local level (Navon, 1977). When the local letters were small and positioned close together, participants could identify stimuli at the global level (the “big” letter) more quickly than at the local level. When participants were required to identify stimuli at the global level, it did not matter whether the local features (small letters) matched the global one (big letter): they responded equally rapidly whether the global H was made up of local Hs or of local Ss. However, when the participants were asked to identify the “small” local letters, they responded more quickly if the global features agreed with the local ones. In other words, they were slowed down if they had to identify local (small) Ss combining to form a global (big) H instead of identifying local (small) Hs combining to form a global (big) H. This pattern of results is called the global precedence effect (see also Kimchi, 1992). Experiments have shown that global information dominates over local information even in infants (Cassia, Simion, Milani, & Umiltà, 2002).

In contrast, when the letters are more widely spaced, the effect is reversed and a local precedence effect appears. That is, participants more quickly identify the local features of the individual letters than the global ones, and the local features interfere with global recognition in cases of contradictory stimuli (Martin, 1979). So, when the letters are close together, people have problems identifying the local stimuli (small letters) if they are not concordant with the global stimulus (big letter); when the letters are relatively far apart, it is harder for people to identify the global stimulus (big letter) if it is not concordant with the local stimuli (small letters). Other limitations besides the spatial proximity of the local stimuli (e.g., the size of the stimuli) hold as well, and other kinds of features also influence perception.
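Navon-style stimuli are easy to generate; the sketch below prints an incongruent stimulus—a global H built from local Ss—of the kind used to demonstrate the precedence effects above (a minimal illustration, not the original experimental stimuli).

```python
# Print a Navon-style stimulus: a global letter drawn with local letters.
# Here the global form is H and the local elements are S (incongruent).
GLOBAL_H = [
    "X...X",
    "X...X",
    "XXXXX",
    "X...X",
    "X...X",
]

def navon(template, local_letter):
    # Replace each filled cell of the global template with the local letter.
    return "\n".join(
        "".join(local_letter if cell == "X" else " " for cell in row)
        for row in template
    )

print(navon(GLOBAL_H, "S"))   # a big H composed of small Ss
```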

Eleanor Gibson, &Lewin-Distinctive Features

Several feature-analysis theories propose a more flexible approach, in which a visual stimulus is composed of a small number of characteristics or components (Gordon, 2004). Each characteristic is called a distinctive feature. These theories argue that we store a list of distinctive features for each letter. For example, the distinctive features for the letter R include a curved component, a vertical line, and a diagonal line. When you look at a new letter, your visual system notes the presence or absence of the various features and then compares this list with the features stored in memory for each letter of the alphabet. Eleanor Gibson (1969) proposed that the distinctive features for each letter of the alphabet remain constant, whether the letter is handwritten, printed, or typed. These models can also explain how we perceive a wide variety of two-dimensional patterns.

The distinctive features approach of Eleanor Gibson and Lewin (1975) is an object-centred view of pattern recognition in which the environment has no influence. Here, neural cells in the cerebral cortex, activated by past experience, recognize the distinctive features of stimuli. Distinctive features are defining characteristics of letters—like the slanting line of R that distinguishes it from P—and they remain the same regardless of font or orientation. Neurological evidence also supports the identification of distinctive features. This model is also known as the PB model.
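The R-versus-P example can be put in miniature code (the feature lists are simplified assumptions, not Gibson’s actual inventory): recognition compares the observed feature set with each stored list, and the set difference exposes the distinctive feature that separates confusable letters.

```python
# Distinctive-features sketch: letters stored as feature sets.
# The set difference R - P exposes the feature that distinguishes them.
FEATURES = {
    "P": {"vertical line", "curved component"},
    "R": {"vertical line", "curved component", "diagonal line"},
}

observed = {"vertical line", "curved component", "diagonal line"}

# Recognize: the letter whose stored feature set exactly matches.
match = [ltr for ltr, feats in FEATURES.items() if feats == observed]
print(match)                          # ['R']
print(FEATURES["R"] - FEATURES["P"])  # {'diagonal line'} - the distinctive feature
```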

Limitations

• The theory may explain how letters are recognised but cannot explain how we recognize complex real-world objects, like a horse.
• Studies show that we sometimes notice a distinctive feature within a whole figure but miss the same feature when it is presented in isolation; in the real world we see an object only as a whole.
• It cannot explain the full complexity of pattern recognition.
• It does not specify which brain-cell detector responds to which stimulus.

Tarr & Bulthoff Theory

Tarr and Bulthoff (1995) hold that object recognition can be conceived as a continuum, with two approaches to pattern recognition:

• A part-based mechanism is used when we need to distinguish between two different basic-level categories (e.g., hammer and sparrow); orientation and perspective do not matter.
• A view-based approach is used for making subtle discriminations within a subordinate-level category (e.g., finch and sparrow); orientation and perspective matter.

Prototype matching model

[Refer Galotti]

UNIT 3: THEORIES OF PAIN PERCEPTION: SPECIFICITY, PATTERN AND GATE CONTROL


THEORIES. PAIN THRESHOLD AND PAIN MANAGEMENT.

Pain is defined as an unpleasant sensory and emotional experience associated with actual or potential tissue damage. Pain perception is influenced by a variety of environmental factors, including the context of the stimuli, social responses and contingencies, and cultural or ethnic background.

Pain is broadly classified into acute pain and chronic pain.
Acute pain is a sharp pain of short duration with an easily identified cause. It is often localized in a small area before spreading to neighbouring areas, and it is usually treated with medication. Chronic pain is intermittent or constant pain of varying intensity. It lasts for longer periods, is somewhat more difficult to treat, and needs professional expert care. A number of theories have been postulated to describe the mechanisms underlying pain perception.

Specificity Theory of Pain

The Specificity Theory refers to the presence of dedicated pathways for each somatosensory
modality. The fundamental tenet of the Specificity Theory is that each modality has a
specific receptor and associated sensory fiber (primary afferent) that is sensitive to one
specific stimulus (Dubner et al. 1978). For instance, the model proposes that non-noxious
mechanical stimuli are encoded by low-threshold mechanoreceptors, which are associated
with dedicated primary afferents that project to “mechanoreceptive” second-order neurons
in the spinal cord or brainstem (depending on the source of the input). These second-order
neurons project to “higher” mechanoreceptive areas in the brain. Similarly, noxious stimuli
would activate a nociceptor, which would project to higher “pain” centers through a pain
fiber. These ideas have been emerging over several millennia but were experimentally
tested and formally postulated as a theory in the 19th century by physiologists in Western
Europe.

Pattern Theory of Pain

In an attempt to overhaul theories of somaesthesis (including pain), J. P. Nafe postulated a “quantitative theory of feeling” (1929). This theory ignored findings of specialized nerve endings and many of the observations supporting the specificity and/or intensive theories of
endings and many of the observations supporting the specificity and/or intensive theories of
pain. The theory stated that any somaesthetic sensation occurred by a specific and
particular pattern of neural firing and that the spatial and temporal profile of firing of the
peripheral nerves encoded the stimulus type and intensity. Lele et al. (1954) championed
this theory and added that cutaneous sensory nerve fibers, with the exception of those
innervating hair cells, are the same. To support this claim, they cited work that had shown
that distorting a nerve fiber would cause action potentials to discharge in any nerve fiber,
whether encapsulated or not. Furthermore, intense stimulation of any of these nerve fibers
would cause the percept of pain (Sinclair 1955; Weddell 1955).

Gate Control Theory of Pain

The psychologist Ronald Melzack and the anatomist Patrick Wall proposed the gate control theory of pain in 1965 to explain pain suppression. According to them, pain stimuli transmitted by afferent pain fibers are blocked by a gate mechanism located at the posterior gray horn of the spinal cord. If the gate is opened, pain is felt; if the gate is closed, pain is suppressed.

Mechanism of Gate Control at Spinal Level

1. When a pain stimulus is applied to any part of the body, the receptors of other sensations, such as touch, are also stimulated alongside the pain receptors.


2. When all these impulses reach the spinal cord through the posterior nerve root, the fibers of
touch sensation (posterior column fibers) send collaterals to the neurons of pain pathway,
i.e. cells of marginal nucleus and substantia gelatinosa.

3. Impulses of touch sensation passing through these collaterals inhibit the release of
glutamate and substance P from the pain fibers.

4. This closes the gate and the pain transmission is blocked.

Role of Brain in Gate Control Mechanism

According to Melzack and Wall, brain also plays some important role in the gate control
system of the spinal cord as follows:

1. If the gates in spinal cord are not closed, pain signals reach thalamus through lateral
spinothalamic tract.

2. These signals are processed in thalamus and sent to sensory cortex.

3. Perception of pain occurs at the cortical level, in the context of the person’s emotional status and previous experiences.

4. The person responds to the pain based on the integration of all these information in the
brain. Thus, the brain determines the severity and extent of pain.

5. To minimize the severity and extent of pain, brain sends message back to spinal cord to
close the gate by releasing pain relievers such as opiate peptides.

6. Now the pain stimulus is blocked and the person feels less pain.

Significance of Gate Control

Thus, gating of pain at spinal level is similar to presynaptic inhibition. It forms the basis for
relief of pain through rubbing, massage techniques, application of ice packs, acupuncture
and electrical analgesia. All these techniques relieve pain by stimulating the release of
endogenous pain relievers (opioid peptides), which close the gate and block the pain signals.
Limitations of Pain Theories

• Did not account for neurons in the central nervous system (CNS) that respond to
both non-nociceptive and nociceptive stimuli (e.g., wide-dynamic range neurons).
• Focus on cutaneous pain and do not address issues pertaining to deep tissue,
visceral, or muscular pains.
• These models are focused on acute pain and do not address mechanisms of
persistent pain or the chronification of pain.
• They involve oversimplifications and flaws in their presentation.

Pain Threshold and Pain Management


Pain threshold is the minimum intensity at which a person begins to perceive, or sense, a
stimulus as being painful.

Pain tolerance is the maximum amount, or level, of pain a person can tolerate or bear.

The threshold for pain can differ between men and women, and can fluctuate
based on many other factors.

Managing pain without medicines

Many non-medicine treatments are available to help you manage your pain. A combination
of treatments and therapies is often more effective than just one.

Some non-medicine options include:

• heat or cold – use ice packs immediately after an injury to reduce swelling. Heat
packs are better for relieving chronic muscle or joint injuries

• physical therapies – such as walking, stretching, strengthening or aerobic exercises may help reduce pain, keep you mobile and improve your mood. You may need to increase your exercise very slowly to avoid over-doing it

• massage – this is better suited to soft tissue injuries and should be avoided if the
pain is in the joints. There is some evidence that suggests massage may help manage
pain, but it is not recommended as a long-term therapy

• relaxation and stress management techniques – including meditation and yoga

• cognitive behaviour therapy (CBT) – this form of therapy can help you learn to
change how you think and, in turn, how you feel and behave about pain. This is a
valuable strategy for learning to self-manage chronic pain

• acupuncture – a component of traditional Chinese medicine. Acupuncture involves inserting thin needles into specific points on the skin. It aims to restore balance within the body and encourage it to heal by releasing natural pain-relieving compounds (endorphins). Some people find that acupuncture reduces the severity of their pain and enables them to maintain function. Scientific evidence for the effectiveness of acupuncture in managing pain is inconclusive

• transcutaneous electrical nerve stimulation (TENS) therapy – minute electrical currents pass through the skin via electrodes, prompting a pain-relieving response from the body. There is not enough published evidence to support the use of TENS for the treatment of some chronic pain conditions. However, some people with chronic pain that is unresponsive to other treatments may experience a benefit.

Pain medicines

Many people will use a pain medicine (analgesic) at some time in their lives.

The main types of pain medicines are:

• paracetamol – often recommended as the first medicine to relieve short-term pain

• aspirin – for short-term relief of fever and mild-to-moderate pain (such as period
pain or headache)

• non-steroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen – these medicines relieve pain and reduce inflammation (redness and swelling)

• opioid medications, such as codeine, morphine and oxycodone – these medicines are
reserved for severe or cancer pain

• local anaesthetics

• some antidepressants

• some anti-epileptic medicines.

How pain medicines work

Pain medicines work in various ways. Aspirin and other NSAIDs are pain medicines that help
to reduce inflammation and fever. They do this by stopping chemicals called prostaglandins.
Prostaglandins cause inflammation and swelling and make nerve endings sensitive, which can lead to pain.

Prostaglandins also help protect the stomach from stomach acid, which is why these
medicines can cause irritation and bleeding in some people.
Opioid medicines work in a different way. They change pain messages in the brain, which is
why these medicines can be addictive.

Choosing the right pain medicine

The right choice of medicine for you will depend on:

• the location, intensity, duration and type of pain

• any activities that ease the pain or make it worse

• the impact your pain has on your lifestyle, such as how it affects your appetite or
quality of sleep

• your other medical conditions

• other medicines you take.

Managing your medicines effectively

Always follow instructions for taking your medications safely and effectively. By doing so:
• your pain is more likely to be well managed
• you are less likely to need larger doses of medication
• you can reduce your risk of side effects.
Medications for chronic pain are best taken regularly. Talk to your doctor or pharmacist if
your medicines are not working or are causing problems, such as side effects. These are
more likely to occur if you are taking pain medicines for a long time.

It is important to use a variety of strategies to help reduce pain. Do not rely on medicines
alone. People can lower the levels of pain they feel by:

• staying active
• pacing their daily activity so as to avoid pain flares (this involves finding the balance
between under- and overdoing it)

• avoiding pain triggers

• using coping strategies.

Side effects of pain medicines

Some of the side effects of common pain medicines include:

• paracetamol – side effects are rare when taken at the recommended dose and for a
short time. Paracetamol can cause skin rash and liver damage if used in large doses
for a long time

• aspirin – the most common side effects are nausea, vomiting, indigestion and stomach ulcers. Some people may experience more serious side effects such as an asthma attack, tinnitus (ringing in the ears), kidney damage and bleeding

• non-steroidal anti-inflammatory drugs (NSAIDs) – can cause headache, nausea, stomach upset, heartburn, skin rash, tiredness, dizziness, ringing in the ears and raised blood pressure. They can also make heart failure or kidney failure worse, and increase the risk of heart attack, angina, stroke and bleeding. NSAIDs should always be used cautiously and for the shortest time possible.

• opioid pain medicines such as morphine, oxycodone and codeine – commonly cause
drowsiness, confusion, falls, nausea, vomiting and constipation. They can also reduce
physical coordination and balance. Importantly, these medicines can lead to
dependence and slow down breathing, resulting in accidental fatal overdose.

Precautions when taking pain medicines

Treat over-the-counter pain medicines with caution, just like any other medication. It’s
always good to discuss any medication with your doctor or pharmacist.

General suggestions include:


• Don’t self-medicate with pain medicines during pregnancy – some can reach the
fetus through the placenta and potentially cause harm.

• Take care if you are elderly or caring for an older person. Older people have an
increased risk of side effects. For example, taking aspirin regularly for chronic pain
(such as arthritis) can cause a dangerous bleeding stomach ulcer.

• When buying over-the-counter pain medicines, speak with a pharmacist about any
prescription and complementary medicines you are taking so they can help you
choose a pain medicine that is safe for you.

• Don’t take more than one over-the-counter medicine at a time without consulting
your doctor or pharmacist. It is easier than you think to unintentionally take an
overdose. For example, many ‘cold and flu’ medicines contain paracetamol, so it is
important not to take any other paracetamol-containing medicine at the same time.

• See your doctor or healthcare professional for proper treatment for sport injuries.
Don’t use pain medicines to ‘tough it out’.

• Consult your doctor or pharmacist before using any over-the-counter medicine if you
have a chronic (ongoing) physical condition, such as heart disease or diabetes.

Managing pain that cannot be easily relieved

Sometimes pain will persist and cannot be easily relieved. It’s natural to feel worried, sad or
fearful when you are in pain. Here are some suggestions for how to handle persistent pain:

• Focus on improving your day-to-day function, rather than completely stopping the
pain.

• Accept that your pain may not go away and that flare-ups may occur. Talk yourself
through these times.

• Find out as much as you can about your condition so that you don't fret or worry
unnecessarily about the pain.
• Enlist the support of family and friends. Let them know what support you need; find
ways to stay in touch.

• Take steps to prevent or ease depression by any means that work for you, including
talking to friends or professionals.

• Don’t increase your pain medicines without talking to your doctor or pharmacist
first. Increasing your dose may not help your pain and might cause you harm.

• Improve your physical fitness, eat healthy foods and make sure you get all the rest
you need.

• Try not to allow the pain to stop you living your life the way you want to. Try gently
reintroducing activities that you used to enjoy. You may need to cut back on some
activities if pain flare-ups occur, but increase slowly again as you did before.

• Concentrate on finding fun and rewarding activities that don't make your pain
worse.

• Seek advice on new coping strategies and skills from a healthcare professional such
as an occupational therapist or psychologist.

UNIT 4: THEORIES OF CONSTANCIES AND ILLUSIONS; (IN DEPTH).

UNIT 5: CLASSICAL AND MODERN PSYCHOPHYSICS: CLASSICAL PSYCHOPHYSICAL METHODS (IN DETAIL), BRIEF DISCUSSION OF FECHNER’S CONTRIBUTIONS, WEBER’S LAW, STEVENS’ POWER LAW, SIGNAL DETECTION THEORY AND ROC CURVE.

Psychophysics is the study of the quantitative relationship between physical stimuli and the resulting psychological sensations. Psychophysical scaling methods investigate the quantitative relationship between subjective measurements of stimuli and objective measurements on physical scales. In other words, psychophysical scaling methods aim to discover a definite quantitative relation between the physical stimulus and the resulting sensation by manipulating the physical stimulus dimensions.

CLASSICAL PSYCHOPHYSICAL METHODS

Method of Limits

The method of limits is a popular method of determining thresholds. The method was so named by Kraepelin (1891) because a stimulus series ends when the subject reaches the limit at which he or she changes judgement. For computing a threshold by this method, two modes of presenting the stimulus are usually adopted: the increasing mode, called the ascending series, and the decreasing mode, called the descending series. For computing the difference threshold (DL, or difference limen), the comparison stimulus (Co) is varied in small steps across the ascending and descending series, and at each step the subject is required to say whether the Co is smaller (−), equal to (=), or larger (+) than the standard stimulus (St). For computing the absolute threshold (RL, or reiz limen), no St is needed; the subject simply reports whether or not he or she has detected the stimulus presented in the ascending or descending series. In computing both the DL and the RL, the stimulus sequence is varied with a minimal change in magnitude at each presentation; hence Guilford (1954) prefers to call this method the method of minimal changes.

The thresholds found in each series are also called transition points: the point above which the subject changes his response in the ascending series, and below which he changes his response in the descending series. Several alternate ascending and descending series are taken until the experimenter is satisfied with the relative uniformity of the different individual thresholds.

There are chances of variability in the subject’s performance due to variable errors such as changes in motivation, interest, and attention. Besides these variable errors, the threshold may also be affected by two constant errors:

1. Error of habituation
Sometimes called the error of perseverance, it may be defined as a tendency of the subject to go on saying “Yes” in a descending series or “No” in an ascending series.
Consequence: inflates the mean of the ascending series relative to the mean of the descending series.
2. Error of anticipation
Sometimes called the error of expectation, it is the opposite of the error of habituation and may be defined as the tendency to expect a change from “Yes” to “No” in the descending series and from “No” to “Yes” in the ascending series before the change in stimulus is apparent.
Consequence: inflates the mean of the descending series relative to the mean of the ascending series.

The primary purpose of giving ascending and descending series is to cancel out these two
types of constant errors.
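As a minimal sketch with invented transition-point values, the RL computation looks like this; averaging the ascending and descending series together is precisely what cancels the two opposing constant errors.

```python
# Method of limits: RL = mean of the transition points from
# alternating ascending and descending series.
# (Transition-point values below are invented for illustration.)
ascending_transitions = [12.0, 11.5, 12.5]    # intensity where "No" -> "Yes"
descending_transitions = [10.5, 11.0, 10.0]   # intensity where "Yes" -> "No"

all_points = ascending_transitions + descending_transitions
rl = sum(all_points) / len(all_points)

# Averaging both series cancels the opposing constant errors:
# habituation inflates one series' mean, anticipation the other's.
print(f"RL (absolute threshold) = {rl:.2f}")   # 11.25
```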

Method of Constant Stimuli

Also known as the method of frequency and method of right and wrong cases.

In this method a number of fixed or constant stimuli are presented to the subject several
times in a random order. The method of constant stimuli can also be employed for
determining the RL or DL. For determining RL the different values of the stimulus are
presented to the subject in a random order and he has to report each time whether he
perceives or does not perceive the stimulus. Though the different values of stimulus are
presented irregularly, the same values are presented throughout the experiment a large
number of times, usually from 50-200 times each, in a predetermined order unknown to the
subjects. The mean of the reported values of the stimulus becomes the index of RL. The
procedure involved is known as the method of constant stimuli. For calculating DL, in each
presentation the two stimuli (St and Co) are presented to the subject simultaneously or in
succession (Guilford, 1954). On each trial the subject is required to say whether one
stimulus is “greater” or “less” than the other. The procedure involved is known as the
method of constant stimulus differences and not the method of constant stimuli.
This method is also called the method of right and wrong cases because on each trial the subject has to report whether he perceives the stimulus (right) or does not perceive it (wrong).
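One common way to extract the RL from such data is to interpolate the intensity that is detected on 50% of trials; the sketch below uses invented response counts purely for illustration.

```python
# Method of constant stimuli: estimate RL as the intensity detected
# on 50% of trials, by linear interpolation between the two
# intensities that bracket 0.50. (Response counts are invented.)
intensities = [8, 9, 10, 11, 12]          # fixed stimulus values
yes_counts  = [5, 15, 40, 80, 95]         # "perceived" out of 100 trials each
proportions = [y / 100 for y in yes_counts]

for x0, x1, p0, p1 in zip(intensities, intensities[1:],
                          proportions, proportions[1:]):
    if p0 <= 0.5 <= p1:
        # Interpolate within the bracketing interval.
        rl = x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
        break

print(f"RL at 50% detection = {rl:.2f}")   # 10.25
```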

Advantages

• Errors of habituation and expectation are reduced.
• It uses two-category judgements.
• There is no neutral category.
• Results can be represented graphically.

Method of Adjustment

Also known as the method of average error, the method of reproduction, or the method of equivalent stimuli, it is the oldest method of psychophysics.

In this method the subject is provided with an St and a Co. The Co is either greater or lesser in intensity than the St, and the perceiver is required to adjust the Co until it appears equivalent to the St.

The difference between the St and the Co defines the error in each judgement. A large number of such judgements are obtained and their arithmetic mean is calculated; hence the name ‘method of average (or mean) error’. The obtained mean is the value of the PSE (point of subjective equality). The difference between the St and the PSE indicates the presence of a constant error (CE). If the PSE (average adjustment) is larger than the St, the CE is positive and indicates overestimation of the standard stimulus. On the other hand, if the PSE is smaller than the St, the CE is negative and indicates underestimation of the standard stimulus.
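The PSE and CE computations amount to a few lines (the adjustment values below are invented, in the style of a Muller-Lyer experiment):

```python
# Method of adjustment: PSE = mean of the subject's settings of the
# comparison stimulus; CE = PSE - St. (Settings are invented, e.g.
# adjusted line lengths in mm from a Muller-Lyer experiment.)
st = 100.0                                  # standard stimulus
adjustments = [104, 98, 107, 103, 101, 105] # Co settings judged equal to St

pse = sum(adjustments) / len(adjustments)   # point of subjective equality
ce = pse - st                               # constant error

sign = "overestimation" if ce > 0 else "underestimation"
print(f"PSE = {pse:.2f}, CE = {ce:+.2f} ({sign} of the standard)")
```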

This method is most often used for obtaining data in the Muller-Lyer illusion experiment. The two common constant errors that can occur in the method of adjustment and the Muller-Lyer experiment are:
1) Movement error
It occurs when the subject has a certain bias for one of the two movements, i.e.,
inward movement and outward movement, which unduly helps him in making the
feather-headed line equal to the arrow-headed line.
2) Space error
It occurs when the subject has a certain bias for one of the two spaces in the visual
field, viz, right and left, which helps in adjusting the feather-headed line.

MODERN PSYCHOPHYSICAL METHODS

[Refer A. K. Singh, pg no. 311]

Fechner’s Contributions, Weber’s Law, Stevens’ Power Law

[Refer A. K. Singh, pg no. 307- 310]
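For quick reference, the standard textbook forms of the laws named above can be written as follows (a summary of well-known formulas, not a substitute for the referenced pages):

```latex
% Standard forms of the classical psychophysical laws.
\[
\frac{\Delta I}{I} = k
\quad\text{(Weber's law: the JND is a constant fraction of stimulus intensity)}
\]
\[
S = k \log I
\quad\text{(Fechner's law: sensation grows with the logarithm of intensity)}
\]
\[
S = k I^{n}
\quad\text{(Stevens' power law: sensation is a power function of intensity)}
\]
```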

Signal Detection Theory (SDT)

Signal-detection theory was one of the first theories to suggest an interaction between the
physical sensation of a stimulus and cognitive processes such as decision making. In other
words, it is a common method used in psychophysical research which allows researchers to
test human abilities related to differentiating between signal and noise. Signal-detection
theory (SDT) is a framework to explain how people pick out the few important stimuli when
they are embedded in a wealth of irrelevant, distracting stimuli. SDT often is used to
measure sensitivity to a target’s presence. Consider, for example, a lifeguard watching for swimmers in trouble: when we try to detect a target stimulus (signal), there are four possible outcomes.

• First, in hits (also called “true positives”), the lifeguard correctly identifies the
presence of a target (i.e., somebody drowning).
• Second, in false alarms (also called “false positives”), he or she incorrectly identifies
the presence of a target that is actually absent (i.e., the lifeguard thinks somebody is
drowning who actually isn’t).
• Third, in misses (also called “false negatives”), the lifeguard fails to observe the
presence of a target (i.e., the lifeguard does not see the drowning person).
• Fourth, in correct rejections (also called “true negatives”), the lifeguard correctly
identifies the absence of a target (i.e., nobody is drowning, and he or she knows that
nobody is in trouble).

In SDT, threshold is determined by two factors as elaborated below:

a) Sensitivity measure (d′)
It depends on the intensity of the stimuli and the sensitivity of the observer: as stimulus intensity increases, the signal becomes easier to detect and the sensitivity measure increases.
b) Decision-making criterion (β)
It depends on the probability of the stimulus and the pay-off (reward and punishment).

The criterion reflects top-down processing. Catch trials (trials on which no signal is presented) are used to reduce practice effects and to avoid the error of habituation.

Signal detection theory uses a signal detection matrix to explain process and outcomes of
signal detection. Usually, the presence of a target is difficult to detect. Thus, we make
detection judgments based on inconclusive information with some criteria for target
detections. The number of hits is influenced by where you place your criteria for considering
something a hit.

Consider medical screening, for example, where highly sensitive tests produce positive results that lead to further testing. Overall sensitivity to targets must reflect a flexible criterion for declaring the detection of a signal. If the criterion for detection is too high, the doctor will miss illnesses (misses); if the criterion is too low, the doctor will falsely detect illnesses that do not exist (false alarms). Sensitivity is measured in terms of hits minus false alarms. A conservative (strict) criterion reduces false alarms at the cost of more misses, while a liberal criterion increases hits at the cost of more false alarms.
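In practice, sensitivity and criterion are often computed from the hit and false-alarm rates using the standard Gaussian formulas d′ = z(H) − z(FA) and c = −[z(H) + z(FA)]/2. A minimal sketch follows (the rates are invented, and SciPy is assumed to be available):

```python
# Compute SDT sensitivity (d') and criterion (c) from hit and
# false-alarm rates, using the standard Gaussian model.
# (Rates below are invented for illustration.)
from scipy.stats import norm

hit_rate = 0.85          # P("yes" | signal present)
fa_rate = 0.20           # P("yes" | signal absent)

z_hit = norm.ppf(hit_rate)    # inverse-normal (z) transform
z_fa = norm.ppf(fa_rate)

d_prime = z_hit - z_fa                 # sensitivity: separation of the distributions
criterion = -(z_hit + z_fa) / 2        # c > 0 = conservative, c < 0 = liberal

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```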

Signal-detection theory can be discussed in the context of attention, perception, or memory:

• attention—paying enough attention to perceive objects that are there;
• perception—perceiving faint signals that may or may not be beyond your perceptual range (such as a very high-pitched tone);
• memory—indicating whether you have or have not been exposed to a stimulus before, such as whether the word “champagne” appeared on a list that was to be memorized.

Researchers use measures from signal-detection theory to determine an observer’s sensitivity to targets in various tasks.

Receiver Operating Characteristic (ROC) Curve

The ROC curve depicts data from signal detection experiments. It shows the relationship between the probability of a hit (true positive) and the probability of a false alarm (false positive). In this curve, the experimenter manipulates the observer’s criterion by manipulating the pay-off.

Originally developed for detecting enemy airplanes and warships during World War II, the receiver operating characteristic (ROC) has been widely used in the biomedical field since the 1970s in, for example, patient risk-group classification, outcome prediction, and disease diagnosis. Today, it has become the gold standard for evaluating and comparing the performance of classifiers.

A ROC curve is a two-dimensional plot that illustrates how well a classifier system works as
the discrimination cut-off value is changed over the range of the predictor variable. The x
axis or independent variable is the false positive rate for the predictive test. The y axis or
dependent variable is the true positive rate for the predictive test. Each point in ROC space
is a true positive/false positive data pair for a discrimination cut-off value of the predictive
test. If the probability distributions for the true positive and false positive are both known, a
ROC curve can be plotted from the cumulative distribution function. In most real
applications, a data sample will yield a single point in the ROC space for each choice of
discrimination cut-off. A perfect result would be the point (0, 1) indicating 0% false positives
and 100% true positives. The generation of the true positive and false positive rates requires
that we have a gold standard method for identifying true positive and true negative cases.
To better understand a ROC curve, we will need to review the contingency table or
confusion matrix. A confusion matrix (also known as an error matrix) is a contingency table
that is used for describing the performance of a classifier/classification system, when the
truth is known.
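A hedged sketch of building an empirical ROC curve: sweep the discrimination cut-off across the predictor scores, record the (false positive rate, true positive rate) pair at each cut-off, and estimate the area under the curve (AUC) by the trapezoidal rule. The scores and labels below are invented for illustration.

```python
# Empirical ROC curve: sweep cut-offs over predictor scores and
# record (FPR, TPR) pairs; AUC via the trapezoidal rule.
# (Scores and labels are invented for illustration.)
import numpy as np

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])
labels = np.array([1,   1,   0,   1,   0,    1,   0,   0  ])  # 1 = true case

pos, neg = labels.sum(), (1 - labels).sum()
points = [(0.0, 0.0)]
for cut in sorted(scores, reverse=True):
    predicted = scores >= cut
    tpr = (predicted & (labels == 1)).sum() / pos   # hit rate
    fpr = (predicted & (labels == 0)).sum() / neg   # false-alarm rate
    points.append((fpr, tpr))

fprs, tprs = zip(*points)
auc = np.trapz(tprs, fprs)   # area under the empirical curve
print(f"AUC = {auc:.2f}")    # 0.81 for these invented data
```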

Limitations

1. To calculate AUC, sensitivity and specificity values are summarized over all possible cut-
off values, and this can be misleading because only one cut-off value is used in making
predictions.

2. Different study populations might have different patient characteristics; a ROC model
developed using data generated from one population might not be directly transferred to
another population. A training and a validation set approach can be used to evaluate the
performance of a classifier.

3. Depending on disease prevalence and costs associated with misclassification, the optimal
classifier might vary from one situation to another.

4. ROC curves are most useful when the predictors are continuous.
MODULE 4: MEMORY

UNIT 1: ENCODING: THEORIES AND MODELS OF MEMORY: JAMES - TWO STORE MODEL, ATKINSON & SHIFFRIN (3 STORE) - INFORMATION PROCESSING APPROACH, CRAIK, LOCKHART & TULVING - LEVELS OF PROCESSING, ZINCHENKO - LEVELS OF RECALL.

James - Two Store Model

Early interest in a dualistic model of memory began in the late 1890s, when William James distinguished between immediate memory, which he called primary memory, and indirect memory, which he called secondary memory. James based much of his depiction of the structure of memory on introspection, and he viewed secondary memory as the dark repository of information once experienced but no longer easily accessible.

According to James, primary memory, closely related but not identical to what is now called
short-term memory (STM), never left consciousness and gave a faithful rendition of events
just perceived. Secondary memory, or long-term memory (LTM), was conceptualized as
paths, etched into the brain tissue of people but with wide individual differences. For James,
memory was dualistic in character, both transitory and permanent. However, little scientific evidence was presented to distinguish operationally between the two systems. The relationship between primary memory and secondary memory was described by Waugh and Norman (1965). In their early model, an item enters primary memory and then may be held there by rehearsal or may be forgotten. With rehearsal, the item enters secondary memory and becomes part of the permanent memory.

James’s dualistic memory model made good sense from an introspective standpoint. It also
seems valid from the standpoint of the structural and processing features of the brain.
Later, evidence for two memory states would come from physiological studies.
Performance by animals in learning trials is poorer when the trials are followed immediately
by electroconvulsive shock. That this is the case (while earlier learning is unaffected)
suggests that transfer from primary memory to secondary memory may be interfered with
(Weiskrantz, 1966). Furthermore, there is a large body of behavioral evidence— from the
earliest experiments on memory to the most recent reports in the psychological literature—
that supports a dualistic theory. A primacy and recency effect for paired associates was
discovered by Mary Whiton Calkins, a student of William James. When a person learns a
series of items and then recalls them without attempting to keep them in order, the primacy
and recency effect is seen whereby items at the beginning (primacy) of the list and the end
(recency) of the list are recalled best. This effect is consistent with a dual memory concept.

While the primacy-recency effect is pretty robust, there is a notable exception to this called
the von Restorff effect in which a letter in the middle of a list is novel, relative to the other
list items. For example, imagine a list of 20 digits, with the letter A located in the middle of
the list. Most if not all people will remember this middle item. Because primacy and recency
effects had been known for a long time, their incorporation into a two-process model of
memory seemed a logical step. In such a model, information gathered by our sensory
system is rapidly transferred to a primary memory store and is either replaced by other
incoming information or held there by rehearsal. With a lot of other information coming in,
as in list learning, information held in STM is bumped out by new information. Take, for
example, how items from a list might be entered into STM. Since rehearsal is required to
transfer information into LTM, the first items on the list will have more rehearsal time and a
greater opportunity to be transferred. As the middle items from the list come in, they
compete with each other and bump each other out. While items at the end of the list aren’t
rehearsed as long, they are still retained in STM at the time of recall given the recency in
which they were learned. We can trace the storage capacity of STM by identifying the point at which the recency curve begins to emerge. The number of items in that span is rarely larger than eight, thereby lending support to a dual memory model that includes an STM system of limited capacity.
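A toy rehearsal-buffer simulation in the spirit of this dual-store account (the buffer size and transfer probability are invented parameters, not estimates from the literature) reproduces both primacy and recency: items enter a limited STM buffer, each moment in the buffer gives a chance of transfer to LTM, and recall combines LTM with whatever is still in the buffer.

```python
# Toy dual-store simulation of the serial position curve.
# Items enter a limited-capacity STM buffer; each step in the buffer
# gives a chance of copying into LTM; recall = LTM + current buffer.
# (Buffer size and transfer probability are invented parameters.)
import random

def recall_probability(list_len=15, buffer_size=4, p_transfer=0.15, runs=5000):
    recalled = [0] * list_len
    for _ in range(runs):
        buffer, ltm = [], set()
        for item in range(list_len):
            if len(buffer) >= buffer_size:
                buffer.pop(random.randrange(len(buffer)))  # displacement
            buffer.append(item)
            for held in buffer:                 # rehearsal -> LTM transfer
                if random.random() < p_transfer:
                    ltm.add(held)
        for item in ltm | set(buffer):          # recency comes from the buffer
            recalled[item] += 1
    return [r / runs for r in recalled]

for pos, p in enumerate(recall_probability(), 1):
    print(f"{pos:2d}: {'#' * int(p * 40)}")     # U-shaped curve: primacy + recency
```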

Atkinson & Shiffrin (3 Store) - Information Processing Approach

Richard Atkinson and Richard Shiffrin (1968) proposed an alternative model that
conceptualized memory in terms of three memory stores. This modal model of memory,
assumes that information is received, processed, and stored differently for each kind of
memory (Atkinson & Shiffrin, 1968; Waugh & Norman, 1965). The three memory stores are
sensory store, short-term store and long-term store.

• a sensory store, capable of storing relatively limited amounts of information for very
brief periods;
• a short-term store, capable of storing information for somewhat longer periods but
of relatively limited capacity as well; and
• a long-term store, of very large capacity, capable of storing information for very long
periods, perhaps even indefinitely (Richardson-Klavehn & Bjork, 2003).

The model differentiates among structures for holding information, termed stores, and the
information stored in the structures, termed memory. Today, cognitive psychologists
commonly describe the three stores as sensory memory, short-term memory, and long-term
memory. Also, Atkinson and Shiffrin were not suggesting that the three stores are distinct
physiological structures. Rather, the stores are hypothetical constructs— concepts that are
not themselves directly measurable or observable but that serve as mental models for
understanding how a psychological phenomenon works.

Atkinson and Shiffrin argued that memories in short-term memory are fragile, and they
could be lost within about 30 seconds unless they are repeated. In addition, Atkinson and
Shiffrin proposed control processes, or intentional strategies—such as rehearsal—that
people may use to improve their memory (Hassin, 2005; Raaijmakers & Shiffrin, 2002). The
original form of this model focused on the role of short-term memory in learning and
memory. The model did not explore how short-term memory is central when we perform
other cognitive tasks (Roediger et al., 2002). The Atkinson-Shiffrin model played a central
role in the growing appeal of the cognitive approach to psychology.

Atkinson and Shiffrin make an important distinction between the concepts of memory and
memory stores; they use the term “memory” to refer to the data being retained, while
“store” refers to the structural component that contains the information. Simply indicating
how long an item has been retained does not necessarily reveal where it is located in the
structure of memory. In their model, this information in the short-term store can be
transferred to the long-term store, while other information can be held for several minutes
in the short-term store and never enter the long-term store. The short-term store was
regarded as the working system, in which entering information decays and disappears
rapidly. Information in the short-term store may be in a different form than it was originally
(e.g., a word originally read by the visual system can be converted and represented
auditorially). Information contained in the long-term store was envisioned as relatively
permanent, even though it might be inaccessible because of interference of incoming
information. The function of the long-term store was to monitor stimuli in the sensory
register (and thus controlling information entering the short-term store) and to provide
storage space for information in the short-term store.

This Atkinson-Shiffrin model emphasizes the passive storage areas in which memories are
stored; but it also alludes to some control processes that govern the transfer of information
from one store to another.
Craik, Lockhart & Tulving - Levels of Processing

In 1972, Fergus Craik and Robert Lockhart wrote an article about the depth-of-processing approach. This article became one of the most influential publications in the history of research on memory (Roediger, Gallo, & Geraci, 2002). The levels-of-processing approach argues that deep, meaningful kinds of information processing lead to more permanent retention than shallow, sensory kinds of processing. (This theory is also called the depth-of-processing approach.)
relatively accurate when you use a deep level of processing. For instance, you used deep
processing when you considered a word’s meaning (e.g., whether it would fit in a sentence).
The levels-of-processing approach predicts that your recall will be relatively poor when you
use a shallow level of processing. For example, you will be less likely to recall a word when
you considered its physical appearance (e.g., whether it is typed in capital letters) or its
sound (e.g., whether it rhymes with another word).

The fundamental assumption is that retention and coding of information depend on the kind of perceptual analysis done on the material at encoding. In other words, the major hypothesis emerging from Craik and Lockhart’s (1972) paper was that deeper levels of processing should produce better recall. Some kinds of processing, done at a superficial or “shallow” level, do not lead to very good retention; other kinds of “deeper” (more meaningful or semantic) processing improve retention. According to the levels-of-processing
view, improvement in memory comes not from rehearsal and repetition but from greater
depth of analysis of the material. Craik and Tulving (1975) performed a typical levels-of-
processing investigation. Participants were presented with a series of questions about
particular words. Each word was preceded by a question, and participants were asked to
respond to the questions as quickly as possible; no mention was made of memory or
learning. Any learning that is not in accord with the participant’s purpose is called incidental
learning.

In one experiment, three kinds of questions were used. One kind asked the participant
whether the word was printed in capital letters. Another asked if the target word rhymed
with another word. The third kind asked if the word fit into a particular sentence (for
example, “The girl placed the _____ on the table”). The three kinds of questions were meant
to induce different kinds of processing. To answer the first kind of question, you need look
only at the typeface (physical processing). To answer the second, you need to read the word
and think about what it sounds like (acoustic processing). To answer the third, you need to
retrieve and evaluate the word’s meaning (semantic processing). Presumably, the “depth”
of the processing needed is greatest for the third kind of question and least for the first kind
of question. As predicted, Craik and Tulving (1975) found that on a surprise memory test
later, words processed semantically were remembered best, followed by words processed
acoustically. However, the experiment gave rise to an alternative explanation: Participants
spent more time answering questions about sentences than they did questions about
capital letters.
Craik and Tulving (1975) found that people were about three times as likely to recall a word
if they had originally answered questions about its meaning rather than if they had originally
answered questions about the word’s physical appearance. Numerous reviews of the
research conclude that deep processing of verbal material generally produces better recall
than shallow processing (Craik, 1999, 2006; Lockhart, 2001; Roediger & Gallo, 2001).

Deep levels of processing encourage recall because of two factors: distinctiveness and
elaboration. Distinctiveness means that a stimulus is different from other memory traces.
The second factor that operates with deep levels of processing is elaboration, which
requires rich processing in terms of meaning and interconnected concepts (Craik, 1999,
2006; Smith, 2006).

There are no distinct boundaries between one level and the next. The emphasis in this
model is on processing as the key to storage. The level at which information is stored will
depend, in large part, on how it is encoded. Moreover, the deeper the level of processing,
the higher, in general, is the probability that an item may be retrieved (Craik & Brown,
2000).

Other research demonstrates that deep processing also enhances our memory for faces. For
instance, people recognize more photos of faces if they had previously judged whether the
person looked honest, rather than judging a more superficial characteristic, such as the
width of the person’s nose (Bloom & Mudd, 1991; Sporer, 1991). People also recall faces
better if they have been instructed to pay attention to the distinctions between faces
(Mäntylä, 1997).

Craik and Lockhart (1972) viewed memory as a continuum of processes, from the “transient
products of sensory analyses to the highly durable products of semantic . . . operations”.
Baddeley (1978) presented a thorough critique of the levels-of-processing approach. First, he argued that without a more precise and independent definition of
“depth of processing,” the usefulness of the theory was very limited. Second, he reviewed
studies that showed, under certain conditions, greater recall of information processed
acoustically than semantically. Finally, he described ways in which the modal view of
memory could explain the typical levels-of-processing findings.
Nonetheless, the levels-of-processing approach did help to reorient the thinking of memory
researchers, drawing their attention to the importance of the way material is encoded. The
approach has helped cognitive psychologists think about the ways in which people approach
learning tasks. It has reinforced the idea that the more “connections” an item has to other
pieces of information (such as retrieval cues), the easier it will be to remember, a point that
fits nicely with the idea of encoding specificity.

Zinchenko: Levels of Recall

Zinchenko (1962, 1981), a Russian psychologist, held that the deeper the level of processing
encouraged by the question, the higher the level of recall achieved.

UNIT 2: WORKING MEMORY MODELS: BADDELEY & HITCH (DECLARATIVE) & ANDERSON'S ACT* MODEL (PROCEDURAL)

Baddeley & Hitch: Working Memory Model (Declarative)

Working memory can be conceptualized as a type of workbench in which new and old
information are constantly being transformed, combined, and updated. Working memory
challenges the view that STM is simply another “box” in the head—a simple processing
station along the way to either being lost or sent on to LTM. The concept of working
memory also challenges the idea that the capacity of STM is limited to about seven items.
Baddeley argues that the span of memory is determined by the speed with which we
rehearse information. In the case of verbal material, he proposed that we have a
phonological loop that contains the phonological store and articulatory process in which we
can maintain as much information as we can rehearse in a fixed duration (roughly two seconds).
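Baddeley's claim can be expressed as a simple calculation: span is roughly the number of items that can be rehearsed within the loop's decay window of about two seconds. A minimal sketch, with made-up pronunciation times:

```python
# Sketch of the word-length effect: span is roughly the number of words
# that can be rehearsed before the phonological trace decays (~2 s).
LOOP_DURATION_S = 2.0  # approximate decay window of the phonological store

def predicted_span(seconds_per_word: float) -> float:
    """Words that fit into one rehearsal cycle of the loop."""
    return LOOP_DURATION_S / seconds_per_word

print(predicted_span(0.3))   # short words, ~0.3 s each -> about 6.7 items
print(predicted_span(0.75))  # long words, ~0.75 s each -> about 2.7 items
```

This is why we can remember fewer long words than short words: fewer long words fit into one rehearsal cycle.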

Working memory holds only the most recently activated, or conscious, portion of long-term
memory, and it moves these activated elements into and out of brief, temporary memory
storage (Dosher, 2003). Alan Baddeley has suggested an integrative model of memory
(Baddeley, 1990a, 1990b, 2007, 2009). It synthesizes the working-memory model with the
LOP framework. Essentially, he views the LOP framework as an extension of, rather than as
a replacement for, the working-memory model. Alan Baddeley and Graham Hitch (1974)
performed a series of experiments to test the unitary short-term store (STS) assumed by the modal model. The general design was to have
participants temporarily store a number of digits (thus absorbing some of the STS storage
capacity) while simultaneously performing another task, such as reasoning or language
comprehension. These tasks were also thought to require resources from STS—specifically,
the control processes mentioned earlier. The hypothesis was that if the STS capacity is taken
up by stored digits, fewer resources are available for other tasks, so performance on other
tasks suffers.

From the results they concluded that a common system does seem to contribute to
cognitive processes such as temporarily storing information, reasoning, and comprehending
language. Filling up STM with six digits does hurt performance on a variety of cognitive
tasks, suggesting that this system is used in these tasks. However, the memory loads used,
thought to be near the limit of STM capacity, do not totally disrupt performance. Because
researchers think STM has a capacity of about seven items, plus or minus two, the six-digit
memory load should have essentially stopped any other cognitive activity. Baddeley and
Hitch (1974) therefore argued for the existence of what they called working memory (WM).
They see WM as consisting of a limited-capacity “workspace” that can be divided between
storage and control processing.

Baddeley suggested that working memory comprises five elements:

• The visuospatial sketchpad,

The visuospatial sketchpad briefly holds visual images. It is similar to the
phonological loop but is responsible for visual and spatial tasks, which might include
remembering sizes and shapes or the speed and direction of moving objects. The
sketchpad is also involved in the planning of spatial movements such as exiting a
burning building. This sketchpad allows you to look at a complex scene and gather
visual information about objects and landmarks. It also allows you to navigate from
one location to another (Logie & Della Sala, 2005). Incidentally, the visuospatial
sketchpad has been known by a variety of different names, such as visuo-spatial
scratchpad, visuo-spatial working memory, and short-term visual memory (Cornoldi
& Vecchi, 2003; Hollingworth, 2004). The visuospatial sketchpad allows you to store
a coherent picture of both the visual appearance of the objects and their relative
positions in a scene (Cornoldi & Vecchi, 2003; Hollingworth, 2004, 2006; Logie &
Della Sala, 2005). The visuospatial sketchpad also stores visual information that you
encode from verbal stimuli (Baddeley, 2006; Pickering, 2006a). For example, when a
friend tells a story, you may find yourself visualizing the scene. The capacity of the
visuospatial sketchpad is limited.

• The phonological loop,

The phonological loop briefly holds inner speech for verbal comprehension and for
acoustic rehearsal. We use the phonological loop for a number of everyday tasks,
including sounding out new and difficult words and solving word problems. There
are two critical components of this loop. One is phonological storage, which holds
information in memory. The other is subvocal rehearsal, which is used to put the
information into memory in the first place. When subvocal rehearsal is inhibited, the
new information is not stored. This phenomenon is called articulatory suppression.
Articulatory suppression is more pronounced when the information is presented visually rather than aurally (i.e., by hearing). The amount of information that can be
manipulated within the phonological loop is limited. Thus, we can remember fewer
long words compared with short words (Baddeley, 2000b). Without this loop,
acoustic information decays after about 2 seconds. The loop holds only a limited amount of information, and one determinant of that amount is the time needed to vocalize the words. Researchers also
report that the relationship between pronunciation time and recall accuracy holds
true, whether you actually pronounce the words aloud or use subvocalization,
pronouncing the words silently.

• The central executive,

The third element is a central executive, which both coordinates attentional activities and governs responses. The central executive is critical to working memory
because it is the gating mechanism that decides what information to process further
and how to process this information. It decides what resources to allocate to
memory and related tasks, and how to allocate them. It is also involved in higher-
order reasoning and comprehension and is central to human intelligence. The
phonological loop and visuospatial sketchpad are regulated by the central executive,
which coordinates attentional activities and governs responses. The central
executive acts much like a supervisor who decides which issues deserve attention,
which will be ignored, and what to do if systems go awry.

According to the working-memory model, the central executive integrates information from the phonological loop, the visuospatial sketchpad, the episodic
buffer, and from long-term memory. The central executive also plays a major role in
focusing attention, planning strategies, transforming information, and coordinating
behavior (Baddeley, 2001; Reuter-Lorenz & Jonides, 2007). The central executive is
therefore extremely important and complex. However, it is also the least understood
component of working memory (Baddeley, 2006; Bull & Espy, 2006). In addition, the
central executive is responsible for suppressing irrelevant information (Baddeley,
2006; Engle & Conway, 1998; Hasher et al., 2007). In your everyday activities, your
central executive helps you decide what to do next. It also helps you decide what not
to do, so that you do not become sidetracked from your primary goal.

Characteristics

1. The phonological loop and the visuospatial sketchpad both have specialized
storage systems but central executive does not store information.
2. The central executive plays a critical role in the overall functions of working
memory.
3. It decides which issues deserve attention and which should be ignored, selects strategies for tackling a problem, and plays an important role when we try to solve mathematical problems.
4. It cannot make numerous decisions at the same time, and it cannot work
effectively on two simultaneous projects.
• Subsidiary “slave systems,” and

The fourth element is a number of other “subsidiary slave systems” that perform
other cognitive or perceptual tasks (Baddeley, 1989).

• The episodic buffer.


Baddeley (2000) updated his model to include the episodic buffer, a limited-capacity system that binds information from the visuospatial sketchpad and the phonological loop, as well as from long-term memory, into a unitary episodic representation under the control of the central executive. This component integrates
information from different parts of working memory—that is, visual-spatial and
phonological—so that they make sense to us. This incorporation allows us to solve
problems and re-evaluate previous experiences with more recent knowledge.

In other words, the episodic buffer serves as a temporary storehouse where we can
gather and combine information from the phonological loop, the visuospatial
sketchpad, and long-term memory.

As Baddeley (2006) explains, his original theory had proposed that the central
executive plans and coordinates various cognitive activities. However, the theory
had also stated that the central executive did not actually store any information.
Baddeley therefore proposed the episodic buffer as the component of working
memory where auditory, visual, and spatial information can be combined with the
information from long-term memory. This arrangement helps to solve the
theoretical problem of how working memory integrates information from different
modalities (Morrison, 2005).

This episodic buffer actively manipulates information so that you can interpret an
earlier experience, solve new problems, and plan future activities. For instance,
suppose that you are thinking about an unfortunate experience that occurred
yesterday, when you unintentionally said something rude to a friend. You might
review this event and try to figure out whether your friend seemed offended;
naturally, you’ll need to access some information from your long-term memory
about your friend’s customary behavior. You’ll also need to decide whether you do
have a problem, and, if so, how you can plan to resolve the problem.
Because the episodic buffer is new, we do not have details about how it works and
how it differs from the central executive. However, Baddeley (2000a, 2006) proposes
that it has a limited capacity—just as the capacities of the phonological loop and the
visuospatial sketchpad are limited.

Furthermore, this episodic buffer is just a temporary memory system, unlike the
relatively permanent long-term memory system. Some of the material in the
episodic buffer is verbal (e.g., the specific words you used) and some is visuospatial
(e.g., your friend’s facial expression and how far apart you were standing). The
episodic buffer therefore allows you to temporarily store and integrate information
from both the phonological loop and the visuospatial sketchpad (Gathercole et al.,
2006; Styles, 2006; Towse & Hitch, 2007). This episodic buffer allows us to create a
richer, more complex representation of an event. This complex representation can
then be stored in our long-term memory.

Shortly after the working memory model was introduced, researchers concentrated on
finding out more about the phonological loop, the visuospatial sketchpad, and the nature of
the central executive using conventional psychological measures. Lately, however, cognitive
neuroscience measures have been applied to the model with considerable success. Cabeza
and Nyberg (1997) have shown that the phonological loop is related to bilateral activation of
the frontal and parietal lobes as measured by PET scans. And, in a study by Haxby, Ungerleider, Horwitz, Rapoport, and Grady (1995), the visuospatial sketchpad was shown to activate different areas of the cortex. Here it was found that shorter intervals activate the occipital
and right frontal lobes while longer intervals implicate areas of the parietal and left frontal
lobes. Increasingly, observations made possible by brain-imaging technology are being
applied to models of memory, and more and more parts of the puzzle of memory are being
solved.

The revised model acknowledges that information from the systems is integrated.

Anderson’s ACT* Model (Procedural)


Collins and Loftus’s (1975) model has been superseded by more complex theories that attempt to explain broader aspects of general knowledge (Rogers & McClelland, 2004). One of the two major successors is Anderson’s family of ACT theories.

John Anderson and his colleagues at Carnegie Mellon University have constructed a series of network
models called ACT-R (Anderson, 1983, 2000; Anderson & Schooler, 2000; Anderson &
Schunn, 2000; Anderson et al., 2004). ACT-R is an acronym for “Adaptive Control of Thought–Rational”. It attempts to account for all of cognition (Anderson et al., 2005). In his
ACT model, John Anderson synthesized some of the features of serial information-
processing models and some of the features of semantic-network models. In ACT,
procedural knowledge is represented in the form of production systems. Declarative
knowledge is represented in the form of propositional networks.

This theory explains all of cognition including memory, learning, spatial cognition, language,
reasoning, and decision making. The model focuses primarily on declarative knowledge. The
network model devised by Collins and Loftus (1975) focuses on networks for individual
words. Anderson, in contrast, designed a model based on larger units of meaning. According
to Anderson (1990), the meaning of a sentence can be represented by a propositional
network, or pattern of interconnected propositions. Anderson (1985) and his co-authors defined a proposition as the smallest unit of knowledge that can be judged either true or false. According to the model, each of the following three
statements is a proposition:

1. Susan gave a cat to Maria.

2. The cat was white.

3. Maria is the president of the club.

These three propositions can appear by themselves, but they can also be combined into a
sentence, such as the following:

Susan gave a white cat to Maria, who is the president of the club.
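As a hedged sketch (the relation names and role labels below are invented for illustration, not Anderson's own notation), the three propositions might be encoded abstractly as relation–argument structures:

```python
# The three example propositions as abstract (relation, roles) pairs.
# Only meanings and roles are stored -- no specific wording, sounds,
# or visual properties.
propositions = [
    ("give",      {"agent": "Susan", "object": "cat", "recipient": "Maria"}),
    ("white",     {"object": "cat"}),
    ("president", {"person": "Maria", "organization": "club"}),
]

# The same three structures underlie both the three short sentences and
# the single combined sentence; wording is re-created at retrieval.
for relation, roles in propositions:
    print(relation, roles)
```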

Propositions are abstract; they do not represent a specific set of words. Anderson suggests
that each of the concepts in a proposition can be represented by its own individual network.
Anderson’s model of semantic memory makes some additional proposals.

Similar to Collins and Loftus’s (1975) model, the links between nodes become stronger as
they are used more often.

Practice is vitally important in developing more extensive semantic memory (Anderson &
Schooler, 2000).

The model assumes that, at any given moment, as many as ten nodes are represented in
your working memory.

In addition, the model proposes that activation can spread.

Anderson argues that the limited capacity of working memory can restrict the spreading.
Also, if many links are activated simultaneously, each link receives relatively little activation.
As a consequence, this knowledge will be retrieved relatively slowly (Anderson, 2000).
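A minimal sketch of these assumptions, using an invented mini-network drawn from the cat example above: activation spreading from a node is divided among its links, so the more links a node has, the less activation each one receives:

```python
# Toy spreading activation: a node's activation divides equally among
# its links, so a concept with many links passes less activation down
# each one, and its related knowledge is retrieved more slowly.
network = {
    "cat":  ["white", "Maria", "animal", "pet"],  # many links
    "club": ["Maria"],                            # single link
}

def spread(source: str, activation: float = 1.0) -> dict:
    links = network.get(source, [])
    if not links:
        return {}
    share = activation / len(links)  # each link gets an equal share
    return {target: share for target in links}

print(spread("cat"))   # each of 4 neighbors receives 0.25
print(spread("club"))  # the single neighbor receives 1.0
```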

Anderson and his colleagues are currently conducting research, using functional magnetic
resonance imaging to examine how changes in learning are reflected in selected regions of
the cortex and the subcortex (Anderson et al., 2005; Anderson et al., 2004).
Procedural Knowledge within ACT-R

Such knowledge is represented in production systems rather than in semantic networks.


Knowledge representation of procedural skills occurs in three stages: cognitive, associative,
and autonomous (Anderson, 1980).

Our progress through these stages is called proceduralization (Anderson et al., 2004;
Oellinger et al., 2008). Proceduralization is the overall process by which we transform slow,
explicit information about procedures (“knowing that”) into speedy, implicit implementations of procedures (“knowing how”). One means by which we make this
transformation is through composition. During this stage, we construct a single production rule that effectively embraces two or more production rules. It thus
streamlines the number of rules required for executing the procedure. For example,
consider what happens when we learn to drive a standard-shift car. We may compose a
single procedure for what were two separate procedures. One was for pressing down on the
clutch. The other was for applying the brakes when we reach a stop sign.

These multiple processes are combined into the single procedure of driving.
Another aspect of proceduralization is “production tuning.” It involves the two
complementary processes of generalization and discrimination. We learn to generalize
existing rules to apply them to new conditions. For example, we can generalize our use of
the clutch, the brakes, and the accelerator to a variety of standard-shift cars.

Finally, we learn to discriminate new criteria for meeting the conditions we face. For
example, if we drive a car with a different number of gears or with different positions for
the reverse gear, we must discriminate the relevant information about the new gear
positions from the irrelevant information about the old gear positions.
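A hedged sketch of composition in code (the rule format, condition strings, and function below are invented for illustration and are not Anderson's formal notation):

```python
# Each production rule pairs a condition with a list of actions.
# Composition collapses two rules that share a condition into one
# rule, streamlining execution of the procedure.
rule_clutch = ("at stop sign", ["press clutch"])
rule_brake  = ("at stop sign", ["apply brakes"])

def compose(rule_a, rule_b):
    condition_a, actions_a = rule_a
    condition_b, actions_b = rule_b
    assert condition_a == condition_b, "this sketch assumes a shared condition"
    return (condition_a, actions_a + actions_b)

print(compose(rule_clutch, rule_brake))
# -> ('at stop sign', ['press clutch', 'apply brakes'])
```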

An alternative approach to understanding knowledge representation in humans has been to study the human brain itself. Much of the research in psychobiology has offered evidence
that many operations of the human brain do not seem to process information step-by-step,
bit-by-bit. Rather, the human brain seems to engage in multiple processes simultaneously. It
acts on myriad bits of knowledge all at once. Such models do not necessarily contradict
step-by-step models. First, people seem likely to use both serial and parallel processing.
Second, different kinds of processes may be occurring at different levels. Thus, our brains
may be processing multiple pieces of information simultaneously. They combine into each
of the steps of which we are aware when we process information step by step.

UNIT 3: STORAGE: LONG-TERM MEMORY: FEATURES AND DISTINCTIONS OF: EPISODIC AND SEMANTIC MEMORY, DECLARATIVE AND PROCEDURAL MEMORY, IMPLICIT AND EXPLICIT MEMORY, AUTOBIOGRAPHICAL MEMORY, PROSPECTIVE MEMORY, FLASH BULB MEMORY.

Long-Term Memory

• Long-term memory has a large capacity; it contains our memory for experiences and
information that we have accumulated over a lifetime.
• PET studies show that the frontal area of the brain is involved in deep processing of
information, such as determining whether a word describes a living or non-living
thing.
• Some brain regions are essential in the formation of memories. These regions
include the hippocampus and the adjacent cortex and thalamus, as indicated
through the study of clinical patients who suffer damage in these areas. However,
the hippocampus itself does not provide permanent long-term storage of memories.
• Many permanent long-term memories are stored and processed in the cerebral
cortex. It is well established that sensory information is passed along to specific brain
regions. Information from the eyes and ears, for example, is passed to the visual
cortex and auditory cortex, respectively. It is likely that long-term memories for
these types of sensory experiences are also stored in or near these areas.

[Refer BSc Semester 2 notes]

Note:

The term permastore refers to the very long-term storage of information, such as
knowledge of a foreign language (Bahrick, 1984a, 1984b; Bahrick et al., 1993) and of
mathematics (Bahrick & Hall, 1991).

Features and Distinctions of: Episodic and Semantic Memory


Endel Tulving (1972) proposed two types of explicit memory, viz., semantic and episodic
memory. Semantic memory holds information that has entered our general knowledge
base. Information recalled here is generic in nature: semantic memory stores general information about language and the world, and its organization is based on meanings and meaning relationships among different pieces of information. For example:
arithmetic facts, historical dates and past tense forms of various verbs.

It is a mental thesaurus, organized knowledge a person possesses about words and other
verbal symbols, their meaning and referents, about relations among them, and about rules,
formulas, and algorithms for the manipulation of these symbols, concepts, and relations.
Semantic memory does not register perceptible properties of inputs, but rather cognitive
referents of input signals. (Tulving, 1993; p. 217)

Semantic memory influences most of our cognitive activities. It includes lexical or language
knowledge (e.g., “The word justice is related to the word equality”). In addition, semantic
memory includes conceptual knowledge (e.g., “A square has four sides”). Categories and
concepts are essential components of semantic memory.

A category is a set of objects that belong together. For example, the category called “fruit”
represents a certain category of food items; your cognitive system treats these objects as
being equivalent (Markman & Ross, 2003). Psychologists use the term concept to refer to
our mental representations of a category (Wisniewski, 2002). For instance, you have a
concept of “fruit,” which refers to your mental representation of the objects in that
category.

Semantic memory allows us to code the objects we encounter. Even though the objects are not identical, you can group together a wide variety of similar objects by using a single, one-word concept (Milton & Wills, 2004; Wisniewski, 2002; Yamauchi, 2005). This
coding process greatly reduces the storage space, because many objects can all be stored
with the same label (Sternberg & Ben-Zeev, 2001).

Concepts also allow us to make numerous inferences when we encounter new examples from a category. Semantic memory allows us to combine similar objects into a single concept.
There are four approaches to the process of semantic memory. They include the feature
comparison model, the prototype approach, the exemplar approach, and network models.
Most theorists in the area of semantic memory believe that each model may be at least
partly correct. The present report deals with the hierarchical or network model of semantic
memory.

Episodic memory focuses on your memories for events that happened to you; it allows you
to travel backward in subjective time to reminisce about earlier episodes in your life.
Episodic memory includes your memory for an event that occurred ten years ago, as well as
a conversation you had 10 minutes ago. In other words, episodic memory stores personally
experienced events or episodes (information about our personal experiences). According to
Tulving, we use episodic memory when we learn lists of words or when we need to recall
something that occurred to us at a particular time or in a particular context. In either case,
we have personally experienced the learning as associated with a given time. Episodic
memory is merely a specialized form of semantic memory (Tulving, 1984, 1986).

Episodic memory has also been described as containing memories that are temporally
dated; the information stored has some sort of marker for when it was originally
encountered. Tulving (1972, 1983, 1989) described episodic and semantic memory as
memory systems that operate on different principles and hold onto different kinds of
information.

Distinction between semantic and episodic memory:

Semantic Memory
• Stores general knowledge.
• According to the HERA (hemispheric encoding/retrieval asymmetry) model, there is greater activation in the left prefrontal hemisphere for tasks requiring retrieval from semantic memory (posterior cerebral cortex) (Tulving, 1989).
• Stable and permanent.
• Organization of information is based on meaning (semantics).

Episodic Memory
• Stores personally experienced events or objects.
• According to the HERA model, there is greater activation in the right prefrontal hemisphere for episodic retrieval tasks (anterior cerebral cortex).
• Dynamic and susceptible to forgetting.
• Organization of information is temporal.

Procedural memory, the lowest form of memory, retains connections between stimuli and
responses and is comparable to what Oakley (1981) referred to as associative memory.
Semantic memory has the additional capability of representing internal events that are not
present, while episodic memory allows the additional capability of acquiring and retaining
knowledge of personally experienced events.

A neuroscientific model called HERA (hemispheric encoding/retrieval asymmetry) attempts to account for differences in hemispheric activation for semantic versus episodic memories.
According to this model, there is greater activation in the left than in the right prefrontal
hemisphere for tasks requiring retrieval from semantic memory (Nyberg, Cabeza, & Tulving,
1996; Tulving et al., 1994). In contrast, there is more activation in the right than in the left
prefrontal hemisphere for episodic retrieval tasks. This model, then, proposes that semantic
and episodic memories must be distinct because they draw on separate areas of the brain.
For example, if one is asked to generate verbs that are associated with nouns (e.g., “drive” with “car”), this task requires semantic memory. It results in greater left-hemispheric activation (Nyberg, Cabeza, & Tulving, 1996). In contrast, if people are asked to freely recall a list of
words—an episodic-memory task—they show more right hemispheric activation. Some
recent fMRI and ERP studies have not found the predicted frontal asymmetries during
encoding and retrieval (Berryhill et al., 2007; Evans & Federmeier, 2009).

Features and Distinctions of: Declarative and Procedural Memory

Declarative memory is also known as explicit memory. Declarative memory contains knowledge, facts, information, ideas—basically, anything that can be recalled and described in words, pictures, or symbols. Anderson (1983) believed that declarative memory stores
information in networks that contain nodes. There are different types of nodes, including
those corresponding to spatial images or to abstract propositions.
In the ACT models, working memory is actually that part of declarative memory that is very
highly activated at any particular moment. The production rules also become activated
when the nodes in the declarative memory that correspond to the conditions of the
relevant production rules are activated. When production rules are executed, they can
create new nodes within declarative memory. Thus, ACT models have been described as
very “activation based” models of human cognition (Luger, 1994).

Disruption in the hippocampus appears to result in deficits in declarative memory (i.e., memory for pieces of information), but it does not result in deficits in procedural memory (i.e., memory for courses of action) (Rockland, 2000).

Declarative memory also may be considered a relatively recent phenomenon. At the same
time, other memory structures may be responsible for nondeclarative forms of memory.

Entrance into long-term declarative memory may occur through a variety of processes. One
method of accomplishing this goal is by deliberately attending to information to
comprehend it. Another is by making connections or associations between the new
information and what we already know and understand. We make connections by
integrating the new data into our existing schemas of stored information. This process of
integrating new information into stored information is called consolidation. In humans, the
process of consolidating declarative information into memory can continue for many years
after the initial experience (Squire, 1986). When you learn about someone or something, for
example, you often integrate new information into your knowledge a long time after you
have acquired that knowledge. For example, you may have met a friend many years ago and
started organizing that knowledge at that time. But you still acquire new information about
that friend—sometimes surprising information—and continue to integrate this new
information into your knowledge base.

In Anderson’s view, episodic and semantic information is included in declarative memory.


Declarative representation of knowledge comes into the system in chunks, or cognitive
units, comprising such things as propositions (such as, “Beth loves Boris”), strings (such as,
“one, two, three”), or even spatial images (“A circle is above the square”). From these basic
elements new information is stored in declarative memory by means of working memory.
The retrieval of information from declarative memory into working memory resembles the
calling up of information from the permanent memory of a computer—data stored on a
hard disk in a computer are temporarily held for processing in a working memory.

In contrast, procedural memory holds information concerning action and sequences of actions. It is a type of implicit memory or non-declarative memory. For example, when you ride a bicycle, swim, or swing a golf club, you are thought to be drawing on your procedural memory.

In the ACT model Anderson also posited the existence of procedural memory in individuals.
Procedural memory, the lowest form of memory, retains connections between stimuli and
responses and is comparable to what Oakley (1981) referred to as associative memory. The
Tower of Hanoi puzzle, for example, can be solved by drawing on procedural memory.

Research in the area of memory consolidation has shown that people who learned tasks
based on declarative memory (paired associates) or procedural memory (mirror tracing)
showed increased memory of the tasks if they slept during the retention interval (as
opposed to being awake during the retention interval).

The concept of production memory (memory for production rules, as in ACT) lies very close to procedural memory. Procedural
memory, or memory for processes, can be tested in implicit-memory tasks as well. Many of
the activities that we do every day fall under the purview of procedural memory; these can
range from brushing your teeth to writing.

In the laboratory, procedural memory is sometimes examined with the rotary pursuit task
(Gonzalez, 2008). The rotary pursuit task requires participants to maintain
contact between an L-shaped stylus and a small rotating disk (Costello, 1967). The disk is
generally the size of a nickel, less than an inch in diameter. This disk is placed on a quickly
rotating platform. The participant must track the small disk with the wand as it quickly spins
around on a platform. After learning with a specific disk and speed of rotation, participants
are asked to complete the task again, either with the same disk and the same speed or with
a new disk or speed. Verdolini-Marston and Balota (1994) noted that when a new disk or
speed is used, participants do relatively poorly. But with the same disk and speed,
participants do as well as they had after learning the task, even if they do not remember
previously completing the task.

Another task used to examine procedural memory is mirror tracing. In the mirror-tracing
task, a plate with the outline of a shape drawn on it is put behind a barrier where it cannot
be seen. Beyond the barrier in the participant’s line of sight is a mirror. When the
participant reaches around the barrier, his or her hand and the plate with the shape are
within view. Participants then take a stylus and trace the outline of the shape drawn on the
plate. When first learning this task, participants have difficulty staying on the shape.
Typically, there are many points at which the stylus leaves the outline. Moreover, it takes a
relatively long time to trace the entire shape. With practice, however, participants become
quite efficient and accurate with this task. Participants’ retention of this skill gives us a way
to study procedural memory (Rodrigue, Kennedy, & Raz, 2005). The mirror-tracing task is
also used to study the impact of sleep on procedural memory.

Connectionist models effectively explain priming effects, skill learning (procedural memory),
and several other phenomena of memory. Procedural memory is a type of non-declarative
memory.

Declarative Memory
• A memory system thought to contain knowledge, facts, information, ideas—anything that can be recalled and described in words, pictures, or symbols.
• Explicitly represented and consciously accessible (Su, Merrill, & Peterson, 2001).
• Stored in the cerebral cortex.
• Non-REM sleep aids declarative memory.
• Also known as explicit memory.
• Consists of information and knowledge of the world, such as the name of a favorite aunt, the location of the nearest pizza parlor, and the meaning of words, plus a vast amount of other information.

Procedural Memory
• A memory system thought to contain information concerning action and sequences of actions—for example, one’s knowledge of how to ride a bicycle or swing a golf club.
• Implicitly represented and not consciously accessible (Su, Merrill, & Peterson, 2001).
• Stored in the cerebellum.
• REM sleep aids procedural memory.
• Part of implicit memory.
• Deals with motor skills, such as handwriting, typing skill, and (probably) our ability to ride a bicycle.

Features and Distinctions of: Implicit and Explicit Memory

Explicit memories are things that are consciously recollected. For example, in recalling your
last vacation, you explicitly refer to a specific time (say, last summer) and a specific event or
series of events. In other words, explicit memory relies largely on the retrieval of conscious
experiences and is cued using recognition and recall tasks. On an explicit memory task, the
researcher directly instructs participants to remember information; the participants are
conscious that their memory is being tested, and the test requires them to intentionally
retrieve some information they previously learned (Roediger & Amir, 2005). Semantic and
episodic memory are two types of explicit memory.

The most common explicit memory tests are recall and recognition. Research demonstrates
that people with anterograde amnesia often recall almost nothing on tests of explicit
memory such as recall or recognition. Explicit memory is typically impaired in amnesia
(amnesia is, in essence, a severe loss of explicit memory). In addition, the hippocampus and some related
nearby cerebral structures appear to be important for explicit memory of experiences and
other declarative information. The hippocampus also seems to play a key role in the
encoding of declarative information (Manns & Eichenbaum, 2006; Thompson, 2000).

Implicit memory, by contrast, is memory that is not deliberate or conscious but shows
evidence of prior learning and storage. In other words, implicit memory is expressed in the
form of facilitating performance and does not require conscious recollection. Procedural
and emotional memory are two types of implicit memory. Schacter (1996) poetically
described implicit memory as “a subterranean world of nonconscious memory and
perception, normally concealed from the conscious mind”. Laboratory work on implicit
memory has been mainly concerned with a phenomenon known as repetition priming.
Repetition priming is priming of a somewhat different sort: facilitation of the cognitive
processing of information after a recent exposure to that same information (Schacter, 1987,
p. 506). For example, participants might be given a very brief exposure (of 30 milliseconds or
less) to a word (such as button) and soon afterward be given a new word completion task
(for example, “Fill in the blanks to create the English word that comes to mind: _U _T O_”).

Another method to examine implicit memory is by measuring tasks involving procedural knowledge. Procedural memory, or memory for processes, can be tested in implicit-memory
tasks as well. Examples of procedural memory include the procedures involved in riding a
bike or driving a car. In the laboratory, procedural memory is sometimes examined with the
rotary pursuit task (Gonzalez, 2008). Another task used to examine procedural memory is
mirror tracing.

The Process Dissociation Framework

Jacoby and his colleagues (Hay & Jacoby, 1996; Jacoby, 1991, 1998; Toth, Lindsay, & Jacoby,
1992; Toth, Reingold, & Jacoby, 1994) took issue with the idea that implicit memory and
explicit memory represent two distinct memory systems and argued for what they called the
process dissociation framework. Jacoby (1991) preferred to think about memory tasks as
calling on two different processes: intentional and automatic ones.

Performance on direct [that is, explicit] tests of memory typically requires that people
intentionally recollect a past episode, whereas facilitation on indirect [implicit] tests of
memory is not necessarily accompanied by either intention to remember or awareness of
doing so. This difference between the two types of test can be described in terms of the
contrast between consciously controlled and automatic processing.

The model assumes that implicit and explicit memory both contribute to virtually every response. Thus, a single task, analyzed appropriately, can yield estimates of both processes.
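In practice, Jacoby's framework is applied by comparing an "inclusion" condition, in which intentional and automatic processes cooperate, with an "exclusion" condition, in which they oppose each other. A minimal sketch of the standard estimation equations, with made-up performance figures:

```python
# Process dissociation estimates. Standard equations:
#   P(inclusion) = R + A * (1 - R)   # both processes help
#   P(exclusion) = A * (1 - R)       # automatic influence alone errs
# Solving: R = inclusion - exclusion, and A = exclusion / (1 - R).

def estimate(inclusion: float, exclusion: float) -> tuple:
    recollection = inclusion - exclusion
    automatic = exclusion / (1 - recollection) if recollection < 1 else 0.0
    return recollection, automatic

# Hypothetical performance proportions:
R, A = estimate(inclusion=0.60, exclusion=0.20)
print(f"recollection R = {R:.2f}, automatic influence A = {A:.2f}")
# -> recollection R = 0.40, automatic influence A = 0.33
```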

Explicit Memory
• Memory recovery or recognition based on conscious search processes, as one might use in answering a direct question.
• Conscious activation.
• Explicit memory decreases with age; there are differences in explicit memory over the life span. Infants and older adults often tend to have relatively poor explicit memory.
• Also known as declarative memory.
• Explicit memory tasks require conceptual processing (in other words, drawing on information in memory and the knowledge base).
• Two types: episodic and semantic memory.

Implicit Memory
• A type of memory retrieval in which recall is enhanced by the presentation of a cue or prime, despite having no conscious awareness of the connection between the prime and the to-be-recalled item.
• Non-conscious activation.
• Implicit memory does not decrease with age and does not show the same changes; older adults show implicit memory comparable to that of young adults.
• Also known as non-declarative memory (relies on procedural memory).
• Implicit memory tasks require perceptual processing (that is, interpreting sensory information in a meaningful way).
• Two types: procedural and emotional memory.
Autobiographical Memory, Prospective Memory, Flash Bulb Memory

Autobiographical and flashbulb memory [Refer BSc Semester 2 notes]

Prospective memory is memory for things we need to do or remember in the future. For
example, we may need to remember to call someone, to buy cereal at the supermarket, or
to finish a homework assignment due the next day. We use a number of strategies to
improve prospective memory. Examples are keeping a to-do list, asking someone to remind
us to do something, or tying a string around our finger to remind us that we need to do
something. Research suggests that having to do something regularly on a certain day does
not necessarily improve prospective memory for doing that thing. However, being
monetarily reinforced for doing the thing does tend to improve prospective memory
(Meacham, 1982; Meacham & Singer, 1977).

A prospective-memory task has two components. First, you must establish that you intend
to accomplish a particular task at some future time. Second, at that future time, you must
fulfill your intention (Einstein & McDaniel, 2004; Marsh et al., 1998; McDaniel & Einstein,
2000, 2007). According to surveys, people say that they are more likely to forget a
prospective memory task than any other memory task (Einstein & McDaniel, 2004).
Occasionally, the primary challenge is to remember the content of the action (Schaefer &
Laing, 2000). However, most of the time, the primary challenge is simply to remember to
perform an action in the future (McDaniel & Einstein, 2007).

Prospective memory, like retrospective memory, is subject to decline as we age. Over the
years, we retain more of our prospective memory than of our retrospective memory. This
retention is likely the result of the use of the external cues and strategies that can be used
to bolster prospective memory. In the laboratory, older adults show a decline in prospective
memory; however, outside the laboratory they show better performance than young adults.
This difference may be due to greater reliance on strategies to aid in remembering as we
age (Henry et al., 2004).

Most of the research on prospective memory is reasonably high in ecological validity. One
intriguing component of prospective memory is absentmindedness. Most people do not
publicly reveal their absentminded mistakes. You may therefore think that you are the only
person who forgets to pick up a quart of milk on your way home from school, who dials
Chris’s phone number when you want to speak to Alex, or who fails to include an important
attachment when sending an e-mail.

One problem is that the typical prospective-memory task represents a divided attention
situation. You must focus on your ongoing activity, as well as on the task you need to
remember in the future (Marsh et al., 2000; McDaniel & Einstein, 2000). Absentminded
behavior is especially likely when the intended action causes you to disrupt a customary
schema. That is, you have a customary schema or habit that you usually perform, which is
Action A (for example, driving from your college to your home). You also have a prospective-
memory task that you must perform on this specific occasion, which is Action B (for
example, stopping at the grocery store). In cases like this, your longstanding habit
dominates the more fragile prospective memory, and you fall victim to absentminded
behavior (Hay & Jacoby, 1996).

Prospective-memory errors are more likely in highly familiar surroundings when you are
performing tasks automatically (Schacter, 2001). Errors are also more likely if you are
preoccupied or distracted, or if you are feeling time pressure. In most cases,
absentmindedness is simply irritating. However, sometimes these slips can produce airplane
collisions, industrial accidents, and other disasters that influence the lives of hundreds of
individuals (Finstad et al., 2006).

Methods to Improve Prospective Memory

1. External memory aids are especially helpful on prospective-memory tasks (McDaniel & Einstein, 2007). An external memory aid is defined as any device, external to
yourself, that facilitates your memory in some way (Herrmann et al., 2002). Some
examples of external memory aids include a shopping list, a rubber band around
your wrist, asking someone else to remind you to do something, and the ring of an
alarm clock, to remind you to make an important phone call.
2. Informal external mnemonics also aid prospective memory: for example, placing a book we want to bring to class where we will have to confront it on the way out, placing letters to be mailed in a conspicuous position on the dashboard of the car, or using coloured sticky notes in the room.
3. Forming a vivid, interactive mental image of the action or thing to be remembered.
For example, a vivid, interactive mental image of a quart of milk might help you
avoid driving past the grocery store in an absentminded fashion (Einstein &
McDaniel, 2004).

Prospective Memory
• Memory for things we need to do or remember in the future.
• The most common memory lapses.
• Planning plays a major role.

Retrospective Memory
• Memory system that recalls information learned in the past.
• Memory lapses are not as common as in prospective memory.
• Remembering plays a major role.

Memory is more accurate for both kinds of memory tasks if you use both distinctive
encoding and effective retrieval cues. Furthermore, both kinds of memory are less accurate
when you have a long delay, filled with irrelevant activities, prior to retrieval (Einstein &
McDaniel, 2004; Roediger, 1996). Finally, prospective memory relies on regions of the
frontal lobe that also play a role in retrospective memory (Einstein & McDaniel, 2004; West
et al., 2000).

Note: In a cross-sectional study, Lars-Göran Nilsson (2003) found that short-term memory, semantic memory, and procedural memory performance was not related to normal aging; however, a
decrease in episodic memory was reported.

UNIT 4: RETRIEVAL: RECALL, RECOGNITION, RECONSTRUCTION, CONFABULATION, ILLUSORY MEMORY, MEMORY AS AN ACTIVE PROCESS, RELIABILITY OF EYE WITNESS TESTIMONY.

Recall, Recognition and Reconstruction

[Refer BSc Semester 2 notes and General Psychology books]

Confabulation
People with damage to their frontal lobes often engage in a process called confabulation,
which involves making outlandish false statements. One characteristic of confabulation is
that the person believes that even the most impossible-sounding statements are true. It has
been suggested that this may tell us something about the role of the frontal lobes in normal
memory. In other words, confabulation is the problem of failure to adequately check and
validly decide whether something is a genuine memory or an invention shows up
particularly clearly in the case of confabulation, sometimes shown in patients with frontal
lobe damage, whereby they come up with a fantastic and totally false recollection.

In one case for example, a patient “recollected” writing a letter to his aunt announcing the
death of his brother, who in fact was still alive and visited him regularly. When confronted
with this apparent paradox the patient decided that he had in fact had two brothers with
the same name, one of whom had died (Baddeley & Wilson, 1986). Such confabulations are
often held with great conviction, despite their implausibility. The same patient on one
occasion woke up and turned to his wife, asking her, “Why do you keep telling people we are married?” “We are married,” she replied. “We have three children!” “That does not necessarily mean that we are married,” retorted her husband. She then proceeded to show
him the wedding photographs to which he responded, “That chap does look like me, but it
isn’t me, because I’m not married.”

Illusory Memory

Memory as an Active Process

Reliability of Eye Witness Testimony

The legal testimony of an eyewitness tends to be very convincing to a jury, but is often
highly unreliable, despite the confidence of the witness. This is particularly problematic
when detailed recall is required, as we often do not retain detail, even of objects or events
that we encounter many times. Recall is also readily distorted by leading questions, or by
false information introduced during the process of cross examination. In recent years,
largely due to the initial work of Elizabeth Loftus, there has been a much wider recognition
of these problems, and improved interview techniques are continually being developed.
Face recognition is a particularly crucial aspect of legal psychology; even when a subject has
seen and remembered a face clearly, it is difficult to convey the information. A number of
techniques for constructing representations of faces have been developed, but their value is
still limited, and the potential for false identification remains high. Line-ups form an
important part of the criminal procedure. They themselves are readily open to manipulation
and error, but again some of the more blatant mistakes can be avoided by carefully
following appropriate procedures.

There are serious potential problems of wrongful conviction when using eyewitness
testimony as the sole, or even the primary, basis for convicting accused people of crimes
(Loftus & Ketcham, 1991; Loftus, Miller, & Burns, 1978; Wells & Loftus, 1984). Moreover,
eyewitness testimony is often a powerful determinant of whether a jury will convict an
accused person. The effect is particularly pronounced if eyewitnesses appear highly
confident of their testimony. This is true even if the eyewitnesses can provide few
perceptual details or offer apparently conflicting responses. People sometimes even think
they remember things simply because they have imagined or thought about them (Garry &
Loftus, 1994). It has been estimated that as many as 10,000 people per year may be
convicted wrongfully on the basis of mistaken eyewitness testimony (Cutler & Penrod, 1995;
Loftus & Ketcham, 1991). In general, people are remarkably susceptible to mistakes in
eyewitness testimony. They are generally prone to imagine that they have seen things they
have not seen (Loftus, 1998).

Some of the strongest evidence for the constructive nature of memory has been obtained
by those who have studied the validity of eyewitness testimony. In a now-classic study,
participants saw a series of 30 slides in which a red Datsun drove down a street, stopped at
a stop sign, turned right, and then appeared to knock down a pedestrian crossing at a
crosswalk (Loftus, Miller, & Burns, 1978). Afterwards, participants were asked a series of 20
questions, one of which referred either to correct information (the stop sign) or incorrect
information (a yield sign instead of the stop sign). In other words, the information in the
question given this second group was inconsistent with what the participants had seen.
Later, after engaging in an unrelated activity, all participants were shown two slides and
asked which they had seen. One had a stop sign, the other had a yield sign. Accuracy on this
task was 34 percentage points higher for participants who had received the consistent question (stop sign
question) than for participants who had received the inconsistent question (yield sign
question).

Loftus’ eyewitness testimony experiment and other experiments (e.g., Loftus, 1975, 1977)
have shown people’s great susceptibility to distortion in eyewitness accounts.

[Refer Matlin, pg no: 151- 159]

UNIT 5: FORGETTING: DETAILED DISCUSSION OF: INTERFERENCE, DECAY, ORGANIC/BIOLOGICAL CAUSES, ENCODING FAILURE, FAILURE OF RECONSTRUCTION, MOTIVATED FORGETTING

[Refer BSc Semester 2 notes & Themes and Variations, pg no: 239- 244]
MODULE 5: COGNITION

UNIT 1: ELEMENTS OF THOUGHT: CONCEPTS, PROPOSITIONS, MENTAL IMAGERY. BRIEF DISCUSSION OF VARIOUS THEORIES OF CONCEPT FORMATION AND MENTAL IMAGERY (ANALOG AND PROPOSITIONAL CODING)

A concept is an idea about something that provides a means of understanding the world. Medin
(1989) defined a concept as “an idea that includes all that is characteristically associated
with it”. In other words, a concept is a mental representation of some object, event, or
pattern that has stored in it much of the knowledge typically thought relevant to that
object, event, or pattern. It is the fundamental unit of symbolic knowledge (knowledge of
correspondence between symbols and their meaning, for example, that the symbol “3”
means three). Often, a concept may be captured in a single word, such as apple. Each
concept in turn relates to other concepts, such as apple, which relates to redness,
roundness, or fruit. Concepts are dynamic.

Concepts help us establish order in our knowledge base (Medin & Smith, 1984). Concepts
also allow us to categorize, giving us mental “buckets” in which to sort the things we
encounter, letting us treat new, never-before-encountered things in the same way we treat
familiar things that we perceive to be in the same set (Neisser, 1987).

Concepts appear to have a basic level (sometimes termed a natural level) of specificity, a
level within a hierarchy that is preferred to other levels (Medin, Proffitt, & Schwartz, 2000;
Rosch, 1978). Suppose I show you a red, roundish edible object that has a stem and that
came from a tree. You might characterize it as a fruit, an apple, a delicious apple, a Red
Delicious apple, and so on. Most people, however, would characterize the object as an
apple. The basic, preferred level is apple. In general, the basic level is neither the most
abstract nor the most specific. This basic level can be manipulated by context or expertise
(Tanaka & Taylor, 1991).
Concepts are also used in other areas, like computer science. Developers try to devise algorithms that define “spam” so that email programs can filter out unwanted messages and your mailbox is not flooded with them.
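As a toy illustration of defining such a category by features (the marker phrases and rule below are invented, and real filters are statistical rather than rule-based):

```python
# A toy, rule-based definition of the "spam" category; real filters
# are probabilistic, but the idea of defining category membership by
# features is the same. Marker phrases are invented for illustration.
SPAM_MARKERS = {"free money", "click now", "you are a winner"}

def is_spam(message: str) -> bool:
    text = message.lower()
    return any(marker in text for marker in SPAM_MARKERS)

print(is_spam("You are a WINNER! Click now."))      # True
print(is_spam("Lab meeting moved to 3 pm today."))  # False
```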

Classical View of Concepts

The classical view of concepts was the dominant view in psychology up until the 1970s and
dates back to Aristotle (Smith & Medin, 1981). This proposal is organized around the belief
that all examples or instances of a concept share fundamental characteristics, or features
(Medin, 1989). In particular, the classical view of concepts holds that the features
represented are individually necessary and collectively sufficient (Medin, 1989). To say a
feature is individually necessary is to say that each example must have the feature if it is to
be regarded as a member of the concept. For example, “has three sides” is a necessary
feature of the concept triangle; things that do not have three sides are automatically
disqualified from being triangles. To say that a set of features is collectively sufficient is to
say that anything with each feature in the set is automatically an instance of the concept.
For example, the set of features “has three sides” and “closed, geometric figure” is sufficient
to specify a triangle; anything that has both is a triangle.
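This necessary-and-sufficient test can be written directly as a set-inclusion check; a minimal sketch with simplified feature labels:

```python
# Classical view: an item belongs to the category "triangle" if and
# only if it has every defining feature (each individually necessary,
# all collectively sufficient).
TRIANGLE_FEATURES = {"has three sides", "closed geometric figure"}

def is_triangle(item_features: set) -> bool:
    return TRIANGLE_FEATURES <= item_features  # subset test

print(is_triangle({"has three sides", "closed geometric figure", "red"}))
# -> True: extra features do not matter
print(is_triangle({"has three sides"}))
# -> False: lacks a necessary feature
```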

The classical view of concepts has several implications. First, it assumes that concepts
mentally represent lists of features. That is, concepts are not representations of specific
examples but rather abstractions containing information about properties and
characteristics that all examples must have. Second, it assumes that membership in a
category is clear-cut: Either something has all the necessary and sufficient features (in which
case it is a member of the category), or it lacks one or more of the features (in which case it
is not a member). Third, it implies that all members within a category are created equal:
There is no such thing as a “better” or “worse” triangle.

Work by Eleanor Rosch and colleagues (Rosch, 1973; Rosch & Mervis, 1975) confronted and
severely weakened the attraction of the classical view. Rosch found that people judged
different members of a category as varying in “goodness.” For instance, most people in
North America consider a robin and a sparrow very good examples of a bird but find other
examples, such as chickens, penguins, and ostriches, not as good. The classical view holds
that membership in a category is all-or-none: Either an instance (such as robin or ostrich)
belongs to a category, or it doesn’t. This view has no way to explain people’s intuitions that
some birds are “birdier” than others.

A category is a concept that functions to organize or point out aspects of equivalence among
other concepts based on common features or similarity to a prototype. For example, the
word apple can act as a category, as in a collection of different kinds of apples. But it also
can act as a concept within the category fruit. Concepts and categories can be divided in
various ways. One commonly used distinction is between natural categories and artifact
categories (Kalenine et al., 2009; Medin, Lynch, & Solomon, 2000). Natural categories are
groupings that occur naturally in the world, like birds or trees. Artifact categories are
groupings that are designed or invented by humans to serve particular purposes or
functions. Examples of artifact categories are automobiles and kitchen appliances. The time it takes to assign objects to categories seems to be about the same for both natural
and artifact categories (VanRullen & Thorpe, 2001). Natural and artifact categories are
relatively stable and people tend to agree on criteria for membership in them.

Some categories are created just for the moment or for a specific purpose, for example,
“things you can write on.” These categories are called ad hoc categories (Barsalou, 1983;
Little, Lewandowsky, & Heit, 2006). They are typically described not by single words but by phrases.
Their content varies, depending on the context. Categorization also allows us to make
predictions and act accordingly. If I see a four-legged creature with a tail coming toward me,
my classification of it as either a dog or a wolf has implications for whether I’ll want to call
to it, run away, pet it, or call for help.

A proposition is an assertion that may be either true or false. Anderson and his co-authors
define a proposition as the smallest unit of knowledge that can be judged either true or
false. For instance, the phrase white cat does not qualify as a proposition because we
cannot determine whether it is true or false. Propositions allow knowledge to be stored as abstract concepts.
Propositions may be used to describe any kind of relationship. Examples of relationships
include actions of one thing on another, attributes of a thing, positions of a thing, class
membership of a thing, and so on. The key idea is that the propositional form of mental
representation is neither in words nor in images. Rather, it is in an abstract form
representing the underlying meanings of knowledge. Thus, a proposition for a sentence
would not retain the acoustic or visual properties of the words. Similarly, a proposition for a
picture would not retain the exact perceptual form of the picture (Clark & Chase, 1972).

According to the propositional view (Clark & Chase, 1972), both images [e.g., of the cat and
the table in Figure 7.3(a)] and verbal statements are mentally represented in terms of their
deep meanings, and not as specific images or words. That is, they are represented as
propositions. According to propositional theory, pictorial and verbal information are
encoded and stored as propositions. Then, when we wish to retrieve the information from
storage, the propositional representation is retrieved. From it, our minds re-create the
verbal or the imaginal code relatively accurately.
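
One way to picture this amodal format is as a relation applied to concept arguments. The sketch below is a simplified illustration, not Anderson's actual notation; the point is that a sentence and a picture of the same scene would map onto the same proposition:

    # A proposition as an abstract relation among concepts, independent
    # of whether the input was verbal or pictorial.
    from collections import namedtuple

    Proposition = namedtuple("Proposition", ["relation", "arguments"])

    # "The cat is under the table" and a picture of that scene would
    # both be stored as the same underlying proposition:
    p = Proposition(relation="under", arguments=("cat", "table"))

    # From the stored proposition, a verbal code can be re-created:
    print("The %s is %s the %s." % (p.arguments[0], p.relation, p.arguments[1]))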

Some evidence suggests that these representations need not be exclusive. People seem to
be able to employ both types of representations to increase their performance on cognitive
tests (Talasli, 1990).

Propositional theory suggests that we do not store mental representations in the form of
images or mere words. We may experience our mental representations as images, but these
images are epiphenomena—secondary and derivative phenomena that occur as a result of
other more basic cognitive processes. According to propositional theory, our mental
representations (sometimes called “mentalese”) more closely resemble the abstract form of
a proposition.

Mental Imagery is the mental representation of things that are not currently seen or sensed
by the sense organs (Moulton & Kosslyn, 2009; Thomas, 2003). In our minds we often have
images for objects, events, and settings. Mental imagery even can represent things that you
have never experienced. For example, imagine what it would be like to travel down the
Amazon River. Mental images even may represent things that do not exist at all outside the
mind of the person creating the image.

Imagery may involve mental representations in any of the sensory modalities, such as
hearing, smell, or taste. Nonetheless, most research on mental imagery in cognitive
psychology has focused on visual imagery, such as representations of objects or settings
that are not presently visible to the eyes. When students kept a diary of their mental
images, the students reported many more visual images than auditory, smell, touch, or taste
images (Kosslyn et al., 1990). Most of us are more aware of visual imagery than of other
forms of imagery. We use visual images to solve problems and to answer questions
involving objects (Kosslyn & Rabin, 1999; Kosslyn, Thompson & Ganis, 2006).

Many psychologists outside of cognitive psychology are interested in applications of mental imagery to other fields in psychology. Such applications include using guided-imagery techniques for controlling pain and for strengthening immune responses and otherwise promoting health. Research also indicates that the use of mental images can help to improve memory. In the case of persons with Down syndrome, the use of mental images in conjunction with hearing a story improved memory for the material as compared with just hearing the story (de la Iglesia, Buceta, & Campos, 2005; Kihara & Yoshikawa, 2001). Mental imagery also is used in other fields such as occupational therapy. Using this technique, patients with brain damage train themselves to complete complex tasks.

Dual Code Theory

According to dual-code theory, we use both pictorial and verbal codes for representing
information (Paivio, 1969, 1971) in our minds. These two codes organize information into
knowledge that can be acted on, stored somehow, and later retrieved for subsequent use.
According to Paivio, mental images are analog codes. Analog codes resemble the objects
they are representing. For example, trees and rivers might be represented by analog codes.
Just as the movements of the hands on an analog clock are analogous to the passage of
time, the mental images we form in our minds are analogous to the physical stimuli we
observe.
In contrast, our mental representations for words chiefly are represented in a symbolic
code. A symbolic code is a form of knowledge representation that has been chosen
arbitrarily to stand for something that does not perceptually resemble what is being
represented. Just as a digital watch uses arbitrary symbols (typically, numerals) to represent
the passage of time, our minds use arbitrary symbols (words and combinations of words) to
represent many ideas. For example, sand in an hourglass can also be used to represent the flow of time.

Paivio, consistent with his dual-code theory, noted that verbal information seems to be
processed differently than pictorial information. For example, in one study, participants
were shown both a rapid sequence of pictures and a sequence of words (Paivio, 1969). They
then were asked to recall the words or the pictures in one of two ways. One way was at
random, so that they recalled as many items as possible, regardless of the order in which
the items were presented. The other way was in the correct sequence.

Participants more easily recalled the pictures when they were allowed to do so in any order.
But they more readily recalled the sequence in which the words were presented than the
sequence for the pictures, which suggests the possibility of two different systems for recall
of words versus pictures.

UNIT 2: MODELS OF KNOWLEDGE ORGANIZATION (IN SEMANTIC MEMORY): PROTOTYPE, FEATURE COMPARISON, HIERARCHICAL MODEL, CONNECTIONIST MODELS (PARALLEL DISTRIBUTED PROCESSING) OF MCCLELLAND, RUMELHART, & HINTON, NETWORK MODELS – QUILLIAN, SPREADING ACTIVATION – COLLINS & LOFTUS, SCHEMAS.

Prototype Model

Prototype theory groups things together not by their defining features but rather by their
similarity to an averaged model of the category. According to a theory proposed by Eleanor
Rosch, we organize each category on the basis of a prototype. Prototype is an item that is
the most representative of a category. According to the prototype approach, an individual decides whether an item belongs to a category by comparing it with a prototype. If the item is similar to the prototype, you include that item in the category. A prototype is an abstract, idealized example.
Rosch (1973) also emphasizes that members of a category differ in their prototypicality, or
degree to which they are prototypical. A robin and a sparrow are very prototypical birds,
whereas ostriches and penguins are non-prototypes.
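
A toy version of this comparison process is sketched below; the features and weights are invented for illustration. Similarity to the prototype is graded, so a robin scores higher (is "birdier") than a penguin, even though both count as birds:

    # Prototype-based categorization: graded similarity to an averaged,
    # idealized "bird" (feature weights are illustrative assumptions).
    BIRD_PROTOTYPE = {"has feathers": 1.0, "flies": 0.9, "sings": 0.8, "small": 0.7}

    def prototypicality(item_features):
        """Sum the prototype weights of the features the item possesses."""
        return sum(w for f, w in BIRD_PROTOTYPE.items() if f in item_features)

    robin = {"has feathers", "flies", "sings", "small"}
    penguin = {"has feathers"}

    print(prototypicality(robin))    # 3.4 -> highly prototypical
    print(prototypicality(penguin))  # 1.0 -> a poor example of the category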

The prototype approach represents a different perspective from the feature comparison
model that we just examined. According to the feature comparison model, an item belongs
to a category as long as it possesses the necessary and sufficient features (Markman, 1999).

Characteristics of prototype

• Prototypes are supplied as examples of a category.
• Prototypes are judged more quickly after semantic priming.
• Prototypes share attributes in a family resemblance category.

According to Eleanor Rosch’s prototype theory, there are various levels of categorization in
semantic memory. An object can be categorized at several different levels. Some category
levels are called superordinate-level categories, which means they are higher-level or more
general categories. “Furniture,” “animal,” and “tool” are all examples of superordinate-level
categories. Basic-level categories are moderately specific. “Chair,” “dog,” and “screwdriver”
are examples of basic-level categories. Finally, subordinate-level categories refer to lower-
level or more specific categories. “Desk chair,” “collie,” and “Phillips screwdriver” are
examples of subordinate categories.

One advantage of the prototype approach is that it can account for our ability to form
concepts for groups that are loosely structured. For example, we can create a concept for
stimuli that merely share a family resemblance, when the members of a category have no
single characteristic in common. Another advantage of the prototype approach is that it can
be applied to social relationships, as well as inanimate objects and non-social categories
(Fehr, 2005).

However, an ideal model of semantic memory must also acknowledge that concepts can be
unstable and variable. Another problem with the prototype approach is that we often do
store specific information about individual examples of a category. An ideal model of
semantic memory would therefore need to include a mechanism for storing this specific
information, as well as abstract prototypes (Barsalou, 1990, 1992). An additional problem is
that the prototype approach may operate well when we consider the general population,
but not when we examine experts in a particular discipline.

Feature Comparison Model

One logical way to organize semantic memory would be in terms of lists of features.
According to an early theory, called the feature comparison model, concepts are stored in
memory according to a list of necessary features or characteristics. People use a decision
process to make judgments about these concepts (Smith et al., 1974). It accounts for the
typicality effect.

Smith, Shoben, and Rips (1974) proposed one alternative to the hierarchical semantic
network model, called a feature comparison model of semantic memory. The assumption
behind this model is that the meaning of any word or concept consists of a set of elements
called features.

Smith and his co-authors (1974) propose that the features used in this model are either
defining features or characteristic features. Defining features are those attributes that are
necessary to the meaning of the item. For example, the defining features of a robin include
that it is living and has feathers and a red breast. Characteristic features are those attributes
that are merely descriptive but are not essential. For example, the characteristic features of
a robin include that it flies, perches in trees, is not domesticated, and is small in size. In
other words, features come in two types: defining, meaning that the feature must be
present in every example of the concept, and characteristic, meaning the feature is usually,
but not necessarily, present.

In the Smith et al. (1974) model, the verification of sentences such as “A robin is a bird” is
carried out in two stages. In the first stage, the feature lists (containing both the defining
and the characteristic features) for the two terms are accessed, and a quick scan and
comparison are performed. If the two lists show a great deal of overlap, the response “true”
is made very quickly. If the overlap is very small, then the response “false” is made, also very
quickly. If the degree of overlap in the two feature lists is neither extremely high nor
extremely low, then a second stage of processing occurs. In this stage, a comparison is made
between the sets of defining features only. If the lists match, the person responds “true”; if
the lists do not match, the person responds “false.”
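
The two stages can be sketched as follows. The overlap measure, thresholds, and feature lists are illustrative assumptions rather than parameters from Smith et al. (1974); the point is that highly typical and highly atypical items are resolved quickly in stage one, while borderline items fall through to the slower defining-feature check:

    # Two-stage verification of "An <item> is a <category>".
    def verify(item_def, item_char, cat_def, cat_char, high=0.7, low=0.2):
        # Stage 1: quick comparison over ALL features (defining + characteristic).
        item_all, cat_all = item_def | item_char, cat_def | cat_char
        overlap = len(item_all & cat_all) / len(item_all | cat_all)
        if overlap >= high:
            return True, "stage 1: fast true"
        if overlap <= low:
            return False, "stage 1: fast false"
        # Stage 2: slower comparison of defining features only.
        return cat_def.issubset(item_def), "stage 2: slow"

    bird_def, bird_char = {"feathered", "lays eggs"}, {"flies", "small"}
    robin = ({"feathered", "lays eggs"}, {"flies", "small", "red breast"})
    turkey = ({"feathered", "lays eggs"}, {"large", "domesticated"})
    table = ({"furniture"}, {"four legs", "flat top"})

    print(verify(*robin, bird_def, bird_char))   # fast "true"
    print(verify(*turkey, bird_def, bird_char))  # slower "true" via stage 2
    print(verify(*table, bird_def, bird_char))   # fast "false"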

The sentence verification technique is one of the major tools used to explore the feature
comparison model. One common finding in research using the sentence verification
technique is the typicality effect. In the typicality effect, people reach decisions faster when
an item is a typical member of a category, rather than an unusual member. Sentences such
as “A robin is a bird” are verified more quickly than sentences such as “A turkey is a bird”
because robins, being more typical examples of birds, are thought to share more
characteristic features with “bird” than do turkeys. The feature comparison model also
explains fast rejections of false sentences, such as “A table is a fruit.” In this case, the list of
features for “table” and the list for “fruit” presumably share very few entries.

The feature comparison model also provides an explanation for a finding known as the
category size effect (Landauer & Meyer, 1972). This term refers to the fact that if one term is
a subcategory of another term, people will generally be faster to verify the sentence with
the smaller category. That is, people are faster to verify the sentence “A collie is a dog” than
to verify “A collie is an animal,” because the set of dogs is part of the set of animals. The
feature comparison model explains this effect as follows. It assumes that as categories grow
larger (for example, from robin, to bird, to animal, to living thing), they also become more
abstract. With increased abstractness, there are fewer defining features. Thus in the first
stage of processing there is less overlap between the feature list of a term and the feature
list of an abstract category.
The model can also explain how “hedges” such as “A bat is sort of like a bird” are processed.
Most of us know that even though bats fly and eat insects, they are really mammals. The
feature comparison model explains that the processing of hedges consists of a comparison
of the characteristic features but not the defining features. Because bats share some
characteristic features with birds (namely, flying and eating insects), we agree they are “sort
of like” birds.
Research on another aspect of the feature comparison model clearly contradicts this
approach. Specifically, a major problem with the feature comparison model is that very few
of the concepts we use in everyday life can be captured by a specific list of necessary,
defining features.

Another problem with the feature comparison model is its assumption that the individual
features are independent of one another. However, many features are correlated for the
concepts we encounter in nature. Finally, the feature comparison model does not explain
how the members of categories are related to one another (Barsalou, 1992).

The loss of semantic memory leads to a condition called semantic dementia, which underscores that semantic memory is the memory system that retains information about facts.

Hierarchical Model

Storing mental representations and templates for every stimulus may overload our database
of knowledge. One way to conserve memory space would be to try to avoid storing
redundant information wherever possible. Rather than storing the information with the
mental representation it is better to store it once, at the higher-level representation. This
illustrates the principle of cognitive economy: Properties and facts are stored at the highest
level possible. To recover information, an individual uses inference. Information stored at one
level of the hierarchy is not repeated at other levels. A fact is stored at the highest level to
which it applies. For example, the fact that birds breathe is stored in the ANIMAL category,
not the BIRD category.

The idea that information is stored in categories was studied in detail by Collins and Quillian
(1969). They proposed a hierarchical model of semantic memory in which concepts or words
were nodes. They tested the idea that semantic memory is analogous to a network of
connected ideas. Each node is connected to related nodes by means of pointers, or links
that go from one node to another. These ideas of linked lists and pointers were derived from
the field of computer science. Thus, the node that corresponds to a given word or concept,
together with the pointers to other nodes to which the first node is connected, constitutes
the semantic memory for that word or concept. The collection of nodes associated with all
the words and concepts one knows about is called a semantic network.
Collins and Quillian (1969) also tested the principle of cognitive economy. They reasoned
that if semantic memory is analogous to a network of nodes and pointers and if semantic
memory honors the cognitive economy principle, then the closer a fact or property is stored
to a particular node, the less time it should take to verify the fact and property. The
assumption is that the longer it takes you to respond to a stimulus, the more mental steps
you had to go through to make that response. It also explains about retrieval from LTM. This
was experimentally studied by Collins and Quillian (1969) through “speeded verification
task”.

Speeded verification task

Collins and Quillian (1969) presented people with a number of similar sentences to find
whether it took people less time to respond to sentences whose representations should
span two levels than they did to sentences whose representations should span three.
Suppose the statement is, “A canary can sing.” When you hear, “A canary”, this activates
the canary category in memory. You then scan the properties of the canary category for
relevant information. If you find it, you stop the search process and respond. In this case,
you would respond “true”. Suppose the statement is “A canary has wings.” You start by
performing the same steps as before, but you don’t find relevant information. So, you
follow the line up to the next category, BIRD. You then scan the contents of the category for
relevant information. You find “has wings” in this category so you stop the search and
respond “true”. This is two steps more than with the previous statement. Mental
steps take time to perform. Your reaction time should be longer than it was to “A canary
can sing”. Suppose the statement is, “A canary eats.” You go through all the steps you did
with the previous statement plus 2 more: move up one level of the hierarchy to ANIMAL,
then scan the properties. The retrieval process is similar to a form of logical deduction called
a syllogism. In a syllogism you are given two premises and then a conclusion. That is, with
the statement, “A canary eats”, it’s as if you reason, “All animals eat. A canary is an animal. Therefore, a canary eats.”
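
A toy implementation of these ideas is sketched below, using the canary example; the step count stands in for predicted verification time. Each property is stored only at the highest node to which it applies (cognitive economy), and retrieval walks up the category links, which is the inference process described above:

    # Hierarchical semantic network honoring cognitive economy.
    NETWORK = {
        "animal": {"isa": None,     "properties": {"eats", "breathes"}},
        "bird":   {"isa": "animal", "properties": {"has wings", "can fly"}},
        "canary": {"isa": "bird",   "properties": {"can sing", "is yellow"}},
    }

    def verify(concept, prop):
        """Return (truth value, levels traversed) for '<concept> <prop>'."""
        node, steps = concept, 0
        while node is not None:
            if prop in NETWORK[node]["properties"]:
                return True, steps
            node = NETWORK[node]["isa"]  # move up one level (inference)
            steps += 1
        return False, steps

    print(verify("canary", "can sing"))   # (True, 0): stored with canary
    print(verify("canary", "has wings"))  # (True, 1): inherited from bird
    print(verify("canary", "eats"))       # (True, 2): inherited from animal
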
The model was called a hierarchical semantic network model of semantic memory,
because researchers thought the nodes were organized in hierarchies. Most nodes in the
network have superordinate and subordinate nodes. A superordinate node corresponds to
the name of the category of which the thing corresponding to the subordinate node is a
member. So, for example, a node for “cat” would have the superordinate node of “animal”
and perhaps several subordinate nodes, such as “Persian,” “tabby,” and “calico.”

Meyer and Schvaneveldt (1971) tried to elaborate the semantic network model. They held
that if related words are stored close by one another and are connected to one another in a
semantic network, then whenever one node is activated or energized, energy spreads to the
related nodes. They demonstrated it through a series of experiments on lexical decision
tasks. Here, participants saw a series of letter strings and were asked to decide, as quickly as
possible, if the letter strings form real words. Thus, they respond yes to strings such as
bread and no to strings such as rencle.

In their study, participants saw two words at a time, one above the other, and had to decide
if both strings were words or not. They discovered that if one of the strings was a real word
(such as bread), participants were faster to respond if the other string was a semantically
associated word (such as butter) than if it was an unrelated word (such as chair) or a
nonword (such as rencle). One interpretation of this finding invokes the concept of
spreading activation, the idea that excitation spreads along the connections of nodes in a
semantic network. A priming effect thus occurs in participants’ recognition of related words. It is
also a very important idea in understanding connectionist networks.
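
The logic of the priming prediction can be sketched in a few lines. The lexicon, association lists, and timing values below are invented for illustration; the point is simply that a related prime pre-activates the target's node and shortens the simulated decision time:

    # Toy lexical decision task with semantic priming.
    LEXICON = {"bread", "butter", "jam", "nurse", "doctor", "chair"}
    ASSOCIATES = {"bread": {"butter", "jam"}, "doctor": {"nurse"}}

    BASE_RT = 600         # hypothetical baseline response time (ms)
    PRIMING_BENEFIT = 85  # hypothetical speed-up from spreading activation

    def lexical_decision(target, prime=None):
        """Return (is_word, simulated reaction time in ms)."""
        if target not in LEXICON:
            return False, BASE_RT
        primed = prime is not None and target in ASSOCIATES.get(prime, set())
        return True, BASE_RT - (PRIMING_BENEFIT if primed else 0)

    print(lexical_decision("butter", prime="bread"))  # (True, 515): related prime
    print(lexical_decision("butter", prime="chair"))  # (True, 600): unrelated
    print(lexical_decision("rencle"))                 # (False, 600): nonword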

Similarly, research on the word superiority effect offered an explanation along the same lines as Meyer and Schvaneveldt (1971). People are generally faster to recognize a particular
letter (such as D or K) in the context of a word (such as WOR_) than they are to recognize it
with no context or in the context of a nonword (such as OWR_). This means the word
context helps letter recognition because a node corresponding to a word is activated in the
former case. This automatic activation facilitates recognition of all parts of the word, thus
facilitating letter recognition. Meyer and Schvaneveldt (1971) extended this idea, suggesting that individual nodes can be activated not just directly, by external stimuli, but indirectly, through spreading activation from related nodes.

Limitations of Collins and Quillian’s model (1969)

• Property of association

Carol Conrad (1972) held that it violates the assumption of cognitive economy.
Properties are associated with each category in the hierarchy, not just the highest
category. Participants in her sentence verification experiments were no slower to
respond to sentences such as “A shark can move” than to “A fish can move” or “An
animal can move.” However, the principle of cognitive economy would predict that
the property “can move” would be stored closest to the node for “animal” and thus
that the three sentences would require decreasing amounts of time to verify. Conrad
argued that the property “can move” is one frequently associated with “animal,”
“shark,” and “fish” and that frequency of association rather than cognitive economy
predicts reaction time. There is repeated connection for each nodes in a category for
better retrieval.

• Hierarchical structure
Rips, Shoben, and Smith (1973) showed that participants were faster to verify “A pig
is an animal” than to verify “A pig is a mammal,” thus demonstrating a violation of
predicted hierarchical structure. Familiar terms are verified faster than unfamiliar
terms regardless of their position in the hierarchy.

• Typicality effect

Rips et al. (1973) found that responses to sentences such as “A robin is a bird” were
faster than responses to “A turkey is a bird,” even though these sentences should
have taken an equivalent amount of time to verify. In general, typical instances of a
concept are responded to more quickly than atypical instances. The hierarchical
network model did not predict typicality effects; instead, it predicted that all
instances of a concept should be processed similarly. So, all instances of a concept
are not equally good examples of it.

Network Model

These limitations led Collins and Loftus (1975) to present an elaboration of the Collins and Quillian (1969) hierarchical network model, known as spreading activation theory. They
conceived semantic memory as a network with nodes in the network corresponding to
concepts. They also considered related concepts as connected by paths in the network. They
further asserted that when one node is activated, the excitation of that node spreads down
the paths or links to related nodes. They believed that as activation spreads outward, it
decreases in strength, activating very related concepts a great deal but activating distantly
related nodes only a little bit.

In Collins and Loftus (1975) representation of semantic network, each link or connection
between two concepts is thought to have a certain weight or set of weights associated with
it. The weights indicate how important one concept is to the meaning of a concept to which
it is connected. So, very similar concepts- such as “car” and “truck”- have many connecting
links and are placed close to each other. Less similar concepts, such as “house” and “sunset”
(both may, or may not, be red), have no direct connections and are therefore spaced far
apart. Weights may vary for different directions along the connections. Thus, it may be very
important to the meaning of truck that it is a type of vehicle but not very important to the
meaning of vehicle that truck is an example.

Collins and Loftus (1975) dispensed with the assumptions of cognitive economy and hierarchical organization to avoid their model being affected by the limitations of the Collins and Quillian (1969) model. This proposal is regarded more as a descriptive framework than as a specific model.
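
The core mechanics can be sketched as a graph walk in which the energy passed along each link is scaled by the link's weight and a decay factor, so distant nodes receive little activation. The concepts, weights, decay rule, and cut-off below are all illustrative assumptions:

    # Spreading activation over a weighted semantic network.
    LINKS = {
        "car":   {"truck": 0.9, "road": 0.6},
        "truck": {"vehicle": 0.8},
        "road":  {"sunset": 0.2},
    }

    def spread(start, energy=1.0, decay=0.5, floor=0.05):
        """Spread activation outward, decreasing in strength with distance."""
        activation = {start: energy}
        frontier = [(start, energy)]
        while frontier:
            node, e = frontier.pop()
            for neighbour, weight in LINKS.get(node, {}).items():
                passed = e * weight * decay
                if passed > floor and passed > activation.get(neighbour, 0.0):
                    activation[neighbour] = passed
                    frontier.append((neighbour, passed))
        return activation

    print(spread("car"))
    # {'car': 1.0, 'truck': 0.45, 'road': 0.3, 'vehicle': 0.18}
    # "sunset" would receive only 0.03 and falls below the cut-off.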

Connectionist Models (Parallel Distributed Processing) Of McClelland, Rumelhart, & Hinton

The parallel distributed processing (PDP) approach argues that cognitive processes can be
represented by a model in which activation flows through networks that link together a
large number of simple, neuron-like units (Markman, 1999; McClelland & Rogers, 2003;
Rogers & McClelland, 2004). The researchers who designed this approach took into account
the physiological and structural properties of human neurons (McClelland & Rogers, 2003).

Characteristics of PDP approach:

• Cognitive processes are based on parallel operations, rather than serial operations.
• A network contains basic neuron-like units or nodes, which are connected together
so that a specific node has many links to other nodes.
• A concept is represented by the pattern of activity distributed throughout a set of
nodes.
• The connections between these neuron-like units are weighted, and the connection
weights determine how much activation one unit can pass on to another unit.
• When a unit reaches a critical level of activation, it may affect another unit, either by exciting it (if the connection weight is positive) or by inhibiting it (if the connection weight is negative), as illustrated in the sketch following this list.
• Every new piece of information you learn will change the strength of connections
among relevant units by adjusting the connection weights.
• Sometimes we have only partial memory for some information, rather than
complete, perfect memory. The brain’s ability to provide partial memory is called
graceful degradation. Example: tip-of-the-tongue phenomena.
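
A single neuron-like unit from such a network can be sketched as follows; the weights and threshold are illustrative values, not from the model. Positive weights excite, negative weights inhibit, and learning would amount to adjusting the weights, as one of the characteristics above notes:

    # One PDP-style unit: weighted sum of inputs, threshold for firing.
    def unit_output(inputs, weights, threshold=0.5):
        """Pass activation on only if the weighted sum reaches threshold."""
        net = sum(i * w for i, w in zip(inputs, weights))
        return net if net >= threshold else 0.0

    # Two excitatory connections (+0.6, +0.4) and one inhibitory (-0.3):
    print(unit_output([1.0, 1.0, 1.0], [0.6, 0.4, -0.3]))  # 0.7 -> fires
    print(unit_output([1.0, 0.0, 1.0], [0.6, 0.4, -0.3]))  # 0.0 -> stays silent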

According to the PDP approach, each individual’s characteristics are connected in a mutually stimulating network. If the connections among the characteristics are well established through extensive practice, then an appropriate clue allows you to locate the characteristics of a specified individual.

Advantages of the PDP model:

• Spontaneous generalization

It is the process of using individual cases to draw inferences about general information (Protopapas, 1999; Rogers & McClelland, 2004). Spontaneous generalization accounts for some memory errors and distortions, such as the development of stereotypes. The PDP model argues that we reconstruct a memory, and that memory sometimes includes inappropriate information (McClelland, 1999).

• Default assignment

This is the process of filling in missing information about a particular person or object by making a best guess based on information from other similar people or objects. For example, you might assume facts about a particular engineering student based on what you know about engineering students in general.

Both spontaneous generalization and default assignment can produce errors. Theorists
argue that the PDP approach works better for tasks in which several processes typically
operate simultaneously, as in pattern recognition, categorization, and memory search. The
PDP approach has also been used to explain cognitive disorders, such as the reading
problems experienced by people with dyslexia (Levine, 2002; O’Reilly & Munakata, 2000). It
can also account for the cognitive difficulties found in people with schizophrenia (Chen &
Berrios, 1998) and semantic-memory deficits (Leek, 2005; McClelland & Rogers, 2003;
Tippett et al., 1995).

However, some critics say that the PDP model is not currently structured enough to handle
the subtleties and complexities of semantic memory (McClelland & Rogers, 2003). The PDP approach also has trouble explaining why learning additional information sometimes causes us to quickly forget extremely well-learned older information (Ratcliff, 1990). On
the other hand, the model cannot explain why we sometimes can recall earlier material
when it has been replaced by more current material (Lewandowsky & Li, 1995).

[Refer pdf for PDP model by Rumelhart & Norman, 1988]

Schemas

The term schema usually refers to something larger than an individual concept. Schemata
(the plural of schema) incorporate both general knowledge about the world and information
about particular events. Bartlett (1932) defined a schema as an “active organization of past
reactions, or of past experiences, which must always be supposed to be operating in any
well adapted organic response”. The key term here is organization. A schema is thought to
be a large unit of organized information used for representing concepts, situations, events,
and actions in memory (Rumelhart & Norman, 1988). One main approach to understanding
how concepts are related in the mind is through schemas. They are very similar to semantic
networks, except that schemas are often more task-oriented.

Rumelhart and Ortony (1977) viewed schemata as the fundamental building blocks of
cognition, units of organized knowledge analogous to theories. Generally, they saw
schemata as “packets of information” that contain both variables and a fixed part. Consider
a schema for the concept dog. The fixed part would include the information that a dog is a
mammal, has (typically) four legs, and is domesticated; the variables would be things like
breed (poodle, cocker spaniel, Bernese mountain dog), size (toy, medium, extra-large), color
(white, brown, black, tricolored), temperament (friendly, aloof, vicious), and name (Spot,
Rover, Tandy).
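
Under these assumptions, a schema can be pictured as a structure with a fixed part plus variable slots that each instance fills in; the slot names and defaults below are illustrative, not from Rumelhart and Ortony:

    # A schema as a "packet of information": fixed part + variable slots.
    DOG_SCHEMA = {
        "fixed": {"is a": "mammal", "legs": 4, "domesticated": True},
        "variables": {"breed": None, "size": "medium", "name": None},  # defaults
    }

    def instantiate(schema, **slots):
        """Build a specific instance, defaulting any unspecified slots."""
        instance = dict(schema["fixed"])
        instance.update(schema["variables"])  # default assignments
        instance.update(slots)                # instance-specific values
        return instance

    print(instantiate(DOG_SCHEMA, breed="collie", name="Rover"))
    # "size" keeps its default value unless the situation specifies it.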

Schemas have several characteristics that ensure wide flexibility in their use (Rumelhart &
Ortony, 1977; Thorndyke, 1984):

1. Schemas can include other schemas. For example, a schema for animals includes a
schema for cows, a schema for apes, and so on.

2. Schemas encompass typical, general facts that can vary slightly from one specific instance
to another. For example, although the schema for mammals includes a general fact that
mammals typically have fur, it allows for humans, who are less hairy than most other
mammals. It also allows for porcupines, which seem more prickly than furry, and for marine
mammals like whales that have just a few bristly hairs.

3. Schemas can vary in their degree of abstraction. For example, a schema for justice is
much more abstract than a schema for apple or even a schema for fruit.

Schemas also can include information about relationships (Komatsu, 1992). Some of this
information includes relationships among the following:

• concepts (e.g., the link between trucks and cars);

• attributes within concepts (e.g., the height and the weight of an elephant);

• attributes in related concepts (e.g., the redness of a cherry and the redness of an apple);

• concepts and particular contexts (e.g., fish and the ocean); and

• specific concepts and general background knowledge (e.g., concepts about particular U.S.
presidents and general knowledge about the U.S. government and about U.S. history).

Schemata can also indicate the relationships among the various pieces of information. For
example, to end up with a dog, the “parts” of the dog (tail, legs, tongue, teeth) must be put
together in a certain way. A creature with the four legs coming out of its head, its tail
sticking out of its nose, and its tongue on the underside of its belly would not “count” as an
instance of a dog, even if all the required dog parts were present.

Moreover, schemata can be connected to other schemata in a variety of ways. Schemata also exist for things bigger than individual concepts. Furthermore, schemata fill in default
values for certain aspects of the situation, which let us make certain assumptions. Schemata
are assumed to exist at all levels of abstraction; thus schemata can exist for small parts of
knowledge (what letter does a particular configuration of ink form?) and for very large parts
(what is the theory of relativity?). They are thought of as active processes rather than as
passive units of knowledge. They are not simply called up from memory and passively
processed. Instead, people are thought to be constantly assessing and evaluating the fit
between their current situation and a number of relevant schemata and subschemata.

Some researchers think schemata are used in just about every aspect of cognition.
Schemata are deemed to play an important role in perception and pattern matching as we
try to identify the objects we see before us. They are considered important in memory
functioning as we call to mind relevant information to help us interpret current information
and make decisions about what to do next. A problem with schemas is that they can give
rise to stereotypes.

UNIT 3: REASONING: INDUCTIVE & DEDUCTIVE REASONING, COGNITIVE ERRORS.

UNIT 4: CREATIVITY: FEATURES OF CREATIVE THINKING, CONVERGENT & DIVERGENT THINKING, PRODUCTIVE AND REPRODUCTIVE THINKING, INSIGHT.

Creativity is a cognitive activity that results in a new or novel way of viewing a problem or
situation. Creativity is widely heralded as an important part of everyday life and education.
Papalia and Olds (1987) have defined creativity as “the ability to see things in a new and unusual light, to see problems that no one else may even realize exist and then to come up with new unusual and effective solutions.” Creative thinking tends to occur when people engage in some task or activity that is more or less automatic, such as walking or swimming. Selective attention permits creative thinking. The ability to be creative is
important but is often a neglected topic in the education of young individuals. The
characteristics of creative thinking are:
i. It is universal.
ii. It can be enhanced.
iii. It is self-expression and carries ego-involvement.

[Diagram: factors affecting creativity include intelligence, knowledge, imagination, and analytical thinking.]

Characteristics of creative individuals are elaborated below:

• One is extremely high motivation to be creative in a particular field of endeavor (e.g., for the sheer enjoyment of the creative process).
• A second factor is both non-conformity in violating any conventions that might inhibit the creative work and dedication in maintaining standards of excellence and self-discipline related to the creative work.
• A third factor in creativity is deep belief in the value of the creative work, as well as willingness to criticize and improve the work.
• A fourth is careful choice of the problems or subjects on which to focus creative attention.
• A fifth characteristic of creativity is thought processes characterized by both insight and divergent thinking.
• A sixth factor is risk taking.
• The final two factors in creativity are extensive knowledge of the relevant domain and profound commitment to the creative endeavor.

In addition, the historical context and the domain and field of endeavor influence the
expression of creativity.

Convergent & Divergent Thinking

J. P. Guilford (1967) distinguished between two types of thinking: convergent thinking and
divergent thinking. Convergent thinking moves in a straightforward manner to a particular
conclusion. Much of pedagogy emphasizes convergent thinking, in which students are asked
to recall factual information, such as What is the capital of Bulgaria? It is the type of thinking
in which a problem is seen as having only one answer, and all lines of thinking will
eventually lead to that single answer, using previous knowledge and logic. Convergent
thinking makes us observe that a pen and pencil can be used to write, have similar shapes,
and so on. Convergent thinking works well for routine problem solving but may be of little
use when a more creative solution is needed.

Divergent thinking requires a person to generate many different answers to a question, the
“correctness” of the answers being somewhat subjective. For example: For how many
different things can you use a brick? It is the reverse of convergent thinking. Divergent
thinking is a type of creative thinking in which a person starts from one point and comes
with many different ideas or possibilities based on that point. Divergent thinking has been
attributed not only to creativity but also to intelligence (Guilford, 1967). Divergent thinking
is also an effective measure in solving the barriers to problem solving, such as functional
fixedness. Divergent or more creative answers may utilize objects or ideas in more abstract
terms. The divergent thinker is more flexible in his or her thinking.

Productive and Reproductive Thinking

Gestalt psychologists emphasized the importance of the whole as more than a collection of
parts. In regard to problem solving, Gestalt psychologists held that insight problems require
problem solvers to perceive the problem as a whole. Gestalt psychologist Max Wertheimer
(1945/1959) wrote about productive thinking, which involves insights that go beyond the
bounds of existing associations. He distinguished it from reproductive thinking, which is
based on existing associations involving what is already known. According to Wertheimer,
insightful (productive) thinking differs fundamentally from reproductive thinking.

Productive thinking is characterized by shifts in perspective which allow the problem solver
to consider new, sometimes transformational, approaches. In other words, productive
thinking is solving a problem with insight. It is a quick, insightful, unplanned response to situations and environmental interactions.
Reproductive thinking, on the other hand, involves the application of familiar, routine,
procedures. In other words, reproductive thinking is solving a problem with previous
experiences and what is already known.

Insight

Insight is a change in frame of reference or in the way elements of the problem are
interpreted and organized. The process by which insight occurs is not well understood. The
notion of insight may be associated with certain cognitive processes that are known to
originate in the right hemisphere of the brain. Experiences of insight that lead to creative
problem solving often are nonverbal, “aha” moments.

In the early twentieth century, Gestalt psychologist Wolfgang Kohler studied chimpanzees’ ability to solve
problems. In a famous study involving a chimp named Sultan, Kohler (1925) demonstrated
that chimpanzees can experience insight into solving a problem. In this particular study, the
goal was for Sultan to reach a banana suspended from the ceiling of the cage. The cage was
equipped with some boxes and pieces of stick, none of which alone could be used to access
the food. After several unsuccessful attempts, Sultan was reported to have the “insight” to
combine the sticks into a longer stick, which ultimately led to a successful attempt.

Insight influences various cognitive processes, such as creativity, problem solving, and perceptual illusions. It is a very important concept in Gestalt psychology. Gestalt psychologists argued that the parts
of a problem may initially seem unrelated to one another, but a sudden flash of insight
could make the parts instantly fit together into a solution (Davidson, 2003). Behaviorists
rejected the concept of insight because the idea of a sudden cognitive reorganization was
not compatible with their emphasis on observable behavior. Furthermore, some
contemporary psychologists prefer to think that people solve problems in a gradual, orderly
fashion. These psychologists are uneasy about the sudden transformation that is suggested
by the concept of insight (Metcalfe, 1998). Currently, then, some psychologists favor the
concept of insight, and others reject this concept.

According to the psychologists who favor the concept of insight, people who are working on
an insight problem usually hold some inappropriate assumptions when they begin to solve
the problem (Chi, 2006; Ormerod et al., 2006). In other words, top-down processing inappropriately dominates their thinking, so they consider the wrong set of alternatives (Ormerod et al., 2006).

Great artistic, musical, scientific, or other discoveries often seem to share a critical moment,
a mental “Eureka” experience when the proverbial “lightbulb” goes on. Many biographies of
composers, artists, scientists, and other eminent experts begin with “Eureka” stories
(Perkins, 1981). According to Smith (1995a), insights need not be sudden “a-ha”
experiences. They may and often do occur gradually and incrementally over time. When an
insightful solution is needed but not forthcoming, sleep may help produce a solution. In
both mathematical problem solving and solution of a task that requires understanding
underlying rules, sleep has been shown to increase the likelihood that an insight will be
produced (Stickgold & Walker, 2004; Wagner et al., 2004).

Problems are divided into two types based on insight; insight problem and non-insight
problem. An insight problem is a problem that initially seems impossible to solve, but a
correct alternative suddenly enters a person’s mind. On the other hand, non-insight
problems are problems that are solved gradually, using memory, reasoning skills, and a
routine set of strategies.

In some cases, the best way to solve an insight problem is to stop thinking about this
problem and do something else for a while. Many artists, scientists, and other creative
people believe that incubation helps them solve problems creatively. Incubation is defined
as a situation in which you are initially unsuccessful in solving a problem, but you are more
likely to solve the problem after taking a break, rather than continuing to work on the
problem without interruption (Perkins, 2001; Segal, 2004). Incubation sounds like a
plausible strategy for solving insight problems (e.g., Csikszentmihalyi, 1996). Unfortunately,
however, the well-controlled laboratory research shows that incubation is not consistently
helpful (Perkins, 2001; Segal, 2004; Ward, 2001).

Neuroscience studies indicate that the right hippocampus is critical in the formation of an
insightful solution (Luo & Niki, 2003). Another study demonstrated a spike of activity in the
right anterior temporal area immediately before an insight is formed. This area is active
during all types of problem solving, as it involves making connections among distantly
related items (Jung-Beeman et al., 2004).

UNIT 5: PSYCHOLINGUISTICS (LANGUAGE AND THOUGHT): LINGUISTIC RELATIVITY & VERBAL DEPRIVATION HYPOTHESES. THEORIES OF LANGUAGE ACQUISITION: SKINNER-BEHAVIOURISM, CHOMSKY (LAD), LENNEBERG-GENETIC READINESS.

Psycholinguistics: (Language and Thought)

One of the most interesting areas in the study of language is the relationship between
language and the thinking of the human mind (Harris, 2003). Many people believe that
language shapes thoughts.

A famous hypothesis, outlined by Benjamin Whorf (1956), asserts that the categories and
relations that we use to understand the world come from our particular language, so that
speakers of different languages conceptualise the world in different ways. Language
acquisition, then, would be learning to think, not just learning to talk. This is an intriguing
hypothesis, but virtually all modern cognitive scientists believe it is false (see Pinker, 1994a).
Babies can think before they can talk. Cognitive psychology has shown that people think not
just in words but in images and abstract logical propositions. Language acquisition has a
unique contribution to make to this issue.

Psycholinguistics is the study of the mental aspects of language and speech. It is primarily
concerned with the ways in which language is represented and processed in the brain. A
branch of both linguistics and psychology, psycholinguistics is part of the field of cognitive
science.

Early philosophers in Greece and India debated the nature of language (Chomsky, 2000).
Centuries later, both Wilhelm Wundt and William James also speculated about our
impressive abilities in this area (Carroll, 2004; Levelt, 1998). However, the current discipline
of psycholinguistics can be traced to the 1960s, when psycholinguists began to test whether
psychological research would support the theories of Noam Chomsky (McKoon & Ratcliff,
1998).
Linguistic Relativity Hypothesis

The concept relevant to the question of whether language influences thinking is linguistic
relativity. Linguistic relativity refers to the assertion that speakers of different languages
have differing cognitive systems and that these different cognitive systems influence the
ways in which people think about the world.

The linguistic-relativity hypothesis is sometimes referred to as the Sapir-Whorf hypothesis, named after the two men who were most forceful in propagating it. Edward Sapir
(1941/1964) said that “we see and hear and otherwise experience very largely as we do
because the language habits of our community predispose certain choices of interpretation”
(p. 69). Benjamin Lee Whorf (1956) stated this view even more strongly: “We dissect nature
along lines laid down by our native languages”.

Whorf concluded that a thing represented by a word is conceived differently by people whose languages differ and that the nature of the language itself is the cause of those
different ways of viewing reality. For example, Whorf studied Native American languages
and found that clear translation from one language to another was impossible.

The Sapir-Whorf hypothesis has been one of the most widely discussed ideas in all of the
social and behavioral sciences (Lonner, 1989). However, some of its implications appear to
have reached mythical proportions. For example, many social scientists have warmly accepted and gladly propagated the notion that Eskimos have multitudinous words for the single English word snow. Contrary to popular belief, however, Eskimos do not have numerous words for snow (Martin, 1986).

Our thoughts and our language interact in myriad ways, only some of which we now
understand. Clearly, language facilitates thought; it even affects perception and memory.
For some reason, we have limited means by which to manipulate non-linguistic images
(Hunt & Banaji, 1988). Such limitations make desirable the use of language to facilitate
mental representation and manipulation. Even nonsense pictures (“droodles”) are recalled
and redrawn differently, depending on the verbal label given to the picture (Bower, Karlin, &
Dueck, 1975). Language also affects how we encode, store, and retrieve information in
memory.
Verbal Deprivation Hypothesis

The verbal deprivation hypothesis was proposed in 1973 by the British sociologist Basil Bernstein. According to the APA, the hypothesis states that children who are denied regular experience of
an elaborated code of language—that is, a more formal use of language involving complex
constructions and an unpredictable vocabulary—may develop an educational and even
cognitive deficit. The concept is controversial as it has been associated with the view that
nonstandard or vernacular forms of a language (e.g., Black English) are inherently inferior.
The idea that nonstandard forms inhibit higher level cognitive processes (e.g., abstract
reasoning) is now discredited, but concerns remain that lack of early exposure to the more
formal codes of a language appears to correlate with educational underachievement.

The hypothesis postulates the existence of two codes of speech, a 'restricted code' and an
'elaborated code'. The distinction between the two is made in a number of ways, in terms of lexicon, grammar, logic, and performance, but what it amounts to basically is this: the
elaborated code is the vehicle of rationality. In the elaborated code one can reason, plan
ahead, take account of the views of other people, have access to scientific and literary
concepts. Reason and argument need the resources of the elaborated code.

The restricted code on the other hand embodies authority, group solidarity and coercion. It
is the antithesis of the elaborated code. Bernstein and his associates claim to have shown
both the existence of these two codes, and their location among different sections of the
population. The restricted code is most commonly used by the 'unskilled working class', and
since its powers of expression do not fit its users for success in activities involving the use of
reason, they achieve poorly in the educational system, unless they are able to switch to an
'elaborated code'.

Bernstein does not actually bring evidence that some sections of the working class cannot
put forward an argument, cannot give reasons, cannot take account of another person's
point of view, cannot plan ahead, or discuss topics of other than immediate concern. He and
his associates content themselves with irrelevant quantitative data giving comparisons
between 'middle class' and 'working class' speakers on such items as pause between speech,
frequency of occurrence of pronouns, adjectives, and auxiliary expressions, in speech. He
does not succeed in showing that at any fundamental level there exist the two codes
claimed to exist. Space forbids me to show in detail just how weak Bernstein's evidence is
and how misconceived his criteria are.

Theories of Language Acquisition: Skinner- Behaviourism, Chomsky (LAD) Lenneberg-


Genetic Readiness.

Skinner- Behaviourism

The behaviourist psychologists developed their theories while carrying out a series of
experiments on animals. They observed that rats or birds, for example, could be taught to
perform various tasks by encouraging habit-forming. Researchers rewarded desirable
behaviour; this was known as positive reinforcement. Undesirable behaviour was punished or simply not rewarded (strictly speaking, punishment and extinction rather than negative reinforcement, which refers to strengthening a behaviour by removing an aversive stimulus). The behaviourist B. F. Skinner then
proposed this theory as an explanation for language acquisition in humans. In Verbal
Behaviour (1957), he stated: “The basic processes and relations which give verbal behaviour
its special characteristics are now fairly well understood. Much of the experimental work
responsible for this advance has been carried out on other species, but the results have
proved to be surprisingly free of species restrictions. Recent work has shown that the
methods can be extended to human behaviour without serious modifications.” (cited in
Lowe and Graham, 1998, p.68)

Skinner suggested that a child imitates the language of its parents or carers. Successful
attempts are rewarded because an adult who recognises a word spoken by a child will
praise the child and/or give it what it is asking for. The linguistic input was key — a model
for imitation to be either negatively or positively reinforced. Successful utterances are
therefore reinforced while unsuccessful ones are forgotten. In this view, there is no essential difference between the way a rat learns to negotiate a maze and the way a child learns to speak.

Limitations

While there must be some truth in Skinner’s explanation, there are many objections to it.
Language is based on a set of structures or rules, which could not be worked out simply by
imitating individual utterances. The mistakes made by children reveal that they are not
simply imitating but actively working out and applying rules.

For example, a child who says “drinked” instead of “drank” is not copying an adult but
rather over-applying a rule.

The vast majority of children go through the same stages of language acquisition.

Apart from certain extreme cases, the sequence seems to be largely unaffected by the
treatment the child receives or the type of society in which s/he grows up.

Children are often unable to repeat what an adult says, especially if the adult utterance
contains a structure the child has not yet started to use.

Few children receive much explicit grammatical correction. Parents are more interested in
politeness and truthfulness. According to Brown, Cazden & Bellugi (1969): “It seems to be
truth value rather than well-formed syntax that chiefly governs explicit verbal reinforcement
by parents — which renders mildly paradoxical the fact that the usual product of such a
training schedule is an adult whose speech is highly grammatical but not notably truthful.”
(cited in Lowe and Graham, 1998)

There is evidence for a critical period for language acquisition. Children who have not
acquired language by the age of about seven will never entirely catch up. The most famous
example is that of Genie, discovered in 1970 at the age of 13. She had been severely
neglected, brought up in isolation and deprived of normal human contact. Of course, she
was disturbed and underdeveloped in many ways. During subsequent attempts at
rehabilitation, her caretakers tried to teach her to speak. Despite some success, mainly in
learning vocabulary, she never became a fluent speaker, failing to acquire the grammatical
competence of the average five-year-old.

Noam Chomsky- Innateness Theory (LAD- Language Acquisition Device)

Noam Chomsky published a criticism of the behaviourist theory in 1959. In addition to some
of the arguments listed above, he focused particularly on the impoverished language input
children receive. The innateness theory is most closely associated with the writings of Chomsky, although similar ideas have been around for hundreds of years.

Children are born with an innate capacity for learning human language. Humans are
destined to speak. Children discover the grammar of their language based on their own
inborn grammar. Certain aspects of language structure seem to be preordained by the
cognitive structure of the human mind. This accounts for certain very basic universal
features of language structure: every language has nouns/verbs, consonants and vowels. It
is assumed that children are pre-programmed, hard-wired, to acquire such things.

Yet no one has been able to explain how quickly and perfectly all children acquire their
native language. Every language is extremely complex, full of subtle distinctions that
speakers are not even aware of. Nevertheless, children master their native language in 5 or
6 years regardless of their other talents and general intellectual ability. Acquisition must
certainly be more than mere imitation; it also doesn’t seem to depend on levels of general
intelligence, since even a child with a severe intellectual disability will acquire a native language without
special training. Some innate feature of the mind must be responsible for the universally
rapid and natural acquisition of language by any young child exposed to speech.

Chomsky concluded that children must have an inborn faculty for language acquisition.
According to this theory, the process is biologically determined – the human species has
evolved a brain whose neural circuits contain linguistic information at birth. The child’s
natural predisposition to learn language is triggered by hearing speech and the child’s brain
is able to interpret what s/he hears according to the underlying principles or structures it
already contains.

This natural faculty has become known as the Language Acquisition Device (LAD). Chomsky
did not suggest that an English child is born knowing anything specific about English, of
course. He stated that all human languages share common principles. (For example, they all
have words for things and actions — nouns and verbs.) It is the child’s task to establish how
the specific language s/he hears expresses these underlying principles.

For example, the LAD already contains the concept of verb tense. By listening to such forms
as “worked”, “played” and “patted”, the child will form the hypothesis that the past tense of
verbs is formed by adding the sound /d/, /t/ or / id/ to the base form. This, in turn, will lead
to the “virtuous errors” mentioned above. It hardly needs saying that the process is
unconscious. Chomsky does not envisage the small child lying in its cot working out
grammatical rules consciously!
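
This rule-formation idea can be made concrete with a short illustrative sketch in Python (a toy of our own devising, not Chomsky's model): a single hypothesized rule is applied to every verb, and irregular verbs come out over-regularized, exactly the kind of "virtuous error" described above. The orthographic "-ed" ending here merely stands in for the phonological /d/, /t/, /id/ alternation.

# Toy sketch: the hypothesized rule "add the past-tense ending to the base form"
# is applied to every verb, so irregular verbs are over-regularized ("goed").
# Orthographic doubling ("patted") is ignored in this simplification.
irregular_adult_forms = {"go": "went", "hold": "held", "sing": "sang"}

def child_past_tense(verb):
    # The child's single rule, applied blindly to regular and irregular verbs alike.
    return verb + "ed"

for verb in ["work", "play", "go", "hold"]:
    adult = irregular_adult_forms.get(verb, verb + "ed")
    print(f"{verb} -> child: {child_past_tense(verb)}, adult: {adult}")

Running the sketch prints "go -> child: goed, adult: went", the classic over-regularization error that children produce and later outgrow.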

Chomsky’s ground-breaking theory remains at the centre of the debate about language
acquisition. However, it has been modified, both by Chomsky himself and by others.
Chomsky’s original position was that the LAD contained specific knowledge about language.
Dan Isaac Slobin has proposed that it may be more like a mechanism for working out the
rules of language:

“It seems to me that the child is born not with a set of linguistic categories but with some
sort of process mechanism — a set of procedures and inference rules, if you will - that he
uses to process linguistic data. These mechanisms are such that, applying them to the input
data, the child ends up with something which is a member of the class of human languages.
The linguistic universals, then, are the result of an innate cognitive competence rather than
the content of such a competence” (cited in Russell, 2001).

Evidence to Support Innateness Theory

Work in several areas of language study has provided support for the idea of an innate
language faculty. Three types of evidence are offered here:

1) Slobin has pointed out that human anatomy is peculiarly adapted to the production of
speech. Unlike our nearest relatives, the great apes, we have evolved a vocal tract which
allows the precise articulation of a wide repertoire of vocal sounds.

2) Neuroscience has also identified specific areas of the brain with distinctly linguistic
functions, notably Broca's area and Wernicke's area. Stroke victims provide valuable data:
depending on the site of brain damage, they may suffer a range of language dysfunctions,
from problems with finding words to an inability to interpret syntax.
3) Experiments aimed at teaching chimpanzees to communicate using plastic symbols or
manual gestures have proved controversial. It seems likely that our ape cousins, while able
to learn individual “words”, have little or no grammatical competence. Pinker (1994) offers a
good account of this research.

The formation of creole varieties of English appears to be the result of the LAD at work. The
linguist Derek Bickerton has studied the formation of Dutch-based creoles in Surinam.
Escaped slaves, living together but originally from different language groups, were forced to
communicate in their very limited Dutch.

The result was the restricted form of language known as a pidgin. The adult speakers were
past the critical age at which they could learn a new language fluently — they had learned
Dutch as a foreign language and under unfavourable conditions. Remarkably, the children of
these slaves turned the pidgin into a full language, known by linguists as a creole. They were
presumably unaware of the process but the outcome was a language variety which follows
its own consistent rules and has a full expressive range. Creoles based on English are also
found, in the Caribbean and elsewhere.

Studies of the sign languages used by the deaf have shown that, far from being crude
gestures replacing spoken words, these are complex, fully grammatical languages in their
own right. A sign language may exist in several dialects.

Children learning to sign as a first language pass through similar stages to hearing children
learning spoken language. Deprived of speech, the urge to communicate is realised through
a manual system which fulfils the same function. There is even a signing creole, again
developed by children, in Nicaragua (Pinker, 1994).

Limitations of Chomsky’s Theory

Chomsky’s work on language was theoretical. He was interested in grammar and much of
his work consists of complex explanations of grammatical rules. He did not study real
children. The theory relies on children being exposed to language but takes no account of
the interaction between children and their caretakers. Nor does it recognise the reasons
why a child might want to speak, the functions of language.
In 1977, Bard and Sachs published a study of a child known as Jim, the hearing son of deaf
parents. Jim’s parents wanted their son to learn speech rather than the sign language they
used between themselves. He watched a lot of television and listened to the radio,
therefore receiving frequent language input. However, his progress was limited until a
speech therapist was enlisted to work with him.

Simply being exposed to language was not enough. Without the associated interaction, it
meant little to him.

Subsequent theories have placed greater emphasis on the ways in which real children
develop language to fulfil their needs and interact with their environment, including other
people.

Lenneberg-Genetic Readiness

The critical period hypothesis is the subject of a long-standing debate in linguistics and
language acquisition over the extent to which the ability to
acquire language is biologically linked to age. The hypothesis claims that there is an ideal
time window to acquire language in a linguistically rich environment, after which further
language acquisition becomes much more difficult and effortful. The critical period
hypothesis was first proposed by Montreal neurologist Wilder Penfield and co-author Lamar
Roberts in their 1959 book Speech and Brain Mechanisms, and was popularized by Eric
Lenneberg in 1967 with Biological Foundations of Language.

The critical period hypothesis states that the first few years of life constitute the crucial time
in which an individual can acquire a first language if presented with adequate stimuli, and that
first-language acquisition relies on neuroplasticity. If language input does not occur until
after this time, the individual will never achieve a full command of language. There is
much debate over the timing of the critical period with respect to second-language
acquisition (SLA), with estimates ranging between 2 and 13 years of age.

The critical period hypothesis is derived from the concept of a critical period in the biological
sciences, which refers to a set period in which an organism must acquire a skill or ability, or
said organism will not be able to acquire it later in life. Strictly speaking, the experimentally
verified critical period relates to a time span during which damage to the development of
the visual system can occur, for example if animals are deprived of the necessary binocular
input for developing stereopsis.

Preliminary research into the critical period hypothesis investigated brain lateralization as a
possible neurological cause; however, this theoretical cause was largely discredited since
lateralization does not necessarily increase with age, and no definitive link between
language learning ability and lateralization was ever determined. Recently, it has been
suggested that if a critical period does exist, it may be due at least partially to the delayed
development of the prefrontal cortex in human children. Researchers have suggested
that delayed development of the prefrontal cortex and an associated delay in the
development of cognitive control may facilitate convention learning, allowing young
children to learn language far more easily than cognitively mature adults and older children.
This pattern of prefrontal development is unique to humans among similar mammalian (and
primate) species, and may explain why humans—and not chimpanzees—are so adept at
learning language.
MODULE 6: APPLYING COGNITIVE PSYCHOLOGY CONCEPTS TO EVERYDAY LIFE (TO READ:
NOT TO BE INCLUDED IN SHORT AND LONG ESSAYS)

UNIT 1: TOP-DOWN INFLUENCE OF MOTIVATION & LEARNING AND ROLE OF CULTURE ON
ATTENTION, PERCEPTION AND MEMORY.

[Refer Galloti, pg no: 377- 385 for Role of Culture on Perception and Memory]

[Refer Gallotti, pg no: 55 for Top-Down Influences of Learning]

UNIT 2: VISUO-SPATIAL SUB-CODES, CONTRIBUTIONS OF HUBEL & WIESEL, PERCEPTUAL
ORGANIZATION (GESTALT LAWS)

Visuo-Spatial Sub-Codes

[Refer Aqueen ppt]

Contributions of Hubel & Wiesel

In 1964, David Hubel and Torsten Wiesel studied the short- and long-term
effects of depriving kittens of vision in one eye, using kittens as models for human
children. They investigated whether the impairment of vision in one eye could be
repaired and whether such impairments would impact vision later in life. The
researchers sewed one
eye of a kitten shut for varying periods of time. They found that when vision
impairments occurred to the kittens right after birth, their vision was significantly
affected later on in life, as the cells that were responsible for processing visual
information redistributed to favor the unimpaired eye. Hubel and Wiesel worked
together for over twenty years and received the 1981 Nobel Prize for Physiology or
Medicine for their research on the critical period for mammalian visual system
development. Hubel and Wiesel’s experiments with kittens showed that there is a
critical period during which the visual system develops in mammals, and it also
showed that any impairment of that system during that time will affect the lifelong
vision of a mammal.
In 1959, Hubel and Wiesel conducted a series of experiments on cats and kittens as
models for humans, and in the 1970s they repeated the experiments on primates.
Their collaboration lasted for over twenty years, during which time Hubel and
Wiesel elucidated details about the development of the visual system.

In 1964, when the article was published, surgeons operated on individuals with
congenital cataracts, a disorder in which the lens of the eye is clouded at birth,
later in those individuals' lives rather than in infancy. Those individuals required
intensive treatment after surgery, as there was still impairment to vision in the
affected eye. Hubel and Wiesel questioned why their vision remained impaired.
Hubel and Wiesel hypothesized that there was a time period during which the visual
nerve cells develop and that if the retina did not receive any visual information at
that time, the cells of the visual cortex redistribute their response in favor of the
working eye. By 1964, Hubel and Wiesel performed a set of experiments to test their
hypothesis. Other researchers had studied the behavior and vision of animals after
they were raised in the dark, but Hubel and Wiesel were the first to study animal
behavior after physically suturing one of the eyes, thus further reducing the visual
input to the retina.

For the purpose of the experiment, Hubel and Wiesel used newborn kittens and
sutured one of their eyes shut for the first three months of their lives. The sutured
eye did not get any visual information and received 10,000 to 100,000 times less
light than the normal eye. That meant that there was no visual information for the
retina of the sutured eye to record and thus the visual cortex could not receive any
input from that eye. Hubel and Wiesel used four kittens for the experiment.

After three months, Hubel and Wiesel opened the sutured eyes, and recorded the
changes. They found a noticeable difference in cortical cell response. The
researchers recorded the activity of the visual system in each kitten by inserting a
tungsten electrode into the sedated kitten’s visual cortex of the brain, which let
them monitor the activity of each cortical cell separately. The tungsten rod detected
electrical activity or inactivity in the cortex, which indicated whether or not the
visual cortex retrieved information from the previously sutured eye. By recording
electrical activity in the kittens’ visual cortex, Hubel and Wiesel observed how the
cells of the visual cortex reacted to different stimuli from both eyes and whether or
not there was a difference in the signals from the previously sutured eye and the
normal eye.

Next, Wiesel and Hubel showed the kittens different patterns of light to stimulate
the cortical cells. Normally, about eighty-five percent of cortical cells respond
identically to both eyes in a mammal with normal vision and only fifteen percent of
those cells respond to one eye only. However, when Hubel and Wiesel performed
the experiment on kittens with previously sutured eyes, they found that one out of
eighty-four cells responded to the previously sutured eye and the other eighty-three
cells responded to the normal eye only. That meant that the cortical cells
redistributed to favor the normal eye, as it was their only source of visual
information during the early development of the kitten. The researchers also noted
that all kittens who had one of their eyes sutured had some cortical cells that did
not respond to any stimuli at all. The researchers concluded that those cells were
likely only associated with the previously sutured eye. Because those cells did not
respond at all to any visual stimuli, they had not regenerated and could not be used
again. That meant that some cortical neuron function can be fully lost if a vision
impairment occurs during visual system development.

Hubel and Wiesel also performed a simple vision test on the kittens. They put an
opaque barrier on one eye of the kitten and monitored the kitten's movement. They
later repeated the same procedure for the other eye. The researchers noted that
when the kittens were allowed to see with the previously sutured eye, they were
uncoordinated and showed no signs of vision. However, the normal eye functioned
properly and the researchers noted no impairment. Those findings meant that the
previously sutured eye had lost its vision function and was not able to recover upon
being opened, which provided further evidence that previous vision deprivation affects
long-term vision. Hubel and Wiesel concluded that an abnormality occurred
somewhere within the visual pathway from the eye to the brain that caused the
cortical neurons to redistribute and function only with the normal eye.
Hubel and Wiesel investigated where in the vision pathway the abnormality of vision
cells came from. They sought to know whether the abnormality was a cortical or a
geniculate abnormality, as that information would reveal how the vision pathway
works. Another question that they asked was whether or not depriving the kittens of
light or form (sight of object) caused the abnormality in the vision pathway. Their
research aimed to explain how the deprivation of either one related to the
continuous vision impairment in children after surgery. Hubel and Wiesel also
questioned if the kittens’ visual system reacted to the visual impairment the same
way the system of an older or an adult cat would. Their findings sought to explain
whether the connections made by the visual system before birth were innate or
developed after birth. Finally, Hubel and Wiesel questioned whether the neural
connections would deteriorate if an impairment was present, or whether the neural
connections could not develop in the presence of an impairment. To answer those
questions, Hubel and Wiesel performed multiple experiments with kittens and adult
cats.

Following the vision tests, Hubel and Wiesel sought to answer where the
abnormality occurred and how it worked. They checked the lateral geniculate body,
which is a transfer site in the thalamus that receives visual information from the
retina and transfers it to the occipital lobe of the brain. The cells in the lateral
geniculate body normally respond more to one eye than the other. The vast majority
of the geniculate cells that were associated with the previously sutured eye were
intact and worked properly. However, upon analyzing those cells with a microscope,
Hubel and Wiesel found that the cross sectional area of the lateral geniculate body
had shrunk an average of forty percent and that some geniculate cells were smaller
and contained little substance inside. That meant that the cells were not being used
nearly as much as they could have been, causing the entire area to atrophy. The
lateral geniculate body atrophied because it was receiving only half of its normal
visual information, but it continued to transfer visual information from the eye to
the brain. The researchers found no other physical abnormalities anywhere along
the visual pathway. Hubel and Wiesel concluded that the abnormality that caused
vision loss of the sutured eye likely occurred somewhere in the cortex of the brain,
which was the last stop in the visual pathway.

Next, Hubel and Wiesel investigated whether the visual impairment in the kittens
was caused by the deprivation of light or the depreciation of viewing forms. Light
refers to colors as well as light or dark perception of the eye, while form refers to
recognizing shapes of different objects. To determine the cause of the visual
impairment, the researchers took the newborn kittens and put a translucent barrier
over one of their eyes, which reduced the incoming light by a factor of only ten to
one hundred. However, the barrier did not allow the kittens to distinguish
forms or shapes.
eye, but the morphological changes in the lateral geniculate body cells were
significantly reduced. Those findings suggested that cortical cells redistributed due
to form deprivation, while the morphological abnormalities of the lateral geniculate
body were due to light deprivation.

Next, Hubel and Wiesel investigated whether those visual effects would be
replicated in older kittens that had already experienced vision. For that purpose,
they sutured the eye of kittens shut at nine weeks of age for one month. Upon
opening the eye, the researchers found that the distribution of cortical cells
between eyes was still largely in favor of the open eye. However, there was almost
no difference to the lateral geniculate body size. That, once again, established that
the source of abnormality was cortical and not geniculate.

The researchers also tried the experiment with adult cats. They observed after
visually depriving adult cats for several months, that the cats did not display any
changes in cortical cell distribution or changes in the morphology of their lateral
geniculate bodies. Hubel and Wiesel concluded that younger kittens were most at
risk for developing cortical abnormalities and, thus, blindness. That risk declined
with every month of life and was almost non-existent in adults. Hubel and Wiesel
found that there was a period at the beginning of a kitten's life when the ability to
view light and forms was most important for development.
Finally, Hubel and Wiesel researched whether visual pathway connections were
present at birth and deteriorated with disuse or whether they did not develop if not
used early on. To determine that, they experimented with three more kittens. The
researchers closed the eye of one of the kittens when the kitten was eight days old,
which is about the time that eyes first start to open in kittens. They closed the eyes
of the other two kittens after one to two weeks of age. The researchers studied the
electrical connections in the brain at birth for all three kittens and found that their
cortical cells responded to visual stimuli similarly to those in adult cats. This
observation meant that the cortical cells had some ocular dominance. However, the
cats could recognize the stimuli from both eyes. Hubel and Wiesel studied the same
electrical connections in the brain later, after reopening the sutured eyes, and found
that they had deteriorated and that cortical cells had redistributed in favor of the
normal eye yet again. Hubel and Wiesel concluded that the neural pathways in the
visual system are present at birth and deteriorate with disuse.

Hubel and Wiesel’s experiment helped uncover how the visual system develops in
mammals. First, they found a critical period during which the visual system
developed and learned that the deprivation of vision during that time could impair
vision forever. The conclusions of Hubel and Wiesel’s experiment led surgeons to
operate on congenital cataracts as soon as an infant is diagnosed. In 1981, Hubel
and Wiesel received a Nobel Prize for Physiology or Medicine for their research on
the development of the visual system.

Perceptual Organization (Gestalt Laws)

[Refer BSc notes and CP notes pdf]

UNIT 3: SUBLIMINAL PERCEPTION, PERCEPTUAL DEFENSE, SYNESTHESIA

Subliminal Perception

[Refer BSc notes]

Subliminal perception is the influence of stimuli that are insufficiently intense to produce a
conscious sensation but strong enough to influence some mental processes. Literally,
subliminal is below the sensory threshold, thus imperceptible. However, subliminal
perception often refers to stimuli that are clearly strong enough to be above the
psychological limen but do not enter consciousness, technically referred to as
supraliminal (above the limen). Public interest in subliminal messages began in the late 1950s
when advertisers briefly flashed messages between frames at a movie theater in attempts
to increase sales of popcorn and soda. Since then, subliminal messages have been reported
in a variety of sources such as naked women in ice cubes, demonic messages in rock ‘n roll
music, and recent presidential campaign advertisements.

The topic of subliminal perception is closely related to perceptual priming, in which the
display of a word is so brief that the participant cannot report seeing it, however, the word
actually facilitates the recognition of an associate to that word without any conscious
awareness of the process. Furthermore, several studies (Philpott & Wilding, 1979;
Underwood, 1976, 1977) have shown that subliminal stimuli have an effect on the
recognition of subsequent stimuli. Therefore, some effect of the subliminal stimuli is
observed.

Filter location. Contemporary models of attention focus on where the selection (or filtering
out) of information takes place in the cognitive process. Inherent in many of these filter
theories is the notion that people are not aware of signals in the early part of processing of
information but, after some type of decision or selection, pass some of the signals on for
further processing. The models typically differ based on early or late selection depending on
where the filter location is hypothesized.

Perceptual Defense

• The process by which stimuli that are potentially threatening, offensive, or


unpleasant are either not perceived or are distorted in perception, especially when
presented as brief flashes.

• A person may build a defense (a block or a refusal to recognize) against stimuli or


situational events in the context that are person or culturally unacceptable or
threatening.
• Study: College students were presented with the word “intelligent”
as a characteristic of a factory worker. This was counter to their perception of
factory workers, and they built defenses in the following ways:

1. Denial: A few of the subjects denied the existence of intelligence in factory workers.

2. Modification and distortion: This was one of the most frequent forms of defense.
The pattern was to explain away the perceptual conflict by joining intelligence with
some other characteristic, for example, “He is intelligent, but doesn’t possess the
initiative to rise above his group.”

3. Change in perception: Many of the students changed their perception of the worker
because of the intelligence characteristic. The change, however, was usually very
subtle; example, “He cracks jokes” became “He’s witty.”

4. Recognition but refusal to change: Very few subjects explicitly recognized the
conflict between their perception of the worker and the characteristic of intelligence
that was confronting them. For example, one subject stated, “The traits seem to be
conflicting; most factory workers I know aren’t too intelligent.”

Health-related advertisements generally use this concept. Example: a smoker is exposed to
an advertisement stating the harmful effects of cigarette smoking.

Synaesthesia

• Synaesthesia (or synesthesia) is a perceptual phenomenon in which stimulation of one


sensory or cognitive pathway leads to involuntary experiences in a second sensory or
cognitive pathway.

• People who report a lifelong history of such experiences are known as synesthetes.

• People who have synesthesia — may see sounds, taste words or feel a sensation on
their skin when they smell certain scents.

• Many synesthetes experience more than one form of the condition.

• There are over 80 different types of synesthesia described by science.


• Example- every time you bite into a food, you also feel its geometric shape: round,
sharp, or square.

• Maybe when you’re feeling emotional over a person you love, you can close your
eyes and see certain colors playing at your field of vision.

• Synaesthesia has often been conceptualised as an abnormality (e.g. “breakdown of


modularity”)

• However, in the past decade, it is discovered that synaesthesia shares much in


common with ordinary perception; it relies on common perceptual mechanisms and
may merely represent an augmentation of normal propensities for cross-modal
interactions.

• There is a simple common characteristic between synaesthesia and


hallucination: The absence of an ‘appropriate’ stimulus.

• This similarity alone does not mean that synaesthesia should be regarded as a pathological condition.

• Differences between developmental synaesthesia and spontaneous


hallucination: Developmental synaesthesia is not transient; it is elicited in a
consistent manner by specific stimuli; it is not disruptive; and it occurs
naturally without neurological disease or the aid of recreational drugs (Van
Campen, 2007).

• Although the correspondence between sensory input and perceptual experience is


different in synaesthetes and non-synaesthetes – in both groups this
correspondence is regular.

Well-known personalities who are synaesthetes:

• Marilyn Monroe

• Alessia Cara

• Billie Eilish
• Kanye West

• Vincent Van Gogh had a form of synesthesia called chromesthesia—an experience


of the senses where the person associates sounds with colors.

[Refer Solso. Pg no: 239]

UNIT 4: META-MEMORY, MNEMONICS.

Meta-Memory

Metamemory strategies involve reflecting on our own memory processes with a view to
improving our memory. Such strategies are especially important when we are transferring
new information to long-term memory by rehearsing it. Metamemory strategies are just
one component of metacognition, our ability to think about and control our own processes
of thought and ways of enhancing our thinking.

The use of mnemonic devices and other techniques for aiding memory involves
metamemory (our understanding and reflection upon our memory and how to improve it).
Because most adults spontaneously use categorical clustering, its inclusion in this list of
mnemonic devices is actually just a reminder to use this common memory strategy.

Metamemory refers to the introspective examination of one’s own memory contents,
facilitating judgment and discretion. Thus, metamemory is not memory itself but rather the
analysis, commentary, and appraisal of the memory index and of learning. For instance, when
Descartes was engaged in his famous doubting meditation – musing about how his
memories or perceptions could have been different than they were, or how he could have
been mistaken about them – he was engaging in metacognition. Such reflection on the
phenomenological self was also highly prone to subjectivity. Since metamemory is
primarily the judgment and appraisal of the memory index, three basic judgments have
formed the core of metamemory research: feeling-of-knowing judgments, tip-of-the-tongue
judgments, and judgments of learning. This list is not exhaustive: because metamemory
concerns any judgment about memory, other evaluations such as source judgments,
recognition judgments, and confidence judgments are also an imperative part.
Metamemory operates at two levels, i.e., the object level and the metalevel. At the
object level, memory itself is the concern, whereas at the metalevel, the regulation
of the object level is involved.

Mnemonics

Mnemonic devices are specific techniques that help memorize information (Best, 2003).

• In categorical clustering, organize a list of items into a set of categories.

• In interactive images, imagine (as vividly as possible) the objects represented by words
you have to remember as if the objects are interacting with each other in some active way.

• In the pegword system, associate each word with a word on a previously memorized list
and form an interactive image between the two words.

• In the method of loci, visualize walking around an area with distinctive, wellknown
landmarks and link the various landmarks to specific items to be remembered.

• In using acronyms, devise a word or expression in which each of its letters stands for a
certain other word or concept.

• In using acrostics, form a sentence, rather than a single word, to help one remember new
words.

• In using the keyword system, create an interactive image that links the sound and
meaning of a foreign word with the sound and meaning of a familiar word.

The success of mnemonics in facilitating memory is attributed to their assistance in
organizing information.

[Refer BSc notes]

[Refer Matlin, pg no: 171- 175]

[Refer Galloti, pg no: 125]


UNIT 5: ARTIFICIAL INTELLIGENCE, META-COGNITION.

Artificial Intelligence

Artificial intelligence (AI) is the area of computer science that attempts to construct
computers that can demonstrate human-like cognitive processes (Stenning et al., 2006).
Computer programs have been developed both to simulate human intelligence and to
exceed it. In many ways, computer programs have been created with the intention of
solving problems faster and more efficiently than humans. Much of early information-
processing research centered on work based on computer simulations of human intelligence
as well as computer systems that use optimal methods to solve tasks. Programs of both
kinds can be classified as examples of artificial intelligence (AI), or intelligence in symbol-
processing systems such as computers (see Schank & Towle, 2000). Computers cannot
actually think; they must be programmed to behave as though they are thinking. That is,
they must be programmed to simulate cognitive processes. In this way, they give us insight
into the details of how people process information cognitively.

The Turing Test

Probably the first serious attempt to deal with the issue of whether a computer program
can be intelligent was made by Alan Turing (1963). The basic idea behind the Turing Test is
whether an observer can distinguish the performance of a computer from that of a human.
The test is conducted with a computer, a human respondent, and an interrogator. The
interrogator has two different “conversations” with an interactive computer program. The
goal of the interrogator is to figure out which of two parties is a person communicating
through the computer, and which is the computer itself. The interrogator can ask the two
parties any questions at all. However, the computer will try to fool the interrogator into
believing that it is human. The human, in contrast, will be trying to show the interrogator
that he or she truly is human. The computer passes the Turing Test if an interrogator is
unable to distinguish the computer from the human.
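
Turing's imitation game can be sketched as a simple protocol. The following Python sketch is only a minimal illustration under our own assumptions; the names run_turing_test, human_reply, and machine_reply are hypothetical, and a real test would involve open-ended conversation rather than canned replies.

import random

def run_turing_test(interrogator_guess, human_reply, machine_reply, questions):
    # Hide the two parties behind channel labels "A" and "B" at random.
    parties = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(parties)
    channels = {"A": parties[0], "B": parties[1]}
    # The interrogator sees only the labelled transcripts, never the parties.
    transcript = {label: [(q, reply(q)) for q in questions]
                  for label, (_, reply) in channels.items()}
    guess = interrogator_guess(transcript)  # interrogator names "A" or "B" as the machine
    machine_label = "A" if channels["A"][0] == "machine" else "B"
    return guess != machine_label           # True: the machine passed this round

# Hypothetical stand-ins: both parties answer identically, so the
# interrogator is reduced to guessing at chance.
questions = ["What is 2 + 2?", "Describe your childhood."]
human = lambda q: "four" if "2" in q else "it was ordinary"
machine = lambda q: "four" if "2" in q else "it was ordinary"
print("Machine passed:", run_turing_test(lambda t: random.choice(["A", "B"]),
                                         human, machine, questions))

The design point the sketch makes is that the judgment is purely behavioural: the interrogator has access only to the answers on each channel, never to what produced them.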

Often, what researchers are interested in when assessing the “intelligence” of computers is
not their reaction time, which is often much faster than that of humans. They are interested
instead in patterns of reaction time, that is, whether the problems that take the computer
relatively longer to solve also take human participants relatively longer.

Sometimes, the goal of a computer model is not to match human performance but to
exceed it. In this case, maximum AI, rather than simulation of human intelligence, is the goal
of the program. The criterion of whether computer performance matches that of humans is
no longer relevant. Instead, the criterion of interest is that of how well the computer can
perform the task assigned to it. Computer programs that play chess, for example, typically
play in a way that emphasizes “brute force,” or the consideration of all possible moves
without respect to their quality. The programs evaluate extremely large numbers of possible
moves. Many of them are moves humans would never even consider evaluating (Berliner,
1969; Bernstein, 1958). Using brute force, the IBM program, “Deep Blue,” beat world
champion Garry Kasparov in a 1997 chess match. The same brute-force method is used in
programs that play checkers (Samuel, 1963). These programs generally are evaluated in
terms of how well they can beat each other or, even more importantly, human contenders
playing against them.

Expert Systems

Expert systems are computer programs that can perform the way an expert does in a fairly
specific domain. They are not developed to model human intelligence, but to simulate
performance in just one domain, often a narrow one. They are mostly based on rules that
are followed and worked down like a decision tree.

Several programs were developed to diagnose various kinds of medical disorders, like
cancer. Such programs are obviously of enormous potential significance, given the very high
costs (financial and personal) of incorrect diagnoses. Not only are there expert systems for
use by doctors, but there are even medical expert systems on-line for use by consumers
who would like an analysis of their symptoms. Expert systems are used in other areas as
well, for example in banks. The processing of small mortgages is relatively expensive for
banks because a lot of factors need to be considered. If the data are fed into a computer,
however, an expert system makes a decision about the mortgage application based on rules
it was programmed with. There is one expert system with which you may have made some
experiences yourself: Microsoft Windows offers troubleshooting through the “help section”
where you can enter into a dialogue with the system in order to figure out a solution to your
particular problem.
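
As a rough illustration of the rule-following, decision-tree style of an expert system, here is a minimal Python sketch of a hypothetical mortgage-screening rule base. The thresholds and field names are invented for illustration; a real banking system would encode many more factors.

def mortgage_decision(applicant):
    # Rules are checked top-down, like walking a decision tree;
    # the first rule that fires determines the outcome.
    if applicant["credit_score"] < 600:
        return "reject: credit score below threshold"
    if applicant["loan"] > 4 * applicant["income"]:
        return "reject: loan exceeds four times annual income"
    if applicant["down_payment"] < 0.10 * applicant["loan"]:
        return "refer to human underwriter: low down payment"
    return "approve"

print(mortgage_decision({"credit_score": 710, "income": 40_000,
                         "loan": 120_000, "down_payment": 15_000}))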

One has to be cautious in the use of expert systems. Because patients generally do not have
the knowledge their doctors have, their use of expert systems, such as on-line ones, may
lead them to incorrect conclusions about the illnesses they suffer from. In medicine, patient
use of the Internet is no substitute for the judgment of a medical doctor.

The application of expertise to problem solving generally involves converging on a single
correct solution from a broad range of possibilities. A complementary asset to expertise in
problem solving involves creativity. Here, an individual extends the range of possibilities to
consider never-before-explored options. In fact, many problems can be solved only by
inventing or discovering strategies to answer a complex question.

The FRUMP Project

Let’s consider a classic example of a computer program designed to perform reading tasks.
One script-based program is called FRUMP, an acronym for Fast Reading Understanding and
Memory Program (De Jong, 1982). The goal of FRUMP is to summarize newspaper stories,
written in ordinary language. When it was developed, FRUMP could interpret about 10% of
news releases issued by United Press International (Butcher & Kintsch, 2003; Kintsch, 1984).
FRUMP usually worked in a top-down fashion by applying world knowledge, based on 48
different scripts.

Consider, for example, the “vehicle accident” script. The script contains information such as
the number of people killed, the number of people injured, and the cause of the accident.
On the basis of the “vehicle accident” script, FRUMP summarized a news article as follows:
“A vehicle accident occurred in Colorado. A plane hit the ground. 1 person died.” FRUMP did
manage to capture the facts of the story. However, it missed the major reason that the item
was newsworthy: Yes, 1 person was killed, but 21 people actually survived!
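
The script-based slot-filling idea behind FRUMP can be illustrated with a toy sketch in Python. This is not FRUMP's actual code; the regular expressions and the canned summary template are our own stand-ins for its much richer scripts and parser. Note that, like FRUMP, the toy dutifully fills its slots while ignoring the newsworthy survivors.

import re

def summarize_accident(story):
    # Fill the three slots of a toy "vehicle accident" script from raw text.
    slots = {"location": None, "vehicle": None, "deaths": None}
    loc = re.search(r"in ([A-Z][a-z]+)", story)
    if loc:
        slots["location"] = loc.group(1)
    veh = re.search(r"\b(plane|car|bus|train)\b", story, re.I)
    if veh:
        slots["vehicle"] = veh.group(1).lower()
    died = re.search(r"(\d+)\s+(?:person|people)\s+(?:died|was killed|were killed)", story)
    if died:
        slots["deaths"] = died.group(1)
    # The canned template reports only the slots; anything outside the
    # script (such as the 21 survivors) is silently dropped.
    return (f"A vehicle accident occurred in {slots['location']}. "
            f"A {slots['vehicle']} hit the ground. {slots['deaths']} person died.")

print(summarize_accident("A plane crashed in Colorado; 1 person died, 21 survived."))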

Research on script-based programs like FRUMP shows that humans draw numerous
inferences that artificial intelligence systems cannot access (Kintsch, 1998, 2007). We can be
impressed that FRUMP and other programs can manage some language-like processes.
However, consistent with Theme 2, their errors highlight the wide-ranging capabilities of
human readers (Thagard, 2005).

More Recent Projects

Cognitive scientists continue to develop programs designed to understand language (Moore
& Wiemer-Hastings, 2003; Shermis & Burstein, 2003; Wolfe et al., 2005). One of the most
useful artificial intelligence programs was created by cognitive psychologist Thomas
Landauer and his colleagues (Foltz, 2003; Landauer et al., 2007). Their program, called latent
semantic analysis (LSA), can perform many fairly sophisticated language tasks. For instance,
it can be programmed to provide tutoring sessions in disciplines such as physics
(Graesser et al., 2004).

LSA can also assess the amount of semantic similarity between two discourse segments. In
fact, LSA can even be used to grade essays written by college students (Graesser et al.,
2007). For example, suppose a textbook contains the following sentence: “The phonological
loop responds to the phonetic characteristics of speech but does not evaluate speech for
semantic content” (Butcher & Kintsch, 2003, p. 551). LSA can then estimate how closely the
semantic content of a student’s essay matches passages like this one, even when the
student uses different wording.
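
The core mechanics of LSA, reducing a term-by-document co-occurrence matrix with singular value decomposition and comparing documents by cosine similarity in the reduced space, can be sketched in a few lines of Python with NumPy. The tiny matrix below is invented purely for illustration; real LSA is trained on large corpora.

import numpy as np

# Toy term-by-document matrix (rows = terms, columns = passages);
# the counts are made up for this example.
X = np.array([[2, 0, 1],   # "phonological"
              [1, 0, 1],   # "loop"
              [0, 3, 0],   # "retina"
              [0, 2, 1]])  # "cortex"

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                     # keep the k largest singular values
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T    # passages in the latent semantic space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Passage 0 should come out semantically closer to passage 2 (shared
# vocabulary) than to passage 1 in the reduced space.
print(cosine(doc_vecs[0], doc_vecs[1]), cosine(doc_vecs[0], doc_vecs[2]))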

LSA is indeed impressive, but even its developers note that it cannot match a human grader.
For instance, it cannot assess a student’s creativity when writing an essay (Murray, 1998).
Furthermore, all the current programs master just a small component of language
comprehension. For example, LSA typically ignores syntax, whereas humans can easily
detect syntax errors. In addition, LSA learns only from written text, whereas humans learn
from spoken language, facial expressions, and physical gestures (Butcher & Kintsch, 2003).
Once again, the artificial intelligence approach to language illustrates humans’ tremendous
breadth of knowledge, cognitive flexibility, understanding of syntax, and sources of
information.

[Also refer CP notes]

META-COGNITION

[Refer Galloti, pg no: 343- 345] [Refer CP notes and BSc notes]
PSYCHOPATHOLOGY
MODULE 1: CLASSIFICATORY SYSTEMS AND NEURODEVELOPMENTAL DISORDERS

UNIT 3: INTELLECTUAL DISABILITIES, AUTISM SPECTRUM DISORDER, SPECIFIC LEARNING
DISORDERS AND COMMUNICATION DISORDERS

Levels of Intellectual Disability

The American Association on Intellectual and Developmental Disability (AAIDD) defines
intellectual disability as a disability characterized by significant limitations in both
intellectual functioning (reasoning, learning, and problem solving) and in adaptive behavior
(conceptual, social, and practical skills) that emerges before the age of 18 years. DSM-5
criteria for intellectual disability include significantly subaverage general intellectual
functioning associated with concurrent impairment in adaptive behavior, manifested before
the age of 18. Approximately 85 percent of individuals who have intellectual disability fall
within the DSM-5 mild intellectual disability category.

The severity levels of intellectual disability are expressed in DSM-5 as:

• Mild (IQ 50-70)
• Moderate (IQ 35-50)
• Severe (IQ 20-35)
• Profound (IQ less than 20)

Mild

• IQ: 50-70
• Represents approximately 85 percent of persons with intellectual disability.
• Often not identified until the first or second grade, when academic demands increase.
• They often acquire academic skills up to approximately a sixth-grade level.
• Specific causes for the intellectual disability are often unidentified in this group.
• Many adults with mild intellectual disability can live independently with appropriate
support and raise their own families.
• Their social adjustment may be comparable to an adolescent.
• They lack innovation and judgement, fail to anticipate the consequences of their
actions, and have poor control over their impulses.

Moderate

• IQ: 35-50
• Represents about 10 percent of persons with intellectual disability.
• Most children with moderate intellectual disability acquire language and can
communicate adequately during early childhood.
• They are challenged academically and often are not able to achieve above a second
to third grade level.
• During adolescence, socialization difficulties often set these persons apart, and a
great deal of social and vocational support is beneficial (Considered to be trainable).
• As adults, individuals with moderate intellectual disability may be able to perform
semiskilled work under appropriate supervision.
• Physically they appear to be clumsy and suffer from motor incoordination.
• They are unable to do any task/work that require initiative, originality, memory and
consistent attention.

Severe

• IQ: 20-35
• Represents about 4 percent of individuals with intellectual disability.
• They may be able to develop communication skills in childhood and often can learn
to count as well as recognize words that are critical to functioning.
• The cause for the intellectual disability is more likely to be identified than in milder
forms of intellectual disability.
• In adulthood, persons with severe intellectual disability may adapt well to supervised
living situations, such as group homes, and may be able to perform work-related and
basic tasks under supervision.
• Severe motor, speech and sensory deficits are common.
Profound

• IQ: less than 20


• Constitutes approximately 1 to 2 percent of individuals with intellectual disability.
• Most individuals with profound intellectual disability have identifiable causes for
their condition.
• Children with profound intellectual disability may be taught some self-care skills and
learn to communicate their needs given the appropriate training.
• Short life-span.
• Completely dependent and at times institutionalized.

The DSM-5 also includes a disorder called "Unspecified Intellectual Disability" (Intellectual
Developmental Disorder), reserved for individuals over the age of 5 years who are difficult
to evaluate but are strongly suspected of having intellectual disability. Individuals with this
diagnosis may have sensory or physical impairments such as blindness or deafness, or
concurrent mental disorders, making it difficult to administer typical assessment tools (e.g.,
the Bayley Scales of Infant Development and the Cattell Infant Scale) used to aid in
determining adaptive functional impairment.

Communication Disorders

In DSM-5, Language Disorder includes both expressive and mixed receptive-expressive
problems. DSM-5 speech disorders include Speech Sound Disorder (formerly known as
Phonological Disorder) and Childhood-Onset Fluency Disorder (Stuttering). There are
different types of communication disorders as specified in DSM-5.

• Language Disorder (encompassing expressive language deficits and mixed
receptive-expressive deficits)
• Speech Sound Disorder
• Child-Onset Fluency Disorder (Stuttering)
• Social (Pragmatic) Communication Disorder
• Unspecified Communication Disorder

Language Disorder

Language disorder consists of difficulties in the acquisition and use of language across many
modalities, including spoken and written, due to deficits in comprehension or production
based on both expressive and receptive skills. These deficits include

• reduced vocabulary
• limited abilities in forming sentences using the rules of grammar
• impairments in conversing based on difficulties using vocabulary to connect
sentences in descriptive ways.

Expressive Language Deficits

Expressive language deficits are present when a child demonstrates a selective deficit in
expressive language development relative to receptive language skills and nonverbal
intellectual function.
Milestones in language development:

Age          Description
6 months     Laughs and coos
9 months     Babbles and verbalizes syllables such as “dadada” or “mamama”
1 year       Imitates vocalizations and can often speak at least one word
1.5 years    Typically can say a handful of words
2 years      Combines words into simple sentences
2.5 years    Can name an action in a picture, and can make themselves understood
             through their verbalizations about half of the time
3 years      Can speak understandably, name a color, and describe what they see
             with several adjectives
4 years      Can name at least four colors and can converse understandably

In the early years, prior to entering preschool, the development of proficiency in vocabulary
and language usage is highly variable and is influenced by the amount and quality of verbal
interactions with family members; after beginning school, a child's language skills are
significantly influenced by the level of verbal engagement in school. A child with expressive
language deficits may be identified using the Wechsler Intelligence Scale for Children III
(WISC-III), in that verbal intellectual level may appear to be depressed compared with the
child's overall intelligence quotient (IQ). A child with expressive language problems is likely
to function below the expected levels of acquired vocabulary, correct tense usage, complex
sentence constructions, and word recall. Children with expressive language deficits often
present verbally as younger than their age.

Epidemiology

• The prevalence of expressive language disturbance decreases with a child's
increasing age.
• 6 percent in children between the ages of 5 and 11 years.
• In school-age children over the age of 11 years, the estimates are lower, ranging
from 3 percent to 5 percent.
• The disorder is two to three times more common in boys than in girls.
• Most prevalent among children whose relatives have a family history of phonologic
disorder or other communication disorders.

Comorbidity

• ADHD (19 percent),
• anxiety disorders (10 percent),
• oppositional defiant disorder and conduct disorder (7 percent combined).
• Children with expressive language disorder are also at higher risk for a speech
disorder, receptive difficulties, and other learning disorders.
• Many disorders-such as reading disorder, developmental coordination disorder, and
other communication disorders are associated with expressive language disturbance.

Etiology

Causes are multifactorial. Imaging, genetic, and environmental studies indicate:


• Diminished left-right brain asymmetry in the perisylvian and planum temporale
regions.
• inversion of brain asymmetry (right > left).
• Left handedness or ambilaterality appears to be associated with expressive language
problems with more frequency than righthandedness.
• Several studies of twins show significant concordance for monozygotic twins with
respect to language disorders.
• Environmental and educational factors are also postulated to contribute to
developmental language disorders.

Course and Prognosis

The prognosis for expressive language disturbance worsens the longer it persists in a child;
prognosis is also dependent on the severity of the disorder. Outcome of expressive language
deficits is influenced by other comorbid disorders. If children do not develop mood
disorders or disruptive behavior problems, the prognosis is better. The rapidity and extent
of recovery depends on the severity of the disorder.

Treatment

• Parent Child Interaction Therapy


• Language therapy

Mixed Receptive And Expressive Deficits

Children with both receptive and expressive language impairment may have impaired ability
in sound discrimination, deficits in auditory processing, or poor memory for sound
sequences. Children with mixed receptive-expressive disturbance exhibit impaired skills in
the expression and reception (understanding and comprehension) of spoken language. It is
characterized by:

• limited vocabulary,
• use of simplistic sentences, and
• short sentence usage.
Recognition of mixed expressive-receptive language disturbance may be delayed because
teachers and parents often misattribute the child's communication difficulties to a
behavioral problem rather than to a deficit in understanding.

Epidemiology

• Mixed receptive-expressive language deficits occur less frequently than expressive
deficits.
• Mixed receptive expressive language disturbance is believed to occur in about 5
percent of pre-schoolers and to persist in approximately 3 percent of school-age
children.
• At least twice as prevalent in boys as in girls.

Comorbidity

• About half of children with these deficits also have pronunciation difficulties leading
to speech sound disorder, and about half also have reading disorder.
• ADHD is present in at least one third of children with mixed receptive-expressive
language disturbances.

Etiology

Language disorders most likely have multiple determinants, including genetic factors,
developmental brain abnormalities, environmental influences, neurodevelopmental
immaturity, and auditory processing features in the brain.

• Cognitive deficits, particularly slower processing of tasks involving naming objects, as


well as fine motor tasks.
• Slower myelinization of neural pathways has been hypothesized to account for the
slow processing found in children with developmental language disorders.
• Several studies suggest an underlying impairment of auditory discrimination.

Course and Prognosis


The overall prognosis for language disorder with mixed receptive-expressive disturbance is
less favorable than that for expressive language disturbance alone. When the mixed
disorder is identified in a young child, it is usually severe, and the short-term prognosis is
poor. The prognosis for children who have mixed receptive-expressive language
disturbances varies widely and depends on the nature and severity of the damage.

Treatment

• Small, special educational setting that allows more individualized learning.


• Psychotherapy
• Family counselling

Speech Sound Disorder

Children with speech sound disorder have difficulty pronouncing speech sounds correctly
due to omissions of sounds, distortions of sounds, or atypical pronunciation. Formerly called
phonological disorder, typical speech disturbances in speech sound disorder include
omitting the last sounds of the word (e.g., saying mou for mouse or drin for drink), or
substituting one sound for another (saying bwu instead of blue or tup for cup). Distortions in
sounds can occur when children allow too much air to escape from the side of their mouths
while saying sounds like sh or producing sounds like s or z with their tongue protruded.

Speech sound disturbances such as dysarthria and dyspraxia are not diagnosed as speech
sound disorder if they are known to have a neurological basis, according to DSM-5.

Epidemiology

• Epidemiologic studies suggest that the prevalence of speech sound disorder is at
least 3 percent in pre-schoolers, 2 percent in children 6 to 7 years of age, and 0.5
percent in 17-year-old adolescents.
• Speech sound disorder is approximately two to three times more common in boys
than in girls.
• It is also more common among first-degree relatives of patients with the disorder
than in the general population.
• The prevalence of speech sound disorders reportedly falls to 0.5 percent by mid to
late adolescence.

Comorbidity

• Disorders most commonly present with speech sound disorders are language
disorder, reading disorder, and developmental coordination disorder.
• Enuresis may also accompany the disorder.

Etiology

• A maturational delay in brain development


• Twin studies show concordance rates for monozygotic twins that are higher
than chance.
• Articulation disorders caused by structural or mechanical problems are rare.
Articulation problems that are not diagnosed as speech sound disorder may be
caused by neurological impairment and can be divided into dysarthria and apraxia or
dyspraxia. Dysarthria results from an impairment in the neural mechanisms
regulating the muscular control of speech. This can occur in congenital conditions,
such as cerebral palsy, muscular dystrophy, or head injury, or because of infectious
processes. Apraxia or dyspraxia is characterized by difficulty in the execution of
speech, even when no obvious paralysis or weakness of the muscles used in speech
exists.

Course and Prognosis

Children older than age 5 with articulation problems are at higher risk for auditory
perceptual problems. Spontaneous recovery is rare after the age of 8 years.

Treatment

• Phonological approach
• Traditional approach
• Early intervention can be helpful
• Parental counseling and monitoring of child-peer relationships and school behavior
Child-Onset Fluency Disorder (Stuttering)

Child-onset fluency disorder (stuttering) usually begins during the first years of life and is
characterized by disruptions in the normal flow of speech by involuntary speech motor
events. Stuttering can include a variety of specific disruptions of fluency, including sound or
syllable repetitions, sound prolongations, dysrhythmic phonations, and complete blocking
or unusual pauses between sounds and syllables of words. Associated behaviours such as
eye blinks, facial grimacing, head jerks, and abnormal body movements, may be observed
before or during the disrupted speech.

Epidemiology

• An epidemiologic survey of 3- to 17-year-olds reports that the prevalence of
stuttering is approximately 1.6 percent.
• Stuttering tends to be most common in young children and has often resolved
spontaneously by the time the child is older.
• The typical age of onset is 2 to 7 years of age, with 90 percent of children exhibiting
symptoms by age 7 years.
• According to the DSM-5, the rate dips to 0.8 percent by adolescence.
• Stuttering affects about three to four males for every one female.

Comorbidity

Anxiety disorders, tic disorders, phonological disorder, expressive language disorder, mixed
receptive-expressive language disorder, and ADHD.

Etiology

• Causes are multifactorial.


• Anxiety or conflict
• Stuttering can be exacerbated by certain stressful situations.
• Incomplete lateralization or abnormal cerebral dominance (organic model).
• EEG studies indicate that male stutterers had right hemispheric alpha suppression
across stimulus words and tasks; nonstutterers had left hemispheric suppression.
• Overrepresentation of left-handedness and ambidexterity.
• Learning theories about the cause of stuttering include the semantogenic theory, in
which stuttering is basically a learned response to normative early childhood
disfluencies.
• Another learning model focuses on classic conditioning, in which the stuttering
becomes conditioned to environmental factors.
• In the cybernetic model, speech is viewed as a process that depends on appropriate
feedback for regulation; stuttering is hypothesized to occur because of a breakdown
in the feedback loop.

Course and Prognosis

The course of stuttering is often long term, with periods of partial remission lasting for
weeks or months and exacerbations occurring most frequently when a child is under
pressure to communicate.

Treatment

• Lidcombe Program
• family-based, parent-child interaction therapy
• CBT
• selective serotonin reuptake inhibitor (SSRI) antidepressants, for stutterers who
have a poor self-image or comorbid anxiety or depressive disorders.

Social (Pragmatic)Communication Disorder

Social (pragmatic) communication disorder is a newly added diagnosis to DSM-5,
characterized by persistent deficits in using verbal and nonverbal communication for social
purposes in the absence of restricted and repetitive interests and behaviors. Deficits may be
exhibited by difficulty in understanding and following social rules of language, gesture, and
social context.
One of the reasons that social (pragmatic) communication disorder was introduced into the
DSM-5 was to include those children with social communication impairment who do not
exhibit restrictive and repetitive interests and behaviors, and therefore do not fulfill the
criteria for autism spectrum disorders. Pragmatic communication encompasses the ability to
infer meaning in a given communication by not only understanding the words used, but also
integrating the phrases into their prior understanding of the social environment.

Epidemiology

It is difficult to estimate the prevalence of social (pragmatic) communication disorder.

Comorbidity

Language disorder, ADHD, Specific learning disorders with impairments in reading and
writing, social anxiety disorder.

Etiology

• A family history of communication disorders, autism spectrum disorder, or specific
learning disorder
• Multifactorial

Course and Prognosis

The course and outcome of social (pragmatic) communication disorder is highly variable and
dependent on both the severity of the disorder and potential interventions administered.

Treatment

A randomized controlled trial of a social communication intervention directed specifically at
children with social (pragmatic) communication disorder targeted three areas of
communication: (1) social understanding and social interaction; (2) verbal and nonverbal
pragmatic skills, including conversation; and (3) language processing, involving making
inferences and learning new words.

Unspecified Communication Disorder

Disorders that do not meet the diagnostic criteria for any specific communication disorder
fall into the category of unspecified communication disorder. Example: Voice disorder.

Operationally, speech production can be broken down into five interacting subsystems,
including respiration (airflow from the lungs), phonation (sound generation in the larynx),
resonance (shaping of the sound quality in the pharynx and nasal cavity), articulation
(modulation of the sound stream into consonant and vowel sounds with the tongue, jaw,
and lips), and suprasegmentalia (speech rhythm, loudness, and intonation).

Cluttering is not listed as a disorder in the DSM-5, but it is an associated speech abnormality
in which the disturbed rate and rhythm of speech impair intelligibility.

UNIT 4: SEPARATION ANXIETY DISORDER, SCHOOL PHOBIA, SELECTIVE MUTISM, REACTIVE


ATTACHMENT DISORDER, ADHD

Separation Anxiety Disorder

Separation anxiety is a universal human developmental phenomenon emerging in infants


younger than 1 year of age and marking a child's awareness of a separation from his or her
mother or primary caregiver. Normative separation anxiety peaks between 9 and 18 months
and diminishes by about 2½ years of age, enabling young children to develop a
sense of comfort away from their parents in preschool. Separation anxiety or stranger
anxiety most likely evolved as a human response that has survival value.

According to DSM-5, separation anxiety disorder is characterized by a level of fear or anxiety
regarding separation from parents or a primary caregiver that is beyond
developmental expectations. Furthermore, there may be a pervasive worry that harm will
come to a parent upon separation, which leads to extreme distress, and sometimes
nightmares. The DSM-5 requires the presence of at least three symptoms related to
excessive worry about separation from a major attachment figure for a period of at least 4
weeks.
Epidemiology

• In one epidemiologic sample, 2.4 percent met diagnostic criteria for separation
anxiety disorder.
• Separation anxiety disorder is estimated to occur in about 4 percent of children and
young adolescents.
• Separation anxiety disorder is more common in young children than in adolescents
and has been reported to occur equally in boys and girls.

Etiology

Biopsychosocial factors

• the influence of parental psychopathology and parenting styles on the emergence of
anxiety disorders in childhood
• parental overprotection has been associated with an increased risk of the
development of anxiety disorders in children
• It is also well known that maternal depression and anxiety lead to an increased
risk for anxiety and depression in children.
• The child's temperament also influences risk.
• External life stresses (The death of a relative, a child's illness, a change in a child's
environment, or a move to a new neighborhood or school)
• a higher resting heart rate and an acceleration of heart rate
• elevated salivary cortisol levels, elevated urinary catecholamine levels, and greater
pupillary dilation during cognitive tasks.

Social Learning factors

• Fear, in response to a variety of unfamiliar or unexpected situations, may be
unwittingly communicated from parents to children by direct modeling.
• Some parents appear to teach their children to be anxious by overprotecting them
from expected dangers or by exaggerating the dangers.
• Ongoing family conflict, and behavioral inhibition among young children.

Genetic factors
• Heritability for anxiety disorders in children and adolescents ranges from 36 percent
to 65 percent
• Family studies have shown that the offspring of adults with anxiety disorders are at
an increased risk of having an anxiety disorder themselves.
• Separation anxiety disorder and depression in children overlap, and the presence of
an anxiety disorder increases the risk of a future episode of a depressive disorder.

Course and Prognosis

Course and prognosis are varied and are related to the age of onset, the duration of the
symptoms, and the development of comorbid anxiety and depressive disorders. In cases
with multiple comorbidities, the prognosis is more guarded. Early age of onset and later age
at diagnosis predict slow recovery.

Treatment

• Psychotherapy
• CBT
• SSRIs [fluvoxamine (Luvox), fluoxetine (Prozac), sertraline (Zoloft), and paroxetine
(Paxil)]
• Family education
• Family psychosocial intervention
• Sertraline
• CBT + SSRI/sertraline
• Exposure based CBT
• Coaching Approach Behavior and Leading by Modeling (the CALM program)
• Parent-Child Interaction Therapy (PCIT).

School Phobia

The term school phobia was first introduced in the clinical literature by Johnson in 1941
(Johnson, Falstein, Szurek, & Svendson, 1941). The term was used to denote a syndrome
of childhood characterized by marked anxiety about attending school and absenteeism.
The syndrome was thought to be precipitated by fear of leaving the mother or home
rather than a fear of school per se. In fact, Johnson (Estes, Haylett, & Johnson, 1956) later
substituted the term separation anxiety to communicate better the focus of pathology.
Similarly, Bowlby (1973) has maintained that the school-phobic child fears the absence or
loss of an attachment figure (i.e., mother) or security base (i.e., home) rather than fearing
and avoiding the actual school situation, as a truly phobic individual would. He suggested
that the term pseudophobia be used to help clarify the underlying pathological processes
involved.

Unfortunately, historically, and even into the present, the label "school phobia" has been
applied to both types of children, that is, those who evidence separation anxiety and those
who show a phobic reaction toward school. However, it is clear that not all children with
school phobia show separation anxiety problems, nor do all children with separation anxiety
disorder exhibit school refusal.

The phobic child will be fearful and avoid school alone, whereas the separation-anxious
child will be fearful and avoid a host of situations all related to the theme of separation. The
true school-phobic child will be fearful of some aspect of the school environment. The
nature of the fear can be either "simple" (for example, an excessive and/or irrational fear of
being physically harmed by other children) or "social" (for example, an excessive and/or
irrational fear of being criticized by teachers or other students) according to DSM criteria. In
distinguishing separation anxiety disorder from a true phobic disorder of school, it is often
helpful to inquire as to where the child is when not in school. Children with a phobia of
school will be equally comfortable in any setting other than the school environment and are
not limited to staying at home or with the mother.

Selective Mutism

Selective mutism, believed to be related to social anxiety disorder although it is an independent
disorder, is characterized in a child by a persistent lack of speaking in one or more specific
social situations, most typically, the school setting. A child with selective mutism may
remain completely silent or near silent, in some cases only whispering in a school setting.
Although selective mutism often begins before age 5 years, it may not be apparent until the
child is expected to speak or read aloud in school. Children with this disorder are fully
capable of speaking competently when not in a socially anxiety-producing situation.

Epidemiology

• According to the DSM-5, the point prevalence of selective mutism using clinic or
school samples has been found to range between 0.03 percent and 1 percent,
depending on whether a clinical or community sample is studied.
• Young children are more vulnerable to the disorder than older ones.
• Selective mutism appears to be more common in girls than in boys.

Etiology

Genetic Factors

• Children with selective mutism are at greater risk for delayed onset of speech or
speech abnormalities that may be contributory.
• 90 percent of children with selective mutism met diagnostic criteria for social
phobia.
• Maternal anxiety, depression, and heightened dependence needs are often noted in
families of children with selective mutism

Parental Interactions

• Maternal overprotection and anxiety disorders in parents may exacerbate
interactions that unwittingly reinforce selective mutism behaviors.
• Some children seem predisposed to selective mutism after early emotional or
physical trauma; thus, some clinicians refer to the phenomenon as traumatic mutism
rather than selective mutism.

Speech and Language Factors

• Selective mutism is conceptualized as an anxiety-based refusal to speak; however, a
higher than expected proportion of children with the disorder have a history of
speech delay.
• Children with selective mutism are at higher risk for a disturbance in auditory
processing, which may interfere with efficient processing of incoming sounds.

Course and Prognosis

Many very young children with early symptoms of selective mutism in a transitional period
when entering preschool have a spontaneous improvement over a number of months and
never fulfill criteria for the disorder. A common pattern for a child with selective mutism is
to speak almost exclusively at home with the nuclear family but not elsewhere, especially
not at school. Children who do not improve by age 10 years appear to have a long-term
course and a worse prognosis.

Treatment

A multimodal approach using psychoeducation for the family, CBT, and SSRIs as needed is
recommended. Individual CBT is recommended as a first-line treatment. Family education
and cooperation are beneficial.

Reactive Attachment Disorder

Reactive attachment disorder and disinhibited social engagement disorder are clinical
disorders characterized by aberrant social behaviors in a young child that reflect grossly
negligent parenting and maltreatment that disrupted the development of normal
attachment behavior. The diagnosis of reactive attachment disorder was first defined in the
DSM-III in 1980. According to the DSM-5, reactive attachment disorder is characterized by a
consistent pattern of emotionally withdrawn responses toward adult caregivers, limited
positive affect, sadness, and minimal social responsiveness to others, and concomitant
neglect, deprivation, and lack of appropriate nurturance from caregivers. It is presumed that
reactive attachment disorder is due to grossly pathological caregiving received by the child.
The pattern of care may exhibit disregard for a child's emotional or physical needs or
repeated changes of caregivers, as when a child is frequently relocated during foster care.
Reactive attachment disorder is not accounted for by autism spectrum disorder, and the
child must have a developmental age of at least 9 months.
Epidemiology

• Occurs in less than 1 percent of the population.
• Given that pathogenic care, including maltreatment, occurs more frequently in the
presence of general psychosocial risk factors, such as poverty, disrupted families,
and mental illness among caregivers, these circumstances are likely to increase the
risk of reactive attachment disorder and disinhibited social engagement disorder.

Etiology

• Disturbances of normal attachment behaviors.


• Presumed to be linked to maltreatment of the child, including emotional neglect,
physical abuse, or both.
• The likelihood of neglect increases with parental psychiatric disorder, substance
abuse, intellectual disability, the parent's own harsh upbringing, social isolation,
deprivation, and premature parenthood (i.e., adolescent parents).
• Higher risk of failure to gain weight as neonates, feeding difficulty, and poor impulse
control.
• Multiple psychiatric comorbidities, lower intelligence quotients (IQs).

Course and Prognosis

The prognosis for children with reactive attachment disorder and disinhibited social
engagement disorder is influenced by the duration and severity of the neglect and the
degree of impairment that results. Children who have multiple problems stemming from
pathogenic caregiving may recover physically faster and more completely than they do
emotionally. Findings from follow-up studies suggest that children with reactive attachment
disorder who are later adopted into caring environments improve in their attachment
behaviors and may normalize over time.

Treatment

• Comprehensive assessment of the current level of safety and adequate caregiving


• Hospitalization
• Attachment and Biobehavioral Catch-up (ABC) intervention

[In a randomized comparison, children receiving the ABC intervention showed
significantly lower rates of disorganized attachment (32 percent) and higher rates of
secure attachment (52 percent) compared with those who received a control
intervention.]

• Psychosocial interventions for families

[(1) psychosocial support services, including hiring a homemaker, improving the
physical condition of the apartment or obtaining more adequate housing, improving
the family's financial status, and decreasing the family's isolation;
(2) psychotherapeutic interventions, including individual psychotherapy, psychotropic
medications, and family or marital therapy;
(3) educational counseling services, including mother-infant or mother-toddler
groups, and counseling to increase awareness and understanding of the child's needs
and to develop parenting skills; and
(4) provisions for close monitoring of the progression of the patient's emotional and
physical well-being.]

Attention-Deficit/Hyperactivity Disorder (ADHD)

UNIT 5: CONDUCT DISORDER, OPPOSITIONAL DEFIANT DISORDER, TIC DISORDERS,


ELIMINATION DISORDERS- ENCOPRESIS AND ENURESIS, EATING DISORDERS- PICA,
ANOREXIA NERVOSA, BULIMIA NERVOSA.

Conduct Disorder

Oppositional Defiant Disorder

DSM-5 has divided oppositional defiant disorder into three types: Angry/Irritable Mood,
Argumentative/Defiant Behavior, and Vindictiveness. A child may meet diagnostic criteria
for oppositional defiant disorder with a 6-month pattern of at least four symptoms from the
three types above. Angry/Irritable children with oppositional defiant disorder often lose
their tempers, are easily annoyed, and feel irritable much of the time. Argumentative/Defiant
children display a pattern of arguing with authority figures and adults, such as
parents, teachers, and relatives. Children with this type of oppositional defiant disorder
actively refuse to comply with requests, deliberately break rules, and purposely annoy
others. These children often do not take responsibility for their actions, and often blame
others for their misbehavior. Children with the Vindictive type of oppositional defiant
disorder are spiteful, and have shown vindictive or spiteful actions at least twice in 6 months
to meet diagnostic criteria. Oppositional defiant disorder is characterized by enduring
patterns of negativistic, disobedient, and hostile behavior toward authority figures, as well
as an inability to take responsibility for mistakes, leading to placing blame on others.
Children with oppositional defiant disorder frequently argue with adults and become easily
annoyed by others, leading to a state of anger and resentment. Children with oppositional
defiant disorder may have difficulty in the classroom and with peer relationships, but
generally do not resort to physical aggression or significantly destructive behavior.

Epidemiology

• Oppositional and negativistic behavior, in moderation, is developmentally normal in
early childhood and adolescence.
• Oppositional defiant disorder has been reported to occur at rates ranging from 2 to
16 percent with increased rates reported in boys before puberty, and an equal sex
ratio reported after puberty.
• The prevalence of oppositional defiant behavior in males and females diminishes in
youth older than 12 years of age.

Etiology

• Pathology begins when this developmental phase persists abnormally, authority
figures overreact, or oppositional behavior recurs considerably more frequently than
in most children of the same mental age.
• Children exhibit a range of temperamental predispositions to strong will, strong
preferences, or great assertiveness.
• In late childhood, environmental trauma, illness, or chronic incapacity, such as
mental retardation, can trigger oppositionality as a defense against helplessness,
anxiety, and loss of self-esteem.
• Classic psychoanalytic theory implicates unresolved conflicts as fueling defiant
behaviors targeting authority figures.
• Behaviorists have observed that in children, oppositionality may be a reinforced,
learned behavior through which a child exerts control over authority figures.
• In addition, increased parental attention during a tantrum can reinforce the
behavior.

Course and Prognosis

The course of oppositional defiant disorder depends on the severity of the symptoms and
the ability of the child to develop more adaptive responses to authority. The stability of
oppositional defiant disorder varies over time, with approximately 25 percent of children
with the disorder no longer meeting diagnostic criteria. The prognosis for oppositional
defiant disorder in a child depends somewhat on family functioning and the development of
comorbid psychopathology.

Treatment

• Family intervention using both direct training of the parents in child management
skills and careful assessment of family interactions.
• CBT
• Individual psychotherapy in which they role play and "practice" more adaptive
responses.

Tic Disorders (Tourette’s Disorder)

Tics are neuropsychiatric events characterized by brief rapid motor movements or
vocalizations that are typically performed in response to irresistible premonitory urges. Tic
disorders are significantly more common in children than in adults. Tics may be transient or
chronic, with a waxing and waning course.

Motor tics most commonly affect the muscles of the face and neck, such as eye-blinking,
head-jerking, mouth grimacing, or head-shaking. Typical vocal tics include throat clearing,
grunting, snorting, and coughing. Tics are repetitive muscle contractions resulting in
movements or vocalizations that are experienced as involuntary, although they can
sometimes be suppressed voluntarily.

The most widely studied and most severe tic disorder is Gilles de la Tourette syndrome, also
known as Tourette's disorder. Georges Gilles de la Tourette (1857-1904) first described a
patient with the syndrome, which became known as Tourette's disorder, in 1885, while he
was studying with Jean-Martin Charcot in France.

Tics often consist of motions that are used in volitional movements. Motor and vocal tics are
divided into simple and complex types. Simple motor tics are those composed of repetitive,
rapid contractions of functionally similar muscle groups, for example, eye-blinking, neck-
jerking, shoulder-shrugging, and facial-grimacing. Common simple vocal tics include
coughing, throat-clearing, grunting, sniffing, snorting, and barking. Complex motor tics
appear to be more purposeful and ritualistic than simple tics. Common complex motor tics
include grooming behaviors, the smelling of objects, jumping, touching behaviors,
echopraxia (imitation of observed behavior), and copropraxia (display of obscene gestures).
Complex vocal tics include repeating words or phrases out of context, coprolalia (use of
obscene words or phrases), palilalia (a person's repeating his or her words), and echolalia
(repetition of the last-heard words of others).

Epidemiology

• Males are affected between 2 and 4 times more often than females.
• At age 13 years, however, using stringent criteria, the prevalence rate for Tourette's
disorder drops to 0.3 percent.
• The lifetime prevalence of Tourette's disorder is estimated to be approximately 1
percent.

Etiology

Genetic factors

• Twin studies indicate that concordance for the disorder in monozygotic twins is
significantly greater than that in dizygotic twins.
• Tourette's disorder and chronic motor or vocal tic disorder are likely to occur in the
same families.
• The sons of mothers with Tourette's disorder seem to be at the highest risk for the
disorder.
• Studies of a long family pedigree suggest that Tourette's disorder may be
transmitted in a bilinear mode; that is, Tourette's disorder appears to be inherited
through an autosomal pattern in some families, intermediate between dominant
and recessive.
• First-degree relatives of persons with Tourette's disorder are at high risk for the
development of Tourette's disorder, chronic motor or vocal tic disorder, and OCD.

Neuroimaging studies

• Deactivation of the putamen and globus pallidus, along with partial activation of
regions of the prefrontal cortex and caudate nucleus.
• Neuroimaging studies using cerebral blood flow in positron emission tomography
(PET) and single photon emission tomography (SPECT) suggest that alterations of
activity may occur in various brain regions in patients with Tourette's disorder
compared to controls, including the frontal and orbital cortex, striatum, and
putamen.

Immunological factors and post infection

• Group A beta-hemolytic streptococcal infections.

Course and Prognosis

Tourette's disorder usually emerges in early childhood, with a natural history leading to
reduction or complete resolution of tic symptoms in most cases by adolescence or early
adulthood. Children with mild forms of Tourette's disorder often have satisfactory peer
relationships, function well in school, develop adequate self-esteem, and may not
require treatment.

Treatment
• A scale to measure tic severity, the Premonitory Urge for Tics Scale (PUTS), was
examined psychometrically, and found to be internally consistent and correlated
with overall tic severity in youth over 10 years of age.
• Evidence-based Behavioral and Psychosocial Treatment

[Reviews of evidence-based treatments, including the Comprehensive Behavioral
Intervention for Tics (CBIT), found converging evidence supporting habit-reversal
training and exposure and response prevention as efficacious treatments for tic
reduction.]

o Habit Reversal- The primary components of habit reversal are awareness
training, in which the child uses self-monitoring to enhance awareness of tic
behaviors and the premonitory urges or sensations indicating that a tic is
about to occur.
o Exposure and Response Prevention- The rationale for this treatment is based
on the notion that tics occur as a conditioned response to unpleasant
premonitory urges, and since the tics reduce the urge, they become
associated with the premonitory urge. Each time the urge is reduced by the
tic, their association is further strengthened. Exposure and response
prevention asks the patient to suppress tics for increasingly prolonged periods
in order to break the association between the urges and the tics.
• Evidence-based Pharmacotherapy
o Atypical and Typical Antipsychotic Agents- Risperidone, Haloperidol (Haldol),
pimozide (Orap), fluphenazine, Aripiprazole, Olanzapine and ziprasidone.
o Noradrenergic Agents- clonidine, guanfacine, atomoxetine and SSRIs.
o Alternative Agents-Tetrabenazine, Topiramate, and Tetrahydrocannabinol.

ELIMINATION DISORDERS

When children exhibit incontinence of urine or feces on a regular basis, it is troubling to the
child and families, and often misunderstood as voluntary misbehavior.
ELIMINATION
DISORDER

ENURESIS ENCOPRESIS

Encopresis

In DSM-5, encopresis is the repeated passage of feces into inappropriate places. The
diagnosis is not made until after age 4 years. Encopresis is characterized by a pattern of
passing feces in inappropriate places, such as in clothing or other places, at least once per
month for 3 consecutive months, whether the passage is involuntary or intentional. Up to
about 80 percent of children with fecal incontinence have associated constipation. A child
with encopresis typically exhibits dysregulated bowel function; for example, with infrequent
bowel movements, constipation, or recurrent abdominal pain and sometimes pain on
defecations.

Epidemiology

• Globally, community prevalence of encopresis ranges from 0.8 to 7.8 percent.
• Incidence rates for encopretic behavior decrease drastically with increasing age.
• Encopresis is virtually absent in youth with normal intellectual function by the age of
16 years.
• Males are three to six times more likely to have encopresis than females.
• A significant relation exists between encopresis and enuresis.

Etiology

• Encopresis involves an often-complicated interplay between physiological and
psychological factors leading to an avoidance of defecation.
• Inadequate training or the lack of appropriate toilet training may delay a child's
attainment of continence.
• Evidence indicates that some encopretic children have lifelong inefficient and
ineffective sphincter control.
• In about 5 to 10 percent of cases, fecal incontinence is caused by medical conditions
including abnormal innervation of the anorectal region, ultrashort segment
Hirschsprung disease, neuronal intestinal dysplasia, or spinal cord damage.
• Encopresis, in some cases can be considered secondary, that is, emerging after a
period of normal bowel habits in conjunction with a disruptive life event, such as the
birth of a sibling or a move to a new home.
• When encopresis manifests after a long period of fecal continence, it may reflect a
developmental regressive behavior based on a severe stressor, such as a parental
separation, loss of a best friend, or an unexpected academic failure.
• Most children with encopresis retain feces and become constipated, either
voluntarily or secondary to painful defecation, which may result in megacolon.

Course and Prognosis

The outcome of encopresis depends on the etiology, the chronicity of the symptoms, and
coexisting behavioral problems. Encopresis in children who have contributing physiological
factors, such as poor gastric motility and an inability to relax the anal sphincter muscles, is
more difficult to treat than that in those with constipation but normal sphincter tone. The
outcome of encopresis is influenced by a family's willingness and ability to participate in
treatment without being overly punitive and by the child's ability and motivation to engage
in treatment.

Treatment

• A typical treatment plan for a child with encopresis includes daily oral administration
of laxatives such as polyethylene glycol (PEG) at 1 g/kg per day.
• Cognitive-behavioral intervention
• Interactive parent-child family guidance intervention
• Supportive psychotherapy
• Relaxation techniques

Enuresis
Enuresis is repeated urination into bed or clothes. The diagnosis is made after age 5 years,
the age at which a typically developing child is expected to have attained bladder control.

Enuresis is characterized by repeated voiding of urine into clothes or bed, whether the
voiding is involuntary or intentional. The behavior must occur twice weekly for at least 3
months or must cause clinically significant distress or impairment socially or academically.
The child's chronological or developmental age must be at least 5 years.

Epidemiology

• The prevalence of enuresis ranges from 5 to 10 percent in 5-year-olds, 1.5 to 5
percent in 9- to 10-year-olds, and about 1 percent in adolescents 15 years and older.
• The prevalence of enuresis decreases with increasing age.
• Enuresis affects about 1 percent of adults.
• Although most children with enuresis do not have a comorbid psychiatric disorder,
children with enuresis are at higher risk for the development of another psychiatric
disorder.
• Nocturnal enuresis is about 50 percent more common in boys and accounts for
about 80 percent of children with enuresis.
• Diurnal enuresis is also seen more often in boys who often delay voiding until it is
too late.
• Spontaneous resolution of nocturnal enuresis occurs at a rate of about 15 percent per year.
• Nocturnal enuresis consists of a normal volume of voided urine, whereas when small
volumes of urine are voided at night, other medical causes may be present.

Etiology

Biological factors

• Enuresis involves complex neurobiological systems that include contributions from
cerebral and spinal cord centers, motor and sensory functions, and autonomic and
voluntary nervous systems.
• Excessive volumes of urine produced at night may lead to enuresis at night in
children without any physiologic abnormalities.
• Nighttime enuresis often occurs in the absence of a specific neurogenic cause.
• Daytime enuresis may develop based on behavioral habits developed over time.
• Genetic factors are believed to play a role in the expression of enuresis, given that
the emergence of enuresis has been found to be significantly greater in first-degree
relatives.
• The concordance rate is higher in monozygotic twins than in dizygotic twins.
• Nocturnal enuresis occurs when the bladder is full because of lower than expected
levels of nighttime antidiuretic hormone.

Psychosocial factors

• Psychosocial stressors appear to precipitate enuresis in a subgroup of children with
the disorder.
• The birth of a sibling, hospitalization between the ages of 2 and 4 years, the start of
school, separation of a family due to divorce, or a move to a new environment.

Course and Prognosis

Enuresis is often self-limited, and a child with enuresis may have a spontaneous remission.
Enuresis after at least one dry year usually begins between the ages of 5 and 8 years; if it
occurs much later, especially during adulthood, organic causes must be investigated.
Relapses occur in children with enuresis who are becoming dry spontaneously and in those
who are being treated.

Treatment

• If toilet training was not attempted, the parents and the patient should be guided in
this undertaking.
• Record keeping is helpful in determining a baseline and following the child's
progress, and may itself be a reinforcer.
• A star chart may be particularly helpful.
• Other useful techniques include restricting fluids before bed and night lifting to toilet
train the child.
• Alarm therapy, which is triggered by wet underwear, has been a mainstay of
treatment for enuresis.
• Alarm therapy works by alerting a child to respond when voiding begins during sleep.
• Desmopressin (DDAVP)
• Behaviour therapy

[Classical conditioning with the bell (or buzzer) and pad (alarm) apparatus is generally
the most effective treatment for enuresis, with dryness resulting in more than 50
percent of cases. Bladder training (encouragement or reward for delaying micturition
for increasing times during waking hours) has also been used. Although sometimes
effective, this method is decidedly inferior to the bell and pad.]

• Reboxetine (Edronax, Vestra)


• Psychotherapy

[Psychotherapy may be useful in dealing with the coexisting psychiatric problems
and the emotional and family difficulties that arise secondary to chronic enuresis.]

EATING DISORDERS

Pica

Pica is defined as persistent eating of nonnutritive substances. Typically, no specific
biological abnormalities account for pica, and in many cases, pica is identified only when
medical problems such as intestinal obstruction, intestinal infections, or poisonings arise,
such as lead poisoning due to ingestion of lead containing paint chips. Pica is more frequent
in the context of autism spectrum disorder or intellectual disability. Pica can emerge in
young children, adolescents, or adults; however, a minimum of 2 years of age is suggested
by DSM-5 in the diagnosis of pica, in order to exclude developmentally appropriate
mouthing of objects by infants that may accidentally result in ingestion. Pica occurs in both
males and females. Among adults, certain forms of pica, including geophagia (clay eating)
and amylophagia (starch eating), have been reported in pregnant women.

Epidemiology
• The prevalence of pica is unclear.
• Pica is more common among children and adolescents with autism spectrum
disorder and intellectual disability.
• It has been reported that up to 15 percent of persons with severe intellectual
disability have engaged in pica.
• Pica appears to affect both sexes equally.

Etiology

• Pica is most often a transient disorder that typically lasts for several months and
then remits.
• Nutritional deficiencies in minerals such as zinc or iron have been anecdotally
reported in some instances.
• Severe child maltreatment in the form of parental neglect and deprivation has been
reported in some cases of pica.
• Lack of supervision, as well as inadequate feeding of infants and toddlers, may
increase the risk of pica.

Course and Prognosis

The prognosis for pica is usually good, and typically in children with normal intellectual
function, pica generally remits spontaneously within several months. In childhood, pica
usually resolves with increasing age; in pregnant women, pica is usually limited to the term
of the pregnancy.

Treatment

• The first step in determining appropriate treatment of pica is to investigate the
specific situation whenever possible.
• No definitive treatment exists for pica per se [most treatment is aimed at education
and behavior modification].
• Positive reinforcement, modeling, behavioral shaping, and overcorrection treatment
have been used. Increasing parental attention, stimulation, and emotional
nurturance may yield positive results.
• Medical complications (e.g., lead poisoning) that develop secondarily to the pica
must also be treated.

Anorexia Nervosa

The term anorexia nervosa is derived from the Greek term for "loss of appetite" and a Latin
word implying nervous origin. Anorexia nervosa is a syndrome characterized by three
essential criteria.

1. The first is self-induced starvation to a significant degree (a behavior).
2. The second is a relentless drive for thinness or a morbid fear of fatness (a
psychopathology).
3. The third criterion is the presence of medical signs and symptoms resulting from
starvation (a physiological symptomatology).

Anorexia nervosa is often, but not always, associated with disturbances of body image, the
perception that one is distressingly large despite obvious medical starvation. The distortion
of body image is disturbing when present, but not pathognomonic, invariable, or required for
diagnosis. Two subtypes of anorexia nervosa exist: restricting and binge/purge.

Approximately half of anorexic persons will lose weight by drastically reducing their total
food intake. The other half of these patients will not only diet but will also regularly engage
in binge eating followed by purging behaviors. Anorexia nervosa is much more prevalent in
females than in males and usually has its onset in adolescence.

Epidemiology

• Anorexia nervosa has been reported more frequently with increasing reports of the
disorder in prepubertal girls and in boys.
• The most common ages of onset of anorexia nervosa are the mid-teens, but up to 5
percent of anorectic patients have the onset of the disorder in their early 20s.
• The most common age of onset is between 14 and 18 years.
• Anorexia nervosa is estimated to occur in about 0.5 to 1 percent of adolescent girls.
• It occurs 10 to 20 times more often in females than in males.
Comorbidity

Anorexia nervosa is associated with depression in 65 percent of cases, social phobia in 35
percent of cases, and obsessive-compulsive disorder in 25 percent of cases.

Etiology

Biological factors

• Higher concordance rates in monozygotic twins than in dizygotic twins.


• Diminished norepinephrine turnover and activity are suggested by reduced 3-
methoxy-4-hydroxyphenylglycol (MHPG) levels in the urine and the cerebrospinal
fluid (CSF) of some patients with anorexia nervosa.
• Endogenous opioids may contribute to the denial of hunger in patients with anorexia
nervosa.
• Starvation results in many biochemical changes, some of which are also present in
depression, such as hypercortisolemia and nonsuppression by dexamethasone.
• Thyroid function is suppressed.
• Starvation may produce amenorrhea, which reflects lowered hormonal levels
(luteinizing, follicle-stimulating, and gonadotropin-releasing hormones).
• Hypothalamic-pituitary axis (neuroendocrine) dysfunction.
• Some studies have shown evidence for dysfunction in serotonin, dopamine, and
norepinephrine.
• Other implicated neuroendocrine factors include corticotropin-releasing factor (CRF),
neuropeptide Y, gonadotropin-releasing hormone, and thyroid-stimulating hormone.

Social factors

• Patients with anorexia nervosa find support for their practices in society's emphasis
on thinness and exercise.
• No family constellations are specific to anorexia nervosa, but some evidence
indicates that these patients have close, but troubled, relationships with their
parents.
• Vocational and avocational interests interact with other vulnerability factors to
increase the probability of developing eating disorders.
• A gay orientation in men is a proven predisposing factor.

Psychological and psychodynamic factors

• Anorexia nervosa appears to be a reaction to the demand that adolescents behave
more independently and increase their social and sexual functioning.
• These patients typically lack a sense of autonomy and selfhood.
• Psychoanalytic clinicians who treat patients with anorexia nervosa generally agree
that these young patients have been unable to separate psychologically from their
mothers.
• Starvation may unconsciously mean arresting the growth of this intrusive internal
object and thereby destroying it.
• Often, a projective identification process is involved in the interactions between the
patient and the patient's family.

Course and Prognosis

The course of anorexia nervosa varies greatly: spontaneous recovery without treatment,
recovery after a variety of treatments, a fluctuating course of weight gains followed by
relapses, or a gradually deteriorating course resulting in death caused by complications of
starvation. In general, the prognosis is not good. Studies have shown a range of mortality
rates from 5 to 18 percent.

Treatment

• Hospitalization [to restore patients' nutritional state]
• Psychotherapy
o CBT
o Dynamic psychotherapy
o Family therapy
• Cyproheptadine (Periactin)
• Amitriptyline (Elavil)
• Clomipramine (Anafranil)
• Pimozide (Orap)
• Chlorpromazine (Thorazine)
• Fluoxetine (Prozac)

Bulimia Nervosa

Bulimia nervosa is characterized by episodes of binge eating combined with inappropriate
ways of preventing weight gain. Unlike patients with anorexia nervosa, those with bulimia
nervosa typically maintain a normal body weight. The term bulimia nervosa derives from the
terms for "ox hunger" in Greek and "nervous involvement" in Latin. For some patients,
bulimia nervosa may represent a failed attempt at anorexia nervosa, sharing the goal of
becoming very thin, but occurring in an individual less able to sustain prolonged
semistarvation or severe hunger as consistently as classic restricting anorexia nervosa
patients. For others, eating binges represent "breakthrough eating" episodes of giving in to
hunger pangs generated by efforts to restrict eating so as to maintain a socially desirable
level of thinness.

Epidemiology

• Bulimia nervosa is more prevalent than anorexia nervosa.
• Estimates of bulimia nervosa range from 1 to 4 percent of young women.
• As with anorexia nervosa, bulimia nervosa is more common in women than in men,
but its onset is often later in adolescence than that of anorexia nervosa.
• The onset may also occur in early adulthood.

Etiology

Biological factors

• Serotonin and norepinephrine have been implicated. Plasma endorphin levels are
raised.
• Increased frequency of bulimia nervosa is found in first-degree relatives of persons
with the disorder.
• MRI studies suggest that overeating in bulimia nervosa may result from an
exaggerated perception of hunger signals related to sweet taste mediated by the
right anterior insula area of the brain.

Social factors

• Patients with bulimia nervosa, as with those with anorexia nervosa, tend to be high
achievers and to respond to societal pressures to be slender.
• As with anorexia nervosa patients, many patients with bulimia nervosa are
depressed and have increased familial depression, but the families of patients with
bulimia nervosa are generally less close and more conflictual than the families of
those with anorexia nervosa.
• Patients with bulimia nervosa describe their parents as neglectful and rejecting.

Psychological factors

• Patients with bulimia nervosa are more outgoing, angry, and impulsive than those
with anorexia nervosa.
• Alcohol dependence, shoplifting, and emotional lability (including suicide attempts)
are associated with bulimia nervosa.
• These patients generally experience their uncontrolled eating as more ego-dystonic
than do patients with anorexia nervosa and so seek help more readily.
• Patients with bulimia nervosa lack superego control and the ego strength of their
counterparts with anorexia nervosa.
• Their difficulties in controlling their impulses are often manifested by substance
dependence and self-destructive sexual relationships in addition to the binge eating
and purging that characterize the disorder.
• Many patients with bulimia nervosa have histories of difficulties separating from
caretakers.

Patients with the purging type may be at risk for certain medical complications such as
hypokalemia from vomiting or laxative abuse and hypochloremic alkalosis. Those who vomit
repeatedly are at risk for gastric and esophageal tears, although these complications are
rare.

Course and Prognosis

Bulimia nervosa is characterized by higher rates of partial and full recovery compared with
anorexia nervosa. The mortality rate for bulimia nervosa has been estimated at 2 percent
per decade according to DSM-5.

Treatment

• Psychotherapy
o CBT

[CBT implements a number of cognitive and behavioral procedures to (1)
interrupt the self-maintaining behavioral cycle of binging and dieting and (2)
alter the individual's dysfunctional cognitions; beliefs about food, weight,
and body image; and overall self-concept.]

o Dynamic psychotherapy
o "Stepped-care" programs
• Selective serotonin reuptake inhibitors (SSRIs), such as fluoxetine (Prozac).
• Imipramine (Tofranil)
• Desipramine (Norpramin)
• Trazodone (Desyrel)
• Monoamine oxidase inhibitors (MAOIs)
• Carbamazepine (Tegretol)
• Lithium (Eskalith)
MODULE 3: MOOD DISORDERS

UNIT 1: DEPRESSIVE DISORDERS- DISRUPTIVE MOOD DYSREGULATION DISORDER, MAJOR
DEPRESSIVE DISORDER SINGLE AND RECURRENT EPISODES

Disruptive Mood Dysregulation Disorder

Disruptive mood dysregulation disorder, a new inclusion in the DSM-5, is characterized by
severe, developmentally inappropriate, and recurrent temper outbursts at least three times
per week, along with a persistently irritable or angry mood between temper outbursts. In
order to meet diagnostic criteria, the symptoms must be present for at least a year, and the
onset of symptoms must occur by the age of 10 years. Youths diagnosed with disruptive
mood dysregulation disorder who also exhibit multiple symptoms of hyperarousal may be
comorbid for ADHD.

Disruptive mood dysregulation disorder closely resembles the "broad phenotype" of bipolar
disorder. Disruptive mood dysregulation disorder requires that irritable outbursts be
present in at least two settings, whereas oppositional defiant disorder requires that they be
present in only one setting.

Epidemiology

• Severe mood dysregulation has a lifetime prevalence of 3 percent in children age 9
to 19 years.
• Within that percentage, males (78 percent) are more prevalent than females (22
percent).
• The mean age of onset is 5 to 11 years of age.

Comorbidity

The most common comorbidities are ADHD (94 percent), oppositional defiant disorder (84
percent), anxiety disorders (47 percent), and major depressive disorder (20 percent).

Signs and Symptoms

Children or adolescents with DMDD experience:


• Severe temper outbursts (verbal or behavioral), on average, three or more times per week

• Outbursts and tantrums that have been ongoing for at least 12 months

• Chronically irritable or angry mood most of the day, nearly every day

• Trouble functioning due to irritability in more than one place (at home, at school, and with
peers)

Over time, as children grow and develop, the symptoms of DMDD may change. For example,
an adolescent or young adult with DMDD may experience fewer tantrums, but they begin to
exhibit symptoms of depression or anxiety. For these reasons, treatment may change over
time, too. Children with DMDD may have trouble in school and experience difficulty
maintaining healthy relationships with family or peers. They also may have a hard time in
social settings or participating in activities such as team sports. When DMDD is suspected,
it is essential to seek a diagnosis and treatment.

Course and Prognosis

Disruptive mood dysregulation disorder is a chronic disorder. Longitudinal studies thus far
have shown that patients with disruptive mood dysregulation disorder in childhood have a
high risk of progressing to major depressive disorder, dysthymic disorder, and anxiety
disorders over time.

Treatment

• SSRIs
• Stimulants
• Atypical antipsychotic agents and mood stabilizers
• Divalproex (Depakote)
• CBT
• Behavioral psychotherapy
• Dialectical behavior therapy for children (DBT-C)
• Parent training
UNIT 5: SUICIDE: TYPES, EPIDEMIOLOGY, SIGNS AND SYMPTOMS OF SUICIDAL RISK,
FACTORS ASSOCIATED WITH SUICIDE RISK, CAUSAL FACTORS, MANAGEMENT.

Suicide is derived from the Latin word for "self-murder." It is a fatal act that represents the
person's wish to die. In psychiatry, suicide is the primary emergency, with homicide and
failure to diagnose an underlying potentially fatal illness representing other, less common
psychiatric emergencies. Suicide is impossible to predict, but numerous clues can be seen.

Epidemiology

• There are over 35,000 deaths per year (approximately 100 per day) in the United
States attributed to suicide.
• The prime suicide site in the world is the Golden Gate Bridge in San Francisco, with
1,600 suicides committed there since the bridge opened in 1937.
• It is estimated that there is a 25 to 1 ratio between suicide attempts and completed
suicides.

Risk Factors

• Gender differences

Men commit suicide more than four times as often as women in the United States,
regardless of age or race, despite the fact that women attempt suicide or have
suicidal thoughts three times as often as men. Men are more likely than women to
commit suicide using firearms, hanging, or jumping from high places. Women, on the
other hand, more commonly take an overdose of psychoactive substances or poison.
Globally, the most common method of suicide is hanging.

• Age

Suicide is rare before puberty. Among men, suicides peak after age 45; among
women, the greatest number of completed suicides occurs after age 55. Rates of 29
per 100,000 population occur in men age 65 or older. Older persons attempt suicide
less often than younger persons, but are more often successful. Suicide is the third
leading cause of death in those aged 15 to 24 years, after accidents and homicides.
Most suicides now are among those aged 35 to 64.

• Race
Suicide rates among white men and women are approximately two to three times as
high as for African American men and women across the life cycle. Among young
persons who live in inner cities and certain Native American and Alaskan Native
groups, suicide rates have greatly exceeded the national rate. Suicide rates among
immigrants are higher than those in the native-born population.
• Religion
Historically, Protestants and Jews in the United States have had higher suicide rates
than Catholics. Muslims have much lower rates.
• Marital status

Marriage lessens the risk of suicide significantly, especially if there are children in the
home. Divorce increases suicide risk, with divorced men three times more likely to
kill themselves than divorced women. Widows and widowers also have high rates.
Suicide occurs more frequently than usual in persons who are socially isolated and
have a family history of suicide (attempted or real). Persons who commit so-called
anniversary suicides take their lives on the day a member of their family did.
Homosexual men and women appear to have higher rates of suicide than
heterosexuals.

• Occupation
The higher the person's social status, the greater the risk of suicide, but a drop in
social status also increases the risk. Work, in general, protects against suicide.
Among occupational rankings, professionals, particularly physicians, have
traditionally been considered to be at greatest risk. Other high-risk occupations
include law enforcement, dentists, artists, mechanics, lawyers, and insurance agents.
Suicide is higher among the unemployed than among employed persons. The suicide
rates increase during economic recessions and depressions and decrease during
times of high employment and during wars.
o Physician suicides
Female physicians have a higher risk of suicide than other women. Both male
and female physicians commit suicide significantly more often by substance
overdoses. Among physicians, psychiatrists are considered to be at greatest
risk.

• Climate
No significant seasonal correlation with suicide has been found. Suicides increase
slightly in spring and fall but, contrary to popular belief, not during December and
holiday periods.
• Physical health

A physical illness is estimated to be an important contributing factor in about half of
all suicides. Factors associated with illness that contribute to both suicides and
suicide attempts are loss of mobility, disfigurement, chronic (intractable) pain and
secondary effects. Certain drugs can produce depression, which may lead to suicide
in some cases.

• Mental illness

Almost 95 percent of all persons who commit or attempt suicide have a diagnosed
mental disorder. Depressive disorders account for 80 percent of this figure,
schizophrenia accounts for 10 percent, and dementia or delirium for 5 percent.
Among all persons with mental disorders, 25 percent are also alcohol dependent and
have dual diagnoses. Persons with delusional depression are at highest risk of
suicide. A history of impulsive behavior or violent acts increases the risk of suicide.
Diagnoses of substance abuse and antisocial personality disorder occurred most
often among suicides in persons less than 30 years of age and diagnoses of mood
disorders and cognitive disorders occurred most often among suicides in those age 30 and
above. Stressors associated with suicide in those under 30 were separation,
rejection, unemployment, and legal troubles; in those age 30 and above, stressors
were more often related to illness.

• Psychiatric patients
Psychiatric patients' risk for suicide is 3 to 12 times that of nonpatients. Male and female
inpatients have suicide risks five and ten times higher, respectively, than their counterparts in
the general population. For male and female outpatients who have never been
admitted to a hospital for psychiatric treatment, the suicide risks are three and four
times greater. Individuals at higher suicidal risk may be given electroconvulsive
therapy (ECT). Those in the general population who commit suicide tend to be
middle aged or older; among psychiatric patients, the mean age of male suicides was
29.5 years and that of women 38.4 years. The period after discharge from the hospital is also a time of
increased suicide risk. Studies show that one third or more of depressed patients
who commit suicide do so within 6 months of leaving a hospital; presumably they
have relapsed. The main risk groups are patients with depressive disorders,
schizophrenia, and substance abuse and patients who make repeated visits to the
emergency room.

[Refer Kaplan & Sadock, pp. 765-766]

Etiology

Sociological Factors

• Emile Durkheim's Theory- Durkheim divided suicides into three social categories:
egoistic, altruistic, and anomic.
• Egoistic suicide applies to those who are not strongly integrated into any social
group. Couples with children are the best protected group.
• Altruistic suicide applies to those susceptible to suicide stemming from their
excessive integration into a group, with suicide being the outgrowth of the
integration, for example, a Japanese soldier who sacrifices his life in battle.
• Anomic suicide applies to persons whose integration into society is disturbed so that
they cannot follow customary norms of behavior.
• Anomie can also indicate a drastic change in economic situation. In Durkheim's theory,
anomie also refers to social instability and a general breakdown of society's
standards and values.

Psychological Factors
• Freud's Theory- Suicide represents aggression turned inward against an introjected,
ambivalently cathected love object.
• Freud doubted that there would be a suicide without an earlier repressed desire to
kill someone else.
• Karl Menninger's Theory- Conceived of suicide as inverted homicide because of a
patient's anger toward another person. This retroflexed murder is either turned
inward or used as an excuse for punishment.
• He also described a self-directed death instinct (Freud's concept of Thanatos) plus
three components of hostility in suicide: the wish to kill, the wish to be killed, and
the wish to die.
• Recent theories- Fantasies of wishes for revenge, power, control, or punishment;
atonement, sacrifice, or restitution; escape or sleep; rescue, rebirth, reunion with
the dead; or a new life.
• A suicide attempt can cause a long-standing depression to disappear.
• A study by Aaron Beck showed that hopelessness was one of the most accurate
indicators of long-term suicidal risk.

Biological Factors

• Low concentrations of the serotonin metabolite 5-hydroxyindoleacetic acid (5-HIAA)
in the lumbar cerebrospinal fluid (CSF).
• Genetic factors- family history of suicide increases the risk of attempted suicide and
that of completed suicide in most diagnostic groups.
• Tryptophan hydroxylase (TPH) is an enzyme involved in the biosynthesis of
serotonin. A polymorphism in the human TPH gene has been identified, with two
alleles, U and L.
• The presence of the L allele was associated with an increased risk of suicide
attempts.
• Twin studies- these results show that monozygotic twin pairs have significantly
higher concordance for both suicide and attempted suicide.

Legal and Ethical Factors


• In about half of the cases in which suicides occur while patients are on a psychiatric
unit, a lawsuit results.

Parasuicidal Behaviour

Parasuicide is a term introduced to describe patients who injure themselves by self-
mutilation (e.g., cutting the skin) but who usually do not wish to die. The female-to-male
ratio is almost 3 to 1. The incidence of self-injury in psychiatric patients is estimated to be
more than 50 times that in the general population. These patients are usually in their 20s
and may be single or married. Most cut delicately, not coarsely, usually in private with a
razor blade, knife, broken glass, or mirror. The wrists, arms, thighs, and legs are most
commonly cut; the face, breasts, and abdomen are cut infrequently. Most persons who cut
themselves claim to experience no pain and give reasons for this behavior such as anger at
themselves or others, relief of tension, and the wish to die. Most are classified as having
personality disorders and are significantly more introverted, neurotic, and hostile than
controls. Alcohol abuse and other substance abuse are common.

Variables Enhancing Risk of Suicide among Vulnerable Groups

• Adolescence and late life
• Bisexual or homosexual gender identity
• Criminal behavior
• Cultural sanctions for suicide
• Delusions
• Disposition of personal property
• Divorced, separated, or single marital status
• Early loss or separation from parents
• Family history of suicide
• Hallucinations
• Homicide
• Hopelessness
• Hypochondriasis
• Impulsivity
• Increasing agitation
• Increasing stress
• Insomnia
• Lack of future plans
• Lack of sleep
• Lethality of previous attempt
• Living alone
• Low self-esteem
• Male sex
• Physical illness or impairment
• Previous attempts that could have resulted in death
• Protestant or non-religious status
• Recent childbirth
• Recent loss
• Repression as a defense
• Secondary gain
• Severe family pathology
• Severe psychiatric illness
• Sexual abuse
• Signals of intent to die
• Suicide epidemics
• Unemployment
• White race

Inpatient versus Outpatient Treatment

• Absence of a strong social support system, a history of impulsive behavior, and a suicidal plan of action are indications for hospitalization.
• To decide whether outpatient treatment is feasible, clinicians should use a
straightforward clinical approach: Ask patients who are considered suicidal to agree
to call when they become uncertain about their ability to control their suicidal
impulses.
• In return for a patient's commitment, clinicians should be available to the patient 24
hours a day.
• If a patient who is considered seriously suicidal cannot make the commitment,
immediate emergency hospitalization is indicated.
• If the patient refuses hospitalization, the family must take the responsibility to be
with the patient 24 hours a day.
• In a hospital, patients can receive antidepressant or antipsychotic medications as
indicated; individual therapy, group therapy, and family therapy are available, and
patients receive the hospital's social support and sense of security.
• ECT may be necessary for some severely depressed patients, who may require
several treatment courses.
• Some medications (e.g., risperidone [Risperdal]) have both antipsychotic and
antidepressant effects and are useful when the patient has signs and symptoms of
both psychosis and depression.
• Supportive psychotherapy by a psychiatrist shows concern and may alleviate some of
a patient's intense suffering.

Goals to Reduce Suicide

1. Promote awareness that suicide is a public health problem that is preventable

2. Develop broad-based support for suicide prevention

3. Develop and implement strategies to reduce the stigma associated with being a consumer of mental health, substance abuse, and suicide prevention services

4. Develop and implement suicide prevention programs

5. Promote efforts to reduce access to lethal means and methods of self-harm

6. Implement training for recognition of at-risk behavior and delivery of effective treatment

7. Develop and promote effective clinical and professional practices

8. Improve access to, and community linkages with, mental health and substance abuse services

9. Improve reporting and portrayals of suicidal behavior, mental illness, and substance abuse in the entertainment and news media

10. Promote and support research on suicide and suicide prevention

11. Improve and expand surveillance systems.

Types

• Victim-precipitated homicide
The phenomenon of using others, usually police, to kill oneself is well known to law enforcement personnel.
• Murder-suicides
Murder-suicides receive a disproportionate amount of attention because they are dramatic and tragic. Suicide pacts tend to be made more often by females or elderly couples.

• Terrorist suicides
Terrorist-bomber suicides represent a special category of murder-suicides, one in which there is no question of willingness on the victims' part and in which the victims are unknown to the perpetrators.
• Inevitable suicide
Not all suicides are preventable; some may be inevitable. In fact, over one third of all
completed suicides occur in persons who are receiving treatment for a psychiatric
disorder, most commonly depression, bipolar disorder, or schizophrenia.
• Surviving suicide

To be a suicide survivor refers to those who have lost a loved one to suicide, not to
someone who has attempted suicide but lived. Survivors feel that the loved one
intentionally and willfully took his or her life and that if only the survivor had done
something differently, the decedent would still be here. Because the decedent
cannot tell them otherwise, survivors are at the mercy of their often merciless
consciences.

• Egoistic, altruistic and anomic suicide (Durkheim's typology)
Egoistic suicide applies to those who are not strongly integrated into any social group. Couples with children are the best protected group.
Altruistic suicide applies to those susceptible to suicide stemming from their excessive integration into a group, with suicide being the outgrowth of the integration (for example, a Japanese soldier who sacrifices his life in battle).
Anomic suicide applies to persons whose integration into society is disturbed so that they cannot follow customary norms of behavior.
Mood Disorder Among Creative Individuals

[Refer PDF]
MODULE 4: ANXIETY, TRAUMA, AND STRESS RELATED AND SOMATOFORM DISORDERS

UNIT 1: ANXIETY DISORDERS: GENERALIZED ANXIETY DISORDER, PANIC DISORDER AND


AGORAPHOBIA, SPECIFIC PHOBIA, SOCIAL ANXIETY DISORDER

Panic Disorder

The idea of panic disorder may have its roots in the concept of irritable heart syndrome,
which the physician Jacob Mendes DaCosta (1833-1900) noted in soldiers in the American
Civil War. DaCosta's syndrome included many psychological and somatic symptoms that
have since been included among the diagnostic criteria for panic disorder. In 1895, Sigmund
Freud introduced the concept of anxiety neurosis, consisting of acute and chronic
psychological and somatic symptoms.

Panic disorder is characterized by acute, intense attacks of anxiety accompanied by feelings of impending doom. The anxiety is characterized by discrete periods of intense fear that can vary from several attacks during one day to only a few attacks during a year. Patients with panic disorder present with a number of comorbid conditions, most commonly agoraphobia, which refers to a fear of or anxiety regarding places from which escape might be difficult.

Epidemiology

• The lifetime prevalence of panic disorder is in the 1 to 4 percent range.


• Women are two to three times more likely to be affected than men, although
underdiagnosis of panic disorder in men may contribute to the skewed distribution.
• The only social factor identified as contributing to the development of panic disorder
is a recent history of divorce or separation.
• Panic disorder most commonly develops in young adulthood (the mean age of presentation is about 25 years), but both panic disorder and agoraphobia can develop at any age.

Comorbidity

• Of patients with panic disorder, 91 percent have at least one other psychiatric
disorder.
• In about one-third of persons with both disorders, major depressive disorder precedes the onset of panic disorder.
• In about two-thirds, major depression occurs coincident with or after the onset of panic disorder.
• 15 to 30 percent have social anxiety disorder or social phobia.
• 2 to 20 percent have specific phobia.
• 15 to 30 percent have generalized anxiety disorder.
• 2 to 10 percent have PTSD.
• Up to 30 percent have OCD.
• Other common comorbid conditions are hypochondriasis or illness anxiety disorder,
personality disorders, and substance related disorders.

Symptoms

DSM-5 criteria:

A. Recurrent unexpected panic attacks. A panic attack is an abrupt surge of intense fear or
intense discomfort that reaches a peak within minutes and during which time four (or more)
of the following symptoms occur:

Note: The abrupt surge can occur from a calm state or an anxious state.

1. Palpitations, pounding heart, or accelerated heart rate.

2. Sweating.

3. Trembling or shaking.

4. Sensations of shortness of breath or smothering.

5. Feelings of choking.

6. Chest pain or discomfort.

7. Nausea or abdominal distress.

8. Feeling dizzy, unsteady, light-headed, or faint.

9. Chills or heat sensations.

10. Paresthesias (numbness or tingling sensations).

11. Derealization (feelings of unreality) or depersonalization (being detached from oneself).

12. Fear of losing control or "going crazy."

13. Fear of dying.

Note: Culture-specific symptoms (e.g., tinnitus, neck soreness, headache, uncontrollable screaming or crying) may be seen. Such symptoms should not count as one of the four required symptoms.

B. At least one of the attacks has been followed by 1 month (or more) of one or both of the
following:

1. Persistent concern or worry about additional panic attacks or their consequences (e.g., losing control, having a heart attack, "going crazy").

2. A significant maladaptive change in behavior related to the attacks (e.g., behaviors designed to avoid having panic attacks, such as avoidance of exercise or unfamiliar situations).

C. The disturbance is not attributable to the physiological effects of a substance (e.g., a drug
of abuse, a medication) or another medical condition (e.g., hyperthyroidism,
cardiopulmonary disorders).

D. The disturbance is not better explained by another mental disorder (e.g., the panic
attacks do not occur only in response to feared social situations, as in social anxiety
disorder; in response to circumscribed phobic objects or situations, as in specific phobia; in
response to obsessions, as in obsessive-compulsive disorder; in response to separation from
attachment figures, as in separation anxiety disorder).
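
Read as a decision rule, Criterion A is a simple threshold: an abrupt surge of fear or discomfort plus four or more of the thirteen listed symptoms. A minimal illustrative sketch of that threshold follows; the symptom labels and function name are hypothetical, and this is a reading aid for the criteria above, not a clinical or diagnostic tool.

```python
# Sketch of the DSM-5 Criterion A threshold for a panic attack: four or more
# of the 13 listed symptoms during an abrupt surge of fear or discomfort.
# Symptom labels and function name are hypothetical; not a diagnostic tool.

PANIC_ATTACK_SYMPTOMS = {
    "palpitations", "sweating", "trembling", "shortness_of_breath",
    "choking", "chest_pain", "nausea", "dizziness", "chills_or_heat",
    "paresthesias", "derealization_or_depersonalization",
    "fear_of_losing_control", "fear_of_dying",
}

def meets_panic_criterion_a(reported: set) -> bool:
    """True if four or more of the 13 DSM-5 symptoms are reported.

    Per the DSM-5 note, culture-specific symptoms (e.g., tinnitus,
    headache) are excluded and do not count toward the four.
    """
    return len(reported & PANIC_ATTACK_SYMPTOMS) >= 4

# Example: four reported symptoms meet the threshold.
print(meets_panic_criterion_a({"palpitations", "sweating", "trembling", "dizziness"}))  # True
```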

Etiology
Biological factors

• Abnormal regulation of brain noradrenergic systems.


• Peripheral and central nervous system (CNS) dysregulation.
• Increased sympathetic tone, slow adaptation to repeated stimuli, and excessive response to moderate stimuli.
• The major neurotransmitter systems that have been implicated are those for
norepinephrine, serotonin, and GABA.
• Serotonergic dysfunction is quite evident in panic disorder.
• Attenuation of local inhibitory GABAergic transmission in the basolateral amygdala,
midbrain, and hypothalamus.
• The biological data have led to a focus on the brainstem (particularly the
noradrenergic neurons of the locus ceruleus and the serotonergic neurons of the
median raphe nucleus), the limbic system (possibly responsible for the generation of
anticipatory anxiety), and the prefrontal cortex (possibly responsible for the
generation of phobic avoidance).
• Presynaptic α2-adrenergic receptors play a significant role.
• Panic-inducing substances (sometimes called panicogens) induce panic attacks in
most patients with panic disorder and in a much smaller proportion of persons
without panic disorder or a history of panic attacks.
• Brain imaging studies have implicated pathological involvement in the temporal
lobes, particularly the hippocampus and the amygdala.
• Cortical atrophy, in the right temporal lobe.
• Dysregulation of cerebral blood flow (smaller increase or an actual decrease in
cerebral blood flow).
• Cerebral vasoconstriction.
• Studies have found that the prevalence of panic disorder in patients with mitral valve
prolapse is the same as the prevalence of panic disorder in patients without mitral
valve prolapse.

Genetic factors
• First-degree relatives of patients with panic disorder have a four- to eightfold higher
risk for panic disorder.
• Monozygotic twins are more likely to be concordant for panic disorder than are
dizygotic twins.

Psychosocial factors

• Difficulty tolerating anger.
• Physical or emotional separation from a significant person, both in childhood and in adult life.
• May be triggered by situations of increased work responsibilities.
• Perception of parents as controlling, frightening, critical, and demanding.
• Internal representations of relationships involving sexual or physical abuse.
• A chronic sense of feeling trapped.
• Vicious cycle of anger at parental rejecting behavior followed by anxiety that the fantasy will destroy the tie to parents.
• Failure of signal anxiety function in ego related to self-fragmentation and self-other boundary confusion.
• Typical defense mechanisms: reaction formation, undoing, somatization, and externalization.
• Panic attacks as arising from an unsuccessful defense against anxiety-provoking impulses.
• Onset of panic is generally related to environmental or psychological factors.
• Patients with panic disorder have a higher incidence of stressful life events (particularly loss) than control subjects in the months before the onset of panic disorder.
• Stressful psychological events produce neurophysiological changes in panic disorder.
• Separation from the mother early in life was clearly more likely to result in panic disorder than was paternal separation.
• Childhood physical and sexual abuse.
• Unconscious meaning of stressful events.

Course and Prognosis

Panic disorder usually has its onset in late adolescence or early adulthood. Panic disorder, in
general, is a chronic disorder, although its course is variable. The frequency and severity of
the attacks can fluctuate. Panic attacks can occur several times in a day or less than once a
month. Excessive intake of caffeine or nicotine can exacerbate the symptoms. Depression
can complicate the symptom picture in anywhere from 40 to 80 percent of all patients.
Patients with good premorbid functioning and symptoms of brief duration tend to have
good prognoses.

Treatment

• SSRIs
• Benzodiazepines
• Tricyclic and Tetracyclic Drugs- among tricyclic drugs, clomipramine and imipramine (Tofranil) are the most effective in the treatment of panic disorder. Some data support the efficacy of desipramine (Norpramin), and less evidence suggests a role for maprotiline (Ludiomil), trazodone (Desyrel), nortriptyline (Pamelor), amitriptyline (Elavil), and doxepin (Adapin).
• Monoamine Oxidase Inhibitors- The most robust data support the effectiveness of phenelzine (Nardil), and some data also support the use of tranylcypromine (Parnate).
• If patients fail to respond to one class of drugs, another should be tried. Case reports
have suggested the effectiveness of carbamazepine (Tegretol), valproate
(Depakene), calcium channel inhibitors and buspirone.
• Pharmacological treatment should generally continue for 8 to 12 months.
• Patients may be likely to relapse if they have been given benzodiazepines and the
benzodiazepine therapy is terminated in a way that causes withdrawal symptoms.
• CBT
• Cognitive Therapy- corrects the patient's false beliefs about panic attacks and provides information about the attacks.
Agoraphobia

The term agoraphobia was coined in 1871 to describe the condition of patients who were
afraid to venture alone into public places. The term is derived from the Greek words agora
and phobos, meaning "fear of the marketplace."

Agoraphobia refers to a fear of or anxiety regarding places from which escape might be
difficult. It can be the most disabling of the phobias because it can significantly interfere
with a person's ability to function in work and social situations outside the home.
Agoraphobia almost always develops as a complication in patients with panic disorder. That
is, the fear of having a panic attack in a public place from which escape would be formidable
is thought to cause the agoraphobia. Although agoraphobia often coexists with panic
disorder, DSM-5 classifies agoraphobia as a separate condition that may or may not be
comorbid with panic disorder.

Epidemiology

• Lifetime prevalence estimates of agoraphobia vary between 2 and 6 percent.


• Persons older than age 65 years have a 0.4 percent prevalence rate of agoraphobia,
but this may be a low estimate.
• At least three fourths of the affected patients have panic disorder.
• In many cases, the onset of agoraphobia follows a traumatic event.

Symptoms

DSM-5 criteria:

A. Marked fear or anxiety about two (or more) of the following five situations:

1. Using public transportation (e.g., automobiles, buses, trains, ships, planes)

2. Being in open spaces (e.g., parking lots, marketplaces, bridges)

3. Being in enclosed places (e.g., shops, theatres, cinemas)

4. Standing in line or being in a crowd

5. Being outside of the home alone

B. The individual fears or avoids these situations because of thoughts that escape might be
difficult or help might not be available in the event of developing panic-like symptoms or
other incapacitating or embarrassing symptoms (e.g., fear of falling in elderly adults; fear of
incontinence).

C. The agoraphobic situations almost always provoke fear or anxiety.

D. The agoraphobic situations are actively avoided, require the presence of a companion, or
are endured with intense fear or anxiety.

E. The fear or anxiety is out of proportion to the actual danger posed by the agoraphobic
situations and to the sociocultural context.

F. The fear, anxiety, or avoidance is persistent, typically lasting for 6 months or more.

G. The fear, anxiety, or avoidance causes clinically significant distress or impairment in social, occupational, or other important areas of functioning.

H. If another medical condition (e.g., inflammatory bowel disease, Parkinson's disease) is present, the fear, anxiety, or avoidance is clearly excessive.

I. The fear, anxiety, or avoidance is not better explained by the symptoms of another mental
disorder-for example, the symptoms are not confined to specific phobia, situational type; do
not involve only social situations (as in social anxiety disorder); and are not related
exclusively to obsessions (as in obsessive-compulsive disorder), perceived defects or flaws in
physical appearance (as in body dysmorphic disorder), reminders of traumatic events (as in
posttraumatic stress disorder), or fear of separation (as in separation anxiety disorder).

Note: Agoraphobia is diagnosed irrespective of the presence of panic disorder. If an individual's presentation meets the criteria for panic disorder and agoraphobia, both diagnoses should be assigned.
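
Criterion A here is again a count-based threshold, this time two or more of the five situation types, with Criteria B through I acting as yes/no gates. A minimal sketch under the same caveats as the earlier panic-attack example (hypothetical labels, illustration only):

```python
# Sketch of DSM-5 agoraphobia Criterion A: marked fear or anxiety about two
# (or more) of the five situation types. Hypothetical labels; not a
# diagnostic tool.

AGORAPHOBIC_SITUATIONS = {
    "public_transportation", "open_spaces", "enclosed_places",
    "line_or_crowd", "outside_home_alone",
}

def meets_agoraphobia_criterion_a(feared: set) -> bool:
    """True if the person fears at least two of the five situation types."""
    return len(feared & AGORAPHOBIC_SITUATIONS) >= 2

# Example: fear of public transportation and of enclosed places suffices.
print(meets_agoraphobia_criterion_a({"public_transportation", "enclosed_places"}))  # True
```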

Course and Prognosis


Most cases of agoraphobia are thought to be caused by panic disorder. When the panic
disorder is treated, the agoraphobia often improves with time. For rapid and complete
reduction of agoraphobia, behavior therapy is sometimes indicated. Agoraphobia without a
history of panic disorder is often incapacitating and chronic, and depressive disorders and
alcohol dependence often complicate its course.

Treatment

• SSRIs- first-line agents; sexual side effects that emerge during SSRI treatment are sometimes managed with yohimbine (Yocon), bupropion (Wellbutrin), or mirtazapine (Remeron), with dose reduction, or with adjunctive use of sildenafil (Viagra).
• Benzodiazepines- Alprazolam (Xanax) and lorazepam (Ativan) are the most
commonly prescribed benzodiazepines. Clonazepam (Klonopin) has also been shown
to be effective. Benzodiazepines are also best avoided in individuals with histories of
alcohol or substance abuse unless there are compelling reasons, such as failure to
respond to other classes of medications.
• Tricyclic and Tetracyclic drugs- the tricyclic drugs clomipramine (Anafranil) and
imipramine (Tofranil) are the most effective in the treatment of these disorders.
• Supportive psychotherapy
• Insight-oriented psychotherapy
• Behaviour therapy
• Cognitive therapy
• Virtual therapy

UNIT 3: OBSESSIVE COMPULSIVE AND RELATED DISORDER - OCD, BODY DYSMORPHIC DISORDER, HOARDING DISORDER, TRICHOTILLOMANIA, EXCORIATION; ETIOLOGY AND INTERVENTION.

Body Dysmorphic Disorder

Body dysmorphic disorder is characterized by a preoccupation with an imagined defect in appearance that causes clinically significant distress or impairment in important areas of functioning. If a slight physical anomaly is actually present, the person's concern with the anomaly is excessive and bothersome.
The disorder was recognized and named dysmorphophobia more than 100 years ago by Emil
Kraepelin, who considered it a compulsive neurosis; Pierre Janet called it obsession de la
honte du corps (obsession with shame of the body). DSM-III in 1980 included
dysmorphophobia for the first time in the US diagnostic criteria. In the fourth text revision
of DSM (DSM-IV-TR), the condition was known as body dysmorphic disorder, because the
DSM editors believed that the term dysmorphophobia inaccurately implied the presence of
a behavioral pattern of phobic avoidance. In the fifth edition of DSM (DSM-5), body
dysmorphic disorder is included in the obsessive-compulsive spectrum disorders due to its
similarities to obsessive-compulsive disorder (OCD).

Epidemiology

• Available data indicate that the most common age of onset is between 15 and 30
years and that women are affected somewhat more often than men.
• Affected patients are also likely to be unmarried. Body dysmorphic disorder
commonly coexists with other mental disorders.
• One study found that more than 90 percent of patients with body dysmorphic
disorder had experienced a major depressive episode in their lifetimes; about 70
percent had experienced an anxiety disorder; and about 30 percent had experienced
a psychotic disorder.

Etiology

• The cause of body dysmorphic disorder is unknown.


• Serotoninergic dysfunction
• Stereotyped concepts of beauty.
• In psychodynamic models, body dysmorphic disorder is seen as reflecting the
displacement of a sexual or emotional conflict onto a nonrelated body part.
• Association occurs through the defense mechanisms of repression, dissociation,
distortion, symbolization, and projection.

Clinical Features
• The most common concerns involve facial flaws, particularly those involving specific
parts (e.g., the nose).
• Sometimes the concern is vague and difficult to understand, such as extreme
concern over a "scrunchy" chin.
• Other body parts of concern are hair, breasts, and genitalia.
• A proposed variant of dysmorphic disorder among men is the desire to "bulk up" and
develop large muscle mass, which can interfere with ordinary living, holding a job, or
staying healthy.
• The specific body part may change during the time a patient is affected with the
disorder.
• Common associated symptoms include ideas or frank delusions of reference (usually
about persons' noticing the alleged body flaw), either excessive mirror checking or
avoidance of reflective surfaces, and attempts to hide the presumed deformity (with
makeup or clothing).
• The effects on a person's life can be significant; almost all affected patients avoid
social and occupational exposure.
• As many as one-third of patients may be housebound because of worry about being
ridiculed for the alleged deformities; and approximately one-fifth of patients
attempt suicide.

Course and Prognosis

Body dysmorphic disorder usually begins during adolescence, although it may begin later
after a protracted dissatisfaction with the body. The onset can be gradual or abrupt. The
disorder usually has a long and undulating course with few symptom-free intervals. The part
of the body on which concern is focused may remain the same or may change over time.

Treatment

• Monoamine oxidase inhibitors (MAOIs)


• Pimozide (Orap)
• Clomipramine (Anafranil)
• Fluoxetine (Prozac)
• How long treatment should be continued after the symptoms of body dysmorphic
disorder have remitted is unknown.
• Augmentation of the selective serotonin reuptake inhibitor (SSRI) with clomipramine
(Anafranil), buspirone (BuSpar), lithium (Eskalith), methylphenidate (Ritalin), or
antipsychotics may improve the response rate.

Hoarding Disorder

Compulsive hoarding is a common and often disabling phenomenon associated with impairment in such functions as eating, sleeping, and grooming. Hoarding may result in health problems and poor sanitation, particularly when hoarding of animals is involved, and may lead to death from fire or falling. The disorder is characterized by acquiring and not discarding things that are deemed to be of little or no value, resulting in excessive clutter of living spaces. Hoarding was originally considered a subtype of obsessive-compulsive disorder (OCD), but is now considered to be a separate diagnostic entity. It is commonly driven by an obsessive fear of losing important items that the person believes may be of use at some point in the future, by distorted beliefs about the importance of possessions, and by extreme emotional attachment to possessions.

Epidemiology

• Hoarding is believed to occur in approximately 2 to 5 percent of the population, although some studies have found lifetime prevalence as high as 14 percent.
• It occurs equally among men and women and is more common in single persons.
• Hoarding usually begins in early adolescence and persists throughout the lifespan.

Comorbidity

• The most significant comorbidity is found between hoarding disorder and OCD, with
as many as 30 percent of OCD patients showing hoarding behavior.
• Studies have found an association between hoarding and compulsive buying. Buying
or acquiring needless things (including receiving gifts) may be a source of comfort for
hoarders, many of whom find themselves with extra items for a perceived but
irrational future need.
• Hoarding is associated with high rates of personality disorders in addition to OCD.
These include dependent, avoidant, schizotypal, and paranoid types.
• 20 percent of hoarding patients met the criteria for ADHD.
• Hoarding behaviors are relatively common among schizophrenic patients and have
been noted in dementia and other neurocognitive disorders. One study found hoarding in 20 percent of dementia patients and 14 percent of brain injury patients.
• Onset of hoarding has been reported in cases of frontotemporal dementia and may
follow surgery resulting in structural defects in prefrontal and orbitofrontal cortex.
• Other disorders associated with hoarding include eating disorders, depression,
anxiety disorders, substance use disorders (particularly alcohol dependence),
kleptomania, and compulsive gambling.
• Among anxiety disorders, hoarding is most associated with generalized anxiety
disorder (27 percent) and social anxiety disorder (14 percent).

Etiology

• Little is known about the etiology of hoarding disorder.


• 80 percent of hoarders have at least one first-degree relative with hoarding
behavior.
• Lower metabolism in the posterior cingulate cortex and the occipital cortex of
hoarders, which may also account for various cognitive impairments within hoarders
such as attention and decision-making deficits.
• One study of the molecular genetics of hoarding found a link between hoarding behavior and markers on chromosomes 4q, 5q, and 17q.
• The catechol-O-methyltransferase (COMT) gene on chromosome 22q11.21 might contribute to the genetic susceptibility to hoarding.

Clinical Features

• Hoarding is driven by the fear of losing items that the patient believes will be needed
later and a distorted belief about or an emotional attachment to possessions.
• Most hoarders do not perceive their behavior to be a problem.
• In fact, many perceive their behavior to be reasonable and part of their identity.
• Most hoarding patients accumulate possessions passively rather than intentionally,
thus clutter accumulates gradually over time.
• Common hoarded items include newspapers, mail, magazines, old clothes, bags,
books, lists, and notes.
• Hoarding poses risks to not only the patient, but also to those around them.
o Clutter accumulated from hoarding has been attributed to deaths from fire or
patients being crushed by their possessions. It can also attract pest
infestations that can pose a health risk both to the patient and residents
around them.
• In severe cases, hoarding can interfere with work, social interaction, and basic
activities such as eating or sleeping.
• The pathological nature of hoarding comes from the inability to organize possessions
and keep them organized.
• Many hoard to avoid making decisions about discarding items.
• Patients with hoarding disorder also overemphasize the importance of recalling
information and possessions.
• For example, a hoarder will keep old newspapers and magazines because they
believe that if discarded the information will be forgotten and will never be retrieved
again. In addition, patients believe that forgetting information will lead to serious
consequences and prefer to keep possessions in sight so as not to forget them.

Course and Prognosis

The disorder is a chronic condition with a treatment-resistant course. Treatment seeking


does not usually occur until patients are in their 40s or 50s, even if the hoarding began
during adolescence. Symptoms may fluctuate throughout the course of the disorder, but full
remission is rare. Patients have very little insight into their behavior and usually seek
treatment under pressure from others. Some patients begin hoarding in response to a
stressful event, while others report a slow and steady progression throughout life. Those
who report onset due to a stressful event have a later age of onset than those who do not.
Those with an earlier age of onset run a longer and more chronic course.

Treatment
• Hoarding disorder is difficult to treat.
• The most effective treatment for the disorder is a cognitive behavioral model (CBT).
o The challenges posed by hoarding patients to typical CBT treatment include
poor insight to the behavior and low motivation and resistance to treatment.
• Pharmacological treatment studies using SSRIs have shown mixed results.

Hair-Pulling Disorder (Trichotillomania)

Hair-pulling disorder is a chronic disorder characterized by repetitive hair pulling, leading to variable hair loss that may be visible to others. It is also known as trichotillomania, a term coined by the French dermatologist Francois Hallopeau in 1889.

Epidemiology

• The prevalence of hair-pulling disorder may be underestimated because of accompanying shame and secretiveness.
• The most serious, chronic form of the disorder usually begins in early to mid-adolescence.
• Lifetime prevalence ranges from 0.6 percent to as high as 3.4 percent in general populations.
• The female-to-male ratio is as high as 10 to 1.
• The number of men may actually be higher, because men are even more likely than
women to conceal hair pulling.
• A patient with chronic hair-pulling disorder is likely to be the only or oldest child in
the family.
• A childhood type of hair-pulling disorder occurs approximately equally in girls and
boys.
• An estimated 35 to 40 percent of patients with hair-pulling disorder chew or swallow the hair that they pull out at one time or another. Of this group, approximately one-third develop potentially hazardous bezoars (hairballs accumulating in the alimentary tract).

Comorbidity
Significant comorbidity is found between hair-pulling disorder and obsessive-compulsive
disorder (OCD); anxiety disorders; Tourette's disorder; depressive disorders; eating
disorders; and various personality disorders-particularly obsessive-compulsive, borderline,
and narcissistic personality disorders.

Etiology

• Onset has been linked to stressful situations in more than one-fourth of all cases.
Disturbances in mother-child relationships, fear of being left alone, and recent object
loss are often cited as critical factors contributing to the condition.
• Substance abuse may encourage development of the disorder.
• Depressive dynamics are often cited as predisposing factors, but no particular
personality trait or disorder characterizes patients.
• Some see self-stimulation as the primary goal of hair pulling.
• Family members of hair-pulling disorder patients often have a history of tics,
impulse-control disorders, and obsessive-compulsive symptoms, further supporting
a possible genetic predisposition.
• A smaller volume of the left putamen and left lenticulate areas.
• A study of the genetics of trichotillomania reported a relationship between a serotonin 2A (5-HT2A) receptor gene polymorphism (T102C) and trichotillomania.

Clinical Features

• The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) includes diagnostic criteria for hair-pulling disorder.
• Before engaging in the behavior, patients with hair-pulling disorder may experience
an increasing sense of tension and achieve a sense of release or gratification from
pulling out their hair.
• All areas of the body may be affected, most commonly the scalp. Other areas
involved are eyebrows, eyelashes, and beard; trunk, armpits, and pubic area are less
commonly involved.
• Two types of hair pulling have been described.
o Focused pulling is the use of an intentional act to control unpleasant
personal experiences, such as an urge, bodily sensation (e.g., itching or
burning), or thought.
o In contrast, automatic pulling occurs outside the person's awareness and
most often during sedentary activities.
o Most patients have a combination of these types of hair pulling.
• Hair loss is characterized by short, broken strands appearing together with long,
normal hairs in the affected areas.
• No abnormalities of the skin or scalp are present.
• Hair pulling is not reported as being painful, although pruritus and tingling may occur
in the involved area.
• Trichophagy, mouthing of the hair, may follow the hair plucking.
• Complications of trichophagy include trichobezoars, malnutrition, and intestinal
obstruction. Patients usually deny the behavior and often try to hide the resultant
alopecia.
• Head banging, nail biting, scratching, gnawing, excoriation, and other acts of self-
mutilation may be present.

Course and Prognosis

The course of the disorder is not well known; both chronic and remitting forms occur. An
early onset (before age 6) tends to remit more readily and responds to suggestions, support,
and behavioral strategies. Late onset (after age 13) is associated with an increased
likelihood of chronicity and poorer prognosis than the early-onset form.

Treatment

• No consensus exists on the best treatment modality for hair-pulling disorder.


• SSRIs
• Pimozide (Orap)
• Fluvoxamine (Luvox), citalopram (Celexa), venlafaxine (Effexor), naltrexone (ReVia), and lithium (Eskalith).
• Case reports also indicate successful treatment with buspirone (BuSpar), clonazepam
(Klonopin), and trazodone (Desyrel).
• Successful behavioral treatments, such as biofeedback, self-monitoring,
desensitization, and habit reversal, have been reported.
• Insight- oriented psychotherapy
• Hypnotherapy

Excoriation Disorder

Excoriation or skin-picking disorder is characterized by the compulsive and repetitive picking of the skin. It can lead to severe tissue damage and result in the need for various dermatological treatments.

Also known as skin-picking syndrome, emotional excoriation, nervous scratching artifact, epidermotillomania, and para-artificial excoriation.

Epidemiology

• Skin-picking disorder has a lifetime prevalence between 1 and 5 percent in the general population, about 12 percent in the adolescent psychiatric population, and occurs in 2 percent of patients with other dermatologic disorders.
• It is more prevalent in women than in men.

Comorbidity

• High rates of comorbidity with OCD.


• Hair-pulling disorder (trichotillomania, 38 percent), substance dependence (38
percent), major depressive disorder (32 to 58 percent), anxiety disorders (23 to 56
percent), and body dysmorphic disorder (27 to 45 percent). One study reported an
association of both borderline and obsessive-compulsive personality disorder (71
percent) in patients with skin-picking disorder.

Etiology

• The cause of skin-picking is unknown.


• Manifestation of repressed rage at authoritarian parents. These patients pick at their
skin and perform other self-destructive acts to assert themselves.
• Patients may pick as a means to relieve stress. For example, skin-picking has been
associated with marital conflicts, passing of loved ones, and unwanted pregnancies.
• According to psychoanalytic theory, the skin is an erotic organ, and picking at the
skin or scratching the skin leading to excoriations may be a source of erotic pleasure.
• Many patients begin picking at the onset of dermatological conditions such as acne
and continue to pick after the condition has cleared.
• Abnormalities in serotonin, dopamine, and glutamate metabolism.

Clinical Features

• The face is the most common site of skin-picking.


• Other common sites are legs, arms, torso, hands, cuticles, fingers, and scalp.
• Although most patients report having a primary picking area, many times they pick
other areas of the body in order for the primary area to heal.
• In severe cases, skin-picking can result in physical disfigurement and medical
consequences that require medical or surgical interventions (e.g., skin grafts or
radiosurgery).
• Patients may experience tension prior to picking and a relief and gratification after
picking.
• Up to 87 percent of patients report feeling embarrassed by the picking and 58
percent report avoiding social situations.
• Many patients use bandages, makeup, or clothing to hide their picking. Of skin-picking patients, 15 percent report suicidal ideation due to their behavior and about 12 percent have attempted suicide.

Course and Prognosis

The onset of skin-picking disorder is either in early adulthood or between 30 and 45 years of
age. Onset in children before age 10 years has also been seen. The mean age of onset is between 12 and 16 years of age. Typically, symptoms wax and wane over the course of the patient's life.
Treatment

• Skin-picking disorder is difficult to treat.


• SSRIs
• Fluoxetine (Prozac)
• Naltrexone (Revia)
• Glutamatergic agents and lamotrigine (Lamictal) have also shown efficacy.
• CBT
• Psychotherapy
• Effective therapy requires both psychological and somatic treatment.

UNIT 4: SOMATOFORM DISORDERS: SOMATIC SYMPTOM DISORDER, ILLNESS ANXIETY DISORDER, CONVERSION DISORDER.

Somatic Symptom Disorder

Somatic symptom disorder, also known as hypochondriasis, is characterized by 6 or more months of a general and non-delusional preoccupation with fears of having, or the idea that one has, a serious disease based on the person's misinterpretation of bodily symptoms. This
preoccupation causes significant distress and impairment in one's life; it is not accounted for
by another psychiatric or medical disorder; and a subset of individuals with somatic
symptom disorder has poor insight about the presence of this disorder.

Epidemiology

• The reported 6-month prevalence of this disorder is 4 to 6 percent, but it may be as high as 15 percent.
• Men and women are equally affected by this disorder.
• Although the onset of symptoms can occur at any age, the disorder most commonly
appears in persons 20 to 30 years of age.
• Some evidence indicates that this diagnosis is more common among blacks than
among whites, but social position, education level, gender, and marital status do not
appear to affect the diagnosis.
• This disorder's complaints reportedly occur in about 3 percent of medical students,
usually in the first 2 years, but they are generally transient.

Comorbidity

• Somatic symptom disorder is sometimes a variant form of other mental disorders, among which depressive disorders and anxiety disorders are most frequently included.
• An estimated 80 percent of patients with this disorder may have coexisting
depressive or anxiety disorders.

Persons with this disorder augment and amplify their somatic sensations; they have low
thresholds for, and low tolerance of, physical discomfort. For example, what persons
normally perceive as abdominal pressure, persons with somatic symptom disorder
experience as abdominal pain. They may focus on bodily sensations, misinterpret them, and
become alarmed by them because of a faulty cognitive scheme.

Etiology

• Faulty cognitive scheme.


• Somatic symptom disorder can also be understood in terms of a social learning
model.
o The sick role offers an escape that allows a patient to avoid noxious
obligations, to postpone unwelcome challenges, and to be excused from
usual duties and obligations.
• The psychodynamic school of thought holds that aggressive and hostile wishes
toward others are transferred (through repression and displacement) into physical
complaints.
• The anger of patients with this disorder originates in past disappointments,
rejections, and losses, but the patients express their anger in the present by
soliciting the help and concern of other persons and then rejecting them as
ineffective.
• This disorder is also viewed as a defense against guilt, a sense of innate badness, an
expression of low self-esteem, and a sign of excessive self-concern.
• Pain and somatic suffering thus become means of atonement and expiation
(undoing) and can be experienced as deserved punishment for past wrongdoing
(either real or imaginary) and for a person's sense of wickedness and sinfulness.

Diagnostic Criteria

A. One or more somatic symptoms that are distressing or result in significant disruption of
daily life.

B. Excessive thoughts, feelings, or behaviors related to the somatic symptoms or associated health concerns as manifested by at least one of the following:

1. Disproportionate and persistent thoughts about the seriousness of one's symptoms.

2. Persistently high level of anxiety about health or symptoms.

3. Excessive time and energy devoted to these symptoms or health concerns.

C. Although any one somatic symptom may not be continuously present, the state of being
symptomatic is persistent (typically more than 6 months).

Specify if:

With predominant pain (previously pain disorder): This specifier is for individuals whose
somatic symptoms predominantly involve pain.

Specify if:

Persistent: A persistent course is characterized by severe symptoms, marked impairment, and long duration (more than 6 months).

Specify current severity:

Mild: Only one of the symptoms specified in Criterion B is fulfilled.

Moderate: Two or more of the symptoms specified in Criterion B are fulfilled.


Severe: Two or more of the symptoms specified in Criterion B are fulfilled, plus there are
multiple somatic complaints (or one very severe somatic symptom).
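
The severity specifiers above amount to a small decision table over the Criterion B count and the breadth of somatic complaints. A minimal illustrative sketch follows (the function and parameter names are hypothetical; it summarizes the specifier text and is not a clinical instrument):

```python
# Sketch of the DSM-5 severity specifiers for somatic symptom disorder.
# Names are hypothetical; this only summarizes the specifier text above.

def ssd_severity(criterion_b_count: int, multiple_or_severe_complaints: bool) -> str:
    """Map the number of fulfilled Criterion B symptoms (0-3) to a specifier."""
    if criterion_b_count >= 2 and multiple_or_severe_complaints:
        return "severe"    # two or more B symptoms plus multiple/very severe complaints
    if criterion_b_count >= 2:
        return "moderate"  # two or more B symptoms
    if criterion_b_count == 1:
        return "mild"      # only one B symptom
    return "criterion B not met"

# Example: two Criterion B symptoms with multiple somatic complaints.
print(ssd_severity(2, True))  # "severe"
```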

Course and Prognosis

The course of the disorder is usually episodic; the episodes last from months to years and
are separated by equally long quiescent periods.

A good prognosis is associated with high socioeconomic status, treatment-responsive anxiety or depression, sudden onset of symptoms, the absence of a personality disorder,
and the absence of a related nonpsychiatric medical condition. Most children with the
disorder recover by late adolescence or early adulthood.

Treatment

• Patients with somatic symptom disorder usually resist psychiatric treatment, although some accept this treatment if it takes place in a medical setting and focuses
on stress reduction and education in coping with chronic illness.
• Group psychotherapy.
• Other forms of psychotherapy, such as individual insight-oriented psychotherapy,
behavior therapy, cognitive therapy, and hypnosis, may be useful.
• Frequent, regularly scheduled physical examinations help to reassure patients that
their physicians are not abandoning them and that their complaints are being taken
seriously.
• Pharmacotherapy

Illness Anxiety Disorder

Illness anxiety disorder is a new diagnosis in the fifth edition of Diagnostic and Statistical
Manual of Mental Disorders (DSM-5) that applies to those persons who are preoccupied
with being sick or with developing a disease of some kind. It is a variant of somatic symptom
disorder (hypochondriasis).

As stated in DSM-5: Most individuals with hypochondriasis are now classified as having
somatic symptom disorder; however, in a minority of cases, the diagnosis of illness anxiety
disorder applies instead. According to DSM-5, somatic symptom disorder is diagnosed when
somatic symptoms are present, whereas in illness anxiety disorder, there are few or no
somatic symptoms and persons are "primarily concerned with the idea they are ill." The
diagnosis may also be used for persons who do, in fact, have a medical illness but whose
anxiety is out of proportion to their diagnosis and who assume the worst possible outcome
imaginable.

Epidemiology

The prevalence of this disorder is unknown aside from using data that relate to
hypochondriasis, which gives a prevalence of 4 to 6 percent in a general medical clinic
population.

Etiology

• The etiology is unknown.


• The social learning model described for somatic symptom disorder may apply to this
disorder as well.
• The psychodynamic school of thought is also similar to somatic symptom disorder.
• Aggressive and hostile wishes toward others are transferred into minor physical
complaints or the fear of physical illness.
• The anger of patients with illness anxiety disorder, as in those with hypochondriasis,
originates in past disappointments, rejections, and losses.
• Similarly, the fear of illness is also viewed as a defense against guilt, a sense of innate
badness, an expression of low self-esteem, and a sign of excessive self-concern.
• The feared illness may also be seen as punishment for past either real or imaginary
wrongdoing.
• The nature of the person's relationships to significant others in his or her past life
may also be significant.
• A parent who died from a specific illness, for example, might be the stimulus for the fear of developing that illness in the offspring of that parent.
• The type of the fear may also be symbolic of unconscious conflicts that are reflected
in the type of illness of which the person is afraid or the organ system selected (e.g.,
heart, kidney).

Diagnostic Criteria

A. Preoccupation with having or acquiring a serious illness.

B. Somatic symptoms are not present or, if present, are only mild in intensity. If another
medical condition is present or there is a high risk for developing a medical condition (e.g.,
strong family history is present), the preoccupation is clearly excessive or disproportionate.

C. There is a high level of anxiety about health, and the individual is easily alarmed about
personal health status.

D. The individual performs excessive health-related behaviors (e.g., repeatedly checks his or
her body for signs of illness) or exhibits maladaptive avoidance (e.g., avoids doctor
appointments and hospitals).

E. Illness preoccupation has been present for at least 6 months, but the specific illness that
is feared may change over that period of time.

F. The illness-related preoccupation is not better explained by another mental disorder, such as somatic symptom disorder, panic disorder, generalized anxiety disorder, body dysmorphic disorder, obsessive-compulsive disorder, or delusional disorder, somatic type.

Specify whether:

Care-seeking type: Medical care, including physician visits or undergoing tests and
procedures, is frequently used.

Care-avoidant type: Medical care is rarely used.

Course and Prognosis


Because the disorder is only recently described, there are no reliable data about the
prognosis. One may extrapolate from the course of somatic symptom disorder, which is
usually episodic.

Treatment

• As with somatic symptom disorder, patients with illness anxiety disorder usually
resist psychiatric treatment.
• Group psychotherapy
• Other forms of psychotherapy, such as individual insight-oriented psychotherapy, behavior therapy, cognitive therapy, and hypnosis, may be useful.
• Pharmacotherapy may be of help in alleviating the anxiety generated by the fear that
the patient has about illness, especially if it is one that is life-threatening; but it is
only ameliorative and cannot provide lasting relief.

Pain Disorder

[Refer KAPLAN, AHUJA AND UG NOTES]


MODULE 5: SEXUAL DISORDERS AND PERSONALITY DISORDERS

UNIT 1: SEXUAL RESPONSE CYCLE, SEXUAL DYSFUNCTIONS: DELAYED EJACULATION, ERECTILE DISORDER, FEMALE ORGASMIC DISORDER, FEMALE SEXUAL INTEREST/AROUSAL DISORDER, GENITO-PELVIC PAIN OR PENETRATION DISORDER, MALE HYPOACTIVE SEXUAL DESIRE DISORDER, PREMATURE EJACULATION.

Sexual Dysfunction

According to ICD-10, sexual dysfunction refers to a person's inability "to participate in a
sexual relationship as he or she would wish." The essential features of sexual dysfunctions
are an inability to respond to sexual stimulation, or the experience of pain during the sexual
act. Dysfunction can be defined by disturbance in the subjective sense of pleasure or desire
usually associated with sex, or by the objective performance. In the (DSM-5), the sexual
dysfunctions include;

• male hypoactive sexual desire disorder,


• female sexual interest/arousal disorder,
• erectile disorder,
• female orgasmic disorder,
• delayed ejaculation,
• premature (early) ejaculation,
• genito-pelvic pain/penetration disorder,
• substance/medication induced sexual dysfunction,
• other specified sexual dysfunction, and
• unspecified sexual dysfunction.

Sexual dysfunctions are diagnosed only when they are a major part of the clinical picture. If
more than one dysfunction exists, they should all be diagnosed. Sexual dysfunctions can be:

• lifelong or acquired,
• generalized or situational,
• can result from psychological factors, physiological factors, combined factors, and
numerous stressors including prohibitive cultural mores, health and partner issues,
and relationship conflicts.
• Sexual dysfunctions due to a general medical condition, substance use, or adverse
effects of medication.

In DSM-5, specification of the severity of the dysfunction is indicated by noting whether the
patient's distress is mild, moderate, or severe.

Sexual dysfunctions are frequently associated with other mental disorders, such as
depressive disorders, anxiety disorders, personality disorders, and schizophrenia. Sexual
dysfunctions are usually self-perpetuating, with the patients increasingly subjected to
ongoing performance anxiety and a concomitant inability to experience pleasure.

[Refer Kaplan & Sadock, pg no. 575- 599]

[Refer Ahuja]
MODULE 6: SUBSTANCE RELATED DISORDERS AND NEUROCOGNITIVE DISORDERS

UNIT 1: ALCOHOL, OPIOID, CANNABIS

Alcohol- Related Disorder

Alcoholism is among the most common psychiatric disorders. Alcohol is a potent drug that
causes both acute and chronic changes in almost all neurochemical systems. Thus, alcohol
abuse can produce serious temporary psychological symptoms including depression,
anxiety, and psychoses. Long-term, escalating levels of alcohol consumption can produce
tolerance as well as such intense adaptation of the body that cessation of use can
precipitate a withdrawal syndrome usually marked by insomnia, evidence of hyperactivity of
the autonomic nervous system, and feelings of anxiety. Therefore, in an adequate
evaluation of life problems and psychiatric symptoms in a patient, the clinician must
consider the possibility that the clinical situation reflects the effects of alcohol.

Prevalence of Drinking

According to recent data published by the World Health Organization (WHO), the total per
capita consumption of alcohol by individuals above 15 years of age is 6.2 L of pure alcohol
per year, which equals 13.5 g of pure alcohol per day. However, there is a wide variation
between the WHO regions and member states. Nearly 5.1% of the global burden of disease
is attributable to alcohol consumption, and it causes nearly 3.3 million deaths every year.
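
The daily figure follows from the annual one by simple unit conversion, assuming the standard density of ethanol (about 0.789 g/mL, a value not given in the text); a quick check:

```python
# Quick check that 6.2 L of pure alcohol per year ~= 13.5 g per day.
# Assumes ethanol density of ~0.789 g/mL (not stated in the source text).

litres_per_year = 6.2
ethanol_density_g_per_ml = 0.789

grams_per_year = litres_per_year * 1000 * ethanol_density_g_per_ml  # ~4,892 g
grams_per_day = grams_per_year / 365
print(round(grams_per_day, 1))  # ~13.4, consistent with the quoted ~13.5 g/day
```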

The National Mental Health Survey of India 2015–16 found the prevalence of alcohol use disorders (AUDs) to be 9% in adult men. In India, the alcohol-attributable fraction (AAF) of all-cause deaths was found
to be 5.4%. Around 62.9% of all the deaths due to liver cirrhosis were attributable to alcohol
use.

Alcohol use is quite common in India, both in rural and urban areas, with prevalence rates across various studies varying from 23% to 74% in males. Although alcohol use is less common in females, prevalence rates of 24% to 48% have been reported among females in certain sections and communities.
At any time, two of three men are drinkers, with a ratio of persisting alcohol intake of
approximately 1.3 men to 1.0 women, and the highest prevalence of drinking from the
middle or late teens to the mid-20s.

Men and women with higher education and income are most likely to imbibe, and, among
religious denominations, Jews have the highest proportion who consume alcohol but among
the lowest rates of alcohol dependence.

About 200,000 deaths each year are directly related to alcohol abuse. The common causes
of death among persons with the alcohol-related disorders are suicide, cancer, heart
disease, and hepatic disease.

Alcohol use and alcohol-related disorders are associated with about 50 percent of all
homicides and 25 percent of all suicides. Alcohol abuse reduces life expectancy by about 10 years, and alcohol leads all other substances in substance-related deaths.

Comorbidity

• Other substance related disorder (multiple substance abuse)


• Anti-social personality disorder
• Mood disorder
• Anxiety disorder
• Suicide

Etiology

Psychological theories

• Alcohol-related disorders develop when alcohol is used to reduce tension, increase feelings of power, and decrease the effects of psychological pain.
• The psychological theories are built, in part, on the observation among non-alcoholic
people that the intake of low doses of alcohol in a tense social setting or after a
difficult day can be associated with an enhanced feeling of well-being and an
improved ease of interactions. In high doses, especially at falling blood alcohol levels,
however, most measures of muscle tension and psychological feelings of
nervousness and tension are increased. Thus, tension-reducing effects of this drug
might have an impact most on light to moderate drinkers or add to the relief of
withdrawal symptoms, but play a minor role in causing alcoholism.

Psychodynamic theories

• Fixation at the oral stage of psychosexual development.

Behavioural theories

• Expectations about the rewarding effects of drinking, cognitive attitudes toward responsibility for one's behavior, and subsequent reinforcement after alcohol intake all contribute to the decision to drink again after the first experience with alcohol and to continue to imbibe despite problems.

Sociocultural theories

• Sociocultural theories are often based on extrapolations from social groups that
have high and low rates of alcoholism.
• Environmental events, presumably including cultural factors, account for as much as
40 percent of the alcoholism risk.

Childhood history

• A history of AUD in one or both parents.


• Children at high risk for alcohol-related disorders have been found to possess, on
average, a range of deficits on neurocognitive testing, low amplitude of the P300
wave on evoked potential testing, and a variety of abnormalities on
electroencephalography (EEG) recordings.
• A childhood history of attention-deficit/ hyperactivity disorder (ADHD), conduct
disorder, or both, increases a child's risk for an alcohol-related disorder as an adult.
• Personality disorders, especially antisocial personality disorder, as noted earlier, also
predispose a person to an alcohol-related disorder.

Genetic theories

• A threefold to fourfold increased risk for severe alcohol problems is seen in close
relatives of alcoholic people.
• A higher concordance rate (about 60%) for alcoholism in monozygotic than in dizygotic twins.
• Adoption studies show a significantly enhanced risk for alcoholism in the offspring of alcoholic parents, even when the children had been separated from their biological parents close to birth and raised without any knowledge of the problems within the biological family.
• Animal studies support the importance of a variety of yet-to-be-identified genes in
the free-choice use of alcohol, subsequent levels of intoxication, and some
consequences.

EFFECTS OF ALCOHOL
• Effects on the brain
• Physiological effects
Alcohol Induced Disorder

• Alcohol Use Disorder


• Alcohol Intoxication
• Alcohol Withdrawal
• Withdrawal Seizures
• Delirium
• Alcohol Induced Persisting Dementia
• Alcohol Induced Persisting Amnestic Disorder
• Alcohol Induced Psychotic Disorder
• Alcohol Induced Mood Disorder
• Alcohol Induced Anxiety Disorder
• Alcohol Induced Sexual Dysfunction
• Alcohol Induced Sleep Disorder
• Unspecified Alcohol Related Disorder
• Idiosyncratic Alcohol Intoxication
• Other Alcohol Related Neurological Disorders
• Fetal Alcohol Syndrome

[Refer Kaplan & Sadock, pg no: 628-634]

Prognosis

Between 10 and 40 percent of alcoholic persons enter some kind of formal treatment
program during the course of their alcohol problems. A number of prognostic signs are
favorable. First is the absence of preexisting antisocial personality disorder or a diagnosis of
other substance abuse or dependence. Second, evidence of general life stability with a job,
continuing close family contacts, and the absence of severe legal problems also bodes well
for the patient. Third, if the patient stays for the full course of the initial rehabilitation
(perhaps 2 to 4 weeks), the chances of maintaining abstinence are good. The combination of
these three attributes predicts at least a 60 percent chance for 1 or more years of
abstinence.
Alcoholic persons with severe drug problems (especially intravenous drug use or cocaine or
amphetamine dependence) and those who are homeless may have only a 10 to 15 percent
chance of achieving 1 year of abstinence, however.

Treatment

• Intervention- The goal in the intervention step, which has also been called
confrontation, is to break through feelings of denial and help the patient recognize
the adverse consequences likely to occur if the disorder is not treated. Intervention
is a process aimed at maximizing the motivation for treatment and continued
abstinence.
• Family- The family can be of great help in the intervention. Family members must
learn not to protect the patient from the problems caused by alcohol; otherwise, the
patient may not be able to gather the energy and the motivation necessary to stop
drinking. In addition, during the intervention stage, the family can suggest that the
patient meet with persons who are recovering from alcoholism, perhaps through AA,
and family members can meet with groups, such as Al-Anon, that reach out to family
members.
• Detoxification- The essential first step in detoxification is a thorough physical
examination. In the absence of a serious medical disorder or combined drug abuse,
severe alcohol withdrawal is unlikely. The second step is to offer rest, adequate
nutrition, and multiple vitamins, especially those containing thiamine.
• Rehabilitation- For most patients, rehabilitation includes three major components: (1) continued efforts to increase and maintain high levels of motivation for abstinence; (2) work to help the patient readjust to a lifestyle free of alcohol; and (3) relapse prevention.
• Counseling
• Psychotherapy
• Medications- Benzodiazepines, Naltrexone, Disulfiram and Acamprosate.
• Alcoholics Anonymous (AA)- Members of AA have help available 24 hours a day,
associate with a sober peer group, learn that it is possible to participate in social
functions without drinking, and are given a model of recovery by observing the
accomplishments of sober members of the group.

Opioid-Related Disorder

[REST LEARN FROM KAPLAN]


PSYCHOMETRY
MODULE 2: METHODS OF ASSESSMENT

UNIT 1: BEHAVIOURAL ASSESSMENT, DATA COLLECTION METHODS: BRIEF OVERVIEW

Behavioural Assessment

Behavioural assessment concentrates on behaviour itself rather than on underlying traits,
hypothetical causes, or presumed dimensions of personality. It offers a practical alternative
to projective tests, self-report inventories, and other unwieldy techniques aimed at global
personality assessment. Typically, behavioural assessment is designed to meet the needs of
therapists and their clients in a quick and uncomplicated manner. Behavioural assessment
strategies tend to be simple, direct, behaviour-analytic, and continuous with treatment.
Behaviour therapists use a wide range of modalities to evaluate their clients. The methods
of behavioural assessment include behavioural observations, self-reports, parent ratings,
staff ratings, sibling ratings, judges’ ratings, teacher ratings, therapist ratings, nurses’
ratings, physiological assessment, biochemical assessment, biological assessment,
structured interviews, semi-structured interviews, and analogue tests.

Hersen and Bellack (1988) list 286 behavioural tests used in widely diverse problems and
disorders in children, adolescents, adults, and the geriatric population. In recent years, a
new form of behavioural assessment known as ecological momentary assessment has
become increasingly popular. Behavioural assessment is often—but not always— an integral
part of behaviour therapy. In many cases, the nature of behavioural assessment is dictated
by the procedures and goals of behaviour therapy. At present, the specific techniques of
behaviour therapy can be classified into four overlapping categories (Johnston, 1986):
exposure-based methods, cognitive behaviour therapies, self-control procedures, and social
skills training. Behavioural assessment is used in all of these approaches.

The Individuals with Disabilities Education Act (IDEA) Amendments mandate the use of
functional behavioural assessment (FBA) for disabled students. In behavioural assessment,
the observer plays a more active role in recording the data and, therefore, is much more
likely to make errors. Some of the problems include reactivity, drift, and expectancies
(Kazdin, 2004). [Kaplan & Saccuzzo, p. 199- 202]
Kratochwill et al. (1999) describe various behavioural assessment procedures as lying on a
continuum from direct to indirect assessment. Direct procedures for behavioural
assessment include self-monitoring, physiological recording, analogue assessment, and
direct observation and counting of discrete behavioural events.

Methods of Behavioural Assessment

1) Interviewing

Behavioural interviews focus on problem-solving strategies and define presenting
problems in terms of people's actions rather than their states, traits, or psychodynamic
conflicts. Steps in Behavioural Interviewing are:

• Identify the problem and specify target behaviours.

• Identify and analyze relevant environmental factors.

• Develop a plan for intervention.

• Implement this plan.

• Evaluate the outcomes of treatment.

• Modify treatment as needed and re-evaluate outcomes.

The behavioural interview seeks to minimize the amount of inference used to obtain
data and looks primarily for current circumstances that trigger a behaviour. This search,
which varies from client to client, is the principal strength of behavioural interviewing.
Nevertheless, rapport with the child and with people who are influential in the child’s
life remains important. The interview gives practitioners an opportunity to interact at a
more personal level and to achieve an alliance and rapport less well-afforded by formal,
standardized testing. The behavioural interview is also eminently practical.

The lack of a standard protocol for the interview considerably limits the reliability and
validity that can be obtained. The consistency of behaviour over time is a concern in
developing interventions, and interview techniques without quantifiable outcomes do
not assess it accurately. This limitation has slowed research on behavioural
interviewing.

2) Self-report Inventories

Another procedure for behavioural assessment relies on an individual's responses to a
set of standardized questions and has an objective scoring system and a normative
reference group. This self-report of cognitions, attitudes, feelings, and behaviour—
concurrent with the collection of interview data, ratings by others, and direct
observation of behaviour—introduces into the evaluation an additional component that
is objective and typically practical as well. For the most part, information about internal
experiences is accessible only through clients themselves. Thus, a self-report inventory is
particularly important in diagnosing anxiety disorders and other conditions with a strong
internal component.

Early in the development of behavioural assessment, practitioners avoided such
measures, asserting that they were antithetical to the concept of behavioural
assessments because only observable behaviours were regarded as acceptable data
(Ollendick & Greene, 1998). Now, self-report measures have attained widespread use in
behavioural assessment (Groth-Marnat, 1990; Kratochwill et al., 1999; Ollendick &
Greene, 1998). This change has occurred for two primary reasons. Test developers have
constructed a wide range of self-report inventories specifically for use in behavioural
assessment. In addition, reports of feelings, thoughts, and other covert activities have
gained recognition as behaviour and indeed have become central to cognitive-
behavioural approaches to assessment and intervention. Child self-report inventories
have emerged in response to a recognition that children’s perceptions of their
environment, and of their behaviour and its consequences, are important in their own
right and in behaviour change. Kratochwill et al. (1999) argue that, in behavioural
assessment, self-report scales are most useful for gathering data on a child’s cognitions
and subjective experiences, information that is often unobtainable through any other
means. Self-report scales can be highly specific and tied to a narrow set of concepts or
they can be of the omnibus variety, assessing a variety of constructs. Example: The
Behaviour Assessment System for Children (BASC) and the Child Behaviour Checklist
(CBCL), the Revised Children’s Manifest Anxiety Scale or RCMAS (Reynolds & Richmond,
1985).

3) Behaviour Rating Scales

Behaviour rating scales are typically omnibus, broad band scales: They provide for the
assessment of a wide range of behaviours in children and youth. Kratochwill et al. (1999)
and Ollendick and Greene (1998) together describe the following advantages of broad-based
behaviour rating scales:

i. Provide an overall description of the child’s behaviour.


ii. Elicit information on problems that may be overlooked in a behavioural interview
and during direct observation.
iii. Provide results that are easily quantified.
iv. Allow derivation or specification of clusters of behaviour that commonly co-occur
for a particular child.
v. Assist in the empirical derivation of behavioural clusters common among groups
of referred children.
vi. Provide a reliable basis for comparing pre- and post-treatment behaviour and
evaluating outcomes.
vii. Are a convenient means of obtaining data on the social validity of outcomes.
viii. Typically assess broad dimensions such as school functioning, as well as narrow
dimensions such as anxiety.
ix. Are a cost-effective, convenient, and minimally intrusive means of collecting
objective data.
x. Are useful in matching a child with a specific treatment.
xi. Assist in making differential diagnoses among disorders, not just in detecting the
presence of abnormal frequencies of behaviour.
xii. Permit normative, developmentally sensitive comparisons through empirical
methods rather than through clinical judgment or other subjective procedures.
xiii. Can be evaluated empirically as diagnostic tests using familiar psychometric
concepts such as reliability and validity.
xiv. Provide a clearly systematic method that can be used easily in the same way by
numerous clinicians and in numerous settings.
xv. May allow a prioritization of target behaviours, as with the BASC.

Despite these strengths and advantages, rating scales do present some potential
problems and should not be used in isolation. Behaviour ratings are impressionistic,
holistic ratings provided by a respondent (typically a parent or teacher) who may be
biased. The ratings received are thus summary judgments, albeit systematic and
standardized, made by someone knowledgeable about the child, and appropriate for a
comparison to a common referent, the ratings of the norming sample. In addition,
children behave differently under different circumstances and under the direction of
different individuals, even if their settings are similar. Practitioners should obtain ratings
from multiple respondents when possible in order to assess the specificity and
generalizability of the behavioural patterns detected.

Kratochwill et al. (1999) criticize behaviour rating scales, regarding them as designed to
detect the presence of negative behaviours (i.e., behavioural excesses and deficits) but
not positive behaviours or assets of the child.

4) Direct Observation and Recording


Direct observation and recording, typically through a counting procedure, is another
widely used procedure among behaviourally oriented practitioners. Ollendick &
Greene (1998) hail direct observation of a child’s behaviour in the natural
environment as the hallmark of child behavioural assessment. This form of
observation is also a unique contribution to assessment from the behavioural school
of psychology.
Characteristics:
• Behaviour is observed in a natural setting.
• Behaviour is recorded or coded as it occurs.
• Impartial, objective observers record behaviour.
• Behaviour is described in clear, crisp terms, requiring little or no inference by the
observer.
Direct observation can occur with or without a standard behaviour checklist or
classification scheme but is easier to accomplish using a standardized coding system,
such as the BASC Student Observation Scale (SOS). A standard format enhances
training, objectivity, and accuracy, but reduces flexibility. To promote objectivity, the
observer should be blind to the presence of any intervention plans that are in place.
Nevertheless, standardized codes for behaviour such as those contained in the BASC
SOS are the recommended practice (e.g., Kratochwill et al., 1999). Observers should
be as unobtrusive as possible or they may disrupt the natural setting and alter
naturally occurring behaviour patterns. Observation and coding of behaviour in
multiple settings (e.g., in two or more classrooms and with different teachers) also
are preferred because this method assists in evaluating the generalizability versus
the specificity of the behaviours seen. Direct observation of behaviour is the least
inferential of psychological assessment procedures, but issues of reliability are a
major concern—one that is allayed in part by multiple ratings in several settings.
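Because direct observation usually reduces to counting coded behavioural events, a minimal sketch of event and interval recording is given below. The behaviour codes, the observation log, and the interval count are invented for illustration and are not part of any standardized coding system such as the BASC SOS.

    from collections import Counter

    # Hypothetical observation log: (interval_number, behaviour_code) pairs
    # recorded by an observer using a small pre-defined code list.
    # Codes are illustrative only: OT = off-task, AG = aggression, OS = out of seat.
    observations = [
        (1, "OT"), (1, "OS"), (2, "OT"), (3, "AG"),
        (4, "OT"), (4, "OT"), (5, "OS"), (6, "OT"),
    ]
    n_intervals = 6  # total number of timed observation intervals (assumed)

    # Frequency count: how often each coded behaviour occurred overall.
    frequency = Counter(code for _, code in observations)

    # Partial-interval measure: proportion of intervals in which each
    # behaviour occurred at least once.
    proportion = {
        code: len({i for i, c in observations if c == code}) / n_intervals
        for code in frequency
    }

    print(frequency)   # Counter({'OT': 5, 'OS': 2, 'AG': 1})
    print(proportion)  # {'OT': 0.667, 'OS': 0.333, 'AG': 0.167} (rounded)

Agreement between two such observers' tallies, collected in several settings, is exactly the reliability concern the paragraph above raises.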
5) Self-monitoring

Self-monitoring is a form of direct observation in which people monitor or observe
their own behaviour and then record it. An enormous literature in this area shows
that people are capable of being trained to observe and record their behaviour
accurately, if motivated to do so. It is very important to define the relevant
behaviours clearly and with much specificity. A small number of behaviours should
be specified as well, or the child may become overwhelmed or attend only to a
subset of the behaviours given to monitor. Self-monitoring can be an effective
means of collecting data on a few important behaviours’ frequency and settings.
Self-monitoring is obviously a form of self-report but is distinguished from the self-
report scales (discussed previously) by the contemporaneous nature of the
recording. In self-monitoring, the child codes or records the behaviour as it occurs,
whereas self-report scales are retrospective accounts of more numerous and varied
behaviours, which are typically less specified than behaviours subjected to self-
monitoring. There exist only a few standardized protocols for self-monitoring.

6) Analogue Observation
Analogue observation is a method of direct observation, but it occurs in a contrived,
carefully structured setting, designed specifically for the assessment. By contrast,
direct observation occurs in a naturalistic setting. In analogue assessment or
observation, after the setting has been structured, direct observation of behaviour
follows, using many other principles of observation.

7) Psychophysiological Assessment
Psychophysiological assessment is an important form of behavioural assessment that
entails a direct recording of physiologically observed changes in the body, such as
increases in heart rate or surges in brain activity. Physiological responses are
recorded in the presence of a specific stimulus, such as a flashing light, or during a
specific behavioural episode, such as a petit mal seizure. Standardized protocols are
commonplace in psychophysiological assessment. Electronic equipment, such as
electroencephalographs, electromyographs, and electrodermal measuring devices,
are also common and require careful calibration. Psychophysiological assessment is a
highly specialized area of behavioural assessment but is a powerful technology
useful in diagnosing many disorders.

A practitioner can use the SDH (Structured Developmental History) as a structured interview
with a parent or other caregiver, or a questionnaire that can be sent home for completion or
filled out in the practitioner’s office at school. When using the SDH as a questionnaire, the
practitioner should review the completed form carefully in case clarification or elaboration
is needed from the respondent.

UNIT 2: OBSERVATION – PURPOSE, TYPES-PARTICIPANT & NON-PARTICIPANT

[Refer A. K. Singh, p. 259]

UNIT 3: SURVEY- QUESTIONNAIRE- OPEN ENDED, CLOSED ENDED, FUNNEL TYPE, MAILED
QUESTIONNAIRES, INVENTORIES.

Questionnaire [Refer A. K. Singh, p. 247-252]


Survey [Refer A. K. Singh, p. 383-387]

Funnel Type Questionnaire

This technique involves starting with general questions and then drilling down to progressively
more specific points. Usually, this will involve asking for more and more detail at each level.

Inventories

The term “test” most frequently refers to devices for measuring abilities or qualities for
which there are authoritative right and wrong answers. Such a test may be contrasted with
a personality inventory, for which it is often claimed that there are no right or wrong
answers. At any rate, in taking what often is called a test, the subjects are instructed to do
their best; in completing an inventory, they are instructed to represent their typical
reactions. A distinction also has been made that in responding to an inventory the subjects
control the appraisal, whereas in a test they do not. If a test is more broadly regarded as a
set of stimulus situations that elicit responses from which inferences can be drawn,
however, then an inventory is, according to this definition, a variety of test.

[Refer Ramsay & Reynolds, p. 12- ]

UNIT 4: BEHAVIOURAL RATING SCALE- BROAD BAND AND NARROW BAND SCALES,
CHARACTERISTICS OF RATING SCALES, TYPES OF RATING SCALE- GRAPHIC, NUMERIC,
DESCRIPTIVE AND COMPARATIVE RATING SCALES.

Behavioural Rating Scale

A Behavior Rating Scale (BRS) is a tool that can be used to quantitatively measure
behavior. BRS is one of the oldest assessment tools used in mental health, education, and
research. These scales typically assess problem behaviors, social skills, and emotional
functioning. They are widely employed in the assessment of personality development,
adaptive behavior, and social-emotional functioning. They aid in diagnostic decision making
and in planning treatment and education. Behaviour rating scales are typically omnibus,
broad band scales: They provide for the assessment of a wide range of behaviours.
Kratochwill et al. (1999) and Ollendick and Greene (1998) together describe the following
advantages of broad-based behaviour rating scales:

i. Provide an overall description of the behaviour.


ii. Elicit information on problems that may be overlooked in a behavioural interview
and during direct observation.
iii. Provide results that are easily quantified.
iv. Allow derivation or specification of clusters of behaviour that commonly co-occur
for a particular child.
v. Assist in the empirical derivation of behavioural clusters common among groups
of referred people.
vi. Provide a reliable basis for comparing pre- and post-treatment behaviour and
evaluating outcomes.
vii. Are a convenient means of obtaining data on the social validity of outcomes.
viii. Typically assess broad dimensions such as school functioning, as well as narrow
dimensions such as anxiety.
ix. Are a cost-effective, convenient, and minimally intrusive means of collecting
objective data.
x. Are useful in matching a person with a specific treatment.
xi. Assist in making differential diagnoses among disorders, not just in detecting the
presence of abnormal frequencies of behaviour.
xii. Permit normative, developmentally sensitive comparisons through empirical
methods rather than through clinical judgment or other subjective procedures.
xiii. Can be evaluated empirically as diagnostic tests using familiar psychometric
concepts such as reliability and validity.
xiv. Provide a clearly systematic method that can be used easily in the same way by
numerous clinicians and in numerous settings.
xv. May allow a prioritization of target behaviours, as with the BASC.

Descriptive Rating Scale

A descriptive rating scale does not use numbers but divides the assessment into a series of verbal
phrases to indicate the level of performance. In a descriptive rating scale, each answer
option is elaborately explained for the respondents. A numerical value is not always related
to the answer options in the descriptive rating scale. There are certain surveys, for example,
a customer satisfaction survey, which needs to describe all the answer options in detail so
that every customer has thoroughly explained information about what is expected from the
survey.

Comparative Rating scale

In a comparative rating scale, we make relative judgements against other similar objects. The
respondents under this method directly compare two or more objects and make choices
among them. There are two generally used approaches to comparative rating, viz.,

a) Method of paired comparisons

Under it the respondent can express his attitude by making a choice between two
objects, say between a new flavour of soft drink and an established brand of drink.
Paired-comparison data may be treated in several ways. If there is substantial
consistency, we will find that if X is preferred to Y, and Y to Z, then X will consistently
be preferred to Z. If this is true, we may take the total number of preferences among
the comparisons as the score for that stimulus. It should be remembered that paired
comparison provides ordinal data, but the same may be converted into an interval
scale by the method of the Law of Comparative Judgement developed by L. L.
Thurstone. This technique involves the conversion of frequencies of preferences into
a table of proportions, which are then transformed into a Z matrix by referring to the
table of areas under the normal curve (a minimal computational sketch is given after
this list). J. P. Guilford, in his book Psychometric Methods, has given a procedure
which is relatively easier, known as the Composite Standard Method.

b) Method of rank order

Under this method of comparative scaling, the respondents are asked to rank their
choices. This method is easier and faster than the method of paired comparisons
stated above. For example, with 10 items it takes 45 paired comparisons to complete
the task, whereas the method of rank order simply requires ranking of the 10 items.
The problem of intransitivity (such as A preferred to B, B to C, but C to A) also does
not arise when we adopt the method of rank order. Moreover, a complete ranking at
times is not needed, in which case the respondents may be asked to rank only their
first, say, four choices while the number of overall items involved may be more than
four, say 15 or 20 or more. To secure a simple ranking of all items involved, we simply
total the rank values received by each item. There are methods through which we can
as well develop an interval scale from these data. But there are limitations of this
method. The first is that data obtained through this method are ordinal, and hence
rank ordering is an ordinal scale with all its limitations. Then there may be the problem
of respondents becoming careless in assigning ranks, particularly when there are many
(usually more than 10) items.
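The Case V computation under the Law of Comparative Judgement is mechanical enough to sketch in a few lines. The following is a minimal, simplified illustration with an invented proportion matrix for three objects; it is not Guilford's Composite Standard Method.

    from math import comb
    from statistics import NormalDist

    assert comb(10, 2) == 45  # 10 items require 45 paired comparisons

    # Hypothetical proportion matrix: p[i][j] = proportion of judges
    # preferring object j over object i (values strictly between 0 and 1).
    labels = ["A", "B", "C"]
    p = [
        [0.50, 0.70, 0.90],  # compared against A
        [0.30, 0.50, 0.75],  # compared against B
        [0.10, 0.25, 0.50],  # compared against C
    ]

    nd = NormalDist()

    # Transform each proportion into a unit-normal deviate (the "Z matrix"),
    # i.e., look each proportion up in the normal-curve table.
    z = [[nd.inv_cdf(p[i][j]) for j in range(3)] for i in range(3)]

    # Simplified Case V: each object's scale value is its column mean.
    scale = {labels[j]: sum(z[i][j] for i in range(3)) / 3 for j in range(3)}
    print(scale)  # roughly {'A': -0.60, 'B': -0.05, 'C': 0.65}; C is most preferred

The column means place the objects on an interval scale, which is exactly the gain over the raw ordinal preference counts noted above.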

Others [Refer A. K. Singh, p. 262-288]

UNIT 5: INTERVIEW AND CASE STUDY- STRUCTURED, UNSTRUCTURED, TELEPHONIC INTERVIEWS.

INTERVIEW

An interview is generally a research technique which involves asking open-ended questions
to converse with respondents and elicit data about a subject. In other words, an
interview is a face-to-face situation between the interviewer and the respondent which
intends to elicit some desired information from the latter. It is a social process. The
interviewer in most cases is an expert who intends to understand respondents' opinions
through a well-planned and executed series of questions and answers. Interviews offer
researchers a platform to prompt their participants and obtain inputs in the desired detail.
FUNCTIONS

• Description: providing insights into the interactive quality of social life.

• Exploration: providing insight into the unexplored dimensions of a topic/subject.

There are mainly three types of interview as elucidated below:

• Structured Interview (Patterned/ Formal interview)

Structured interviews are defined as research tools that are extremely rigid in their
operation and allow very little or no scope for prompting the participants to obtain
and analyse results. It is thus also known as a standardized interview. Questions in
this interview are pre-decided according to the required detail of information. It is a
quantitative research method where the interviewer administers a set of prepared
closed-ended questions in the form of an interview schedule, which he/she reads out
exactly as worded. In other words, this is a kind of interview method where already
prepared questions are asked in a set order and answers are recorded in a standardized form.

Advantages:

• Can be used on a large sample of the target population.

• The interview procedure is made easy due to the standardization offered by structured interviews.

• Since the structure of the interview is fixed, it often generates reliable results and is quick to execute.
Disadvantages:

• Limited scope of assessment of obtained results.

• The accuracy of information overpowers the detail of information.

• Respondents are forced to select from the provided answer options.

• Semi-structured Interview

Semi-structured interviews offer a considerable amount of leeway to the researcher
to probe the respondents while maintaining the basic interview structure. Even though it
is a guided conversation between researchers and interviewees, an appreciable
flexibility is offered to the researchers. A researcher can be assured that multiple
interview rounds will not be required in the presence of structure in this type of
research interview. The best application of the semi-structured interview is when the
researcher has limited time to conduct research yet requires detailed information
about the topic.

Advantages:

• Researchers can express the interview questions in the format they prefer, unlike
the structured interview.

• Reliable qualitative data can be collected via these interviews.

Disadvantages:

• Comparing two different answers becomes difficult as the guideline for conducting
interviews is not entirely followed. No two questions will have exactly the same
structure, and the result will be an inability to compare or infer results.

• Unstructured Interview (In-depth/ Informal Interview)

Unstructured interviews are usually described as conversations held with a purpose
in mind – to gather data about the research study. Unstructured interviews do not
use any set questions; instead, the interviewer asks open-ended questions based on
a specific research topic and tries to let the interview flow like a natural
conversation. The interviewer modifies his or her questions to suit the respondent's
specific experiences. In other words, in unstructured interviews there are no pre-
determined questions, nor is there any preset order of the questions.

Advantages:

• More flexible, as questions can be adapted and changed depending on the
respondents' answers. The interview can deviate from the interview schedule.

• The participants can clarify all their doubts about the questions and the researcher
can take each opportunity to explain his/her intention for better answers

• Unstructured interviews generate qualitative data through the use of open
questions. This allows the respondent to talk in some depth, choosing their own
words. This helps the researcher develop a real sense of a person's understanding of
a situation.

• They also have increased validity because it gives the interviewer the opportunity to
probe for a deeper understanding, ask for clarification & allow the interviewee to
steer the direction of the interview etc.

Disadvantages:

• Time-consuming to conduct an unstructured interview and analyze the qualitative data.

• Employing and training interviewers is expensive, and not as cheap as collecting data
via questionnaires. For example, certain skills may be needed by the interviewer.
These include the ability to establish rapport and knowing when to probe.

Other types of interview methods are:

❖ Focus Group Interview

Focus group interview is a qualitative approach where a group of respondents are
interviewed together, used to gain an in-depth understanding of social issues. The
method aims to obtain data from a purposely selected group of individuals rather
than from a statistically representative sample of a broader population. The role of
the interview moderator is to make sure the group interact with each other and do
not drift off-topic.

Advantages:

• Group interviews generate qualitative narrative data through the use of open
questions. This allows the respondents to talk in some depth, choosing their own
words. This helps the researcher develop a real sense of a person’s understanding of
a situation. Qualitative data also includes observational data, such as body language
and facial expressions.

• They also have increased validity because some participants may feel more
comfortable being with others as they are used to talking in groups in real life

Disadvantages:

• Group interviews are less reliable as they use open questions and may deviate from
the interview schedule making them difficult to repeat.

• Group interviews may sometimes lack validity as participants may lie to impress the
other group members. They may conform to peer pressure and give false answers.

❖ Methods of Research Interview

Research interviews may be conducted as personal (face-to-face) interviews, telephonic
interviews, or e-mail/web page interviews.

• Personal Interview
Personal interviews are one of the most used types of interviews, where the
questions are asked directly to the respondent in person. For this, a researcher can
use a guide or an online survey to take note of the answers. A researcher can design
his/her survey in such a way that they take notes of the comments or points of view
that stand out from the interviewee. The advantages are:

• Higher response rate.

• When the interviewees and respondents are face-to-face, there is a way to adapt the
questions if this is not understood.

• More complete answers can be obtained if there is doubt on both sides or a


particular information is detected that is remarkable.

• The researcher has an opportunity to detect and analyze the interviewee’s body
language at the time of asking the questions and taking notes about it

Disadvantages

• They are time-consuming and extremely expensive.

• They can generate distrust on the part of the interviewee, since they may be self-
conscious and not answer truthfully.

• Contacting the interviewees can be a real problem

• Telephonic Interview

Telephonic interviews are widely used and easy to combine with online surveys to
carry out research effectively

Advantages:

• To find the interviewees it is enough to have their telephone numbers on hand.

• They are usually lower cost.

• The information is collected quickly


Disadvantages:

• Many times, researchers observe that people do not answer phone calls.

• Researchers also find that people simply do not want to answer.

• E-mail/Web page Interview

Online research is growing more and more because consumers are migrating to a
more virtual world and it is best for each researcher to adapt to this change. The
increase in people with Internet access has made it popular that interviews via email
or web page stand out among the types of interviews most used today. The
advantages are:

• Speed in obtaining data

• The respondents respond according to their time, at the time they want and in the
place they decide.

• Online surveys can be mixed with other research methods or with some of the
previous interview models. They are tools that can complement the project perfectly.

• A researcher can use a variety of questions and logic, and create graphs and reports
immediately.

The five steps in interview preparation are outlined below:

1) Read background materials:

Read and understand as much background information about the interviewees and their
organization as possible. As you read through this material, be particularly sensitive to the
language the organizational members use in describing themselves and their organization.
Another benefit of researching the organization is that it maximizes the time you spend in
interviews; without such preparation you may waste time asking general background
questions.

2) Establish interviewing objectives:

Use the background information you gathered, as well as your own experience, to establish
interview objectives.

3) Decide whom to interview:

When deciding whom to interview, include key people at all levels who will be affected by
the system in some manner. Strive for balance so that as many users' needs as possible are
addressed.

4) Prepare the interviewee:

Prepare the person to be interviewed by calling ahead or sending an email message,
allowing the interviewee time to think about the interview. Interviews should be kept to 45
minutes or an hour at the most; if they run longer, it is likely that the interviewees will
resent the intrusion, whether or not they articulate their resentment.

5) Decide on question types and structure:

Proper questioning techniques are the heart of interviewing. Questions have some basic
forms you need to know. The two basic question types are open-ended and closed. Each
question type can accomplish something a little different from the other, and each has
benefits and drawbacks.

Sources of error in Interviewing are:

i. Interview Validity

Many sources of interview error come from the extreme difficulty we have in making
accurate, logical observations and judgments. It has been demonstrated that
interviewers form an impression of the interviewee within the first minute or so and
spend the rest of the interview trying to confirm that impression. People apparently
tend to generalize judgments from a single limited experience (Li, Wang, & Zhang,
2002). In the interview, halo effects occur when the interviewer forms a favorable or
unfavorable early impression. The early impression then biases the remainder of the
judgment process (Howard & Ferris, 1996). Thus, with an early favorable impression
or positive halo, the interviewer will have difficulty seeing the negatives. Similarly,
with an early negative halo. People tend to judge on the basis of one outstanding
characteristic. Hollingworth in 1922 first called this error general standoutishness.
One prominent characteristic can bias the interviewer’s judgments and prevent an
objective evaluation. In an early classic paper, Burtt (1926) noted the tendency of
interviewers to make unwarranted inferences from personal appearance.

(first-impression effects, halo effect, standoutishness, judgement of personal appearance)

ii. Interview Reliability

Reliability refers to the stability, dependability, or consistency of test results. For
interview data, the critical questions about reliability have centered on inter-interviewer
agreement (agreement between two or more interviewers). As with the validity studies,
reliability coefficients for inter-interviewer agreement vary widely. Interview reliability
fluctuates in part because interview procedures vary considerably in their degree of
standardization in terms of interview development, administration, and/or scoring.
Simply put, different interviewers look for different things, an argument echoed by
others. (A minimal sketch of quantifying such agreement is given below, after this
list of error sources.)

iii. Attitude of the interviewer

iv. Incomprehensibility of the questions asked

v. Lack of warmth in the situation of interview

vi. Lack of motivation in the respondents

vii. Duration of interview

(Refer A. K. Singh, pg no. 252-257)
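As the sketch promised above, the snippet below illustrates one common way of quantifying inter-interviewer agreement: raw percent agreement and Cohen's kappa, which corrects for chance agreement. The source text does not prescribe a particular statistic, and the ratings are invented.

    from collections import Counter

    # Hypothetical categorical judgments by two interviewers on the same 10 interviewees.
    rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
    rater2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
    n = len(rater1)

    # Observed agreement: proportion of identical judgments.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n

    # Chance agreement: for each category, the product of the two raters'
    # marginal proportions, summed over categories.
    m1, m2 = Counter(rater1), Counter(rater2)
    p_e = sum((m1[c] / n) * (m2[c] / n) for c in set(rater1) | set(rater2))

    kappa = (p_o - p_e) / (1 - p_e)
    print(f"observed agreement = {p_o:.2f}, kappa = {kappa:.2f}")  # 0.80, 0.58

Low standardization across interviewers shows up directly as low values of such agreement indices.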

CASE STUDY

Case study is a type of non-experimental/descriptive research. It is an in-depth study of one
situation or case, which may be a single subject, group, or event (Goode & Hatt, 1981; Best &
Kahn, 1992). In other words, case study is a form of qualitative descriptive research that is
used to look at individuals, a small group of participants, or a group as a whole. It
incorporates a detailed contextual analysis of events or conditions and their relationships. It
is a clinical study. Frédéric Le Play used this method for the first time.

Characteristics of a case study are as follows:

i. The case study is an approach which views a social unit as a whole.

ii. The social unit need not be an individual only but it may be a family, a social group, a
social institution or a community.

iii. In case study the unitary character of the social unit is maintained. It means that the
social unit, whatever it is, is studied as a whole.

iv. In case study the researcher not only tries to explain the complex behavioural
pattern of the social unit but also tries to locate those factors which have given rise
to such complex behavioural pattern (What & why).

v. Since case study is a descriptive study, no variables are manipulated here.

vi. In case study the researcher gathers data usually through methods of observation,
interview, questionnaire, opinionnaire and other psychological tests. Analysis of
recorded data from newspapers, courts, government agencies and other similar
sources is not uncommon.

Based upon the number of individuals studied case study is divided into:

1) Individual case study

2) Community case study

Based upon the purpose, case study is categorized as:

1) Deviant case analysis

2) Isolated clinical case analysis


Apart from this there are various other types of case study as elaborated below:

a. Explanatory Case Study

The explanatory case study focuses on an explanation for a question or a phenomenon.
A case study with a person or group would not be explanatory, as with humans, there
will always be variables. There are always small variances that cannot be explained.

b. Exploratory Case Study

A type of case study that is used to explore those situations in which the
intervention being evaluated has no single set of outcomes. The case study's goal is
to prove that further investigation is necessary.

c. Descriptive Case Study

A descriptive case study describes the intervention or phenomenon and the real-life
context in which it occurred.

d. Multiple Case Study

It enables the researcher to explore differences within and between the cases.

e. Intrinsic Case Study

An intrinsic case study is the study of a case wherein the subject itself is the primary
interest. The "Genie" case is an example of this. The study wasn't so much about
psychology, but about Genie herself, and how her experiences shaped who she was.

f. Instrumental Case Study

It is used to accomplish something rather than to understand a particular situation.
It provides insight into an issue or helps to redefine a theory.

g. Collective Case Study

Similar in nature to multiple case studies.


The steps in conducting a case study are as follows:

1. Plan

2. Develop instruments

3. Train data collectors

4. Collect data

5. Analyze data

6. Disseminate findings

(Refer A. K. Singh, pg no. 387-390)

Study ppts
MODULE 3: TEST CONSTRUCTION

Kaufman and Kaufman (1983) provide a good model of the test definition process. In
proposing the Kaufman Assessment Battery for Children (K-ABC), a new test of general
intelligence in children, the authors listed six primary goals that define the purpose of the
test and distinguish it from existing measures:

1. Measure intelligence from a strong theoretical and research basis

2. Separate acquired factual knowledge from the ability to solve unfamiliar problems

3. Yield scores that translate to educational intervention

4. Include novel tasks

5. Be easy to administer and objective to score

6. Be sensitive to the diverse needs of preschool, minority group, and exceptional children
(Kaufman & Kaufman, 1983).

A psychological test is a standardized procedure to measure quantitatively or qualitatively
one or more than one aspect of a trait by means of a sample of verbal or non-verbal
behaviour.

According to Anastasi & Urbina (1997), a psychological test is “essentially an objective and
standardized measure of a sample of behaviour.”

UNIT 1: STEPS IN TEST CONSTRUCTION (Brief overview)


Defining the Test

Selecting a Scaling Method

Constructing the Items

Testing the Items

Revising the Test


Publishing the Test

(Refer A. K. Singh, pg no. 21-25)

UNIT 2: SCALING METHODS- METHOD OF EQUAL APPEARING INTERVALS BY THURSTONE, METHOD OF SUMMATED RATING BY LIKERT, CUMULATIVE SCALING BY GUTTMAN

(Refer A. K. Singh, pg no. 326-336)

UNIT 3: CONSTRUCTING THE ITEMS- MEANING AND TYPES OF ITEMS, GUIDELINES FOR ITEM
WRITING

(Refer A. K. Singh, pg no. 30- 46)

UNIT 4: ITEM ANALYSIS- ITEM DIFFICULTY- METHOD OF JUDGMENT, EMPIRICAL METHOD

(Refer A. K. Singh, pg no. 47-52)

UNIT 5: ITEM DISCRIMINABILITY- TEST OF SIGNIFICANCE, CORRELATIONAL TECHNIQUE, ITEM RESPONSE THEORY AND ITEM CHARACTERISTICS CURVE
(Refer A. K. Singh, pg no. 52- 67)

(Kaplan & Saccuzzo, pg no. 174- 179)

(Gregory, pg no. 147-148)

Item Characteristic Curve
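The notes defer the details to the references above, but the curve itself is easy to state: it plots the probability of a correct response against the latent ability level. A minimal sketch under a two-parameter logistic IRT model — one standard formulation, with invented parameter values — is:

    import math

    def icc_2pl(theta, a, b):
        # Probability of a correct response under a 2-parameter logistic model:
        # a = discrimination (slope at the inflection point),
        # b = difficulty (the ability level at which P = .50).
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # Tabulate one item's curve: a fairly easy, discriminating item (a=1.5, b=-0.5).
    for theta in (-2, -1, 0, 1, 2):
        print(theta, round(icc_2pl(theta, a=1.5, b=-0.5), 2))

Plotting these probabilities against theta traces the familiar S-shaped item characteristic curve.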

UNIT 6: REVISING THE TEST, PUBLISHING THE TEST

( Refer Gregory, pg no. 150-153)


MODULE 5: NORMS

UNIT 1: PARTITION VALUES: PERCENTILES, QUARTILES

Partition values or fractiles are values that divide the same set of observations into equal
parts in different ways (hundredths, fourths, tenths). There are different types of fractiles;
the prominent ones are illustrated and elaborated below.

[Chart: Partition values or fractiles — percentiles, quartiles, deciles]

Percentiles

Percentiles are the specific scores or points within a distribution. Percentiles divide the total
frequency for a set of observations into hundredths. Instead of indicating what percentage
of scores fall below a particular score, as percentile ranks do, percentiles indicate the
particular score, below which a defined percentage of scores falls.

For example, the infant mortality rate in Italy is 5.51/1000. When calculating the percentile
rank, you exclude the score of interest and count those below (in other words, Italy is not
included in the count). There are 11 countries in this sample with infant mortality rates
worse than Italy's. To calculate the percentile rank, divide this number of countries by the
total number of cases (here, 18) and multiply by 100:

Pr = (11 / 18) × 100 ≈ 61

Thus, Italy is in the 61st percentile rank, or the 61st percentile in this example is 5.51/1000
or 5.51 deaths per 1000 live births.
In summary, the percentile and the percentile rank are similar. The percentile gives the
point in a distribution below which a specified percentage of cases fall (5.51/1000 for Italy).
The percentile is in raw score units. The percentile rank gives the percentage of cases below
the percentile.

When reporting percentiles and percentile ranks, care must be taken in specifying the
population. A percentile rank is a measure of relative performance. These percentile ranks
are the percentage of scores that fall below the observed Z score.
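A minimal sketch of the percentile-rank arithmetic above, plus the normal-curve lookup used for Z scores, follows; the 18-case sample is schematic rather than the actual country data.

    from statistics import NormalDist

    def percentile_rank(scores, x):
        # Pr = (number of cases below x / total number of cases) * 100;
        # the score of interest itself is excluded from the count below.
        below = sum(1 for s in scores if s < x)
        return 100 * below / len(scores)

    sample = list(range(18))            # 18 hypothetical cases
    print(percentile_rank(sample, 11))  # 11 cases below -> 61.1, as in the Italy example

    # Percentile rank of an observed Z score: percentage of the normal curve below it.
    print(round(NormalDist().cdf(1.0) * 100, 1))  # Z = +1.0 -> about 84.1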

Quartiles

The quartile system divides the percentage scale into four groups. In other words, quartiles
are points that divide the frequency distribution into equal fourths. The first quartile is the
25th percentile; the second quartile is the median, or 50th, percentile; and the third quartile
is the 75th percentile. These are abbreviated Q1, Q2, and Q3, respectively. One fourth of
the cases will fall below Q1, one half will fall below Q2, and three fourths will fall below Q3.
The interquartile range is the interval of scores bounded by the 25th and 75th percentiles.
In other words, the interquartile range is bounded by the range of scores that represents
the middle 50% of the distribution.

Deciles

The decile system divides the scale into 10 groups. In other words, deciles are similar to
quartiles except that they use points that mark 10% rather than 25% intervals. Thus, the top
decile, or D9, is the point below which 90% of the cases fall. The next decile (D8) marks the
80th percentile, and so forth.
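A minimal sketch of locating quartiles, the interquartile range, and deciles with Python's statistics module follows (method="inclusive" is one convention; hand-worked textbook methods can differ slightly at the cut points, and the data are illustrative).

    from statistics import quantiles

    data = [2, 4, 4, 5, 7, 8, 9, 11, 12, 13, 15, 18]   # illustrative scores

    # Quartiles: three cut points (Q1, Q2, Q3) dividing the distribution into fourths.
    q1, q2, q3 = quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1   # interquartile range: bounds of the middle 50% of cases
    print(q1, q2, q3, iqr)

    # Deciles: nine cut points (D1 ... D9) marking 10% intervals.
    deciles = quantiles(data, n=10, method="inclusive")
    print(deciles[8])   # D9, the point below which 90% of the cases fall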

UNIT 2: NORMS: DEVELOPMENT OF NORMS- STEPS- DEFINING TARGET POPULATION, SELECTING SAMPLE, STANDARDIZING CONDITIONS FOR IMPLEMENTATION.

[Refer A. K. Singh]

UNIT 3: TYPES OF NORMS- AGE EQUIVALENT NORMS, GRADE EQUIVALENT NORMS, PERCENTILE NORMS

[Refer A. K. Singh]
UNIT 4: NORM-REFERENCED AND CRITERION REFERENCED TESTS

[Refer A. K. Singh]

UNIT 5: STANDARD SCORE NORMS- WHY STANDARD SCORE NORMS?, NORMALISED STANDARD SCORES- T SCORE, STANINE SCORE, DEVIATION IQ, STEN

[Refer A. K. Singh]

UNIT 6: TEST MANUAL- USE, INFORMATION TO BE CONTAINED IN THE MANUAL- DISSEMINATION OF INFORMATION, INTERPRETATION, VALIDITY, RELIABILITY, ADMINISTRATION AND SCORING, SCALES AND NORMS

[Refer ppt, Arjun]


MODULE 6: APPLICATION AND ISSUES OF TESTING

UNIT 1: TESTING IN EDUCATIONAL SETTINGS - ACHIEVEMENT BATTERIES, TEACHER MADE CLASSROOM TESTS.

The purpose of testing in educational settings is to measure educational achievement and
aptitude in school children. All kinds of tests are used in the educational setting—
intelligence, special aptitude, multiple aptitude, personality, etc.—by counselors and
school psychologists.

There are various types of psychological tests used in educational settings.

General Achievement Batteries- KG to 12TH

Stanford Achievement Batteries

The Stanford Achievement Test is one of the oldest of the standardized achievement tests
widely used in the school system (Gardner, Rudman, Karlsen, & Merwin, 1982). Now in its
10th edition, this test is well normed and criterion-referenced, with exemplary psychometric
documentation. It evaluates achievement in kindergarten through 12th grades in the
following areas: spelling, reading comprehension, word study and skills, language arts, social
studies, science, mathematics, and listening comprehension.

Metropolitan Achievement Test

Another well-standardized and psychometrically sound group measure of achievement is
the Metropolitan Achievement Test (MAT), which measures achievement in reading by
evaluating vocabulary, word recognition, and reading comprehension. Now in its eighth
edition, the MAT-8 was re-normed in 2000, and alternate versions of the test including
Braille, large print, and audio formats were made available for use with children having
visual limitations (Harcourt Educational Measurement, 2000).

The MAT-8 also measures mathematics by evaluating number concepts (e.g., measurement,
decimals, factors, time, money), problem solving (e.g., word problems), and computation
(addition, subtraction, multiplication, division). Spelling is evaluated on the MAT-8 in a
normal spelling test format in which the student is asked to spell an orally dictated word
presented in a sentence. Language skills are evaluated with a grammar test as well as a
measure of alphabetizing skills.

Students are also tested on their knowledge of science, geography, economics, history,
political science, anthropology, sociology, and psychology. The MAT-8 standardization
sample reflects a diverse nationwide student population. The sample was stratified by
school size, public versus non-public school affiliation, geographic region, socioeconomic
status, and ethnic background. Reliabilities of the total scores run in the high .90’s, while
those for the five major content areas range from .90 to .96.
Tests of Minimum Competency in Basic skills

ASER (Annual Status of Education Report)

ASER stands for Annual Status of Education Report. This is an annual survey that aims to
provide reliable annual estimates of children’s schooling status and basic learning levels for
each state and rural district in India. ASER is the largest citizen-led survey in India.
Unlike most other large-scale learning assessments, ASER is a household-based rather than
school-based survey. This design enables all children to be included – those who have never
been to school or have dropped out, as well as those who are in government schools,
private schools, religious schools or anywhere else.

In each rural district, 30 villages are sampled. In each village, 20 randomly selected
households are surveyed. This process generates a total of 600 households per district, or
about 300,000 households for the country as a whole. Approximately 600,000 children in
the age group 3-16 who are resident in these households are surveyed.

ASER tools and procedures are designed by ASER Centre, the research and assessment arm
of Pratham. The survey itself is coordinated by ASER Centre and facilitated by the Pratham
network. It is conducted by close to 30,000 volunteers from partner organizations in each
district. All kinds of institutions partner with ASER: colleges, universities, NGOs, youth
groups, women’s organizations, self-help groups and others.

Nature of information collected

o Information on schooling status is collected for all children in the age group 3-
16 living in sampled households.

o Children in the age group 5-16 are tested in basic reading and basic arithmetic. The
same test is administered to all children. The highest level of reading tested
corresponds to what is expected in Std 2; in 2012 this test was administered in 16
regional languages. The highest level of arithmetic tested corresponds to what is
expected in Std 3 or 4, depending on the state.

o Every year, some additional tests are also administered. These vary from year to
year. In 2007, 2009, 2012 and 2014, for example, children were tested in basic
English. In 2011 they were tested on their ability to solve everyday math problems.

o In addition, basic household information is collected every year. In recent years this
has included household size, parental education, and some information on
household assets.
o In 2005, 2007, and every year since 2009, ASER has included a visit to one
government primary school in each sampled village. Basic information is collected on
school infrastructure, enrollment, attendance, teachers and fund flows. Since 2010,
ASER has tracked selected Right to Education (RTE) indicators as well.

Teacher-made Classroom Test

Teacher-made tests are those that are constructed by teachers for use largely within their
classrooms. The effectiveness of such tests depends upon the skill of the teacher and his
knowledge of test construction. Items may come from any area of curriculum and they may
be modified according to the will of the teacher. Rules for administration and scoring are
determined by the teacher. Such tests are largely evaluated by the teachers themselves and
no particular norms are provided; however, they may be developed by the teacher for their
own class.

Tests for College Level

INDIA- UGC NET

SAT Reasoning Test

Formerly known as the Scholastic Aptitude Test, the SAT-I remains the most widely used of
the college entrance tests. The SAT is a standardized test designed to measure important
skills required for academic success at the tertiary level; the current version contains three
main sections measuring critical reading, math, and writing skills, as well as an unscored
variable section not included in the final score. It is widely used for college admissions in
the United States.

The theoretical maximum is now 2400 instead of the original 1600. The new scoring is
actually likely to reduce interpretation errors, as interpreters can no longer rely on
comparisons with older versions (Liu, Cahn, & Dorans, 2006; Liu & Walker, 2007).

The modern test is 45 minutes longer, requiring 3 hours and 45 minutes to administer. This
longer time span would appear to reward stamina, determination, persistence, and
sustained attention, and may particularly disadvantage students with disabilities such as
attention-deficit/hyperactivity disorder (Lighthouse, 2006; Lindstrom & Gregg, 2007).

The verbal section is now called “critical reading,” presumably because of an increased focus
on reading comprehension. Indeed, reading comprehension questions dominate this
section, with 48 questions devoted to reading comprehension and 19 to sentence
completion. The analogies section has been completely eliminated.

The writing section is extremely similar to the older SAT Subject Test in Writing.
Candidates write an essay and answer 49 multiple-choice questions on grammar.
The math section has eliminated many of the basic grammar-school math questions that
used to predominate in this section. The modern math section includes algebra II, data
analysis, and statistics.

A major weakness of the original SAT as well as other major college entrance tests is
relatively poor predictive power regarding the grades of students who score in the middle
ranges.

Cooperative School and College Ability Tests (SCAT)

It was developed in 1955 and has not been updated. In addition to the college level, the
SCAT covers three precollege levels beginning at the fourth grade. The SCAT purports to
measure school-learned abilities as well as an individual’s potential to undertake additional
schooling. The School and College Ability Test (SCAT), is a standardized test conducted in
the United States that measures math and verbal reasoning abilities in gifted children.

Psychometric documentation of the SCAT, furthermore, is neither as strong nor as extensive
as that of the SAT. Another problem is that little empirical data support its major
assumption—that previous success in acquiring school-learned abilities can predict future
success in acquiring such abilities. Even if this assumption were accurate—and it probably
is—grades provide about as much information about future performance as does the SCAT,
especially at the college level.

The American College Test


The ACT is another popular and widely used college entrance (aptitude) test. It was updated
in 2005, and is particularly useful for non-native speakers of English. The ACT test is a
curriculum-based education and career planning tool for high school students that assesses
the mastery of college readiness standards. The ACT produces specific content scores and a
composite. The content scores are in English, mathematical usage, social studies reading,
and natural science reading. In expressing results, the ACT makes use of the Iowa Test of
Educational Development (ITED) scale. Scores on this scale can vary between 1 and 36, with
a standard deviation of 5 and a mean of 16 for high-school students and a mean of 19 for
college aspirants.

The ACT compares with the SAT in terms of predicting college GPA alone or in conjunction
with high-school GPA (Stumpf & Stanley, 2002). In fact, the correlation between the two
tests is quite high—in the high .80’s (Pugh, 1968). However, internal consistency coefficients
are not as strong in the ACT, with coefficients in the mid .90’s for the composite and in the
high .70’s to high .80’s for the four content scores.

Graduate School Admission

INDIA- GMAT, University Entrance Exams

Graduate Record Examination Aptitude Test

The GRE is one of the most commonly used tests for graduate-school entrance. Offered
throughout the year at designated examination centers located mostly at universities and
colleges in the United States and numerous other countries worldwide, the GRE purports to
measure general scholastic ability. It is most frequently used in conjunction with GPA,
letters of recommendation, and other academic factors in the highly competitive graduate-
school selection process. The GRE contains a general section that produces verbal (GRE-V)
and quantitative (GRE-Q) scores. In 2002, the third section of the GRE, which evaluates
analytical reasoning (GRE-A), was changed from a multiple-choice format to an essay
format. It consists of two essays that require the test taker to analyze an argument based on
the evidence presented and to articulate and support an argument (Educational Testing
Service, 2002). In addition to this general test for all college majors, the GRE contains an
advanced section that measures achievement in at least 20 majors, such as psychology,
history, and chemistry.

In August 2011, the GRE revised General Test replaced the GRE® General Test (ETS, 2011).
According to ETS, the GRE features a new test taker–friendly design and new questions and
more closely reflects the kind of thinking required in graduate school. With a standard mean
score of 500 and a standard deviation of 100, the verbal section covers reasoning,
identification of opposites, use of analogies, and paragraph comprehension. The
quantitative section covers arithmetic reasoning, algebra, and geometry. However, the
normative sample for the GRE is relatively small. The psychometric adequacy of the GRE is
also less spectacular than that of the SAT, both in the reported coefficients of validity and
reliability and in the extensiveness of documentation. Nevertheless, the GRE is a relatively
sound instrument. The stability of the GRE based on Kuder-Richardson and odd-even
reliability is adequate, with coefficients only slightly lower than those of the SAT. However,
the predictive validity of the GRE is far from convincing.
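The reporting scale mentioned above (a mean of 500 and a standard deviation of 100) is an ordinary linear standard-score transformation. A minimal sketch is given below; the normative mean and SD and the 200-800 floor and ceiling are illustrative assumptions, not published GRE parameters.

    def scaled_score(raw, norm_mean, norm_sd,
                     scale_mean=500, scale_sd=100, lo=200, hi=800):
        # Convert a raw score to the reporting scale: z -> mean + sd * z,
        # clipped to the assumed reporting range.
        z = (raw - norm_mean) / norm_sd
        return int(min(hi, max(lo, round(scale_mean + scale_sd * z))))

    # Invented normative values: raw mean 40, SD 8.
    print(scaled_score(52, norm_mean=40, norm_sd=8))  # z = +1.5 -> 650

The same arithmetic, with different scale means and SDs, underlies T scores, deviation IQs, and the other standard-score norms listed in Module 5.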

Miller Analogies Test

A second major graduate-school entrance test is the Miller Analogies Test. Like the GRE, the
Miller Analogies Test is designed to measure scholastic aptitudes for graduate studies.
The Miller Analogies Test (MAT) assesses the analytical thinking ability of graduate school
candidates — an ability that is critical for success in both graduate school and professional
life. It is widely used in the USA. However, unlike the GRE, the Miller Analogies Test is strictly
verbal. In 60 minutes, the student must discern logical relationships for 120 varied analogy
problems, including some of the most difficult items found on any test. Knowledge of specific content
and a wide vocabulary are extremely useful in this endeavor. However, the most important
factors appear to be the ability to see relationships and a knowledge of the various ways
analogies can be formed (by sound, number, similarities, differences, and so forth). Used in
a variety of specializations, the Miller Analogies Test offers special norms for various fields.

Odd–even reliability data for the Miller Analogies Test are adequate, with coefficients in the
high .80’s reported in the manual. Unfortunately, as with the GRE, the Miller Analogies Test
lacks predictive validity support. Despite a substantial correlation with the GRE (coefficients
run in the low .80’s), validity coefficients reported in the manual for grades vary
considerably from sample to sample and are only
modest (median in the high .30’s).
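To see why coefficients in this range count as modest, note that the squared validity coefficient estimates the proportion of criterion variance accounted for. Using the median value just cited as a rough worked example:

\[
r = .38 \quad\Rightarrow\quad r^{2} \approx .14
\]

so scores correlating in the high .30’s with grades explain only about 14% of the variance in graduate-school achievement.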

Like the GRE, the Miller Analogies Test has an age bias. Miller Analogies Test scores
overpredicted the GPAs of a 25- to 34-year-old group and underpredicted the GPAs of a 35-
to 44-year-old group. However, it also overpredicted achievement for a 45-year-old group
(House & Keeley, 1996).

The Law School Admission Test

The LSAT provides a good example of tests for professional-degree programs. LSAT
problems require almost no specific knowledge. Students of any major can take it without
being unfairly disadvantaged. It is a speed test. The Law School Admission Test is a half-day standardized test
administered seven times each year at designated testing centers throughout the
world. This test is available in India as well.

The LSAT contains three types of problems: reading comprehension, logical reasoning, and
analytical reasoning. Reading comprehension problems are similar to those found on the
GRE. The student is given four 450-word passages followed by approximately seven
questions per passage. The content of the passages may be drawn from just about any
subject—history, the humanities, the women’s movement, African American literature,
science, and so forth. Each passage is purposefully chosen to be complicated and densely
packed with information. The questions that follow may be long and complicated. Students
may be asked what was not covered as well as to draw inferences about what was covered.
All of this must be done in 35 minutes.

Approximately half of the problems on the LSAT are logical-reasoning problems. These
provide a test stimulus as short as four lines or as long as half a page and ask for some type
of logical deduction. The questions are typically of middle-range difficulty.

It was last revised in 1991. The LSAT is psychometrically sound, with reliability coefficients in
the .90’s. It predicts first-year GPA in law school. Its content validity is exceptional in that
the skills tested on the LSAT resemble the ones needed for success in the first year of law
school.

Diagnostic and Prognostic Testing- testing strengths and weaknesses within a subject-matter
domain and suggesting causes, e.g., the Stanford Diagnostic Mathematics Test and the
California Diagnostic Reading and Mathematics Tests.

Assessment in Early Childhood Education- School readiness refers to abilities found to be
important in learning to read, e.g., the School Readiness Test (SRT), Metropolitan Readiness
Tests, and the Boehm Test of Basic Concepts.

UNIT 2: TESTING IN OCCUPATIONAL SETTINGS – ASSESSMENT OF PERFORMANCE, PREDICTION OF JOB PERFORMANCE, OCCUPATIONAL USES OF TESTS.

Assessment of Performance

[Refer Gregory, pg no: 470- 476]

Prediction of Job Performance

[Refer Kaplan and Saccuzzo, pg no: 501- 518]

Occupational Uses of Tests

[Refer Gregory, pg no: 453- 454]

Include various tests from ppt and Gregory

Job Analysis

[Refer Kaplan and Sacuzzo, pg no: 525- 529]

UNIT 3: TESTING IN CLINICAL AND COUNSELLING SETTINGS- INTELLIGENCE TESTS, NEUROPSYCHOLOGICAL ASSESSMENT, BEHAVIOURAL ASSESSMENT, CAREER ASSESSMENT

Intelligence Tests
Stanford-Binet Intelligence Scales: Fifth Edition (SB5)

With a lineage that goes back to the Binet-Simon scale of 1905, the Stanford-Binet: Fifth
Edition (SB5) has the oldest and perhaps the most prestigious pedigree of any individual
intelligence test. In the Stanford-Binet: Fifth Edition (SB5), five factors of intelligence are
assessed in two distinct domains. The five factors—derived from modern cognitive theories
such as Carroll (1993) and Baddeley (1986)—are fluid reasoning, knowledge, quantitative
reasoning, visual-spatial processing, and working memory. The two domains are verbal and
non-verbal. When these five factors of intelligence are “crossed” with the two domains
(nonverbal and verbal), the result is an instrument with 10 subtests.

Thus, the SB5 provides a number of different perspectives on the cognitive functioning of an
examinee: 10 subtest scores (mean of 10, SD of 3), three IQ scores (the familiar Full-Scale IQ,
Verbal IQ, and Nonverbal IQ), as well as five factor scores (Fluid Reasoning, Knowledge,
Quantitative Reasoning, Visual-Spatial Processing, and Working Memory). The IQ and factor
scores are normed to a mean of 100 and SD of 15.
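The two metrics are directly comparable once scores are converted to standard (z) scores; as a worked sketch using the norms just given, a subtest score of 13 and a factor score of 115 both fall exactly one standard deviation above their respective means:

\[
z_{\text{subtest}} = \frac{13 - 10}{3} = 1.0, \qquad z_{\text{factor}} = \frac{115 - 100}{15} = 1.0
\]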
The SB5 maintains the historical tradition of this instrument by using a routing procedure to
estimate the general cognitive ability of the examinee before proceeding to the remainder
of the test. The purpose of the routing procedure is to identify the appropriate starting
points for subsequent subtests. The routing items are both nonverbal (object series and
matrices) and verbal (vocabulary). These items also provide the Abbreviated IQ, sometimes
used for screening purposes.

Features of SB5

• Partition of intelligence into Full Scale IQ, Verbal IQ, and Nonverbal IQ.
• The test now includes extensive high-end items, designed to assess the highest level
of gifted performance.
• Many of these items are updates from very early editions of the Stanford-Binet,
when the instrument was renowned for its very high ceiling.
• At the other extreme, improved low-end items provide better assessment for very
young children (as young as age 2) and adults with mental retardation.
• In addition, the items and subtests that contribute to the Nonverbal IQ do not
require expressive language, which makes this part of the test ideal for assessing
individuals with limited English, deafness, or communication disorders.
• The developers of the SB5 also screened test items for fairness based on religious as
well as traditional concerns. Expert panels examined the entire test on fairness
issues related to the standard variables (gender, race, ethnicity, and disability) and
religious tradition (Christian, Jewish, Muslim, Hindu, and Buddhist backgrounds).
• This is the first time in the history of intelligence testing that religious tradition has
been considered in test development.
• Finally, the Working Memory factor, consisting of both verbal and nonverbal
subtests, shows promise in helping to assess and understand children with attention-
deficit/hyperactivity disorder.

The SB5 is suitable for examinees from age 2 through adults age 85 and older. Its advantages include:

• Shorter, efficient test administration.


• The use of modern item response theory in the design of SB5 allows for greater
precision of measurement.

• Improved item coverage at both extremes of ability, benefiting examinees with developmental delays and very gifted persons.

Detroit Tests of Learning Aptitude-4

[Refer Gregory, pg no: 197- 198]

The Cognitive Assessment System-II

[Refer Gregory, pg no: 198- 200]

OTHER TESTS IN UG BOOK

Neuropsychological Assessment

[Refer Gregory, pg no: 424- 451]

Behavioural Assessment

[Refer Gregory, pg no: 420- 423]

Career Assessment

Successful assessment for career guidance requires ongoing interaction with clients. Career
counseling extends well beyond mere testing. Avoiding the “test and tell” trap is vital. Even
so, the use of appropriate assessment tools can be helpful, sometimes even essential. The
number of instruments available for
career assessment is huge, and new tools emerge every year.

Career Beliefs Inventory

Krumboltz (1991) created the Career Beliefs Inventory to identify and measure attitudes and
beliefs that might block career development. People often hold firmly to self-limiting beliefs
that prevent them from finding a satisfying job or career.

The Career Beliefs Inventory (CBI) was designed to increase the awareness of clients to
underlying career beliefs and to gauge the potential influence of these beliefs on
occupational choice and life satisfaction.

The CBI can be taken individually or administered in a group setting to persons in grade 8 or
higher. The paper-and-pencil test can be hand-scored, but computer-scoring is preferable
because it yields an elegant 12-page report. Hand scoring is also confusing and likely to
introduce errors.

The 96 test items, all in Likert format, are grouped into 25 scales organized under the
following five headings:

1. Your Current Career Situation. Four scales: Employment Status, Career Plans, Acceptance
of Uncertainty, and Openness.
2. What Seems Necessary for Your Happiness. Five scales: Achievement, College Education,
Intrinsic Satisfaction, Peer Equality, and Structured Work Environment.

3. Factors that Influence Your Decisions. Six scales: Control, Responsibility, Approval of
Others, Self-other Comparisons, Occupation/ College Variation, and Career Path Flexibility.

4. Changes You Are Willing to Make. Three scales: Post-training Transition, Job
Experimentation, and Relocation.

5. Effort You Are Willing to Initiate. Seven scales: Improving Self, Persisting While Uncertain,
Taking Risks, Learning Job Skills, Negotiating/ Searching, Overcoming Obstacles, and
Working Hard.

The inventory can be administered to individuals within the age range 12- 75. Initial test–
retest reliability data for the CBI are mixed, with one-month reliabilities ranging from the .30s to
the .70s for the high school sample. Internal consistencies were likewise modest, with
coefficients mainly in the range of .40 to .50. This might be due to the small number of items
for some scales, as few as two items for several scales. Fuqua and Newman (1994)
recommend that the CBI could be improved if additional items were added to some of the
scales.
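The connection between scale length and internal consistency can be made concrete with the Spearman-Brown prophecy formula (a textbook formula worked with hypothetical numbers, not results reported for the CBI): tripling a two-item scale whose reliability is .45 would be expected to raise it to

\[
r_{\text{new}} = \frac{n\,r_{\text{old}}}{1 + (n-1)\,r_{\text{old}}} = \frac{3(.45)}{1 + 2(.45)} \approx .71
\]

which illustrates why adding items, as Fuqua and Newman suggest, is a plausible remedy.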

Walsh (1996) supplemented the original standardization sample for the CBI with nearly 600
additional participants. She reported more promising results, with internal consistencies
ranging from the low .30s to the high .80s, with a mean coefficient alpha of .57 for the CBI
scale scores. Regarding validity, results of factor analyses did find reproducible clusters of
beliefs, but these did not correspond to the scale clusters provided in the CBI reports. She
suggests that the practical application of the CBI might rest with exploring client beliefs at
the level of the individual items (Walsh, Thompson, & Kapes, 1996).

In a study of convergent validity correlating CBI results with data from four other personality
and vocational inventories, Holland, Johnston, Asama, and Polys (1993) reported at least
moderate construct validity for most of the CBI scales. They concluded that the test seems
to be measuring variance in career variables not assessed by other instruments. In addition,
significant correlation of some CBI scales with the State-Trait Anxiety Inventory indicated
that certain self-limiting and irrational beliefs caused emotional discomfort.

Interest assessment promotes two compatible goals: life satisfaction and vocational
productivity. It is nearly self-evident that a good fit between individual interests and chosen
vocation will help foster personal life satisfaction. After all, when work is interesting, we are
more likely to experience personal fulfilment as well. In addition, persons who are satisfied
with their work are more likely to be productive. Thus, employees and employers both
stand to gain from the artful application of interest assessment.

Strong Interest Inventory-Revised (SII-R)

The Strong Interest Inventory was first constructed by E. K. Strong (1927), who named it
the Strong Vocational Interest Blank (SVIB). The 1974 revision of the SVIB, introduced
by Campbell, is called the Strong-Campbell Interest Inventory (SCII). In its
current form the SCII consists of 325 items and is divided into seven parts. The seven
parts of the SCII are: Occupations (131 items), School Subjects (36 items), Activities (51
items), Amusements (39 items), Types of People (24 items), Preferences between
Two Activities (30 items) and Your Characteristics (14 items).

[Refer Gregory, pg no: 488- 490]

Vocational Preference Inventory

[Refer Gregory, pg no: 490- 491]

Self-Directed Search

The Self-Directed Search (SDS) was developed by Holland (1985, 1992). It is a
self-administered, self-scored and self-interpreted vocational counselling
instrument. The SDS contains 228 items. One set of 66 items, grouped into six scales
of 11 items each, describes activities; another set of 66 items, again grouped into six
scales of 11 items each, assesses competencies. Occupations are evaluated by six
scales of 14 items each.

[Refer Gregory, pg no: 491- 493]

Campbell Interest and Skill Survey

[Refer Gregory, pg no: 493- 494]

[Refer A. K. Singh, pg no: 189- 192]

UNIT 4: COMPUTER-ASSISTED PSYCHOLOGICAL ASSESSMENT

The application and use of computers in testing have been a major development in the field
(Florell, 2011; Mills, 2002; Clauser, 2002; Wainer, 2000). For testing, one can use computers
in two basic ways: (1) to administer, score, and even interpret traditional tests and (2) to
create new tasks and perhaps measure abilities that traditional procedures cannot record.

In 1966, a computer program named Eliza, which played the part of a Rogerian therapist,
marked the beginning of a new phase in psychological testing and assessment (Epstein &
Klinkenberg, 2001). Eliza was developed by Dr. Joseph Weizenbaum to emulate the behavior
of a psychotherapist. Weizenbaum had produced the program in an attempt to show that
human-computer interaction was superficial and ineffective for therapy. Dr. Weizenbaum
discovered that sessions with Eliza engendered positive emotions in the clients who had
actually enjoyed the interaction and attributed human characteristics to the program. The
research by Weizenbaum gave credence to the theory that human–computer interaction
may be beneficial and opened the door for further study.

[Refer Kaplan & Saccuzo, pg no: 427- 440]

[Refer A. K. Singh, pg no: 120- 122]


UNIT 5: ETHICAL AND SOCIAL CONSIDERATIONS IN PSYCHOLOGICAL TESTING – USER
QUALIFICATION AND PROFESSIONAL COMPETENCE, PROTECTION OF PRIVACY, TEST
RELATED FACTORS, RESPONSIBILITIES OF TEST PUBLISHERS.

User Qualification

According to the APA (2001), user qualifications for psychological assessment are divided
broadly into two domains, viz, (a) generic psychometric knowledge and skills that serve as a
basis for most of the typical uses of tests and (b) specific qualifications for the responsible
use of tests in particular settings or for specific purposes (e.g., health care settings or
forensic or educational decision making).

I. Psychometric and Measurement Knowledge- In general, it is important for test users
to understand classical test theory and, when appropriate or necessary, item
response theory (IRT). When test users are making assessments on the basis of IRT,
such as adaptive testing, they should be familiar with the concepts of item
parameters (e.g., item difficulty, item discrimination, and guessing), item and test
information functions, and ability parameters (e.g., theta).
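The item parameters listed here are those of the standard three-parameter logistic (3PL) IRT model (a textbook formula, not one tied to any particular test):

\[
P(\theta) = c + \frac{1 - c}{1 + e^{-a(\theta - b)}}
\]

where a is the item discrimination, b the item difficulty, c the guessing (pseudo-chance) parameter, and \(\theta\) the test taker's ability.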
1) Descriptive statistics- test users should have knowledge of common
descriptive statistics relevant to test use, such as frequency distributions,
descriptive statistics characterizing the normal curve (e.g., kurtosis,
skewness), measures of central tendency (e.g., mean, median, and mode),
measures of variation (e.g., variance and standard deviation), indices of
relationship (e.g., correlation coefficient), and scales, scores, and
transformations. In other words, they should have the ability to define, apply,
and interpret concepts of descriptive statistics.
2) Reliability and measurement error- Test users should understand issues of
test score reliability and measurement error as they apply to the specific test
being used, as well as other factors that may influence test results, and the
appropriate interpretation and application of different measures of reliability
(e.g., internal consistency, test-retest reliability, interrater reliability, and
parallel forms reliability). (A worked illustration of the standard error of measurement appears after this list.)
3) Validity and meaning of test scores- Responsibility for validation belongs both
to the test developer, who provides evidence in support of test use for a
particular purpose, and to the test user, who ultimately evaluates that
evidence, other available data, and information gathered during the testing
process to support interpretation of test scores. Test users have a particularly
important role in evaluating validity evidence when the test is used for
purposes different from those investigated by the test developer.
4) Normative interpretation of test scores- Test users should understand how
differences between the test taker and the particular normative group affect
the interpretation of test scores.
5) Selection of appropriate test- Test users should select the best test or test
version for a specific purpose and should have knowledge of testing practice
in the context area and of the most appropriate norms when more than one
normative set is available.
6) Test administration procedures- Knowledge about procedural requirements,
confidentiality of test information, communication of results, and test
security is important for many testing applications, as is familiarity with
standardized administration and scoring procedures and understanding a test
user's ethical and legal responsibilities and the legal rights of test takers.
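As an illustration of the measurement-error concept flagged in point 2 above (a standard psychometric formula worked with hypothetical numbers), the standard error of measurement combines a test's standard deviation with its reliability:

\[
SEM = s\sqrt{1 - r_{xx}}, \qquad\text{e.g.,}\quad 15\sqrt{1 - .90} \approx 4.7
\]

so an obtained score of 100 on an IQ-style scale with reliability .90 carries a 68% confidence band of roughly 95 to 105.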
II. Test User Qualification in Specific Context
The context in which psychological tests are used includes both the setting and the
purpose of testing. Test user qualifications vary across settings, as well as within
settings, depending on the purpose of testing.
Regardless of the setting, psychological tests are typically used for the following
purposes:
• Classification-to analyze or describe test results or conclusions in relation to a
specific taxonomic system and other relevant variables to arrive at a
classification or diagnosis.
• Description-to analyze or interpret test results to understand the strengths
and weaknesses of an individual or group. This information is integrated with
theoretical models and empirical data to improve inferences.
• Prediction-to relate or interpret test results with regard to outcome data to
predict future behavior of the individual or group of individuals.
• Intervention planning-to use test results to determine the appropriateness of
different interventions and their relative efficacy within the target
population.
• Tracking-to use test results to monitor psychological characteristics over
time.
There are five major contexts in which tests are commonly used: employment,
education (both individual and large-scale testing), vocational and career counseling,
health care, and forensic assessment.
1) Employment context- Many employers use tests as part of the assessment
process to develop work-related information and recommendations or
decisions about people who work for them or are seeking employment with
them. Test users in this context should have not only the qualifications
identified as core knowledge and skills but also an understanding of the work
setting, the work itself, and the worker characteristics required of the work
situation. They should strive to know what skills, abilities, or other individual
difference characteristics enable people to perform effectively (as defined in
a variety of ways) in a particular work setting. Test users should consider the
strengths and weaknesses of different methods for determining the human
requirements of the work situation and how to conduct such job, work, or
practice analyses. They also should consider and, where appropriate, obtain
legal advice about employment law and relevant court decisions.
2) Educational context- The results of psychological tests often serve as relevant
information to guide educational decisions about both students and
programs. Psychological tests are used in a variety of educational settings,
including preschools, elementary and secondary schools, higher education,
technical schools, business training programs, counseling centers, health and
mental health settings that offer educational services, and educational
consulting practices. Psychological tests are typically used to acquire
information about students to make informed decisions about such issues as
student admissions and placement, educational programming, student
performance, and teacher or school effectiveness.
3) Career and Vocational Counseling Context- Psychological testing in the career
and vocational counseling context is used to help people make appropriate
educational, occupational, retirement, and recreational choices and to assess
difficulties that impede the career decision-making process. Career and
vocational counselors integrate their knowledge of career demands with
information about beliefs, attitudes, values, personalities, mental health, and
abilities, with the goal of promoting beneficial career development, life
planning, and decision making. The individual's self-knowledge about values,
strengths, weaknesses, motivation, psychological characteristics, and
interests also is relevant.
4) Health care Context- Health care is the provision of services aimed at
enhancing the physical or mental well-being of individuals or at dealing with
behaviors, emotions, or issues that are associated with suffering, disease,
disablement, illness, risk of harm, or risk of loss of independence. Health care
assessment commonly occurs in private practice, rehabilitation, medical or
psychiatric inpatient or outpatient settings, schools, employee assistance programs (EAPs), and other settings
that address health care needs. Psychological tests are used as part of the
assessment process to develop health-related information and
recommendations or decisions. Those who use tests for this purpose should
have thorough grounding both in the core knowledge and skills enumerated
earlier and in the specialized knowledge, training, or experience of specific
substantive areas of health care.
5) Forensic Context- In forensic settings, psychological tests are used to gather
information and develop recommendations about people who are involved in
legal proceedings. Test users in forensic settings should possess a working
knowledge of the functioning of the administrative, correctional, or court
system in which they practice. They should strive to be familiar with the
statutory, administrative, or case law in the specific legal context where the
testing occurs or, where appropriate, obtain legal advice on the pertinent
laws. They should strive to communicate test results in a way that is useful
for the finder of fact (i.e., the judge, the administrative body, or the jury).
This includes communicating verbally with lawyers, writing formal reports,
and giving sworn testimony in deposition or court.

Professional Competence

According to the APA (2020), the guidelines for professional competence in psychological
assessment and evaluation are as follows:

1) Psychologists who conduct psychological testing, assessment, and evaluation strive
to develop and maintain their own competence. This includes competence with
selection, use, interpretation, integration of findings, communication of results, and
application of measures.
2) Psychologists who conduct psychological testing, assessment, and evaluation seek
appropriate training and supervised experience in relevant aspects of testing,
assessment, and psychological evaluation.
3) Psychologists who conduct psychological testing, assessment, and evaluation strive
to be mindful of the potential negative impact and subsequent outcome of those
measures on clients/patients/ examinees/employees, supervisees, other
professionals, and the general public.
4) Psychologists strive to consider the multiple and global settings (e.g., forensic,
education, integrated care) in which services are being provided.

Psychometric and Measurement Knowledge

5) Psychologists who provide psychological testing, assessment, and evaluation
demonstrate knowledge in and seek to appropriately apply psychometric principles
and measurement science as well as the effects of external sources of variability
such as context, setting, purpose, and population.

Selection, Administration and Scoring of Tests

6) Psychologists who conduct psychological testing, assessment, and evaluation
endeavor to select (a) assessment tools that demonstrate sufficient validity evidence
for their uses, sufficient score reliability, and sound psychometric properties and (b)
measures that are fair and appropriate for the evaluation purpose, population,
setting, and context at hand.
7) Psychologists who conduct psychological testing, assessment, and evaluation strive
to use multiple sources of relevant and reliable clinical information collected
according to established principles and methods of assessment.
8) Psychologists who conduct psychological testing, assessment, and evaluation strive
to be aware of the need for test selection, scoring, and administration to reflect the
appropriate normative comparison, situational influences, effort, and standardized
administration as indicated.

Diverse, Underrepresented and Vulnerable Populations

9) Psychologists who conduct psychological testing, assessment, and evaluation strive
to practice with cultural competence.
10) Psychologists who conduct psychological testing, assessment, and evaluation aspire
to ensure awareness of individual differences, various forms of biases or potential
biases, cultural attitudes, population appropriate norms, and potential misuse of
data.
11) Psychologists who conduct psychological testing, assessment, and evaluation
endeavor to recognize the nature of and relationship among individual, cohort, and
group differences.
12) Psychologists who conduct psychological testing, assessment, and evaluation seek to
consider the unique issues that may arise when test instruments and assessment
approaches designed for specific populations are used with diverse populations.

Training and Supervisory Qualifications and Role

13) Psychologists who educate and train others in testing, assessment, and evaluation
strive to maintain their own competence in training and supervision and competency
in assessment practice.
14) Psychologists who supervise employees or individuals who lack training in testing,
assessment, and evaluation strive to ensure that supervision ultimately provides
examinees/clients with testing, assessment, and evaluation that meets the ethical
and professional standard of care and scope of practice.

Technology

15) Psychologists who use technology when testing, assessing, or evaluating
psychological status strive to remain aware of technological advances; of the
influence of technology on assessment; and of standard practice, laws, and
regulations in telepsychology.
16) Psychologists who conduct services using technology for online or in-person testing,
assessment, and evaluation make every effort to ensure their own competency.
17) Psychologists who use technology-based assessment instruments are encouraged to
take reasonable steps to ensure the security, transmission, storage, and disposal of
data. Psychologists also strive to ensure that security measures are in place to
protect data and information related to their clients/patients/ examinees from
unintended access, misuse, or disclosure.

Protection of Privacy

When people respond to psychological tests, they have little idea what is being revealed,
but they often feel that their privacy has been invaded in a way not justified by the test’s
benefits. There are two sides to the issue. Dahlstrom (1969b) argued that the issue of
invasion of privacy is based on serious misunderstandings. He states that because tests have
been oversold, the public doesn’t realize their limitations. Psychological tests are so limited
that they cannot invade one’s privacy. Another issue, according to Dahlstrom (1969b), is the
ambiguity of the notion of invasion of privacy. It isn’t necessarily wrong, evil, or detrimental
to find out about a person. The person’s privacy is invaded when such information is used
inappropriately. Psychologists are ethically and often legally bound to maintain
confidentiality and do not have to reveal any more information about a person than is
necessary to accomplish the purpose for which testing was initiated. Furthermore,
psychologists must inform subjects of the limits of confidentiality. As Dahlstrom (1969b)
noted, subjects must cooperate in order to be tested. If the subjects do not like what they
hear, they can simply refuse to be tested.
The ethical code of the APA (1992, 2002) includes confidentiality. Guaranteed by law in
most states that have laws governing the practice of psychology, this principle means that,
as a general rule, personal information obtained by the psychologist from any source is
communicated only with the person’s consent. Exceptions include circumstances in which
withholding information causes danger to the person or society, as well as cases that
require subpoenaed records. Therefore, people have the right to know the limits of
confidentiality and to know that test data can be subpoenaed and used as evidence in court
(Benjamin & Gollan, 2003; Kocsis, 2011) or in employment decisions (Ones et al., 1995).

According to APA Ethics Code (2017), privacy and confidentiality includes the following:

• Maintaining Confidentiality

Psychologists have a primary obligation and take reasonable precautions to protect
confidential information obtained through or stored in any medium, recognizing that the
extent and limits of confidentiality may be regulated by law or established by institutional
rules or professional or scientific relationship.

• Discussing the Limits of Confidentiality

(a) Psychologists discuss with persons (including, to the extent feasible, persons who are
legally incapable of giving informed consent and their legal representatives) and
organizations with whom they establish a scientific or professional relationship (1) the
relevant limits of confidentiality and (2) the foreseeable uses of the information generated
through their psychological activities. (See also Standard 3.10, Informed Consent.)

(b) Unless it is not feasible or is contraindicated, the discussion of confidentiality occurs at
the outset of the relationship and thereafter as new circumstances may warrant.

(c) Psychologists who offer services, products, or information via electronic transmission inform
clients/patients of the risks to privacy and limits of confidentiality.

• Recording

Before recording the voices or images of individuals to whom they provide services,
psychologists obtain permission from all such persons or their legal representatives.
• Minimizing Intrusions on Privacy

(a) Psychologists include in written and oral reports and consultations, only information
germane to the purpose for which the communication is made.

(b) Psychologists discuss confidential information obtained in their work only for
appropriate scientific or professional purposes and only with persons clearly concerned with
such matters.

• Disclosures

(a) Psychologists may disclose confidential information with the appropriate consent of the
organizational client, the individual client/patient, or another legally authorized person on
behalf of the client/patient unless prohibited by law.

(b) Psychologists disclose confidential information without the consent of the individual only
as mandated by law, or where permitted by law for a valid purpose such as to (1) provide
needed professional services; (2) obtain appropriate professional consultations; (3) protect
the client/patient, psychologist, or others from harm; or (4) obtain payment for services
from a client/patient, in which instance disclosure is limited to the minimum that is
necessary to achieve the purpose. (See also Standard 6.04e, Fees and Financial
Arrangements.)

• Consultations

When consulting with colleagues, (1) psychologists do not disclose confidential information
that reasonably could lead to the identification of a client/patient, research participant, or
other person or organization with whom they have a confidential relationship unless they
have obtained the prior consent of the person or organization or the disclosure cannot be
avoided, and (2) they disclose information only to the extent necessary to achieve the
purposes of the consultation. (See also Standard 4.01, Maintaining Confidentiality.)

• Use of Confidential Information for Didactic or Other Purposes

Psychologists do not disclose in their writings, lectures, or other public media, confidential,
personally identifiable information concerning their clients/patients, students, research
participants, organizational clients, or other recipients of their services that they obtained
during the course of their work, unless (1) they take reasonable steps to disguise the person
or organization, (2) the person or organization has consented in writing, or (3) there is legal
authorization for doing so.

Test Related Factors

• Construct equivalence (information concerning the influence of psychological
characteristics such as motivation, attitudes, and stereotype threat on test
performance)
• Orientations and values that may alter the definition of the construct(s) being
assessed and how those factors may affect the interpretation of test results
• Requirements of the testing environment and how that may affect the performance
of different groups
• Test bias
• Laws and public policies concerning use of tests that may have implications for test
selection, as well as administration and interpretation
• Procedures for examining between-groups differences in test performance
• Empirical literature concerning differential validity for racial or cultural groups.

Responsibilities of Test Publishers

The responsibilities of test publishers are set out in the Rights and Responsibilities of Test
Takers: Guidelines and Expectations (Joint Committee on Testing Practices [JCTP], 2000); the
Standards for Educational and Psychological Testing (American Educational Research
Association [AERA], American Psychological Association [APA], National Council on
Measurement in Education [NCME], 1999); the Guidelines for Computerized-Adaptive Test
Development and Use in Education (ACE, 1995); and the Code of Fair Testing Practices in
Education (JCTP, 1988).

Test takers have the right to tests that meet contemporary professional standards relating
to technical quality and fairness (AERA et al. 1999; ATP, 2000; PES, 1995). In collaboration
with the test sponsor, test developers are responsible for providing test takers with clear
and concise information on the intended purpose of the examination, the proposed uses
and interpretations of test scores, and sample test content including a representative set of
items and questions. Test developers meet these obligations by constructing quality
examinations, reporting information supporting test score use, preparing test taker and test
administrator materials, and modifying published examinations to maintain their currency
and relevance.

• Define intended purpose of test.

The initial step in constructing an examination is to delineate the assessment
objective. This process can be guided by theory or an analysis of content or job
functions, and it includes a description of the test content and constructs to be
measured. Once established, the purpose of the examination provides a foundation
for subsequent test construction, and scoring and evaluation activities. The goals of
the examination should be provided to test takers to inform their decisions regarding
test participation and preparation.

• Build quality tests.


Test developers have a primary responsibility to construct examinations that meet
current professional technical standards and guidelines (cf., AERA et al. 1999; ATP,
2000; PES, 1995). The test development process is guided by test specifications that
describe the format and structure of items, intended test-taker responses, and
scoring procedures. It is the test developer’s role to ensure that the format and
content of the examination support the stated purpose of the examination. For
example, the exclusive use of multiple-choice items on an examination designed to
assess oral communication skills would violate this obligation.
A key component in the test development process involves the review of test
content by qualified experts. A field test of the examination to a representative
sample of individuals from the target test-taker population is another important
element in test preparation. Documentation of major test-assembly activities should
be made available to test takers to facilitate an evaluation of test quality.
Test developers have a major obligation to minimize “construct-irrelevant test
variance” (Messick, 1989). Construct-irrelevant test variance reduces the
effectiveness of the examination in assessing the construct of interest and distorts
the interpretation of test scores. This distortion may either deflate or inflate test-
taker performance. For example, test items on an arithmetic computation test that
require a high degree of reading comprehension skill may decrease test-taker scores.
On the other hand, clues based on the length and structure of multiple-choice item
responses that assist test takers to select the correct answer without possessing the
target knowledge or skill may inflate test scores. Careful editing of test content,
adequate testing time, and the use of standardized testing procedures diminish
construct-irrelevant test variance and yield more accurate test scores.
• Report findings supporting test score use.
Test publishers are responsible for providing confirmatory evidence of the reliability
of scores and the validity of inferences based on interpretations of test performance.
Test takers and other key stakeholders (e.g., employers, admissions officers,
counselors) use this information to make a reasoned judgment of the quality of the
examination and its intended purpose and limitations.
Documentation made available by test developers typically describes the nature of
the test and its purpose, the test development and scoring processes, validity and
reliability evidence, and guidelines for score interpretation (AERA et al. 1999).
Messick (1989) and Kane (2001) provide excellent summaries of contemporary
procedures for the conduct and reporting of validity studies, and Feldt and Brennan
(1989) and Brennan (2001) present similar guidance for reliability studies. Examples
of appropriate test score interpretations and illustrations of improper test score use
aid test takers and others in understanding the nature and meaning of score data.
• Publish test administration and score use guidelines.
Test developers promote fair testing practices for test takers by establishing
standardized testing and scoring procedures, and communicating these procedures
to test sponsors, administrators, and test takers. Administration guidelines should
include enough detail to permit others to replicate the conditions under which
validity, reliability, and normative data were collected (AERA et al. 1999). Test
administration information made available to test takers typically includes a
summary of the test directions, testing time allowances, and policies covering the
use of adjunct aids such as calculators and reference materials. Testing instructions
are often supplemented with practice exercises for test takers prior to the
operational testing period as a means of reducing construct-irrelevant test variance.
The impact of test-taking strategies and test-taker guessing behavior on test scores
should be made known to test takers as a means of increasing measurement fidelity
of the intended construct. The methods used to score, scale, and equate test results,
and the procedures in place for establishing passing scores and norms, if applicable
to the testing program, should be clearly and concisely communicated to test takers.
Illustrations and examples of score reports should be provided as a means of
promoting appropriate score use and interpretation.
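A classic example of a scoring policy that test takers need to understand is formula scoring, which penalizes guessing (a standard textbook formula, not one attributed here to any particular program):

\[
S = R - \frac{W}{C - 1}
\]

where R is the number of right answers, W the number wrong, and C the number of options per item. With five-option items, five blind guesses are expected to yield one right and four wrong answers, for an expected net gain of \(1 - 4/4 = 0\).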
• Review and modify published tests.
In some testing situations, the administration procedures recommended by test
developers are modified to accommodate the special needs of test takers with
disabling conditions. In other circumstances, the test administration procedures may
be adapted to meet an emergent condition (e.g., answering in a test booklet instead
of completing an answer sheet). Where feasible, the test developer should inform
test takers and test administrators of the comparability of scores obtained under
different test administration conditions. Common test modifications designed for
individuals with disabilities should be anticipated, where feasible, during the test
development process. These include modifying test presentation format, response
format, timing, and test setting (AERA et al. 1999). These modifications should be
clearly described in an accompanying technical manual, along with the rationale for
the modification and any limitations that use of the modified assessment would have
on the inferences that may be drawn from test results. Test developers are
responsible for evaluating the effects of changes in the content areas or constructs
assessed by the examination, and modifying the examination, as necessary, to
maintain the validity of score interpretations. The mere passage of time may not be
a sufficient cause for withdrawing or amending a test. However, item exposure and
curriculum changes are factors that may reduce the validity of test scores, and it is
the test developer’s obligation to monitor the effects of these and other factors on
the quality of the examination.
Scheduled test evaluation and revision activities can effectively maintain the
currency and relevance of test content, and minimize the impact of item exposure.
The administration of multiple forms of an examination with minimal overlap among
versions of the test is another effective means of maintaining the validity of test
score inferences. The distribution of sample tests to test takers, trainers, and
educators for little or no fee assists in this effort by eliminating the incentive of
retaining actual items or questions from current examination forms.
PERSONALITY AND PERSONAL DEVELOPMENT
MODULE 1: PERSONALITY

UNIT 1: DEFINITION AND THE CONCEPT OF SELF AND PERSONALITY

The self is an elusive concept which is never completely captured by any of the theorists. It
is more than the ego, more than the sum total of the factors that make up the individual; it
is less limited than the personality but contains it. Self is an individual’s awareness of his or
her own personal characteristics and level of functioning (Ciccarelli, White & Mishra, 2018).
Self-concept is the image of oneself that develops from interactions with significant
people in one’s life. Self can be active as well as passive.

The field of personality addresses three issues:

(1) human universals
It concerns the universal features of human nature and what is generally true of
people.
(2) individual differences
It addresses the ways in which individuals differ from one another and the existence
of a basic set of individual differences.
(3) individual uniqueness
It explains the uniqueness of an individual's personality in a scientific manner.

Different philosophers and psychologists indeed use the word personality differently. The
differences reflect their differing theoretical beliefs. But all personality psychologists use the
term personality to refer to psychological qualities that contribute to an individual’s
enduring and distinctive patterns of feeling, thinking, and behaving. Here, “enduring”
indicates that personality characteristics are qualities that are at least somewhat consistent
across time and across different situations of a person’s life. “Distinctive” means that
personality psychology addresses psychological features that differentiate people from one
another. Then, “contribute to” means that the personality psychologist searches for
psychological factors that causally influence, and thus at least partly explain, an individual’s
distinctive and enduring tendencies. Finally, “thinking, feeling and behaving” indicates
people's mental life, emotional experiences, and social behaviour.
UNIT 2: PERSONALITY DEVELOPMENT- CRITICAL PERIODS/INFLUENCES IN DEVELOPMENT

UNIT 3: INTRODUCTION TO STRUCTURE AND DYNAMICS OF PERSONALITY

Structure of Personality

The concept of personality structure refers to stable, enduring aspects of personality. They
represent the building blocks of personality theory. In other words, the enduring qualities
that define the individual and distinguish individuals from one another are what the
psychologist refers to as personality structures. The study of personality structure involves
two considerations, viz,

a) Unit of Analysis
Different theories employ different basic variables, or units of analysis, in their scientific models of personality structure.
The idea of units of analysis is important for understanding how personality theories
differ. One popular unit of analysis is that of a personality trait (disposition). The
word trait generally refers to a consistent style of emotion or behaviour that a
person displays across a variety of situations. A different unit of analysis is type. The
concept of type refers to the clustering of many different traits. Many psychologists
use units of analysis other than trait or type concepts. One prominent alternative is
to think of personality as a system. A system is a collection of highly interconnected
parts whose overall behaviour reflects not only the individual parts, but their
organization.
b) Hierarchy
A second consideration in the study of personality structure is that of hierarchy.
Theories of personality differ in the extent to which they view the structures of
personality as being organized hierarchically, with some structural units being higher
in order and therefore controlling the function of other units. Many well-known
systems are hierarchical, with higher-level subsystems regulating lower-level ones.
Theories that focus on personality traits also are hierarchical. A small set of basic
traits organizes lower-level personality tendencies.

Dynamics of Personality

Living beings are dynamic in nature. Personality process or dynamics refers to psychological
reactions that change dynamically, that is, that change over relatively brief periods of time.
Even though you are the same person from one moment to the next, your thoughts,
emotions, and desires often change rapidly and dramatically. This involves psychological
growth and development. Every theory should explain psychological growth, the obstacles to it, and the methods of overcoming them.

Note: Obstacles to growth- ways in which growth is delayed, thwarted, turned aside,
prevented or perverted. Therapy helps overcome obstacles and promote growth.

UNIT 4: INTRODUCTION TO THE VIEW OF HUMAN NATURE

An important aspect of any personality theory is the image of human nature formulated by
the theorist. Each theorist has a conception of human nature that addresses a number of
fundamental questions, issues that focus on the core of what it means to be human. The
various images of human nature offered by the theorists allow for a meaningful comparison
of their views. So, the ten basic views of human nature are elaborated below.

1. Freedom vs. Determinism


Theorists who take freedom as basic to human nature believe in the degree of
internal freedom human beings possess in directing and controlling their everyday
behaviour. Determinist theorists depict human behaviour as being controlled by
definable factors.
2. Rationality vs. Irrationality
This continuum concerns the degree to which our reasoning powers are capable
of influencing our everyday behaviour. For example, in Freud’s view human beings
are basically irrational and he has developed his personality theories on the basis of
this view of human nature.
3. Holism vs. Elementalism
Theorists who believe in holism assume that human beings can be understood only as
total entities. The more one fragments the organism, the more one is dealing with
abstractions and not the living human being. Elementalism says behaviour can be
explained only by investigating each specific, fundamental aspect of it
independently. One shouldn’t deny the specific factors underlying the overall
behaviour of people.
4. Constitutionalism vs. Environmentalism
It is based on the nature-nurture controversy. Personality theorists holding the
constitutionalist view maintain that human nature is shaped by genetic and biological
factors. On the other hand, environmentalist theorists depict a person’s personality
as shaped by environmental forces.
5. Changeability vs. Unchangeability
Theorists who believe in changeability hold that personality is dynamic and subject
to continuous change. Unchangeability posits the existence of an enduring core of
personality structure which underlies the individual’s behaviour throughout life.
6. Subjectivity vs. Objectivity
Subjectivity is the assumption that each person inhabits a highly personal, subjective
world of experience that is the major influence upon his/her behaviour. Conversely,
objectivity is the assumption that human behaviour is largely the result of external
and definable factors acting upon the person.
7. Proactivity vs. Reactivity
Proactivity theorists in personality assume that the sources of all behaviour reside
within the person. Reactivity is the assumption that the real causes of human
behaviour are completely external to the person, that behaviour is simply a series of
responses to external stimuli.
8. Homeostasis vs. Heterostasis
Personality theorists based on the view of homeostasis assume that individuals are
motivated primarily to reduce tension and maintain an internal state of equilibrium.
Conversely, heterostasis is the assumption that individuals are motivated
primarily towards growth, stimulus seeking and self-actualisation.
9. Knowability vs. Unknowability
Knowability is the assumption that principles governing human behaviour will
eventually be discovered through scientific inquiry. Unknowability is the basic
assumption that human behaviour transcends the potential of scientific
understanding.
10. Optimism vs. Pessimism
Some theorists’ views of the human personality are positive and hopeful, depicting
us as humanitarian, altruistic, and socially conscious. Other theorists find few of
these qualities in human beings, either individually or collectively. Theorists believing
in optimism assume that human beings are basically good, whereas pessimist theorists
believe that human nature is essentially evil, irrational and destructive.
MODULE 2: PSYCHODYNAMIC PERSPECTIVES

UNIT 1: FREUD

Sigmund Freud, the father of psychoanalysis, was born on May 6, 1856, in
Freiberg in Moravia. Freud’s thinking was an original synthesis of his exposure to
philosophical ideas, his training in scientific rigor, and his own contact with the unconscious.
So, his intellectual antecedents can be divided into the following:

INTELLECTUAL ANTECEDENTS: PHILOSOPHY, BIOLOGY, THE UNCONSCIOUS

• PHILOSOPHY

Freud was influenced by the philosopher Franz Brentano, whose lectures he attended,
and was also introduced to the ideas of Friedrich Nietzsche (Godde, 1991a). Freud’s ideas
are also close to those of Arthur Schopenhauer. They overlap in their view of the will,
the importance of sexuality in determining behaviour, the domination of reason by
the emotions, and the centrality of repression—the nonacceptance of what one
experiences (Godde, 1991b).

• BIOLOGY

Some of Freud’s faith in the biological origins of consciousness may be traced to
Brücke’s positions. Charcot demonstrated that it was possible to induce or relieve
hysterical symptoms with hypnotic suggestion.
• THE UNCONSCIOUS
Freud did not discover the unconscious. The ancient Greeks and the Sufis, among
others, recommended the study of dreams.

Major Concepts

His major concepts include a structural breakdown of the parts of the mind, its
developmental stages, what it does with energy, and what drives it.

a) Psychic determinism

Freud assumed that we have no discontinuities in mental life and that all thought
and all behaviour have meaning. He contended that nothing occurs randomly. There
is a cause, even multiple causes, for every thought, feeling, memory, or action. Every
mental event is brought about by conscious or unconscious intention and is
determined by the events that have preceded it.

b) The Mind as an Energy System

Freud’s theory of personality is fundamentally a theory of mind—a scientific model
of the overall architecture of mental structures and processes. To Freud, the body is a
mechanistic energy system. It follows, then, that the mind, being part of the body,
also is a mechanistic energy system. The mind derives its mental energies from the
overall physical energies of the body. The 19th-century physicist Hermann von
Helmholtz had presented the principle of conservation of energy: matter and energy
can be transformed but not destroyed. This interested Freud, and he called this form
of energy psychic energy. He held that psychic energy may be transformed into
physiological energy and vice versa.

c) Levels of Mind
i. Conscious

Consciousness is self-evident. The conscious is only a small portion of the
mind; it includes only what we are aware of in any given moment. In other
words, conscious is the sum total of the individual’s experience at any given
moment and the capacity of the individual to know the external objects and
influence them.

ii. Preconscious

The preconscious is a part of the unconscious. Accessible portions of memory
are part of the preconscious: for example, memories of everything a person
did yesterday, a middle name, street addresses, and the like. The preconscious
is like a holding area for the memories of a functioning consciousness. In other
words, the preconscious is the storehouse of surface memories which are readily
recallable though not conscious at the moment.

iii. Unconscious

Within the unconscious are instinctual elements that have never been
conscious and are never accessible to consciousness. In addition, certain
material has been barred—censored and repressed—from consciousness.
This material is neither forgotten nor lost, but neither is it remembered; it
still affects consciousness, but indirectly. So, the unconscious is a dynamic
process that cannot reach consciousness in spite of its effectiveness and
intensity, and that cannot be brought to consciousness by any act of will or
memory, though it remains dynamic and always strives to emerge.
Decades-old memories, when released into consciousness, have lost none of
their emotional force.

d) Impulses

Impulse (Trieb in German) has been incorrectly translated in older textbooks as
“instinct” (Bettelheim, 1982, pp. 87–88). Impulses or drives are pressures to act
without conscious thought toward particular ends. Such impulses are “the ultimate
cause of all activity” (Freud, 1940, p. 5). Freud labelled the physical aspects of
impulses “needs” and the mental aspects of impulses “wishes.” Needs and wishes
propel people to take action. All impulses have four components: a source, an aim,
an impetus, and an object. The source, where the need arises, may be a part or all of
the body. The aim is to reduce the need until no more action is necessary, that is, to
give the organism the satisfaction it now desires. The impetus is the amount of
energy, force, or pressure used to satisfy or gratify the impulse. This is determined
by the urgency of the underlying need. The object of an impulse is whatever thing or
action allows satisfaction of the original desire. For example, consider how these
components appear in a hungry person (hunger being the impulse):

SOURCE: the hypothalamus (where the need arises)
AIM: satiety (reduction of the need)
IMPETUS: the energy/pressure to satisfy the hunger
OBJECT: food

Freud assumed that a normal, healthy pattern aims to reduce tension to previously
acceptable levels. The complete cycle of behaviour from relaxation to tension and
activity and back to relaxation is called a tension-reduction model. Tensions are
resolved by returning the body to the state of equilibrium that existed before the
need arose.

e) Basic impulses
Freud developed two descriptions of basic impulses. The early model described two
opposing forces: the sexual or life-maintaining eros (more generally, the erotic or
physically gratifying) and the aggressive or destructive thanatos. Later, he described
these forces more globally as either life supporting or death (and destruction)
encouraging.
f) Libido and Aggressive Energy

Each of these generalized impulses has a separate source of energy. Libido (from the
Latin word for wish or desire) is the energy available to the life impulses. “Its
production, increase or diminution, distribution, and displacement should afford us
possibilities for explaining the psychosexual phenomena observed” (Freud, 1905a,
p. 118). One characteristic of libido is its “mobility”—the ease with which it can pass
from one area of attention to another. The libido can be attached to or invested in
objects, a concept Freud called cathexis. Aggressive energy, or the death impulse,
has no special name. It has been assumed to have the same general properties as
libido.

g) Cathexis
Cathexis is the process by which the available libidinal energy in the psyche is
attached to or invested in a person, idea, or thing. Libido that has been cathected is
no longer mobile and can no longer move to new objects. It is rooted in whatever
part of the psyche has attracted and held it. The German word Freud used,
Besetzung, means both “to occupy” and “to invest.” If you imagine your store of
libido as a given amount of money, cathexis is the process of investing it. Once a
portion has been invested or cathected, it remains there, leaving you with that much
less to invest elsewhere.

Structure of Personality

The personality is made up of three major systems, viz,

1. The id

The id is the original core out of which the rest of the personality emerges. It is based
on the pleasure principle. It is biological in nature and contains the reservoir of
psychic energy for the whole personality. The id itself is primitive and unorganized.
“The logical laws of thought do not apply in the id” (Freud, 1933, p. 73). Moreover,
the id is not modified as one grows and matures. The id is not changed by experience
because it is not in contact with the external world. Its goals are simple and direct:
reduce tension, increase pleasure, and minimize discomfort. The id strives to do this
through reflex actions and by using other portions of the mind. To accomplish the
aim of avoiding pain and obtaining pleasure, the id has two processes: reflex actions
and primary processes. Reflex actions are automatic and inborn reactions like
sneezing and blinking; they usually reduce tension immediately. The primary process
involves a somewhat more complicated psychological reaction. It attempts to
discharge tension by forming an image of an object that will remove the tension.
This hallucinatory experience in which the desired object is present in the form of
memory image is called wish-fulfilment. It represents the inner world of subjective
experience and has no knowledge of objective reality. The contents of the id are
almost entirely unconscious. They include primitive thoughts that have never been
conscious and thoughts that have been denied because they were found unacceptable
to consciousness. According to Freud, experiences denied or repressed can still affect a
person’s behaviour with undiminished intensity without being subject to conscious
control.

2. The Ego

The ego is the part of the psyche in contact with external reality. The ego operates
according to the reality principle. The ego originally develops out of the id, as the infant
becomes aware of its own identity, to serve and placate the id’s repeated demands.
In order to accomplish this, the ego protects the id but also draws energy from it. It
has the task of ensuring the health, safety, and sanity of the personality. Its principal
characteristics include control of voluntary movement and those activities that tend
toward self-preservation. It becomes aware of external events, relates them to past
events, then through activity either avoids the condition, adapts to it, or modifies the
external world to make it safer or more comfortable. The ego acts to
regulate the level of tension produced by internal or external stimuli. A rise in
tension is felt as discomfort, while a lowering of tension is felt as pleasure. Therefore,
the ego pursues pleasure and seeks to avoid or minimize pain. Thus, the ego is
originally created by the id in an attempt to cope with stress. Ego tries to obtain
pleasure and avoid pain by means of the secondary process. The secondary process is
realistic thinking. By means of the secondary process the ego formulates a plan for
satisfying the need and then tests the plan, usually by some kind of action, to see
whether or not it will work. This is called reality testing. In order to perform its role
efficiently, the ego has control over all the
cognitive and intellectual functions; these higher mental processes are placed at the
service of the secondary process. The act of dating provides an example of how the
ego controls sexual impulses. The id feels tension arising from unfulfilled sexual
arousal and, without the ego’s influence, would reduce this tension through
immediate and direct sexual activity. Within the confines of a date, however, the ego
can determine how much sexual expression is possible and how to establish
situations in which sexual contact is most fulfilling. The principal role of the ego is to
mediate between the instinctual/impulsive requirements of the organism and the
conditions of the surrounding environment. It has three harsh masters: the id,
external reality and the superego.

3. Superego

This last part of the personality’s structure develops from the ego. It operates on the
moralistic principle. The superego serves as a judge or censor over the activities and
thoughts of the ego. It is the repository of moral codes, standards of conduct, and
those constructs that form the inhibitions for the personality. Freud describes three
functions of the superego: conscience, self-observation, and the formation of ideals.
The superego develops, elaborates, and maintains the moral code of an individual.
The superego has two subsystems: conscience and ego ideal. As conscience, the
superego acts to restrict, prohibit, or judge conscious activity, but it also acts
unconsciously. The unconscious restrictions are indirect, appearing as compulsions
or prohibitions. The ego ideal consists of the approved moral codes and standards of
conduct. The mechanism by which conscience and ego ideal are incorporated is
called introjection: the child takes in, or introjects, the moral standards of the parents.
The conscience punishes the person by making him/her feel guilty; the ego ideal
rewards the person by making him/her feel proud. The main functions of the superego
are:

a. to inhibit the impulses of the id, particularly those of a sexual or aggressive nature,
since these are the impulses whose expression is most highly condemned by
society.
b. to persuade the ego to substitute moralistic goals for realistic ones.
c. to strive for perfection.
The id, ego and superego work together as a team under the administrative leadership of
the ego. The overarching goal of the psyche is to maintain—and when it is lost, to regain—
an acceptable level of dynamic equilibrium that maximizes the pleasure of tension
reduction. The energy used originates in the primitive, impulsive id. The ego exists to deal
realistically with the basic drives of the id. It also mediates between the demands of the id,
the restrictions of the superego, and external reality. The superego, arising from the ego,
acts as a moral brake or counterforce to the practical concerns of the ego. It sets guidelines
that define and limit the ego’s flexibility. In a very general way, the id may be thought of as
the biological component, the ego as the psychological component and the superego as the
social component of personality. Psychoanalysis, the therapeutic method that Freud
developed, has a primary goal to strengthen the ego, to make it independent of the overly
strict concerns of the superego, and to increase its capacity to become aware of and control
material formerly repressed or hidden in the id.

Levels of Personality

Freud’s original conception divided personality into three levels: the conscious, the
preconscious, and the unconscious. The conscious, as Freud defined the term, corresponds
to its ordinary everyday meaning. It includes all the sensations and experiences of which we
are aware at any given moment. Freud considered the conscious a limited aspect of
personality because only a small portion of our thoughts, sensations, and memories exists in
conscious awareness at any time. He likened the mind to an iceberg. The conscious is the
portion above the surface of the water—merely the tip of the iceberg.

More important, according to Freud, is the unconscious, that larger, invisible portion below
the surface. This is the focus of psychoanalytic theory. Its vast, dark depths are the home of
the instincts, those wishes and desires that direct our behaviour. The unconscious contains
the major driving power behind all behaviours and is the repository of forces we cannot see
or control.

Between these two levels is the preconscious. This is the storehouse of memories,
perceptions, and thoughts of which we are not consciously aware at the moment but that
we can easily summon into consciousness. We often find our attention shifting back and
forth from experiences of the moment to events and memories in the preconscious.

Psychosexual Stages of Personality

For Freud, the first few years of life are decisive for the formation of personality. So
important did Freud consider childhood experiences that he said the adult personality was
firmly shaped and crystallized by the fifth year of life. The stages are termed “psychosexual”
because it is the sexual urges that drive the acquisition of psychological characteristics. A
person’s unique character type develops in childhood largely from parent–child interactions.
The child tries to maximize pleasure by satisfying the id demands, while parents, as
representatives of society, try to impose the demands of reality and morality. Each stage is
defined by an erogenous zone of the body. In each developmental stage a conflict exists
that must be resolved before the infant or child can progress to the next stage. Sometimes a
person is reluctant or unable to move from one stage to the next because the conflict has
not been resolved or because the needs have been so supremely satisfied by an indulgent
parent that the child doesn’t want to move on. In either case, the individual is said to be
fixated at this stage of development. In fixation, a portion of libido or psychic energy
remains invested in that developmental stage, leaving less energy for the following stages.
The stages of psychosexual development of personality are substantiated below.

1. The Oral Stage


Age: 0-2 years
Erogenous zone: mouth

The oral stage begins at birth, when both needs and gratification primarily involve
the lips, tongue, and somewhat later, the teeth. The basic drive of the infant is not
social or interpersonal; it is simply to take in nourishment and to relieve the tensions
of hunger, thirst, and fatigue. During feeding and when going to sleep, the child is
soothed, cuddled, and rocked. The child associates both pleasure and the reduction
of tension with these events. The mouth is the first area of the body that the infant
can control; most of the libidinal energy available is initially directed or focused in
this area. Since the oral stage occurs at a time when the baby is almost completely
dependent upon its mother for sustenance, feelings of dependency arise during this
period. As the child matures, other parts of the body develop and become important
sites of gratification. However, some energy remains permanently affixed or
cathected to the means for oral gratification. Adults have well-developed oral habits
and a continued interest in maintaining oral pleasures. Eating, sucking, chewing,
smoking, biting, and licking or smacking one’s lips are physical expressions of these
interests. Constant nibblers, smokers, and those who often overeat may be partially
fixated in the oral stage. The late oral stage, after teeth have appeared, includes the
gratification of the aggressive instincts. Biting the breast, which causes the mother
pain and leads to the actual withdrawal of the breast, is an example of this kind of
behaviour. Adult sarcasm, tearing at one’s food, and gossip have been described as
being related to this developmental stage. It is normal to retain some interest in oral
pleasures. Oral gratification can be looked upon as pathological only if it is a
dominant mode of gratification, that is, if a person is excessively dependent on oral
habits to relieve anxiety or tension unrelated to hunger or thirst.

THE ORAL STAGE (0-2 yrs) is divided into the oral sucking period (birth-8
months) and the oral biting period (6-18 months).

2. The Anal Stage


Age: 1.5-4 years
Erogenous zone: anus

As the child grows, new areas of tension and gratification come into awareness.
Between the ages of 2 and 4, children generally learn to control their anal sphincter
and bladder. The child pays special attention to urination and defecation. Toilet
training prompts a natural interest in self-discovery. The rise in physiological control
is coupled with the realization that such control is a new source of pleasure. In
addition, children quickly learn that the rising level of control brings them attention
and praise from their parents. Adult characteristics that are associated with partial
fixation at the anal stage are excessive orderliness, parsimoniousness, and obstinacy.
Depending on the particular method of toilet training used by the mother and her
feelings concerning defecation, the consequences of this training may have far-
reaching effects upon the formation of specific traits and values. If the mother is very
strict and repressive in her methods, then the child may hold back its faeces and
become constipated. This can also lead to the development of a retentive character.
On the other hand, if the mother is the type of person who pleads with her child to
have a bowel movement and who praises the child extravagantly for doing so, the
child will acquire the notion of its importance. Part of the confusion that can
accompany the anal stage arises from the apparent contradiction between lavish
praise and recognition, on the one hand, and the idea that toilet behaviour is “dirty”
and should be kept a secret, on the other. Having been praised for producing it, the
child may be surprised and confused if the parents react with disgust.

THE ANAL STAGE (1.5-4 yrs) is divided into the anal expulsive period (8
months-3 yrs) and the anal retentive period (12 months-4 yrs).

3. The Phallic Stage


Age: 3-6 years
Erogenous zone: genitals

Freud maintained that this stage is best characterized as phallic, because it is the
period when a child becomes aware either of having a penis or of lacking one. This is
the first stage in which children become conscious of sexual differences. The sexual
excitement is linked in the child’s mind with the close physical presence of the
parents. The craving for this contact becomes increasingly more difficult for the child
to satisfy; the child is struggling for the intimacy that the parents share with each
other. This stage is characterized by the child’s wanting to get into bed with the
parents and becoming jealous of the attention the parents give to each other. Freud
concluded from his observations that during this period both males and females
develop fears about sexual issues. Freud saw children in the phallic stage reacting to
their parents as potential threats to the fulfilment of their needs. Thus, for the boy
who wishes to be close to his mother, the father takes on some of the attributes of a
rival. At the same time, the boy wants his father’s love and affection, for which his
mother is seen as a rival. The child is in the untenable position of wanting and
fearing both parents. In boys, Freud called this conflict the Oedipus complex. The boy
also fears his father and is afraid that he, a child, will be castrated by him.
around castration (castration anxiety), the fear and love for the father as well as the
love and sexual desire for the mother, can never be fully resolved. In childhood, the
entire complex is repressed. Castration anxiety induces a repression of the sexual
desire for the mother and hostility towards the father. It also helps to bring about an
identification of the boy with his father. Among the first tasks of the developing
superego are to keep this disturbing conflict out of consciousness and to protect the
child from acting it out. The Electra complex is a psychoanalytic term used to
describe a girl’s romantic feelings toward her father and anger toward her mother. It
is comparable to the Oedipus complex. Freud believed a young girl is initially
attached to her mother. After she discovers that she does not have a penis, she
begins to resent her mother who she blames for her “castration,” and becomes
attached to her father. The girl then begins to identify with her mother out of fear of
losing her love. While the term Electra complex is frequently associated with Freud,
it was actually Carl Jung who coined the term in 1913. Freud actually rejected the
term and felt it overemphasized similarities between men and women. Instead,
Freud used the term feminine Oedipus attitude to describe the Electra complex: the
girl wishes to possess her father, and she sees her mother as the major rival. While
boys repress their feelings partly out of fear of castration, girls repress their desires
in a less severe and less total fashion. This lack of intensity allows the girl to “remain
in the Oedipus situation for an indefinite period” (Freud, 1933, p. 129).

4. The Latency Period


Age: 6 years – puberty

Whatever form the resolution of the struggle actually takes, most children seem to
modify their attachment to their parents sometime after 5 years of age and turn to
relationships with peers and to school activities, sports, and other skills. It is a time
when the unresolvable sexual desires of the phallic stage are successfully repressed
by the superego. This is the period when the child develops intellectually, physically,
and socially. Psychic energy is latent or at its minimum. Here, sexuality makes no
progress. The latency period is not a
psychosexual stage of development. The sex instinct is dormant, temporarily
sublimated in school activities, hobbies, and sports and in developing friendships
with members of the same sex.

5. The Genital Stage


Age: puberty onwards
Erogenous zone: genitals

The final period of biological and psychological development, the genital stage,
occurs with the onset of puberty and the consequent return of libidinal energy to the
sexual organs. Now boys and girls are made aware of their separate sexual identities
and begin to look for ways to fulfil their erotic and interpersonal needs. Freud
believed that homosexuality, at this stage, resulted from a lack of adequate
development. Freud believed that the conflict during this period is less intense than
in the other stages. The adolescent must conform to societal sanctions and taboos
that exist concerning sexual expression, but conflict is minimized through
sublimation. In this stage the person becomes transformed from a pleasure-seeking,
narcissistic infant into a reality-oriented, socialized adult. The principal biological
function of the genital stage is that of reproduction; the psychological aspects help to
achieve this end by providing a certain measure of stability and security.

Penis Envy

Freud’s ideas about women were based initially on biological differences between men and
women. The most controversial of these ideas is penis envy. Penis envy—the girl’s
desire for a penis and her related realization that she is “lacking” one—is a critical juncture
in female development. In other words, the envy the female feels toward the male because
the male possesses a penis; this is accompanied by a sense of loss because the female does
not have a penis. This theory proposes that the girl’s penis envy persists as a feeling of
inferiority and predisposes her to jealousy. The girl blames her mother for her supposedly
inferior condition and consequently comes to love her mother less. She may even hate the
mother for what she imagines the mother did to her. She comes to envy her father and
transfers her love to him because he possesses the highly valued sex organ. Penis envy is the
female counterpart of castration anxiety. “The discovery that she is castrated is a turning
point in a girl’s growth. Three possible lines of development diverge from it: one leads to
sexual inhibition and to neurosis, the second to a modification of character in the sense of
masculinity complex, and the third to normal femininity” (Freud, 1933, p. 126). The boy, on
the other hand, fears that he may lose his penis (castration anxiety). Penis envy and castration anxiety
are together called castration complex. Her perpetual desire for a penis, or “superior
endowment,” is, in the mature woman, converted to the desire for a child, particularly for a
son, “who brings the longed-for penis with him” (1933). The woman is never decisively
forced to renounce her Oedipal strivings out of castration anxiety. As a consequence, the
woman’s superego is less developed and internalized than that of men.

Dynamics of Personality

The dynamics of personality are to a large extent governed by the necessity for gratifying one’s
needs by means of transactions with objects in the external world. The individual’s
customary reaction to external threats of pain and destruction with which it is not prepared
to cope is to become afraid. The threatened person is ordinarily a fearful person.
When overwhelmed by excessive stimulation that it is unable to bring under control, the
ego becomes flooded with anxiety.

Anxiety

The psyche’s major problem is how to cope with anxiety. Anxiety is triggered by an
expected or foreseen increase in tension or displeasure; it can develop in any situation (real
or imagined) when the threat to some part of the body or psyche is too great to be ignored
or mastered. Events with a potential to cause anxiety include but are not limited to the
following:

1. Loss of a desired object—for example, a child deprived of a parent, close friend, or pet.

2. Loss of love—for example, rejection, failure to win back the love or approval of someone
who matters to you.

3. Loss of identity—for example, castration fears or loss of self-respect.

4. Loss of love for self—for example, superego disapproval of traits, as well as any act that
results in guilt or self-hate.

Freud identified three types of anxiety: objective (reality) anxiety, neurotic anxiety, and
moral anxiety.
Objective anxiety (reality anxiety) occurs in response to real, fear-inducing, external
threats; the ego fears losing control of a real situation, as when a hiker runs from a bear. In
neurotic anxiety, conflict is felt due to a clash between the id and the ego. For example, a
woman fears that her sexual attraction (id) toward her male co-worker will overcome her
conscious control (ego). Finally, in moral anxiety, the ego and superego conflict. For
example, a student’s superego demands that all of his assignments be perfectly error-free,
a standard his ego cannot meet. Overall, during each type of anxiety, the ego is faced with
the demanding task of balancing the realities of the world, the impulses of the id, and the
demands of the superego.

Humans attempt to lessen their anxiety in two general ways. The first is to deal with the
situation directly. We overcome obstacles, either confront or run from threats, and resolve
or come to terms with problems in order to minimize their impact. The alternative approach
is defensive: either the situation is distorted, or it is directly denied. The ego protects the
whole personality against the threat by falsifying the nature of the threat. The ways in which
we accomplish the distortions are called defense mechanisms.

NOTE: Anxiety that cannot be dealt with by effective measures is said to be traumatic. It
reduces the person to a state of infantile helplessness. In fact, the prototype of all later
anxiety is the birth trauma.

UNIT 2: ALFRED ADLER

[Refer Frager; photostat]


UNIT 3: JUNG

Carl Gustav Jung was born in Switzerland on July 26, 1875. Jungian psychology focuses on
establishing and fostering the relationship between conscious and unconscious processes.
One of Jung’s central concepts is individuation, his term for a process of personal
development that involves establishing a connection between the ego and the self. The ego
is the center of consciousness; the self is the center of the total psyche, including both the
conscious and the unconscious. Jung recognized constant interplay between the two. They
are not separate but are two aspects of a single system. Individuation is the process of
developing wholeness by integrating all the various parts of the psyche.

Books:

Memories, Dreams, Reflections (1961) – Autobiography

The Psychology of Dementia Praecox (1907) – First book

Symbols of Transformation (1912) – Challenged Freud’s basic idea

Jung as a Neo-Freudian

The first point on which Jung came to disagree with Freud was the role of sexuality. Jung
broadened Freud’s definition of libido by redefining it as a more generalized psychic energy
that includes sex but is not restricted to it. The second major area of disagreement concerns
the direction of the forces that influence personality. Whereas Freud viewed human beings
as prisoners or victims of past events, Jung argued that we are shaped by our future as well
as our past. We are affected not only by what happened to us as children, but also by what
we aspire to do in the future. The third significant point of difference revolves around the
unconscious. Rather than minimizing the role of the unconscious, Jung placed an even
greater emphasis on it than Freud did. He probed more deeply into the unconscious and
added a new dimension: the inherited experiences of human and prehuman species
(Collective unconscious). Although Freud had recognized this phylogenetic aspect of
personality (the influence of inherited primal experiences), Jung made it the core of his
system of personality. He combined ideas from history, mythology, anthropology, and
religion to form his image of human nature.

Intellectual Antecedents

Freud and Psychoanalysis

Although Jung was already a practicing psychiatrist before he met Freud, Freud’s theories
were clearly among the strongest influences on Jung’s thinking. Freud’s The Interpretation
of Dreams (1900) inspired Jung to attempt his own approach to dream and symbol analysis.
Freud’s theories of unconscious processes also gave Jung his first glimpse into the
possibilities of systematically analyzing the dynamics of mental functioning. Jung’s personal
unconscious is similar to Freud’s conception of the unconscious. The contents of the
collective unconscious, also known as the impersonal or transpersonal unconscious, are
universal and not rooted in our personal experience. This concept is perhaps Jung’s greatest
departure from Freud, as well as his most significant contribution to psychology.

Goethe and Nietzsche

Goethe’s Faust had a major influence on Jung’s understanding of the psyche and provided
an insight into the power of evil and its relation to growth and self-insight. Nietzsche also
had a profound effect on Jung. He believed that Nietzsche’s work possessed great
psychological insight even though Nietzsche’s fascination with power tended to distort his
portrait of the mature and free human being. Jung saw Nietzsche and Freud as
representatives of the two greatest themes in Western culture—power and eros. He
believed that both men had unfortunately become so deeply involved in these two vital
themes that they were almost obsessed by them.
Alchemy and Gnosticism

Jung searched for Western traditions that dealt with the development of consciousness. He
was especially interested in the symbols and concepts used to describe this process. Jung
found invaluable ideas in gnosticism, a mystical movement from early Christianity. Jung also
discovered the Western alchemical literature, long dismissed as magical, prescientific
nonsense. He interpreted the alchemical treatises as representations of inner change and
purification disguised in chemical and magical metaphors.

“Only after I had familiarized myself with alchemy did I realize that the unconscious is a
process, and that the psyche is transformed or developed by the relationship of the ego to
the content of the unconscious” (Jung, 1936b, p. 482).

Eastern Thought

Jung discovered that Eastern descriptions of spiritual growth, inner psychic development,
and integration closely corresponded to the process of individuation that he had observed
in his Western patients. Jung was particularly interested in the mandala as an image of the
self and of the individuation process. (Mandala is the Sanskrit word for circle, or a circular
design or diagram frequently used in meditation and other spiritual practices.) He found
that his patients spontaneously produced mandala drawings even though they were
completely unfamiliar with Eastern art or philosophy. Mandalas tend to appear in the
drawings of patients who have made considerable progress in their own individuation. The
center of the drawing stands for the self, which comes to replace the limited ego as the
center of the personality, and the circular diagram as a whole represents the balance and
order that develops in the psyche as the individuation process continues. Jung’s ideas were
strongly affected by India and Indian thought. Jung strongly held that most spiritual
traditions, East and West, have become rigid systems imposed on the individual rather than
ways of eliciting each individual’s own unique pattern of inner growth.

Major Concepts
In Jung’s view, the total personality, or psyche, is composed of several distinct systems or
structures that can influence one another.

The Attitudes: Introversion and Extraversion

Jung found that individuals can be characterized as either primarily inward-oriented or
primarily outward-oriented. The introvert is more comfortable with the inner world of
thoughts and feelings. The extravert feels more at home with the world of objects and other
people. No one is a pure introvert or a pure extravert. Jung compared the two processes to
the heartbeat, with its rhythmic alternation between the cycle of contraction (introversion)
and the cycle of expansion (extraversion). However, each individual tends to favor one or
the other attitude and operates more often in terms of the favored attitude. In other words,
according to Jung, everyone has the capacity for both attitudes, but only one becomes
dominant in the personality. The dominant attitude then tends to direct the person’s
behavior and consciousness. The nondominant attitude remains influential, however, and
becomes part of the personal unconscious, where it can affect behavior. If the ego is
predominantly extraverted in its relation to the world, the personal unconscious will be
introverted.

Introverts see the world in terms of how it affects them, and extraverts are more concerned
with their impact upon the world. Extraverts are open, sociable, and socially assertive,
oriented toward other people and the external world. Introverts are withdrawn and often
shy, and they tend to focus on themselves, on their own thoughts and feelings. Introverts
tend to be introspective in nature. One danger for such people is that as they become
immersed in their inner world, they may lose touch with the world around them. The
extraverted attitude orients the person towards the external, objective world; the
introverted attitude orients the person towards the inner, subjective world.

Introversion and extraversion are mutually exclusive attitudes at any given moment; one cannot adopt both at once. The ideal is to be flexible and to adopt
whichever attitude is more appropriate in a given situation—to operate in dynamic balance
between the two and not develop a fixed, rigid way of responding to the world.
The Functions: Thinking, Feeling, Sensation, Intuition [Jung’s Typological Theory]

Jung proposed additional distinctions among people based on differences in the strategies
they employ to acquire and process information, which he called psychological functions.
There are four fundamental psychological functions: thinking, feeling, sensing, and intuiting.

Thinking and feeling are alternative ways of forming judgments and making decisions.
Thinking is concerned with objective truth, judgment, and impersonal analysis. Thinking is
ideational and intellectual. By thinking, humans try to comprehend the nature of the world
and themselves. Those individuals in whom the thinking function predominates are the
greatest planners; however, they tend to hold on to their plans and abstract theories even
when confronted by new and contradictory evidence.

Feeling is focused on value. It may include judgments of good versus bad and right versus
wrong. Feeling is the evaluative function. The feeling function gives humans their subjective
experiences of pleasure and pain, of anger, fear, sorrow, joy and love.

Jung classified sensation and intuition together as ways of gathering information. Sensation
refers to a focus on direct sense experience, perception of details, and concrete facts: what
one can see, touch, and smell. Tangible, immediate experience is given priority over
discussion or analysis of experience. Sensing types tend to respond to the immediate
situation and deal effectively and efficiently with all sorts of crises and emergencies. They
generally work better with tools and materials than do any of the other types. Sensing is the
perceptual or reality function. It yields concrete facts or representation of the world.

Intuition is a way of comprehending perceptions in terms of possibilities, past experience,
future goals, and unconscious processes. The implications of experience are more important
to intuitives than the actual experience itself. Strongly intuitive people add meaning to their
perceptions so rapidly that they often cannot separate their interpretations from the raw
sensory data. Intuitives integrate new information quickly, automatically relating past
experience and relevant information to immediate experience. Because it often includes
unconscious material, intuitive thinking appears to proceed by leaps and bounds. In other
words, intuition is perception by way of unconscious processes and subliminal contents. The
intuitive person goes beyond facts, feelings and ideas in his/her search for the essence of
reality.

Here thinking and feeling are called rational functions. Sensation and intuition are
considered to be irrational functions.

Each function may be experienced in an introverted or an extraverted fashion. Generally,
one of the functions is more conscious, developed, and dominant. Jung called this the
superior function. It operates out of the dominant attitude (either extraversion or
introversion). Usually one of the four functions is more highly differentiated than the other
three and plays a predominant role in consciousness. One of the other three remaining
functions is generally deep in the unconscious and less developed. Jung called this the
inferior function. It is the least developed function in an individual. Inferior function is the
least conscious and the most primitive, or undifferentiated. It is repressed and expresses
itself in dreams and fantasies. Jung said that it is through our inferior function, that which is
least developed in us, that we see God. By struggling with and confronting inner obstacles,
we can come closer to the Divine. For the individual, a combination of all four functions
results in a well-rounded approach to the world.

Unfortunately, no one develops all four functions equally well. Each individual has one
dominant function and one partially developed auxiliary function. The other two functions
are generally unconscious and operate with considerably less effectiveness. The more
developed and conscious the dominant and auxiliary functions, the more deeply
unconscious are their opposites. One’s function type indicates the relative strengths and
weaknesses and the style of activity one tends to prefer. Jung’s typology is especially useful
in helping us understand social relationships; it describes how people perceive in alternate
ways and use different criteria in acting and making judgments.

Thinking: “What does this mean?”

Feeling: “What value does this have?”

Sensing: “What exactly am I perceiving?”

Intuiting: “What might happen, what is possible?”

Jung’s Psychological Types

Jung proposed eight psychological types, based on the interactions of the two attitudes and
four functions.

The extraverted thinking type lives strictly in accordance with society’s rules. These people
tend to repress feelings and emotions, to be objective in all aspects of life, and to be
dogmatic in thoughts and opinions. They may be perceived as rigid and cold. They tend to
make good scientists because their focus is on learning about the external world and using
logical rules to describe and understand it.

The extraverted feeling type tends to repress the thinking mode and to be highly emotional.
These people conform to the traditional values and moral codes they have been taught.
They are unusually sensitive to the opinions and expectations of others. They are
emotionally responsive and make friends easily, and they tend to be sociable and
effervescent. Jung believed this type was found more often among women than men.

The extraverted sensing type focuses on pleasure and happiness and on seeking new
experiences. These people are strongly oriented toward the real world and are adaptable to
different kinds of people and changing situations. Not given to introspection, they tend to
be outgoing, with a high capacity for enjoying life.

The extraverted intuiting type finds success in business and politics because of a keen ability
to exploit opportunities. These people are attracted by new ideas and tend to be creative.
They are able to inspire others to accomplish and achieve. They also tend to be changeable,
moving from one idea or venture to another, and to make decisions based more on hunches
than on reflection. Their decisions, however, are likely to be correct.
The introverted thinking type does not get along well with others and has difficulty
communicating ideas. These people focus on thought rather than on feelings and have poor
practical judgment. Intensely concerned with privacy, they prefer to deal with abstractions
and theories, and they focus on understanding themselves rather than other people. Others
see them as stubborn, aloof, arrogant, and inconsiderate.

The introverted feeling type represses rational thought. These people are capable of deep
emotion but avoid any outward expression of it. They seem mysterious and inaccessible and
tend to be quiet, modest, and childish. They have little consideration
for others’ feelings and thoughts and appear withdrawn, cold, and self-assured.

The introverted sensing type appears passive, calm, and detached from the everyday world.
These people look on most human activities with benevolence and amusement. They are
aesthetically sensitive, expressing themselves in art or music, and tend to repress their
intuition.

The introverted intuiting type focuses so intently on intuition that people of this type have
little contact with reality. These people are visionaries and daydreamers— aloof,
unconcerned with practical matters, and poorly understood by others. Considered odd and
eccentric, they have difficulty coping with everyday life and planning for the future.

Extraverted thinking: logical, objective, dogmatic

Extraverted feeling: emotional, sensitive, sociable; more typical of women than men

Extraverted sensing: outgoing, pleasure-seeking, adaptable

Extraverted intuiting: creative, able to motivate others and to seize opportunities

Introverted thinking: more interested in ideas than in people

Introverted feeling: reserved, undemonstrative, yet capable of deep emotion

Introverted sensing: outwardly detached, expressing themselves in aesthetic pursuits

Introverted intuiting: more concerned with the unconscious than with everyday reality

The Unconscious

Jung emphasizes that the unconscious cannot be known directly and thus must be described in
relationship to consciousness. The unconscious, he believes, theoretically has no limit.
Furthermore, Jung divides the unconscious into the personal unconscious and the collective
unconscious.

Personal Unconscious

The material in the personal unconscious comes from the individual’s past. This formulation
corresponds to Freud’s concept of the unconscious. The personal unconscious is composed
of memories that are painful and have been repressed, as well as memories that are
unimportant and have simply been dropped from conscious awareness. The personal
unconscious also holds parts of the personality that have never come to consciousness. It is
also similar to Freud’s conception of the preconscious. The personal unconscious is a region
adjoining the ego. There is a great deal of two-way traffic between the personal
unconscious and ego. (For example, our attention can wander readily from this printed page
to a memory of something we did yesterday. All kinds of experiences are stored in the
personal unconscious; it can be likened to a filing cabinet. Little mental effort is required to
take something out, examine it for a while, and put it back, where it will remain until the
next time we want it or are reminded of it.)

Complexes are a part of the personal unconscious. A complex is a core or pattern of emotions,
memories, perceptions, and wishes in the personal unconscious organized around a
common theme such as power or status. A person who has a complex is preoccupied with that
theme to the point where it influences behavior. By directing thoughts and behavior in
various ways, the complex determines how the person perceives the world. Complexes may
be conscious or unconscious. Those that are not under conscious control can intrude on and
interfere with consciousness. The person with a complex is generally not aware of its
influence, although other people may easily observe its effects. Some complexes may be
harmful, but others can be useful. For example, a perfection or achievement complex may
lead a person to work hard at developing particular talents or skills. Jung believed that
complexes originate not only from our childhood and adult experiences, but also from our
ancestral experiences, the heritage of the species contained in the collective unconscious. In
other words, a complex is an organized group or constellation of feelings, thoughts,
perceptions and memories that exist in the personal unconscious.

[Mother complex]

Collective Unconscious

The collective unconscious is Jung’s boldest and most controversial concept. Jung identifies
the collective, or transpersonal unconscious as the center of all psychic material not derived
from personal experience. Its contents and images appear to be shared with people of all
time periods and all cultures, and it reflects humanity’s collective evolutionary history. Jung
postulates that the infant mind already possesses a structure that moulds and channels all
further development and interaction with the environment (in contrast to the notion of the
tabula rasa). This basic
structure is essentially the same in all infants. Although we develop differently and become
unique individuals, the collective unconscious is common to all people and therefore
exhibits the same basic pattern in everyone. We are born with a psychological heritage as
well as a biological heritage, according to Jung. This heritage is passed to each new
generation (Both are important determinants of behavior and experience). Thus, Jung linked
each person’s personality with the past, not only with childhood but also with the history of
the species. That is, the collective unconscious, which results from experiences that are common
to all people, also includes material from our prehuman and animal ancestry. It is the source
of our most powerful ideas and experiences.

People have always had a mother figure, for example, and have experienced birth and
death. They have faced unknown terrors in the dark, worshipped power or some sort of
godlike figure, and feared an evil being. The universality of these experiences over countless
evolving generations leaves an imprint on each of us at birth and determines how we
perceive and react to our world. These ideas had not been transmitted or communicated
orally or in writing from one culture to another. In addition, Jung’s patients, in their dreams
and fantasies, recalled and described for him the same kinds of symbols he had discovered
in ancient cultures. He could find no other explanation for these shared symbols and themes
over such vast geographical and temporal distances than that they were transmitted by and
carried in each person’s unconscious mind.

So, the collective unconscious is the storehouse of latent memory traces inherited from
one’s ancestral past, a past that includes not only the racial history of humans as a separate
species but also their prehuman or animal ancestry as well. All human beings have more or
less the same collective unconscious.

[Read photostat of Hall, Lindzey and Campbell]


Archetype

Archetypes are inherited predispositions to respond to the world in certain ways or images
of universal experiences contained in the collective unconscious. They are primordial
images—representations of the instinctual energies of the collective unconscious, which are
based on universal human themes and concerns. In other words, the structural components of
the collective unconscious are called by various names: archetypes, dominants, primordial
images, imagoes, behaviour patterns and mythological images. Jung postulated the idea of
archetypes from experiences his patients reported—dreams and fantasies that included
remarkable ideas and images whose content could not be traced to the individual’s past
experience. He also discovered a close correspondence between patients’ dream contents
and the mythical and religious themes found in many widely scattered cultures.

According to Jung, the archetypes are structure-forming elements within the unconscious.
These elements give rise to the archetypal images that dominate both individual fantasy life
and the mythologies of an entire culture. A wide variety of symbols can be associated with a
given archetype. For example, the mother archetype embraces not only each individual’s
real mother but also all mother figures and nurturant figures. This archetype group includes
women in general, mythical images of women, such as Venus, the Virgin Mary, and Mother
Nature, and supportive and nurturant symbols, such as the church and paradise. The
mother archetype encompasses positive features and also negative ones, such as the
threatening, domineering, or smothering mother. Among the archetypes Jung proposed are
the hero, the mother, the child, God, death, power, and the wise old man. Archetypes
function as highly charged autonomous centers of energy that tend to produce in each
generation the repetition and elaboration of these same experiences.

Each of the major structures of the personality is also an archetype. These structures include
the ego, the persona, the shadow, the anima (in men), the animus (in women), and the
self. Archetypes form the infrastructure of the psyche.

The Ego
The ego is the conscious mind. The ego is the center of consciousness and one of the major
personality archetypes. The ego provides a sense of consistency and direction in our
conscious lives. It tends to oppose whatever might threaten this fragile consistency of
consciousness and tries to convince us that we must always consciously plan and analyze
our experiences. According to Jung, the psyche at first consists only of the unconscious.
Similar to Freud’s view, Jung’s ego arises from the unconscious and brings together various
experiences and memories, developing the division between unconscious and conscious.
The ego has no unconscious elements, only conscious contents derived from personal
experience. We are led to believe that the ego is the central element of the psyche, and we
come to ignore the other half of the psyche, the unconscious.

The Persona

The term persona comes from the Latin, meaning “mask” or “false face.” According to Jung
(1945), the persona is a mask adopted by the person in response to the demands of social
convention and tradition and to his or her inner archetypal needs. Our persona is the
appearance we present to the world. It is the character we assume; through it, we relate to
others. The persona includes our social roles, the kind of clothes we choose to wear, and
our individual styles of expressing ourselves. In order to function socially at all, we have to
play a part in ways that define our roles. Even those who reject such adaptive devices
invariably employ other roles, roles that represent rejection. The persona has both negative
and positive aspects. A dominant persona can smother the individual, and those who
identify with their persona tend to see themselves only in terms of their superficial social
roles and facades. In fact, Jung called the persona the “conformity archetype.” As part of its
positive function, it protects the ego and the psyche from the varied social forces and
attitudes that impinge on them. The persona is a valuable tool for communication.

The persona can often be crucial to our positive development. As we begin to play a certain
role, our ego gradually comes to identify with it. This process is central to personality
development, but it is not always positive. As the ego identifies with the
persona, people start to believe that they are what they pretend to be (inflation of the
persona). According to Jung, we eventually have to withdraw this identification and learn
who we are in the process of individuation. Otherwise, the person resorts to deception:
either the persona is used to mislead others, or the person mistakes the mask for the real
self. In the first instance, the person is deceiving others; in the second instance, the person
is deceiving himself or herself. Minority group members and other social outsiders in
particular are likely to have problems with their identities because of cultural prejudice and
social rejection of their personas.

The Shadow

According to Jung (1948), the shadow archetype consists of the animal instincts that
humans inherit in their evolution from lower forms of life. In other words, the shadow is an
archetypal form that serves as the focus for material that has been repressed from
consciousness; its contents include tendencies, desires, and memories rejected by the
individual as incompatible with the persona and contrary to social standards and ideals. The
shadow contains all the negative tendencies the individual wishes to deny, including our
animal instincts, as well as undeveloped positive and negative qualities. The stronger our
persona is and the more we identify with it, the more we deny other parts of ourselves. The
shadow represents what we consider inferior in our personality and also that which we have
neglected and never developed in ourselves. In dreams, a shadow figure may appear as an
animal, a dwarf, a vagrant, or any other low-status figure. Jung found that repressed
material is organized and structured around the shadow, which becomes, in a sense, a
negative self or the shadow of the ego.

If the material from the shadow is allowed back into consciousness, it loses much of its
primitive and frightening quality. The shadow is most dangerous when unrecognized. Then
the individual tends to project his or her unwanted qualities onto others or to become
dominated by the shadow without realizing it. Images of evil, the devil, and the concept of
original sin are all aspects of the shadow archetype. The more the shadow material is made
conscious, the less it can dominate. As the shadow is made more conscious, we regain
previously repressed parts of ourselves. Also, the shadow is not simply a negative force in
the psyche. It is a storehouse for instinctual energy, spontaneity, and vitality, and a major
source of our creative energies. Therefore, if the shadow is totally suppressed, the psyche
will be dull and lifeless. It is the ego’s function to repress the animal instincts enough so that
we are considered civilized while allowing sufficient expression of the instincts to provide
creativity and vigor. Like all archetypes, the shadow is rooted in the collective unconscious,
and it can allow the individual access to much of the valuable unconscious material rejected
by the ego and the persona. The animal instincts do not disappear when they are
suppressed. Rather, they lie dormant, awaiting a crisis or a weakness in the ego so they can
gain control. When that happens, the person becomes dominated by the unconscious.

Modern Jungians have written about the “light shadow,” the positive aspects of our
personality seen as incompatible with our sense of self. This often includes qualities like
charm, beauty, and intelligence—qualities we then tend to project onto others.

The Anima and Animus

The anima and animus archetypes refer to Jung’s recognition that humans are essentially
bisexual. On the biological level, each sex secretes the hormones of the other sex as well as
those of its own sex. On the psychological level, each sex manifests characteristics,
temperaments, and attitudes of the other sex by virtue of centuries of living together. These
opposite sex characteristics aid in the adjustment and survival of the species because they
enable a person of one sex to understand the nature of the other sex. The archetypes
predispose us to like certain characteristics of the opposite sex; these characteristics guide
our behavior with reference to the opposite sex. Jung insisted that both the anima and the
animus must be expressed. A man must exhibit his feminine as well as his masculine
characteristics, and a woman must express her masculine characteristics along with her
feminine ones. Otherwise, these vital aspects will remain dormant and undeveloped,
leading to a one-sidedness of the personality.

According to Jung (1945, 1954), the feminine archetype in man is called the anima, the
masculine archetype in woman is called the animus. It is the unconscious structure that
complements the persona. Masculine and feminine characteristics are found in both
sexes. Thus, to the extent that a woman consciously defines herself in feminine terms,
her animus will include those unrecognized tendencies and experiences that she has defined
as masculine. For a woman, the process of psychological development entails entering into
a dialogue between her ego and her animus. The animus or anima initially seems to be a
wholly separate personality. As the animus/anima and its influence on the individual are
recognized, it assumes the role of liaison between conscious and unconscious until it
gradually becomes integrated into the self. Jung views the quality of this union of opposites
as the major step in individuation. As long as our anima or animus is unconscious, not
accepted as part of our self, we will tend to project it outward onto people of the opposite
sex.

According to Jung, the child’s opposite-sex parent is a major influence on the development
of the anima or animus. All relations with the opposite sex, including parents, are strongly
affected by the projection of anima or animus fantasies. This archetype is one of the most
influential regulators of behavior. It appears in dreams and fantasies as figures of the
opposite sex. It is oriented primarily toward inner processes, just as the persona is oriented
to the outer. Jung also called this archetype the “soul image.” Because it has the capacity to
bring us in touch with our unconscious forces, it is often the key to unlocking our creativity.

The Self

Jung has called the self the central archetype, the archetype of psychological order and the
totality of the personality. The self is the midpoint of personality, around which all of the
other systems are constellated (conscious and unconscious). It holds these systems together
and provides the personality with unity, equilibrium and stability. That is, it involves bringing
together and balancing all parts of the personality. The self is life’s ultimate goal, a goal
that people constantly strive for but seldom reach.

It is the union of the conscious and the unconscious that embodies the harmony and
balance of the various opposing elements of the psyche. The self directs the functioning of
the whole psyche in an integrated way. The self is depicted in dreams or images
impersonally (as a circle, mandala, crystal, or stone) or personally (as a royal couple, a divine
child, or other symbol of divinity). The chief symbol of the self is ‘the mandala’ or the magic
circle. It may first appear in dreams as a tiny, insignificant image, because the self is so
unfamiliar and undeveloped in most people. The development of the self does not mean
that the ego is dissolved. The ego remains the center of consciousness, an important
structure within the psyche. It becomes linked to the self as the result of the long, hard work
of understanding and accepting unconscious processes. The ego receives the light from the
Self.

Symbols

According to Jung, the unconscious expresses itself primarily through symbols. Although no
specific symbol or image can ever fully represent an archetype (which is a form without
specific content), the more closely a symbol conforms to the unconscious material
organized around an archetype, the more it evokes a strong, emotionally charged response.
Jung is concerned with two kinds of symbols: individual and collective. By individual
symbols, Jung means “natural” symbols that are spontaneous productions of the individual
psyche, rather than images or designs created deliberately by an artist. Collective symbols
are often religious images such as the cross, the six-pointed Star of David, and the Buddhist
wheel of life.

Symbolic terms and images represent concepts that we cannot completely define or fully
comprehend. Symbols always have connotations that are unclear or hidden from us. A
symbol may represent the individual’s psychic situation as it exists at a given moment.

Active Imagination

Active imagination refers to any conscious effort to produce material directly related to
unconscious processes, to relax our usual ego controls without allowing the unconscious to
take over completely. The process of active imagination differs for each individual. Some
people use drawing or painting most profitably, whereas others prefer to use conscious
imagery, or fantasy, or another form of expression.
Jung valued the use of active imagination as a means of facilitating self-understanding
through work with symbols. He encouraged his patients to paint, sculpt, or employ other art
forms as ways to explore their inner depths. Active imagination is not passive fantasy but an
attempt to engage the unconscious in a dialogue with the ego through symbols.

Dreams

For Jung, dreams play an important complementary (or compensatory) role in the psyche.
The widely varied influences in our conscious life tend to distract us and to mould our
thinking in ways often unsuitable to our personality and individuality. According to Jung (1964), the general function of dreams is to try to restore our psychological balance by producing dream material that re-establishes, in a subtle way, the total psychic equilibrium.

Dreams deal with symbols that have more than one meaning, which prevents a simple,
mechanical system for dream interpretation. Any attempt at dream analysis must take into
account the attitudes, experiences, and background of the dreamer. It is a joint venture
between the analyst and the analysand.

Jeremy Taylor, a well-known authority on Jungian dreamwork, postulates certain basic assumptions about dreams (1992, p. 11):

1. All dreams come in the service of health and wholeness.

2. No dream comes simply to tell the dreamer what he or she already knows.

3. Only the dreamer can say with certainty what meanings a dream may hold.

4. There is no such thing as a dream with only one meaning.

5. All dreams speak a universal language, a language of metaphor and symbol.

Jung encourages us to befriend our dreams and to treat them not as isolated events but as
communications from the unconscious. This process creates a dialogue between conscious
and unconscious and is an important step in the integration of the two.
Dynamics of Personality

Psychological growth

Individuation

According to Jung, every individual naturally seeks individuation, or self-development. Jung believed that the psyche has an innate urge toward wholeness.
This idea is similar to Maslow’s concept of self-actualization, but it is based on a
more complex theory of the psyche than Maslow’s. “Individuation means becoming
a single, homogeneous being, and, insofar as ‘individuality’ embraces our innermost,
last, and incomparable uniqueness, it also implies becoming one’s own self. We
could therefore translate individuation as ‘coming to selfhood’ or ‘self-realization’”
(Jung, 1928b, p. 171). Individuation is a natural, organic process. It is the unfolding of
our basic nature, and is a fundamental drive in each of us.

Individuation is a process of achieving wholeness and thus moving toward greater freedom. The process includes development of a dynamic relationship between the
ego and the self, along with the integration of the various parts of the psyche: the
ego, persona, shadow, anima or animus, and other archetypes. As people become
more individuated, these archetypes may be seen as expressing themselves in more
subtle and complex ways.

As an analyst, Jung found that those who came to him in the first half of life were
concerned primarily with external achievement and the attainment of the goals of
the ego. Older patients who had fulfilled such goals reasonably well tended to seek
individuation—to strive for inner integration rather than outer achievement—and to
seek harmony with the totality of the psyche.

From the ego’s point of view, growth and development consist of integrating new
material into one’s consciousness; this process includes acquiring knowledge of the
world and of oneself. Growth, for the ego, is essentially expanding conscious
awareness. Individuation, by contrast, is the development of the self, and the self’s goal is to unite consciousness and the unconscious.

Unveiling the Persona

The persona is only a mask for the collective psyche. Fundamentally, the persona is nothing real: it is a compromise between individual and society as to what a man should appear to be. In becoming aware of the limitations and distortions of the persona, we become more independent of our culture and our society.

Unveiling the persona should begin early in the individuation process. We should learn to view the persona as a useful tool rather than as an essential part of ourselves. Although the persona has important protective functions, it is also a mask that hides the self and the unconscious.

Confronting the Shadow

We can become free of the shadow’s influence to the extent that we accept the
reality of the dark side in each of us and simultaneously realize that we are more
than the shadow.

Confronting the Anima or Animus

We should deal with this archetype as a real person or persons with whom we can communicate and from whom we can learn. Anima or animus figures have considerable autonomy, and they are likely to influence or even dominate us if we either ignore them or blindly accept their images and projections as our own.
Developing the Self

The goal and culmination of the individuation process is the development of the self.
“The self is our life’s goal, for it is the completest expression of that fateful
combination we call individuality” (Jung, 1928b, p. 238). The self replaces the ego as the midpoint of the psyche. Awareness of the self brings unity to the psyche and
helps to integrate conscious and unconscious material. The ego is still the center of
consciousness, but it is no longer seen as the nucleus of the entire personality. Thus,
as we continue to develop, our individuation might be represented as a spiral in
which we keep confronting the same basic questions, each time in a more refined
form.

Obstacles to growth

Individuation, consciously undertaken, is a difficult task, and the individual must be relatively psychologically healthy to handle the process. The ego must be strong
enough to undergo tremendous changes, to be turned inside out in the process of
individuation. This process is especially difficult because it is an individual enterprise,
often carried out in the face of the rejection or, at best, indifference of others.

The Persona

Each stage in the individuation process has its difficulties. First is the danger of
identification with the persona. Those who identify with the persona may try to
become “perfect,” unable to accept their mistakes or weaknesses, or any deviations
from their idealized self-concepts. Individuals who fully identify with the persona
tend to repress any tendencies that do not fit their self-image and attribute such
behaviors to others; the job of acting out aspects of the repressed, negative identity
is assigned to other people.
The Shadow

The shadow can also become a major obstacle to individuation. People who are
unaware of their shadows can easily act out harmful impulses without ever
recognizing them as wrong or without any awareness of their own negative feelings.
In such people, an initial impulse to harm or do wrong is instantly rationalized as
they fail to acknowledge the presence of such an impulse in themselves. Ignorance
of the shadow may also result in an attitude of moral superiority and projection of
the shadow onto others.

(For example, some of those loudly in favor of the censorship of pornography seem
to be fascinated by the materials they want to ban; they may even convince
themselves of the need to “study” carefully all the available pornography in order to
be effective censors.)

The Anima/Animus

Confronting the anima or animus brings with it the problem of relating to the
collective unconscious. In the man, the anima may produce sudden emotional
changes or moodiness. In the woman, the animus may manifest itself as irrational,
rigidly held opinions. The content of the anima or animus is the complement of our
conscious conception of ourselves as masculine or feminine—which, in most people,
is strongly determined by cultural values and socially defined sex roles.

An individual exposed to collective material faces the danger of becoming engulfed by it. According to Jung, this outcome can take one of two forms. First is the
possibility of ego inflation, in which the individual claims all the virtues and
knowledge of the collective psyche. The opposite reaction is that of ego impotence;
the person feels that he or she has no control over the collective psyche and
becomes acutely aware of unacceptable aspects of the unconscious—irrationality,
negative impulses, and so forth.
Ego Inflation

When the individual deals with the anima and animus, tremendous energy is
unleashed. This energy can be used to build up the ego instead of developing the
self. Jung has referred to this as identification with the archetype of the mana-
personality. (Mana is a Melanesian word for the energy or power that emanates
from people, objects, or supernatural beings; it is the energy that has an occult or
bewitching quality.) The ego identifies with the archetype of the wise man or wise
woman, the sage who knows everything. The mana-personality is dangerous
because it is a false exaggeration of power. Individuals stuck at this stage try to be
both more and less than they really are: more, because they tend to believe that
they have become perfect, holy, or even godlike; but actually less, because they have
lost touch with their essential humanity and the fact that no one is infallible,
flawless, and perfectly wise.

Jung sees temporary identification with the archetype of the self or the mana-
personality as being almost inevitable in the individuation process. The best defense
against the development of ego inflation is to remember one’s essential humanity
and to stay grounded in the reality of what one can and must do, not what one
should do or be.

Structure

[Refer Frager]

Therapist

[Refer Frager]

Development of Personality
Jung did not specify in detail, as Freud did, the stages through which the personality passes from infancy to adulthood. He regarded old age as a period of relative unimportance, when the old person gradually sinks into the unconscious. He described four stages of development. Jung took a longer view of personality than Freud, who concentrated on the early years of life and foresaw little development after the age of five. Unlike Freud, however, Jung did not posit detailed sequential stages of growth.

Childhood

The child’s life is determined by instinctual activities necessary for survival. The ego begins to develop in early childhood, at first in a primitive way, because the child has not yet formed a unique identity. Behaviour during childhood is also governed by parental demands. Obviously, then, parents exert a great influence on the formation of the child’s personality. They can enhance or impede personality development by the way they behave toward the child. Parents might try to force their personalities on the child, desiring him or her to be an extension of themselves. Or they might expect their child to develop a personality different from their own as a way of seeking vicarious compensation for their deficiencies. The ego begins to form substantively only when children become able to distinguish between themselves and other people or objects in their world. The emotional problems experienced by young children generally reflect “disturbing influences in the home” (Jung, 1928). In sharp contrast to Freud, Jung did not emphasize the determining power of childhood for subsequent behaviour.

Young Adulthood

It is not until puberty that the psyche assumes a definite form and content. This period, which Jung called our psychic birth, is marked by difficulties and the need to adapt. Not only does sexuality emerge, but the child also becomes differentiated from his or her parents. Childhood fantasies must end as the adolescent confronts the demands of reality. From the teenage
years through young adulthood, we are concerned with preparatory activities such as
completing our education, beginning a career, getting married, and starting a family. Our
focus during these years is external, our conscious mind is dominant, and, in general, our primary conscious attitude is that of extraversion. The aim of life is to achieve our goals and
establish a secure, successful place for ourselves in the world. Thus, young adulthood should
be an exciting and challenging time, filled with new horizons and accomplishments.

Middle Age

Major personality changes occur between the ages of 35 and 40. This period of middle age
was a time of personal crisis for Jung and many of his patients. By then, the adaptational
problems of young adulthood have been resolved. The typical 40-year-old is established in a
career, a marriage, and a community. At this point, a very different concern emerges, the
need for meaning. People need to find a purpose for their lives and a reason for their
existence. In Jung’s terminology, they change from an extraverted to an introverted
attitude, and they move toward self-realization. This is the time for a “middle-aged crisis”.

The more Jung analysed this period, the more strongly he believed that such drastic
personality changes were inevitable and universal. Middle age is a natural time of transition
in which the personality is supposed to undergo necessary and beneficial changes. Ironically,
the changes occur because middle-aged persons have been so successful in meeting life’s
demands. These people had invested a great deal of energy in the preparatory activities of
the first half of life, but by age 40 that preparation was finished and those challenges had
been met. Although they still possess considerable energy, the energy now has nowhere to
go; it has to be rechannelled into different activities and interests.

Jung noted that in the first half of life we must focus on the objective world of reality—
education, career, and family. In contrast, the second half of life must be devoted to the
inner, subjective world that heretofore had been neglected. The attitude of the personality
must shift from extraversion to introversion. The focus on consciousness must be tempered
by an awareness of the unconscious. Our interests must shift from the physical and material
to the spiritual, philosophical, and intuitive. A balance among all facets of the personality
must replace the previous one-sidedness of the personality (that is, the focus on
consciousness). If we are successful in integrating the unconscious with the conscious, we
are in a position to attain a new level of positive psychological health, a condition Jung
called individuation. [Shultz]

UNIT 3: ERIK ERIKSON

Erik Homburger Erikson has extended the insights of psychoanalysis through cross-cultural
studies of child rearing, psychological biographies of great men and women, and by
analyzing the interaction of psychological and social dynamics. Erikson’s life-span theory of
ego development has had enormous influence within psychology and related fields. He is
also the founder of modern psychohistory. He was born on June 15, 1902.

He made three major contributions to the study of personality: (1) that along with Freud’s
psychosexual developmental stages, the individual simultaneously goes through
psychosocial and ego-development stages, (2) that personality development continues
throughout life, and (3) that each stage of development can have both positive and negative
outcomes.

INTELLECTUAL ANTECEDENTS

Psychoanalysis

Throughout his career, Erikson viewed himself as a psychoanalyst. His application of psychoanalysis to new areas and his incorporation of recent advances in anthropology,
psychology, and other social sciences inevitably led Erikson to develop ideas significantly
different from Freud’s basic theories. However, Erikson’s writings reveal his indebtedness to
Freud. Rather than label himself neo-Freudian, Erikson preferred the more neutral term
post-Freudian. Erikson’s work on in-depth psychological biographies and on child and adult
development was essentially psychoanalytic in nature.

Other Cultures

Erikson’s work with the Sioux (Plains hunters’ tribe) and Yuroks (sedentary fishing society)
had an important influence on his thinking. His field studies revealed his remarkable ability
to enter the worldviews and modes of thinking of cultures far different from his own.
Erikson’s later theoretical developments evolved partly from his cross-cultural observations.
He found that Freud’s theories of pregenital stages of development were intrinsically
related to the technology and worldview of Western culture. Erikson’s own theoretical focus
on healthy personality development strongly reflected his first-hand knowledge of other
cultures.

MAJOR CONCEPTS

The core of Erikson’s work is his eight-stage model of human development, a model that
extends psychoanalytic thinking beyond childhood to cover the entire human life cycle. Each
stage has psychological, biological, and social components, and each stage builds on the
stages that precede it. Another significant contribution of Erikson’s was his pioneering work
on psychohistory and psychobiography, which applied his clinical insight to the study of
major historical personalities and their impact on their societies.

An Epigenetic Model of Human Development

Erikson’s model of the stages of human development—a model he called epigenetic —is the
first psychological theory to detail the human life cycle from infancy to adulthood and old
age. According to Erikson, the psychological growth of the individual proceeds in a manner
similar to that of an embryo. Epigenesis suggests that each element develops on top of
other parts (epi means “upon” and genesis means “emergence”). Erikson’s model is
structurally similar to that of embryonic growth in that the emergence of each successive
stage is predicated on the development of the previous one.

Erikson’s scheme of human development has two basic underlying assumptions: (1) That the
human personality in principle develops according to steps predetermined in the growing
person’s readiness to be driven forward, to be aware of, and to interact with a widening
social radius; and (2) that society, in principle, tends to be so constituted as to meet and
invite the succession of potentialities for interaction and attempts to safeguard and to
encourage the proper rate and the proper sequence of their unfolding (1963, p. 270).

Each stage is characterized by a specific developmental task, or crisis, that must be resolved
in order for the individual to proceed to the next stage. The strengths and capacities
developed through successful resolution at each stage affect the entire personality. They
can be influenced by either later or earlier events. However, these psychological capacities
are generally affected most strongly during the stage in which they emerge. Each stage is
systematically related to all the others, and the stages must occur in a given sequence.

Crisis in Development

Each stage has a period of crisis in development in which the strengths and skills that form
essential elements of that stage are developed and tested. By crisis, Erikson means a turning
point, a critical moment, such as the crisis in a fever. When it is resolved successfully, the fever breaks and the individual begins to recover. Crises are special times in the individual’s
life, “moments of decision between progress and regression, integration and retardation”
(Erikson, 1963). Each stage is a crisis in learning—allowing for the attainment of new skills
and attitudes. The crisis may not seem dramatic or critical; often, the individual can see only
later that a major turning point was reached and passed.
Virtue in Development

Erikson has pointed out that successful resolution of the crisis at each stage of human
development promotes a certain psychosocial strength or virtue. Erikson uses the term
virtue in its old sense, as in the virtue of a medicine. It refers more to potency than morality.
Ideally, the individual emerges from each crisis with an increased sense of inner unity,
clearer judgment, and greater capacity to function effectively.

Eight Stages of Human Development

[ Refer Frager, pg no. 157-165]

Modes of Relating to the Environment

Whereas Freud based his description of the stages of human development on specific
organ-related experiences, Erikson’s stages are based on more general styles of relating to
and coping with the environment. Although, according to Erikson, these styles of behavior
are often initially developed through a particular organ, they refer to broad patterns of
activity. For instance, the mode learned in the first stage, basic trust versus basic mistrust, is
to get —that is, the ability to receive and to accept what is given. (This stage corresponds to
Freud’s oral stage.) At this time, the mouth is the primary organ of interchange between the
infant and the environment. However, an adult who is fixated on getting may exhibit forms
of dependency unrelated to orality.

In the second stage, autonomy versus shame and doubt, the modes are to let go and to hold
on. As with Freud’s anal stage, the modes fundamentally relate to retention and elimination
of feces; however, the child also alternates between possessing and rejecting parents,
favorite toys, and so on.
The mode of the third stage, initiative versus guilt, Erikson calls to make. In one sense, the
child is “on the make,” focused on the conquest of the environment. Play is important, from
making mud pies to imitating the complex sports and games of older children.

The fourth stage, industry versus inferiority, includes the modes to do well and to work. No
single organ system associates with this stage; rather, productive work and accomplishment
are central.

Erikson does not discuss in detail the modes involved in the remaining stages. These later
stages, not as closely related to Freud’s developmental stages, seem less rooted in a
particular activity or organ mode.

Identity

Erikson developed the concept of identity in greater detail than the other concepts he
incorporated in the eight stages. He first coined the phrase identity crisis to describe the
mental state of many of the soldiers he treated at Mt. Zion Veterans Rehabilitation Clinic in
San Francisco in the 1940s. These men were easily upset by any sudden or intense stimulus.
Their egos seemed to have lost any shock-absorbing capacity. Their sensory systems were in
a constant “startled” state, thrown off by external stimuli, as well as by a variety of bodily
sensations, including hot flashes, heart palpitations, intense headaches, and insomnia.
There was a distinct loss of ego identity: “the sameness and continuity and the belief in one’s social role were gone” (Erikson, 1968).

The term identity brings together the theories of depth psychology with those of cognitive
psychology and ego psychology (Erikson, 1993). Early Freudian theory tended to ignore the
important role of the ego as, in Erikson’s terms, “a selective, integrating, coherent and
persistent agency central to personality function” (Erikson, 1964). Erikson has wisely
avoided giving the term identity a single definition.

Erikson spells out these aspects of identity as follows:


1. Individuality —a conscious sense of one’s uniqueness and existence as a separate,
distinct entity.

2. Sameness and continuity —a sense of inner sameness, a continuity between what one
has been in the past and what one promises to be in the future, a feeling that one’s life has
consistency and meaningful direction.

3. Wholeness and synthesis —a sense of inner harmony and wholeness, a synthesis of the
self-images and identifications of childhood into a meaningful whole that produces a sense
of harmony.

4. Social solidarity —a sense of inner solidarity with the ideals and values of society or a
subgroup within it, a feeling that one’s identity is meaningful to significant others and
corresponds to their expectations and perceptions.

Erikson describes identity formation as central to the transition from childhood to adulthood. He found that
the development of a sense of identity frequently follows a “psychosocial moratorium,” a
period of time-out in which the individual may be occupied with study, travel, or a clearly
temporary occupation. This provides time to reflect and to develop a new sense of
direction, new values, and new purpose. The moratorium may last for months or even
years.

Identity Development

Erikson (1980) emphasized that the development of a sense of identity has both
psychological and social aspects:

1. The individual’s development of a sense of personal sameness and continuity is based, in part, on a belief in the sameness and continuity of a worldview shared with significant others.

2. Although many aspects of the search for a sense of identity are conscious, unconscious
motivation may also play a major role. At this stage, feelings of acute vulnerability may
alternate with high expectations of success.
3. A sense of identity cannot develop without certain physical, mental, and social
preconditions (outlined in Erikson’s developmental stages). Also, achievement of a sense of
identity must not be unduly delayed, because future stages of development depend on it.
Psychological factors may prolong the crisis as the individual seeks to match unique gifts to
existing social possibilities. Social factors and historical change may also postpone adult
commitment.

4. The growth of a sense of identity depends on the past, present, and future. First, the
individual must have acquired a clear sense of identification in childhood. Second, the
adult’s choice of vocational identification must be realistic in light of available opportunities.
Finally, the adult must retain a sense of assurance that his or her chosen roles will be viable
in the future, in spite of inevitable changes, both in the individual and in the outside world.

Erikson has pointed out that problems of identity are not new, though they may be more
widespread today than ever before. Many creative individuals have wrestled with the
question of identity as they carved out new careers and social roles for themselves. Some
especially imaginative people were responsible for major vocational innovations, thus
offering new role models for others. Freud, for example, began his career as a conventional
doctor and neurologist. Only in midcareer did he devise a new role for himself (and for
many others) by becoming the first psychoanalyst.

Psychohistory

Erikson expanded psychoanalysis by studying major historical personalities. By analyzing their psychological growth and development, he came to understand the psychological
impact they had on their generation. Psychohistorical analysis is the application of Erikson’s
life-span theory, along with psychoanalytic principles, to the study of historical figures.

Psychobiography
Erikson made a major contribution to historical research by applying the methods used in
psychoanalytic case histories to a reconstruction of the life of historical figures. He
combined clinical insight with historical and social analysis in developing the new form of
psychobiography. Erikson realized that in making the transition from case history to life
history, the psychoanalyst must broaden his or her concerns and take into account the
subject’s activities in the context of the opportunities and limitations of the outside world.
This appreciation of the interaction of psychological and social currents in turn affected
Erikson’s theoretical work. In addition to his books on Martin Luther and Mohandas Gandhi,
Erikson’s psychobiographies included studies of Maxim Gorky, Adolf Hitler, George Bernard
Shaw, Sigmund Freud, Thomas Jefferson, and Woodrow Wilson.

One major difference exists between psychological biographies and case histories. In a case
history, the therapist usually tries to determine why the patient has developed mental or
emotional problems. In a psychological biography, the investigator tries to understand the
subject’s creative contributions, often made in spite of conflicts, complexes, and crises.

The Study of ‘Great’ Individuals

[Refer Frager, pg no. 169]

DYNAMICS

Psychological Growth

The focus on positive characteristics developed at each stage distinguishes Erikson’s schema
from Freud’s and from those of many other personality theorists. Erikson views basic
strengths, or virtues, as more than psychological defenses against mental illness or negativity, and as more than mere attitudes of nobility or morality. These virtues are
inherent strengths and are characterized by a sense of potency and positive development.
• Hope is the virtue of the first stage, trust versus mistrust.
• Will is the strength that arises from the crisis of autonomy versus shame and doubt.
• Purpose is rooted in the initiative versus guilt stage.
• Competence is the strength resulting from the stage of industry versus inferiority.
• Fidelity comes from the identity versus identity confusion stage.
• Love is the virtue that develops from intimacy versus isolation.
• Care originates in generativity versus stagnation.
• Wisdom is derived from the crisis of integrity versus despair.

Obstacles to Growth

The individual can successfully resolve the crisis at each stage, or leave the crisis unresolved
in some ways. Erikson points out that successful resolution is always a dynamic balance of
some sort. A clear example of an unsuccessful resolution is the formation of a sense of
negative identity.

[Refer Frager, pg no. 170]

STRUCTURE

[Refer Frager, pg no. 173]

UNIT 4: KAREN HORNEY

Karen Horney was born Karen Danielsen in a suburb of Hamburg on September 15, 1885.
The object of therapy for Horney is to help people relinquish their defenses—which alienate
them from their true likes and dislikes, hopes, fears, and desires—so that they can get in
touch with what she called the real self. Her emphasis on self-realization as the source of
healthy values and the goal of life established Horney as one of the founders of humanistic
psychology.

INTELLECTUAL ANTECEDENTS

Psychoanalysis and Sigmund Freud

Horney acknowledged her deep debt to Freud, who had provided the foundation for all
subsequent psychoanalytic thought. It is not difficult to see why the young Karen Horney
was attracted to psychoanalysis. She suffered from many mysterious complaints and
impaired ability to function. Although certain aspects of Freudian theory fit Horney’s
experience well, others did not. By the early 1920s, she began to propose modifications in
the light of her observations of her female patients and her own experiences as a woman.
Perhaps the most important factor in Horney’s initial dissent was that she came to see
psychoanalytic theory as reproducing and reinforcing the devaluation of the feminine, from
which she had suffered in childhood. Disturbed by the male bias of psychoanalysis, she
dedicated herself to proposing a woman’s view of the differences between men and women
and the disturbances in the relations between the sexes. This eventually led to development
of a psychoanalytic paradigm quite different from Freud’s. She valued Freud’s accounts of
repression, reaction formation, projection, displacement, rationalization, and dreams; and
she believed Freud had provided indispensable tools for therapy in the concepts of
transference, resistance, and free association.

Alfred Adler

Although Horney gave Adler little credit as an intellectual antecedent, there are important similarities between her later thinking and his. She tended to characterize Adler as superficial, yet she acknowledged that he was the first to see the search for glory.
In the terms of her culture, she was behaving like a man by studying medicine and believing
in sexual freedom. According to Horney’s Adlerian self-analysis, she needed to feel superior
because of her lack of beauty and her feminine sense of inferiority, which led her to try to
excel in a male domain. But her low self-esteem made her afraid she would fail, so she
avoided productive work, as do “women in general” (Horney, 1980, p. 251), and
experienced disproportionate anxiety over exams. Her fatigue was at once a product of her
anxiety, an excuse for withdrawing from competition with men, and a means of concealing
her inferiority and gaining a special place for herself by arousing concern.

Other Intellectual Influences

While still in Germany, Horney began to cite ethnographic and anthropological studies, as
well as the writings of the philosopher and sociologist Georg Simmel, with whom she
developed a friendship. After she moved to the United States, her sense of the differences
between central Europe and America made her receptive to the work of such sociologists,
anthropologists, and culturally oriented psychoanalysts as Erich Fromm, Max Horkheimer,
John Dollard, Harold Lasswell, Edward Sapir, Ruth Benedict, Ralph Linton, Margaret Mead,
Abraham Kardiner, and Harry Stack Sullivan, with most of whom she had personal
relationships. In response to these influences, Horney argued not only that culture is more
important than biology in the generation of neuroses but also that pathogenic conflict
between the individual and society is the product of a bad society rather than inevitable,
as Freud had contended. Following Bronislaw Malinowski, Felix Boehm, and Erich Fromm,
Horney regarded the Oedipus complex as a culturally conditioned phenomenon; and
following Harry Stack Sullivan, she saw the needs for “safety and satisfaction” as more
important than sexual drives in accounting for human behavior.

Although at first she saw conceptions of psychological health as relative to culture, in the late 1930s she developed a universal definition of health. Drawing on W. W.
Trotter’s Instincts of the Herd in Peace and War (1916), she described emotional well-being
as “a state of inner freedom in which ‘the full capacities are available for use’” (1939). The
central feature of neurosis was now self-alienation, loss of contact with “the spontaneous
individual self” (1939). Horney gave Erich Fromm primary credit for this new direction in her
thinking, but other important influences were William James and Søren Kierkegaard. In her
descriptions of the “real self,” she was inspired by James’s account of the “spiritual self” in
Principles of Psychology (1890), and in her discussions of loss of self, she drew on
Kierkegaard’s The Sickness unto Death (1849). Horney also cited Otto Rank’s (1978) concept
of “will” as an influence on her ideas about the real self, and in her later work she invoked
the Zen concept of “wholeheartedness.”

It is difficult to determine why Horney shifted from an emphasis on the past to one on the
present, but she acknowledged the influence of Harald Schultz-Hencke and Wilhelm Reich,
analysts whom she knew from her days in Berlin. The Adlerian mode of analysis she had
employed in her diary and to which she returned also focused on the present.

MAJOR CONCEPTS

[refer Schultz, pg no. 155- 172]

Need for Safety

Horney agreed with Freud, in principle, about the importance of the early years of childhood in shaping the adult personality. However, she believed that social forces in childhood, not biological forces, influence personality development; the social relationship between the child and his or her parents is the key factor. Horney thought childhood was dominated by the safety need, by which she meant the need for security and freedom from fear (Horney, 1937). Whether the infant experiences a feeling of security and an absence of fear is decisive in determining the normality of his or her personality development. A child’s security depends entirely on how the parents treat the child. The major way parents
weaken or prevent security is by displaying a lack of warmth and affection for the child.

Parents can act in various ways to undermine their child’s security and thereby induce
hostility. These parental behaviors include obvious preference for a sibling, unfair
punishment, erratic behavior, promises not kept, ridicule, humiliation, and isolation of the
child from peers. Horney suggested that children know whether their parents’ love is
genuine. The child may feel the need to repress the hostility engendered by the parents’
undermining behaviors for reasons of helplessness, fear of the parents, need for genuine
love, or guilt feelings. Horney placed great emphasis on the infant’s helplessness. Unlike
Adler, however, she did not believe all infants necessarily feel helpless, but when these
feelings do arise, they can lead to neurotic behavior. Children’s sense of helplessness
depends on their parents’ behavior. If children are excessively kept in a dependent state,
then helplessness will be encouraged. The more helpless children feel, the less they dare to
oppose or rebel against the parents. This means that the child will repress the resulting
hostility.

Paradoxically, love can be another reason for repressing hostility toward parents. In this
case, parents tell their children how much they love them and how greatly they are
sacrificing for them, but the warmth and affection the parents profess are not honest. Guilt
is yet another reason children repress hostility. They are often made to feel guilty about any
hostility or rebelliousness. They may be made to feel unworthy, wicked, or sinful for
expressing or even harboring resentments toward their parents. The more guilt the child
feels, the more deeply repressed will be the hostility.

Basic Anxiety

Horney defined basic anxiety as an “insidiously increasing, all-pervading feeling of being lonely and helpless in a hostile world” (Horney, 1937). It is the foundation on which later
neuroses develop, and it is inseparably tied to feelings of hostility. In Horney’s words, we
feel “small, insignificant, helpless, deserted, endangered, in a world that is out to abuse,
cheat, attack, humiliate, betray” (1937). In childhood we try to protect ourselves against
basic anxiety in four ways:

a) Securing affection and love
By securing affection and love from other people, the person is saying, in effect, “If
you love me, you will not hurt me.” There are several ways by which we may gain
affection, such as trying to do whatever the other person wants, trying to bribe
others, or threatening others into providing the desired affection.
b) Being submissive
Being submissive as a means of self-protection involves complying with the wishes
either of one particular person or of everyone in our social environment. Submissive
persons avoid doing anything that might antagonize others. They dare not criticize or
give offense. They must repress their personal desires and cannot defend against
abuse for fear that such defensiveness will antagonize the abuser. Most people who
act submissive believe they are unselfish and self-sacrificing. Such persons seem to
be saying, “If I give in, I will not be hurt.”
c) Attaining power
By attaining power over others, a person can compensate for helplessness and
achieve security through success or through a sense of superiority. Such persons
seem to believe that if they have power, no one will harm them.

These three self-protective devices have something in common; by engaging in any of them the person is attempting to cope with basic anxiety by interacting with other people.

d) Withdrawing

The fourth way of protecting oneself against basic anxiety involves withdrawing from
other people, not physically but psychologically. Such a person attempts to become
independent of others, not relying on anyone else for the satisfaction of internal or
external needs. For example, if someone amasses a houseful of material possessions,
then he or she can rely on them to satisfy external needs. Unfortunately, that person
may be too burdened by basic anxiety to enjoy the possessions. He or she must
guard the possessions carefully because they are the person’s only protection
against anxiety. The withdrawn person achieves independence with regard to
internal or psychological needs by becoming aloof from others, no longer seeking
them out to satisfy emotional needs. The process involves a blunting, or minimizing,
of emotional needs. By renouncing these needs, the withdrawn person guards
against being hurt by other people.

The four self-protective mechanisms Horney proposed have a single goal: to defend against
basic anxiety. They motivate the person to seek security and reassurance rather than
happiness or pleasure. They are a defense against pain, not a pursuit of well-being. Another
characteristic of these self-protective mechanisms is their power and intensity. Horney
believed they could be more compelling than sexual or other physiological needs. These
mechanisms may reduce anxiety, but the cost to the individual is usually an impoverished
personality.

Often, the neurotic will pursue the search for safety and security by using more than one of these mechanisms, and the incompatibility among the four mechanisms can lay the groundwork for additional problems.

Neurotic Needs and Trends (and rest)

Refer Schultz, pg no. 158- 170

UNIT 5: ERICH FROMM

Erich Fromm, like Alfred Adler and Karen Horney, argued that we are not inflexibly shaped
by instinctive biological forces, as Freud had proposed. Instead, Fromm suggested that
personality is influenced by the social and cultural forces that affect the individual within a
culture and by the universal forces that have influenced humanity throughout history.
Fromm was a psychoanalyst, philosopher, historian, anthropologist, and sociologist. Fromm
was born in Frankfurt, Germany, into an Orthodox Jewish family in 1900. The word fromm in
German means “pious” or “religious,” terms that clearly characterized Fromm’s childhood
years.
MAJOR CONCEPTS

(Refer Schultz, pg no. 177- 188)

UNIT 6: HENRY MURRAY

According to Henry Murray (1951), “Personality may be biologically defined as the governing
organ, or superordinate institution of the body. As such, it is located in the brain. No brain,
no personality”. Murray’s personality represents a process of development from birth to
death (Murray, 1968). He believed that personality integrate and directs the person’s
behaviour.

To highlight the intimate relationship between the cerebral physiology and personality,
Murray (1938) coined the term regnancy. The physiological and neurological brain activities
underlying personality functioning are termed regnant processes.

MAJOR CONCEPTS

Principles of Personology

(Refer Schultz, pg no. 198-199)

The Division of Personality

Murray divided personality into four parts: the id, the superego, the ego-ideal, and the ego.


(Refer Schultz, pg no. 199- 200)

Need Theory

This is the central concept in Murray’s theory. Needs are motivators or directors of
behaviour. Murray (1938) defined “need” as a hypothetical construct which stands for a
force in the brain region, a force either internally or externally instigated which organizes
other psychological processes.

A need involves a physicochemical force in the brain that organizes and directs intellectual
and perceptual abilities. Needs may arise either from internal processes such as hunger or
thirst, or from events in the environment. Needs arouse a level of tension that the organism
tries to reduce by acting to satisfy them. Needs energize and direct behavior. They activate
behavior in the appropriate direction to satisfy the needs. Murray’s research led him to
formulate a list of 20 needs (Murray, 1938). Not every person has all of these needs. Over
the course of your lifetime you may experience all these needs, or there may be some needs
you never experience. Some needs support other needs, and some oppose other needs.

Types of Needs

Murray distinguished two broad types of needs:
• Viscerogenic (12): physiological, or primary, needs
• Psychogenic (28): non-physical, or secondary, needs

After an elaborative series of investigations at the Harvard Psychological Clinic in the 1930s,
Murray proposed a list of 12 viscerogenic (physiological) needs and 28 psychogenic
(nonphysical) needs as the basis for human behaviour. Primary needs (viscerogenic needs)
arise from internal bodily states and include those needs required for survival (such as food,
water, air, and harm avoidance), as well as such needs as sex and sentience. Secondary
needs (psychogenic needs) arise indirectly from primary needs, in a way Murray did not
make clear, but they have no specifiable origin within the body. They are called secondary
not because they are less important but because they develop after the primary needs.
Secondary needs are concerned with emotional satisfaction and include most of the needs
on Murray’s original list.

Reactive needs involve a response to something specific in the environment and are
aroused only when that object appears. For example, the harm avoidance need appears
only when a threat is present. Proactive needs do not depend on the presence of a
particular object. They are spontaneous needs that elicit appropriate behavior whenever
they are aroused, independent of the environment. For example, hungry people look for
food to satisfy their need; they do not wait for a stimulus, such as a television ad for a
hamburger, before acting to find food. Reactive needs involve a response to a specific
object; proactive needs arise spontaneously.

Overt and covert needs constitute another of Murray’s categories. Overt (manifest) needs
are allowed free expression in society. For example, in American society one can freely
express nAchievement, nAffiliation or nOrder. On the other hand, covert (latent) needs are
not permitted open expression by the culture. Instead, they remain partly or completely
unconscious and find their outlets primarily in dreams, fantasies, projections and neurotic
symptoms. Depending on social norms, nAggression, nSex and nSuccorance could easily fall
in this category. Interestingly, the implication here is that overt needs in one society may be
covert in another.

A final distinction within Murray’s need categories is that of effect versus modal. An effect
need is linked with some direct or specific goal state. A student enrolled in a personality
theory course, for example, usually is motivated toward some identifiable goal (e.g., passing the course, fulfilling a requirement for graduation). The need or needs involved (e.g., achievement) are directed toward some tangible result. Modal needs, by contrast, are those
in which experienced satisfaction is present to some degree throughout the activity rather
than linked only to its end result (Murray, 1951). An illustration would be playing or listening
to music. Murray describes this as a sheer pleasure function: the need to attain a high degree of excellence or perfection of performance. In Murray’s system, then, the pleasure derived
from doing something just for the sake of doing it can be just as important as the end result
obtained.

Characteristics of Needs

There is constant dynamic interplay among needs in Murray’s theory.

Needs differ in terms of the urgency with which they impel behavior, a characteristic Murray
called a need’s prepotency. For example, if the needs for air and water are not satisfied,
they come to dominate behavior, taking precedence over all other needs. In Murray’s view,
needs are arranged in a hierarchy of prepotency or urgency. Murray (1951) describes
prepotent needs as those which come to the fore with the greatest urgency if they are not
satisfied. For example, the need to avoid physical danger is more compelling (i.e., has
greater prepotency) than nNurturance.

Some needs are complementary and can be satisfied by one behavior or a set of behaviors.
Murray called this a fusion of needs. For instance, by working to acquire fame and wealth,
we can satisfy the needs for achievement, dominance, and autonomy. Whenever a single
course of action satisfies two or more needs simultaneously, there is a fusion of needs. For
example, a person might satisfy both nAffiliation and nNurturance through a single activity,
e.g., doing a volunteer hospital work. Fusion, then, does not mean that the two needs
become identical. Rather, they complement one another in that they both are satisfied by
the same behaviour.

The concept of subsidiation refers to a situation in which one need is activated to aid in
satisfying another need. In other words, the principle states that certain needs are satisfied
only through the fulfilment of other less demanding needs. For example, to satisfy the
affiliation need by being in the company of other people, it may be necessary to act
deferentially toward them, thus invoking the deference need. In this case, the deference
need is subsidiary to the affiliation need.
A fourth and final illustration of Murray’s interaction principles is conflict. Needs of about
equal strength may often be in conflict with one another and thus produce tension. Murray
believes that much of human misery and most neurotic behaviour are direct results of such
inner conflict.

Murray recognized that childhood events can affect the development of specific needs and,
later in life, can activate those needs. He called this influence press because an
environmental object or event presses or pressures the individual to act a certain way
(needs are always thwarted or gratified in some environmental context). In Murray’s theory,
press in the form of persons or objects may either facilitate or block an individual’s efforts
to gratify his/her needs.

Press is of two types: alpha press and beta press.
Alpha press represents persons, objects or events as they actually exist in reality. For
example, alpha pAggression reflects the fact that there actually are people in one’s
environment harboring and acting upon hostile feelings toward oneself. Beta press, on the
other hand, represents the environment as subjectively perceived and experienced by the
individual. Beta pAggression, then, means that the person sees hostility in those around
him. According to Murray, it is the beta aspect of press which exerts the greater influence
on behaviour since that is what is felt, interpreted and responded to by the person. In well-
balanced individuals, there is good correspondence between alpha and beta press.

PRESS            DEFINITION
pAffiliation     Companions who are friendly
pAggression      Others who assault, criticize, or belittle one
pCounteraction   Being attacked either verbally or physically
pDominance       Persons or obstacles which restrain or prohibit one
pLack            Living in a state of poverty
pRecognition     Competing for awards and honors
pRejection       Being refused membership in a club or organization

(Acronym: CAR LARD)

Because of the possibility of interaction between need and press, Murray introduced the
concept of thema (or unity thema). The thema combines personal factors (needs) with the
environmental factors that pressure or compel our behavior (presses). The thema is formed
through early childhood experiences and becomes a powerful force in determining
personality. Largely unconscious, the thema relates needs and presses in a pattern that
gives coherence, unity, order, and uniqueness to our behavior.

[Earlier known as serial thema. When Murray recognized that individual uniqueness was the
product of many interacting factors, he termed it unity thema.]

The five criteria by which needs can be recognized (Murray, 1938)

1. Consequence or end result of the mode of behaviour involved.
2. The kind of pattern of behaviour involved.
3. The selective perception of and response to a group of circumscribed stimulus objects.
4. Expression of a characteristic emotion or feeling.
5. Manifestation of satisfaction associated with the attainment of a certain effect, or the manifestation of dissatisfaction associated with the failure to attain a certain effect.

Personality Development in Childhood

(Refer Schultz, pg no. 203- 204)

ASSESSMENT OR APPLICATION OF MURRAY’S THEORY

(Refer Schultz, pg no. 205- 213)

(Refer Hjelle, pg no. 167- 170)


MODULE 3: LEARNING AND SOCIAL COGNITIVE/LEARNING PERSPECTIVE

UNIT 1: B. F. SKINNER

Burrhus Frederic Skinner was born in 1904 and raised in Susquehanna, Pennsylvania, a small
town in the north-eastern part of the state.

INTELLECTUAL ANTECEDENTS

Skinner acknowledged that he was deeply influenced by his early reading of the English
scientist-philosopher Francis Bacon. “Three Baconian principles have characterized my
professional life.”

1. “I have studied nature not books.”

2. “Nature to be commanded must be obeyed.”

3. “A better world was possible, but it would not come about by accident. It must be
planned and built, and with the help of science” (1984a).

Darwinism and Canon of Parsimony

The idea that animal studies could shed light on human behavior arose as an indirect result
of Darwin’s research and the subsequent development of evolutionary theories. The first
researchers of animal behaviors were interested in discovering the reasoning capacities of
animals. In effect, they tried to raise the status of animals to that of thinking beings. The
examinations of higher thought processes in animals were not supported, however, by the
ideas of two prominent psychologists, Lloyd Morgan and Edward Thorndike. Morgan
proposed a canon of parsimony, which states that given two explanations, a scientist should
always accept the simpler one.
Watson

American John B. Watson (1878–1958), the first avowed behaviorist, defined behaviorism
as follows:

Psychology as the behaviorist views it is a purely objective branch of natural science. Its
theoretical goal is the prediction and control of behavior. Introspection forms no essential
part of its methods…. The behaviorist, in his efforts to get a unitary scheme of animal
response, recognizes no dividing line between man and brute. (1913, p. 158)

Watson argued against such a thing as consciousness, declaring that all learning is
dependent upon the external environment and that all human activity is conditioned and
conditionable. Skinner was attracted to the broad philosophical outlines of his work.
Skinner criticized Watson for his denial of genetic characteristics as well as for his tendency
to generalize without the support of concrete data.

Pavlov

Ivan Pavlov (1849–1936), a Russian physiologist, did the first important modern work in the
area of behavioral conditioning (1927). His research demonstrated that autonomic functions
could be conditioned. He showed that salivation could be evoked by a stimulus other than
food, such as a ringing bell. Pavlov was not merely observing and predicting the behaviors
he was studying; he could produce them on command. While other animal experimenters
were content with using statistical analysis to predict the likelihood that a behavior would
occur, Skinner was fascinated with the step beyond prediction—control. Pavlov’s work
pointed Skinner toward tightly controlled laboratory experiments on animals.

Philosophy of Science

Skinner was impressed with the ideas of philosophers of science, including Percy Bridgman,
Ernst Mach, and Jules Henri Poincaré. They created new models of explanatory thinking that
did not depend on any metaphysical substructures. To Skinner, behaviorism “is not the
science of human behavior; it is the philosophy of that science” (1974).

MAJOR CONCEPTS

Scientific Analysis of Behaviour

Behavior, no matter how complex, can be investigated like any other observable
phenomenon. The goal is to look at a behavior and its contingencies (from a Latin word
meaning “to touch on all sides”). For Skinner, these include the antecedents of the behavior,
the response to it, and the consequences or results of the response. Behavior, for Skinner, is
anything an organism can be observed doing (Skinner, 1938, p. 6). A complete analysis of
the behavior would also consider the genetic endowment of the organism and previous
behaviors related to those being studied.

The scientific analysis of behavior begins by isolating the parts of a complex event so that
the individual items can be better understood. Skinner’s experimental research follows this
analytic procedure, restricting itself to conditions amenable to rigorous scientific analysis.
The results of his experiments can be verified independently, and his conclusions checked
against the recorded data.

Personality

Skinner argues that if you base your definition of the self on observable behavior, you
need not discuss the inner workings of the self or the personality at all. Personality,
therefore, in the sense of a separate self, has no place in a scientific analysis of behavior.
Personality, as defined by Skinner, is a collection of behavior patterns. Different situations
evoke different response patterns. An individual response is based solely on previous
experiences and genetic history.

Buddhism—to the surprise of most behaviorists—also concludes that because there is no
observable individual self, the self does not exist. Buddhists do not believe in an entity
called personality, but in overlapping behaviors and sensations, all of which are
impermanent. Skinner and the Buddhists developed their ideas based on the assumption of
no ego, no self, no personality, except as characterized by a collection of behaviors.

Explanatory Fictions

Explanatory fictions (Skinner’s term) are terms nonbehaviorists employ to describe
behavior. Skinner believed that people use these concepts when they do not understand the
behavior involved or are unaware of the pattern of reinforcements that preceded or
followed the behavior. Examples of explanatory fictions for Skinner include freedom,
autonomous man, dignity, and creativity. According to behaviorism, using such terms as
explanations for behavior is simply incorrect. Skinner believed that this type of explanation
is actually harmful: it gives a misleading appearance of being satisfactory and thus might
preclude the search for more objective variables.

Freedom

Freedom is a label that we attach to behavior when we are unaware of the causes for the
behavior. A series of studies conducted by Milton Erickson (1939) demonstrated that,
through hypnosis, subjects could be induced to exhibit various kinds of psychopathological symptoms.
Skinner suggests that the feeling of freedom is not really freedom; furthermore, he believes
that the most repressive forms of control are those that reinforce the feeling of freedom,
such as the voters’ “freedom” to choose between candidates whose positions are extremely
similar. These repressive tactics restrict and control action in subtle ways not easily
discernible by the people being controlled.

Autonomous Man

Autonomous man is an explanatory fiction Skinner described as an indwelling agent; an
inner person who is moved by vague inner forces independent of the behavioral
contingencies. To be autonomous is to initiate “uncaused” behavior, behavior that does not
arise from prior behaviors and is not attributable to external events. Skinner found no
evidence that such an autonomous being exists, and he was distressed that so many people
believed in the idea. Skinner’s research demonstrated that if one plots certain kinds of
learning experiences, the shape of the resulting curve (and the rate of the learning) is the
same for pigeons, rats, monkeys, cats, dogs, and human children (Skinner, 1956).

Dignity

Dignity (or credit or praise) is another explanatory fiction. The amount of credit a person
receives is related in a curious way to the visibility of the causes of his behavior. In other
words, we often praise an individual for behavior when the circumstances or the additional
contingencies are unknown. By way of contrast, for example, we do not praise acts of
charity if we know they are done only to lower income taxes. We do not praise a confession
of a crime if the confession came out only under extreme pressure. We do not censure a
person whose acts inadvertently cause others damage. Skinner suggests that if we would
admit our ignorance, we would withhold both praise and censure.

Creativity

With a certain amount of puckish delight, Skinner dismisses the last stronghold of the
indwelling agent: the poetic or creative act. It is for Skinner still another example of using a
metaphysical label to hide the fact that we do not know the specific causes of given
behaviors. Skinner derides the opinions of creative artists who maintain that their works are
spontaneous or arise from sources beyond the artist’s life experience. Evidence from
hypnosis and from the vast body of literature on the effectiveness of propaganda and
advertising, as well as the findings of psychotherapy, all show that an individual is often
unaware of what lies behind his or her own behavior. His conclusion is that creative activity
is no different from other behaviors except that the behavioral elements preceding it and
determining it are more obscure. He sides with Samuel Butler, who noted that “a poet
writes a poem as a hen lays an egg, and both of them feel better afterwards.”

Will

Skinner considers the notion of will confusing and unrealistic. For him, will, free will, and
willpower are nothing more than explanatory fictions. Skinner assumes that no action is
free. Other researchers, however, have shown that people who believe that external forces
are responsible for their actions are less in control of their behavior than people who feel
personally responsible for their actions. Skinner’s investigation of will has drawn more
criticism than any other aspect of his work.

Self

Skinner considers the term self an explanatory fiction. If we cannot show what is responsible
for a man’s behavior, we say that he himself is responsible for it. A concept of self is not
essential in an analysis of behavior. (1953, pp. 283, 285).

Conditioning and Reinforcement

Skinner distinguished between two kinds of behavior:

1. Respondent behaviour
2. Operant behaviour
Respondent Behaviour

Respondent behavior involves a response made to or elicited by a specific stimulus.
Respondent behavior is reflexive: an organism responds automatically and involuntarily to a
stimulus. At a higher level is respondent behavior that is learned. This learning, called
conditioning, involves the substitution of one stimulus for another, and such respondent
conditioning is readily learned and exhibited. Advertisers who link an attractive person with
a product are seeking to form an association and elicit a certain response. They hope that
through the pairing, consumers will respond positively to the product.

Operant Conditioning

Respondent behavior depends on reinforcement and is related directly to a physical
stimulus. Every response is elicited by a specific stimulus. To Skinner, respondent behavior
was less important than operant behavior. Operant behaviors are behaviors that occur
spontaneously. “Operant behavior is strengthened or weakened by the events that follow
the response. Whereas respondent behavior is controlled by its antecedents, operant
behavior is controlled by its consequences” (Reese, 1966, p. 3). The conditioning that takes
place depends on what occurs after the behavior has been completed. Skinner became
fascinated by operant behaviors, because he could see that they can be linked to far more
complex behaviors than is true of respondent behaviors. Skinner concluded that almost any
naturally occurring behavior in an animal or in a human can be trained to occur more often,
more strongly, or in any chosen direction.

Operant conditioning is the process of shaping and maintaining a particular behavior by its
consequences. Therefore, it takes into account not only what is presented before the
response but what happens after the response. Extensive research on the variables that
affect operant conditioning has led to the following conclusions:
1. Conditioning can and does take place without awareness.
2. Conditioning is maintained in spite of awareness.
3. Conditioning is less effective when the subject is aware but uncooperative.

Respondent Behaviour
• Involves a response made to or elicited by a specific stimulus.
• Depends on reinforcement and is related directly to a physical stimulus.
• Controlled by its antecedents.
• Has no effect on the environment.

Operant Behaviour
• Behavior emitted spontaneously or voluntarily that operates on the environment to change it.
• Strengthened or weakened by the events that follow the response.
• Controlled by its consequences.
• Operates on the environment and changes it.

Reinforcement

Reinforcement is the act of strengthening a response by adding a reward, thus increasing the
likelihood that the response will be repeated. A reinforcer is any stimulus that follows the
occurrence of a response and increases or maintains the probability of that response. In the
textbook example of a child learning to swim, candy was the reinforcer offered after she successfully exhibited a
specific behavior. Reinforcers may be either positive or negative. A positive reinforcer
strengthens any behavior that produces it: a glass of water is positively reinforcing when we
are thirsty, and if we then draw and drink a glass of water, we are more likely to do so again
on similar occasions. A negative reinforcer strengthens any behavior that reduces or
terminates it: when we take off a shoe that is pinching, the reduction in pressure is
negatively reinforcing, and we are more likely to do so again when a shoe pinches. (Skinner,
1974, p. 46) Negative reinforcers are aversive: they are stimuli a person or an animal turns
away from or tries to avoid. Positive and negative consequences regulate or control
behaviors. This is the core of Skinner’s position; he proposes that all behavior is shaped by a
combination of positive and negative reinforcers. Moreover, he asserts, it is possible to
explain the occurrence of any behavior if one has sufficient knowledge of the prior
reinforcers. Skinner conducted his original research on animals.

The reinforcements are more difficult to perceive when one investigates more complex or
abstract situations. Primary reinforcers are events or stimuli that are innately reinforcing.
They are unlearned, present at birth, and related to physical needs and survival. Examples
are air, water, food, and shelter. Secondary reinforcers are neutral stimuli that become
associated with primary reinforcers so that they eventually function as reinforcers. Money is
one example of a secondary reinforcer; it has no intrinsic value, but money or the promise
of money is one of the most widely used and effective reinforcers. Money is an effective
secondary reinforcer for more than just humans.

Schedules of Reinforcement

Schedules of reinforcement are patterns or rates of providing or withholding reinforcers.
Continuous reinforcement will increase the speed at which a new behavior is learned.
Intermittent or partial reinforcement will produce more stable behavior—that is, behavior
that will continue to be produced even after the reinforcement stops or appears rarely.
Thus, researchers have found that to change or maintain behaviors, the scheduling is as
important as the reinforcement itself (Kimble, 1961).

Reinforcing a correct response improves learning. It is more effective than punishment
(aversive control), because reinforcement selectively directs behavior toward a
predetermined goal. The use of reinforcement is a highly focused and effective strategy for
shaping and controlling desired behaviors.

Behaviour Control
While many psychologists are concerned with predicting behavior, Skinner is interested in
the control of behavior. The first step in a defense against tyranny is the fullest possible
exposure of controlling techniques. Because behavior is shaped by its environment, if one
can make changes in the environment, one can begin to control behavior. For example,
extinction occurs when there is no longer any consequence following a behavior that had
been previously reinforced. Consistent lack of reinforcement leads to a steady decline in the
behavior. Suppose a rat is rewarded with a food pellet after pushing a lever: lever pushing is
reinforced, and the probability of this behavior increases. However, if the rat no longer
receives a food pellet after pushing the lever, it will eventually cease its lever-pushing behavior.

GROWTH AND OBSTACLES TO GROWTH

Growth for Skinner means the ability to minimize adverse conditions and to increase the
beneficial control of our environment.

Ignorance

Skinner defines ignorance as lack of knowledge about what causes a given behavior. The
first step in overcoming ignorance is to acknowledge it; the second is to change the
behaviors that have maintained the ignorance. One way to eliminate ignorance is to stop
using nondescriptive, mental terms.

Functional Analysis

Functional analysis is an examination of cause-and-effect relationships. It treats every
aspect of behavior as a function of a condition that can be described in physical terms. Thus,
the behavior and its causes can be defined without explanatory fictions. Precise descriptions
of behavior help us make accurate predictions of future behaviors and improve the analysis
of the reinforcements that led to the behavior. Behavior is neither random nor arbitrary but
is a purposeful process we can describe by considering the environment in which the
behavior is embedded. Skinner says that explanations that depend on terms such as will,
imagination, intelligence, or freedom are not functional. They obscure rather than clarify the
causes of behavior because they do not truly describe what is occurring.

Punishment

Punishment provides no information about how to do something correctly. It neither meets
the demands of the person inflicting the punishment nor benefits the person receiving it.
Thus, it inhibits personal growth. Punishment does not work—that is to say, punished
behaviors usually do not go away. Unless new learning is available, the punished responses
will return, often disguised or coupled with new behaviors. The new behaviors may be
attempts to avoid further punishment, or they may be retaliation against the person who
administered the original punishment.

Prison life punishes inmates for their prior behaviors but rarely teaches the individuals more
socially acceptable ways to satisfy their needs. Prisoners who have not learned behaviors to
replace those that landed them in jail will, once released—and exposed to the same
environment and subject to the same temptations— probably repeat those behaviors. The
high proportion of criminals returning to prison underscores the accuracy of these
observations. Skinner concluded that although punishment may be used briefly to suppress
a behavior that is highly undesirable or could cause injury or death, a far more useful
approach is to establish a situation in which a new, competing, and more beneficial behavior
can be learned and reinforced.

(Refer Schultz, pg no. 385- 393, 395-398)

STRUCTURE AND ROLE OF THERAPIST

(Refer Frager, pg no. 229- 238)


UNIT 2: DOLLARD & MILLER

(Refer photostat)

UNIT 3: JULIAN ROTTER

(Refer Schultz, pg no. 436- 440)

UNIT 4: BANDURA

Albert Bandura, born in Canada in 1925, is one of the best-known psychologists of the
modern era. Bandura has stressed that people learn as much from observing the behavior of
others as they learn from their own experience. Through various cognitive processes, we
remember and evaluate what we have observed in others.

MAJOR CONCEPTS

Reciprocal Determinism

Personality theorists often debate whether inner or outer forces control our behavior.
Behaviorists have traditionally insisted environmental factors are the most important.
Psychoanalysts claim control of behavior comes from within. In contrast, Bandura focuses
on the interaction of behavior, internal dynamics, and external factors. Bandura coined the
term reciprocal determinism for the effects on behavior of both our cognitive processes and
the social and physical environment. Bandura has pointed out that the external
environment is not only a cause of behavior, it is also an effect of behavior. Bandura
developed the concept of triadic reciprocality (1989) to refer to the interaction among
behavior, environment, and internal factors such as awareness and cognition.
“Nurture shapes nature”.

Triadic reciprocality rests on several powerful assumptions.

• One assumption is that behavior affects internal factors.
For example, continued success at a certain activity brings confidence in our abilities
in that area.
• Bandura (2001) also claims behavior can affect our neurobiological functions.
If we continue to read, write, and talk about a particular topic, we develop a
“neurological network” for handling information about that topic that makes it
easier to learn more about the topic.
• In addition, internal factors are also affected by the environment.
For example, women who live together in a dormitory often come to match their
menstrual cycles (Matlin & Foley, 1997).
• Finally, our behavior affects the environment, for example, watering our house
plants keeps them alive and failing to water causes them to wither and die.

Internal events, such as thoughts and feelings, influence both our behavior and the
environment. Our belief that our behavior in a certain task will not succeed makes it far
more likely we will fail (Bandura, 1989a). Beliefs about the environment may have important
environmental consequences. If we place little value on our forests, we will allow the forests
to be cut down by the lumber industry.

Disinhibition

Research has shown that behaviors a person usually suppresses or inhibits may be
performed more readily under the influence of a model (Bandura, 1973, 1986). This
phenomenon, called disinhibition, refers to the weakening of an inhibition or restraint
through exposure to a model. For example, people in a crowd may start a riot, breaking
windows and shouting, exhibiting physical and verbal behaviors they would never perform
when alone. They are more likely to discard their inhibitions against aggressive behavior if
they see other people doing so. The disinhibition phenomenon can influence sexual
behavior.

Walters, Bowen, & Parke, 1963 – Refer Schultz, pg no. 403

Observational Learning

Bandura has argued that much significant human learning occurs through observation.
According to Skinnerian theory, responses must first occur and then be reinforced. Other
behavioral theories also stress that learning depends on reinforcing behavior. In contrast,
Bandura has shown that significant learning often occurs when subjects simply observe
models as they perform various behaviors. Bandura has called this observational (or
vicarious) learning. This kind of learning occurs “as a function of observing the behavior of
others and its reinforcing consequences, without the modeled responses being overtly
performed by the viewer during the exposure period” (Bandura, 1965, p. 3). We receive
“vicarious reinforcement” whenever we observe someone receiving rewards for their
behavior. Animals have also been shown to learn new behaviors through observation and
imitation (Reader & Biro, 2010).

In a classic study of observational learning (Bandura et al., 1963), preschool children
watched a film in which an adult hit and kicked a large inflated “Bobo” doll. The adult also
shouted phrases like “Pow, right in the nose!” while hitting the doll. When the children were
allowed to play with the doll themselves, the experimental group was twice as aggressive as
a control group that had not seen the adult model attack the doll. The researchers found the
same increase in aggression with an adult model shown on television and also with a
cartoon character. To study the effects of parental modeling, Bandura also compared the
parents of highly aggressive children and more inhibited children (Bandura & Walters,
1963). The parents of the inhibited children were more inhibited, and the parents of the
aggressive children proved more aggressive.

Effects of Society’s Models


On the basis of extensive research, Bandura concluded that much behavior—good and bad,
normal and abnormal—is learned by imitating the behavior of other people. From infancy
on, we develop responses to the models society offers us. Beginning with parents as
models, we learn their language and become socialized by the culture’s customs and
acceptable behaviors. People who deviate from cultural norms have learned their behavior
the same way as everyone else. The difference is that deviant persons have followed
models the rest of society considers undesirable.

Among the many behaviors children acquire through modeling are nonrational fears. A child
who sees that his or her parents are fearful during thunderstorms or are nervous around
strangers will easily adopt these anxieties and carry them into adulthood with little
awareness of their origin. Of course, positive behaviors such as strength, courage, and
optimism will also be learned from parents and other models. In Skinner’s system,
reinforcers control behavior; for Bandura, it is the models who control behavior.

Characteristics of Modeling

Bandura (1977, 1986) found that three factors influence observational learning:

1. The characteristics of the models


First, Bandura found that people are more influenced by a model who is similar to
them than by someone who is significantly different (Similarity). In the Bobo doll
studies, for example, children were more aggressive after exposure to a live model
than a cartoon character. We are more likely to imitate a model of our same sex and
age. Status and prestige also add to the model’s influence. For example, one
research study found pedestrians are more likely to cross the street against a red
light if they see a well-dressed person crossing against the light than if they see one
poorly dressed crossing (Lefkowitz et al., 1955). We can see one direct application of
this line of reasoning in advertisements that use either well-known figures or highly
attractive models to influence us to use a particular product.

2. The characteristics of the observers


Second, observer attributes are also important. Those low in self-confidence and
self-esteem are more likely to imitate a model than are those high in self-confidence
and self-esteem. Also, the more we have been rewarded for imitating a model, the
more likely we are to be influenced by the model’s behavior.

3. The rewards associated with the behaviors

Third, if we see a model is rewarded for a certain behavior, we are more likely to
imitate that behavior. In the Bobo doll study, one group of children saw the model
receive praise and a soda and candy. Another group watched the model receive
criticism and physical punishment for the same behavior. The children who observed
the reward displayed more aggression than the children who saw the punishment.
We are more affected by the behavior of models than we realize.

Conditions for Observational Learning

Bandura believed that successful observational learning is based on five conditions:

1. Pay attention to the model/ Attentional processes.

We do not retain everything we observe. For observational learning to occur, we must
pay sufficient attention and perceive the model’s behavior accurately enough to
allow us to recall and imitate the model’s behavior. It is the process of developing
our cognitive processes and perceptual skills so that we can pay sufficient attention
to a model, and of perceiving the model accurately enough, to imitate the displayed
behavior. Example: Staying awake during driver’s education class.

2. Remember what we observed/ Retention processes.

Memory is not a passive process. We reflect on what we have observed and tend to
remember whatever we consider useful or important. We must retain essential
elements of the model’s behavior in order to repeat it later. Focused attention and
prior knowledge of the modeled behavior help us understand, remember, and
imitate the modeled behavior. Without prior knowledge, we are far less likely to
understand and remember complex behaviors. For example, someone with no
knowledge of physics is unlikely to retain anything from an advanced physics lecture.
It is the process of retaining or remembering the model’s behavior so that we can
imitate or repeat it at a later time; for this, we use our cognitive processes to encode
or form mental images and verbal descriptions of the model’s behavior. Example:
Taking notes on the lecture material or the video of a person driving a car.

3. Reproduce what we have learned/ Production processes.

According to Bandura, we use two internal representational systems to reproduce
modeled behavior: imaginal and verbal. The imaginal representational system is
composed of vivid, retrievable images formed while observing a model. The verbal
representational system consists of the words an observer uses to describe
observed behavior. It is the process of translating the mental images or verbal
symbolic representations of the model’s behavior into our own overt behavior by
physically producing the responses and receiving feedback on the accuracy of our
continued practice. Example: Getting in a car with an instructor to practice shifting
gears and dodging the traffic cones in the school parking lot.

4. Be motivated to perform the activity observed/ Incentive and motivational processes.

We are more likely to carefully observe and imitate behavior associated with positive
outcomes than we are behavior with neutral or negative outcomes. This is strongest
when we perceive a model’s behavior leads to reward and expect our reproduction
of that behavior to produce a similar reward. Bandura (1994) indicates that
motivation can come from three sources. First, we observe a model receiving reward
for a particular behavior. Second, the model (a parent or teacher, for example)
rewards our attempts to reproduce their behavior. Third, and perhaps most
important, we reward ourselves for our performance; we “pat ourselves on the
back.” Perceiving that the model’s behavior leads to a reward and thus expecting
that our learning—and successful performance—of the same behavior will lead to
similar consequences. Example: Expecting that when we have mastered driving skills,
we will pass the state test and receive a driver’s license.

5. Practice.
Production of any complex behavior requires practice. When we learn to drive a car,
for example, we may begin by watching parents and others driving and take classes
on the rules and regulations regarding driving. No matter how much preparation we
have made, our initial attempts to drive are inevitably awkward. We need practice
and feedback on our behavior to learn to brake and steer smoothly.

(Refer Schultz, pg no. 406-408)

Self-reinforcement

Self-reinforcement is as important as reinforcement administered by others, particularly for
older children and adults. We set personal standards of behavior and achievement. We
reward ourselves for meeting or exceeding these expectations and standards and we
punish ourselves for our failures. Self-administered reinforcement can be tangible such as a
new pair of gym shoes or a car, or it can be emotional such as pride or satisfaction from a
job well done. Self-administered punishment can be expressed in shame, guilt, or
depression about not behaving the way we wanted to. Self-reinforcement appears
conceptually similar to what other theorists call conscience or superego, but Bandura denies
that it is the same. A continuing process of self-reinforcement regulates much of our
behavior.

Our past behavior may become a reference point for evaluating present behavior and an
incentive for better performance in the future. [When we reach a certain level of
achievement, it may no longer challenge, motivate, or satisfy us, so we raise the standard
and require more of ourselves. Failure to achieve may result in lowering the standard to a
more realistic level. People who set unrealistic performance standards—who observed and
learned behavioral expectations from unusually talented and successful models—may
continue to try to meet those excessively high expectations despite repeated failures.

Emotionally, they may punish themselves with feelings of worthlessness and depression.
These self-produced feelings can lead to self-destructive behaviors such as alcohol and drug
abuse or a retreat into a fantasy world. We learn our initial set of internal standards from
the behavior of models, typically our parents and teachers. Once we adopt a given style of
behavior, we begin a lifelong process of comparing our behavior with theirs.]

Self-efficacy

In Bandura’s system, self-efficacy refers to feelings of adequacy, efficiency, and competence
in coping with life. Meeting and maintaining our performance standards enhances
self-efficacy; failure to meet and maintain them reduces it. Another way Bandura described
self-efficacy was in terms of our perception of the control we have over our life. People strive to
exercise control over events that affect their lives.

People low in self-efficacy feel helpless, unable to exercise control over life events. They
believe any effort they make is futile. When they encounter obstacles, they quickly give up if
their initial attempt to deal with a problem is ineffective. People who are extremely low in
self-efficacy will not even attempt to cope because they are convinced that nothing they do
will make a difference. Low self-efficacy can destroy motivation, lower aspirations,
interfere with cognitive abilities, and adversely affect physical health.

People high in self-efficacy believe they can deal effectively with events and situations.
Because they expect to succeed in overcoming obstacles, they persevere at tasks and often
perform at a high level. These people have greater confidence in their abilities than do
persons low in self-efficacy, and they express little self-doubt. They view difficulties as
challenges instead of threats and actively seek novel situations. High self-efficacy reduces
fear of failure, raises aspirations, and improves problem solving and analytical thinking
abilities.

Our judgment about our self-efficacy is based on four sources of information:

1. Performance attainment
2. Vicarious experiences
3. Verbal persuasion
4. Physiological and emotional arousal

The most influential source of efficacy judgments is performance attainment. Previous
success experiences provide direct indications of our level of mastery and competence. Prior
achievements demonstrate our capabilities and strengthen our feelings of self-efficacy. Prior
failures, particularly repeated failures in childhood, lower self-efficacy. An important
indicator of performance attainment is receiving feedback on one’s progress or one’s
performance on a task, such as a work assignment or a college examination. One study of 97
college students performing complicated puzzles found that those who received positive
feedback on their performance reported higher levels of perceived competence at that task
than did those who received negative feedback (Elliot et al., 2000).

Thus, put simply, the more we achieve, the more we believe we can achieve, and the more
competent and in control we feel. Short-term failures in adulthood can lower self-efficacy.

Vicarious experiences—seeing other people perform successfully—strengthen self-efficacy,
particularly if the people we observe are similar in abilities. In effect, we are saying, “If they
can do it, so can I.” In contrast, seeing others fail can lower self-efficacy: “If they can’t do it,
neither can I.” Therefore, effective models are vital in influencing our feelings of adequacy
and competence. These models also show us appropriate strategies for dealing with difficult
situations.

Verbal persuasion, which means reminding people that they possess the ability to achieve
whatever they want to achieve, can enhance self-efficacy. This may be the most common of
the four informational sources and one frequently offered by parents, teachers, spouses,
coaches, friends, and therapists who say, in effect, “You can do it.” To be effective, verbal
persuasion must be realistic.

A fourth source of information about self-efficacy is physiological and emotional arousal.
We often use this type of information as a basis for judging our ability to cope. We are more
likely to believe we will master a problem successfully if we are not agitated, tense, or
bothered by headaches. The more composed we feel, the greater our self-efficacy;
conversely, the higher our level of physiological and emotional arousal, the lower our
self-efficacy. The more fear, anxiety, or tension we experience in a given situation, the less
we feel able to cope.

Bandura concluded that certain conditions increase self-efficacy:

1. Exposing people to success experiences by arranging reachable goals increases performance attainment.

2. Exposing people to appropriate models who perform successfully enhances vicarious success experiences.

3. Providing verbal persuasion encourages people to believe they have the ability to perform
successfully.

4. Strengthening physiological arousal through proper diet, stress reduction, and exercise
programs increases strength, stamina, and the ability to cope.

Developmental Stages of Modelling and Self-efficacy

(Refer Schultz, pg no. 412- 413)


The developmental stages are: childhood, adolescence, adulthood, and old age.

APPLICATION

• Behaviour modification
(Refer Schultz, pg no. 413-416)
MODULE 4: HUMANISTIC AND EXISTENTIAL PERSPECTIVE

UNIT 1: ABRAHAM MASLOW

Abraham Maslow was born in Brooklyn, New York, in 1908, to Russian Jewish immigrant
parents.

INTELLECTUAL ANTECEDENTS

Psychoanalysis

Maslow believed that psychoanalysis provided the best system for analyzing
psychopathology and also the best form of psychotherapy available. However, he found the
psychoanalytic system unsatisfactory as a general psychology applicable to all of human
thought and behavior. Maslow’s own personal analysis profoundly affected him and
demonstrated the substantial differences that exist between intellectual knowledge and
actual, gut-level experience.

Social Anthropology

Maslow seriously studied the work of social anthropologists, such as Bronislaw Malinowski,
Margaret Mead, Ruth Benedict, and Ralph Linton. In addition, Maslow was fascinated by
William Sumner’s book Folkways (1940). According to Sumner, human behavior is largely
determined by cultural patterns and prescriptions. Maslow was so inspired by Sumner that
he vowed to devote himself to the same areas of study.

Gestalt Psychology

He admired Max Wertheimer, whose work on productive thinking is closely related to
Maslow’s writings on cognition and to his work on creativity. For Maslow, as for Gestalt
psychologists, an essential element in effective reasoning and creative problem solving is
the ability to perceive and think in terms of wholes or patterns rather than isolated parts.

Kurt Goldstein

Kurt Goldstein, a neurophysiologist, emphasized the unity of the organism—what happens
in any part affects the entire system. Maslow’s work on self-actualization was inspired partly
by Goldstein, who was the first to use the term. Maslow dedicated Toward a Psychology of
Being (1968) to Goldstein.

A neurophysiologist whose main focus was brain-damaged patients, Goldstein viewed self-
actualization as a fundamental process in every organism, a process that may have negative
as well as positive effects on the individual. In Goldstein’s view, every organism has one
primary drive: “[The] organism is governed by the tendency to actualize, as much as
possible, its individual capacities, its ‘nature,’ in the world” (1939).

Goldstein argued that tension release is a strong drive but only in sick organisms. A healthy
organism’s primary goal is “the formation of a certain level of tension, namely, that which
makes possible further ordered activity” (1939).

According to Goldstein, successful coping with the environment often involves uncertainty
and shock. In fact, the healthy self-actualizing organism invites such shock by venturing into
new situations in order to utilize its capacities. For Goldstein (and for Maslow also), self-
actualization does not rid the individual of problems and difficulties; on the contrary, growth
may bring a certain amount of pain and suffering.

MAJOR CONCEPTS

Hierarchy of Needs

In his theory of the hierarchy of needs, Maslow accomplished an
intellectual tour de force. He managed to integrate in a single model the approaches of the
major schools of psychology—behaviorism, psychoanalysis and its offshoots, and humanistic
and transpersonal psychology. He illustrated that no one approach is better or more valid
than another. Each has its own place and its own relevance.

Maslow defined neurosis and psychological maladjustment as deficiency diseases; that is,
they are caused by deprivation of certain basic needs, just as the absence of certain vitamins
causes illness. The best examples of basic needs are the physiological ones, such as hunger,
thirst, and sleep. Basic needs are found in all individuals. The amount and kind of
satisfaction varies among societies, but basic needs (such as hunger) can never be ignored.
There are two types of basic needs, viz., physiological needs and psychological needs.
Physiological needs include the need for food, drink, oxygen, sleep, and sex. Many people in
our culture can satisfy these needs without difficulty. However, if biological needs are not
adequately met, the individual becomes almost completely devoted to fulfilling them.
Maslow argues that a person who is literally dying of thirst has no great interest in satisfying
any other needs.

Certain psychological needs must also be satisfied in order to maintain health. Maslow
includes the following as basic psychological needs: the need for safety, security, and
stability; the need for love and a sense of belonging; and the need for self-respect and
esteem. In addition, every individual has growth needs: a need to develop one’s potentials
and capabilities and a need for self-actualization.

By safety needs, Maslow means the individual’s need to live in a relatively stable, safe,
predictable environment. All people have belonging and love needs. We are motivated to
seek close relationships with others and to feel part of various groups, such as family and
groups of peers. These needs, Maslow wrote, are increasingly frustrated in our highly
mobile, individualistic society. Furthermore, the frustration of these needs is most often
found at the core of psychological maladjustment.

Maslow (1987) described two kinds of esteem needs. First is a desire for competence and
individual achievement. Second, we need respect from others—status, fame, appreciation,
and recognition. When these needs are unmet, the individual tends to feel inferior, weak, or
helpless.

Even if all these needs are satisfied, Maslow points out, individuals still feel frustrated or
incomplete unless they experience self-actualization—full use of their talents and
capacities. The form that this need takes varies widely from person to person. Each of us has
different motivations and capacities. To one person, becoming an excellent parent may be a
source of self-actualization; another may feel impelled to achieve as an athlete, painter, or
inventor.

According to Maslow, more basic needs must be fulfilled before less critical needs are met.
For example, both physiological and love needs are essential to the individual; however,
when one is starving, the need for love is not a major factor in behavior.
Self-actualization

Maslow loosely defined self-actualization as “the full use and exploitation of talents,
capacities, potentialities, etc.” (1970). Self-actualization is not a static state. It is an ongoing
process in which one’s capacities are fully, creatively, and joyfully utilized.

Most commonly, self-actualizing people see life clearly. They are less emotional and more
objective, less likely to allow hopes, fears, or ego defenses to distort their observations.
Maslow found that all self-actualizing people are dedicated to a vocation or a cause. Two
requirements for growth are commitment to something greater than oneself and success at
one’s chosen tasks. Major characteristics of self-actualizing people include creativity,
spontaneity, courage, and hard work.

Maslow deliberately studied only those who were relatively free of neurosis and emotional
disturbance. He found that his psychologically healthy subjects were independent and self-
accepting; they had few self-conflicts and were able to enjoy both play and work. They had
more interests and less fear, anxiety, boredom, or sense of purposelessness. Whereas most
other people had only occasional moments of joy, triumph, or peak experience, self-
actualizing individuals seemed to love life in general.

One of Maslow’s main points is that we are always desiring something and rarely reach a
state of complete satisfaction, one without any goals or desires. His need hierarchy is an
attempt to predict what kinds of desires will arise once the old ones are sufficiently satisfied
and no longer dominate behavior.

Characteristics of self-actualizing people:

1. Clear perception of reality


2. Acceptance of self, others, and nature
3. Spontaneity, simplicity, and naturalness
4. Dedication to a cause
5. Independence and need for privacy
6. Freshness of appreciation
7. Peak experiences
8. Social interest
9. Deep interpersonal relationships
10. Tolerance and acceptance of others
11. Creativeness and originality
12. Resistance to social pressures

Metamotivation

Metamotivation refers to behavior inspired by growth needs and values. In other words,
metamotivation is the motivation of self-actualizers, which involves maximizing personal
potential rather than striving for a particular goal object. According to Maslow, this kind of
motivation is most common among self-actualizing people, who are by definition already
gratified in their lower needs. Metamotivation often takes the form of devotion to ideals or
goals, to something “outside oneself.” Frustration of metaneeds brings about
metapathologies—a lack of values, meaningfulness, or fulfillment in life. Maslow argues
that a sense of identity, success in a career, and commitment to a value system are as
essential to one’s psychological well-being as are security, love, and self-esteem.

[Metaneeds and metapathology; refer Schultz]

Grumbles and Metagrumbles


Maslow’s system includes different levels of complaints that correspond to the levels of
frustrated needs. In a factory situation, for example, low-level grumbles might be responses
to unsafe working conditions, authoritarian supervisors, or a lack of job security. These
complaints address deprivations of basic needs for physical safety and security. Complaints
of a higher level might be inadequate recognition of accomplishments, loss of prestige, or
lack of group solidarity—that is, complaints based on threats to belonging needs or esteem
needs.

Metagrumbles speak to the frustration of metaneeds, such as perfection, justice, beauty,
and truth. This level of grumbling is a good indication that everything else is going fairly
smoothly. When people complain about the unaesthetic nature of their surroundings, for
example, it probably means that their more basic needs have been relatively well satisfied.

Maslow assumes that we should never expect an end to complaints; we should only hope to
move to higher levels of complaint. When grumblers are frustrated over the imperfection of
the world, the lack of justice, and so on, it is a positive sign: despite a high degree of basic
satisfaction, people are striving for still greater improvement and growth. In fact, Maslow
suggests, a good measure of the enlightenment of a community is the percentage of
metagrumblers among its members.

Maslow lists the following self-actualizing characteristics (1970, pp. 153–172). (For
simplicity, Engler [2003] grouped them into four dimensions—awareness, honesty,
freedom, and trust.)

Awareness

1. more efficient perception of reality and more comfortable relations with it

2. continued freshness of appreciation.

3. mystic and peak experiences

4. discrimination between means and ends, between good and evil

Honesty
5. Gemeinschaftsgefühl [a feeling of kinship with others]

6. deeper and more profound interpersonal relations

7. the democratic character structure

8. philosophical, unhostile sense of humor

Freedom

9. spontaneity; simplicity; naturalness

10. the quality of detachment; the need for privacy

11. autonomy; independence of culture and environment

12. self-actualizing creativeness

Trust

13. acceptance (self, others, nature)

14. problem centering [as opposed to being ego-centered]

15. resistance to enculturation; the transcendence of any particular culture

Self-actualization Theory

In his last book, The Farther Reaches of Human Nature (1971), Maslow describes eight ways
in which individuals self-actualize, or eight behaviors leading to self-actualization.

1. Concentration. “First, self-actualization means experiencing fully, vividly, selflessly, with
full concentration and total absorption” (Maslow, 1971, p. 45). Usually, we are relatively
unaware of what is going on within or around us. (Most eyewitnesses recount different
versions of the same occurrence, for example.) However, we have all had moments of
heightened awareness and intense involvement, moments that Maslow would call self-
actualizing.
2. Growth Choices. If we think of life as a series of choices, then self-actualization is the
process of making each decision a choice for growth. We often have to choose between
growth and safety, between progressing and regressing. Each choice has its positive and its
negative aspects. To choose safety is to remain with the known and the familiar but to risk
becoming stultified and stale. To choose growth is to open oneself to new and challenging
experiences but to risk the unknown and possible failure.

3. Self-awareness. In the process of self-actualizing, we become more aware of our inner
nature and act in accordance with it. This means we decide for ourselves whether we like
certain films, books, or ideas, regardless of others’ opinions.

4. Honesty. Honesty and taking responsibility for one’s actions are essential elements in
self-actualizing. Rather than pose and give answers that are calculated to please another or
to make ourselves look good, we can look within for the answers. Each time we do so, we
get in touch with our inner selves.

5. Judgment. The first four steps help us develop the capacity for “better life choices.” We
learn to trust our own judgment and our own inner feelings and to act accordingly. Maslow
believes that following our instincts leads to more accurate judgments about what is
constitutionally right for each of us—better choices in art, music, and food, as well as in
major life decisions, such as marriage and a career.

6. Self-development. Self-actualization is also a continual process of developing one’s
potentialities. It means using one’s abilities and intelligence and “working to do well the
thing that one wants to do” (Maslow, 1971, p. 48). Great talent or intelligence is not the
same as self-actualization; many gifted people fail to use their abilities fully, while others,
with perhaps only average talents, accomplish a great deal. Self-actualization is not a thing
that someone either has or does not have. It is a never-ending process of making real one’s
potential.

7. Peak Experiences. “Peak experiences are transient moments of self-actualization”
(Maslow, 1971, p. 48). We are more whole, more integrated, more aware of ourselves and
of the world during peak moments. At such times, we think, act, and feel most clearly and
accurately. We are more loving and accepting of others, have less inner conflict and anxiety,
and are better able to put our energies to constructive use. Some people enjoy more peak
experiences than others, particularly those Maslow called transcending self-actualizers. (See
the following sections: “Peak Experiences” and “Transcending Self-actualization.”)

8. Lack of Ego Defenses. A further step in self-actualization is to recognize our ego defenses
and to be able to drop them when appropriate. To do so, we must become more aware of
the ways in which we distort our images of ourselves and of the external world—through
repression, projection, and other defenses.

Peak Experiences

Peak experiences are especially joyous and exciting moments in the life of every individual.
In other words, peak experience is a moment of intense ecstasy, similar to a religious or
mystical experience, during which the self is transcended. Maslow notes that peak
experiences are often inspired by intense feelings of love, exposure to great art or music, or
the overwhelming beauty of nature. Virtually everyone has peak experiences, although we
often take them for granted. One’s reactions while watching a vivid sunset or listening to a
moving piece of music are examples of peak experiences. According to Maslow, peak
experiences tend to be triggered by intense, inspiring occurrences.

These experiences may also be triggered by tragic events. Recovering from depression or a
serious illness, or confronting death, can initiate extreme moments of love and joy. The lives
of most people are filled with long periods of relative inattentiveness, lack of involvement,
or even boredom. By contrast, peak experiences, understood in the broadest sense, are
those moments when we become deeply involved, excited by, and absorbed in the world.
The most powerful peak experiences are relatively rare.

Plateau Experience

A peak experience is a “high” that may last a few minutes or several hours, but rarely
longer. Maslow also discusses a more stable and long-lasting kind of experience that he
refers to as a plateau experience. The plateau experience represents a new and more
profound way of viewing and experiencing the world. It involves a fundamental change in
attitude, a change that affects one’s entire point of view and creates a new appreciation and
intensified awareness of the world.

Transcending Self-Actualization

Maslow found that some self-actualizing individuals tend to have many peak experiences,
whereas other people have them rarely, if ever. He came to distinguish between self-
actualizers who are psychologically healthy, productive human beings, with little or no
experience of transcendence, and those for whom transcendence is important or even
central.

Transcending self-actualizers are more often aware of the sacredness of all things, the
transcendent dimension of life, in the midst of daily activities. Their peak or mystical
experiences are often valued as the most important aspects of their lives. They tend to think
more holistically than “merely healthy” self-actualizers; they are better able to transcend
the categories of past, present, and future, and good and evil, and to perceive a unity
behind the apparent complexity and contradictions of life. They are more likely to be
innovators and original thinkers than systematizers of the ideas of others. As their
knowledge develops, so does their sense of humility and ignorance, and they may come to
regard the universe with increasing awe. Because transcenders generally regard themselves
as the carriers of their talents and abilities, they are less ego-involved in their work.

Not everyone who has had a mystical experience is a transcending self-actualizer. Many
who have had such experiences have not developed the psychological health and the
productiveness Maslow considered to be essential aspects of self-actualization. Maslow also
found as many transcenders among business executives, managers, teachers, and politicians
as among poets, musicians, ministers, and the like, for whom transcendence is almost
assumed.

Note: Maslow coined the term eupsychia (you-sigh-key-a) to refer to ideal, human-
oriented societies and communities. He preferred it to utopia, which Maslow considered
overused and whose definition suggests impracticality and ungrounded idealism. The
development of an ideal society by psychologically healthy, self-actualizing individuals
was quite possible, he believed. All members of the community would be engaged in
seeking personal development and fulfilment in their work and in their personal lives. But
even an ideal society will not necessarily produce self-actualizing individuals.

Synergy

The term synergy was originally used by Maslow’s teacher Ruth Benedict (1970) to refer to
the degree of interpersonal cooperation and harmony within a society. Synergy means
cooperation (from the Greek word for “work together”). Synergy also refers to a combined
action of elements resulting in a total effect that is greater than all the elements taken
independently.

As an anthropologist, Benedict observed that people in some societies are clearly happier,
healthier, and more efficient than in others. Some groups have beliefs and customs that are
basically harmonious and satisfying to their members, whereas other groups have traditions
that promote suspicion, fear, and anxiety.

Under conditions of low social synergy, the success of one member brings about loss or
failure for another. High social synergy maximizes cooperation. The cultural belief system
reinforces cooperation and positive feelings between individuals, and helps minimize
conflict and discord.

Maslow also writes of synergy in individuals. Identification with others tends to promote
high individual synergy. If the success of another is a source of genuine satisfaction to the
individual, then help is freely and generously offered. In a sense, both selfish and altruistic
motives are merged. In aiding another, the individual is also seeking his or her own
satisfaction.

Synergy can also be found within the individual as unity between thought and action. To
force oneself to act indicates some conflict of motives. Ideally, individuals do what they
should do because they want to do so.

DYNAMICS

Psychological Growth
The pursuit of self-actualization cannot begin until the individual is free of the domination of
the lower needs, such as needs for security and esteem. According to Maslow, early
frustration of a need may fixate the individual at that level of functioning. For instance,
those who were unpopular as children may crave attention, recognition, and praise from
others throughout their lives.

The pursuit of satisfaction of higher needs is in itself one index of psychological health.
Maslow argues that fulfillment of higher needs is intrinsically more satisfying and that
metamotivation is an indication that the individual has progressed beyond a deficiency level
of functioning.

Obstacles to Growth

In Maslow’s view, growth motivation is less basic than the drive to satisfy physiological
needs and needs for security, esteem, and so on. The process of self-actualization can be
limited by (1) negative influences from past experience and resulting unproductive habits,
(2) social influence and group pressure that often operate against our own taste and
judgment, and (3) inner defenses that keep us out of touch with ourselves. Because self-
actualization tops the need hierarchy, it is the weakest need and is easily inhibited by
frustration of more fundamental needs. Also, most people avoid self-knowledge, which is at
the heart of the process of self-actualization, and are afraid of the changes in self-esteem
and self-image that self-knowledge brings.

Poor Habits

Poor habits often inhibit growth. Among these, Maslow included addiction to drugs or alcohol,
poor diet, and other behaviors that adversely affect health and efficiency. A destructive
environment or rigid, authoritarian education can easily lead to unproductive habits based
on a deficiency orientation. Also, any deep-seated habit tends to interfere with
psychological growth because it diminishes the flexibility and openness necessary to
operate effectively in a variety of situations.

Group pressure and social propaganda also tend to limit the individual. They act to reduce
autonomy and stifle independent judgment; the individual is pressured to substitute
external, societal standards for his or her own taste or judgment. A society may inculcate a
biased view of human nature as seen, for example, in the Western belief that most human
instincts are essentially sinful and must be controlled or subjugated. Maslow argues that this
negative attitude often frustrates growth and that the opposite is in fact true; our instincts
are essentially good, and impulses toward growth are the major sources of human
motivation.

Ego Defenses

Maslow considers ego defenses internal obstacles to growth. The first step in dealing with
ego defenses is to recognize them and to see clearly how they operate. Then the individual
should attempt to minimize distortions created by the defenses. Maslow adds two new
defense mechanisms, desacralization and the Jonah complex, to the traditional
psychoanalytic listing of projection, repression, denial, and the like.

Desacralization refers to the act of impoverishing one’s life by the refusal to treat anything
with deep seriousness and concern. Today, few cultural or religious symbols receive the care
and respect they once enjoyed; consequently, they have lost their power to thrill, inspire, or
even motivate us. Maslow often refers to modern values concerning sex as an example of
desacralization. Although a more casual attitude toward sex may lessen frustration and
trauma, Maslow believed that sexual experience has lost the power it once had to inspire
artists, writers, and lovers.

The Jonah complex refers to the refusal to realize one’s full capabilities. Just as the Old
Testament Jonah attempted to avoid the responsibilities of becoming a prophet, many
people avoid responsibility because they actually fear using their capacities to the fullest.
They prefer the security of undemanding goals over ambitious ones that require them to
fully extend themselves. This attitude is not uncommon among students who “get by,”
utilizing only a fraction of their talents and abilities.

This “fear of greatness” may be the largest barrier to self-actualization. Living fully is more
than many of us feel we can bear. At times of deepest joy and ecstasy, people often say,
“It’s too much,” or, “I can’t stand it.” The root of the Jonah complex lies in the fear of letting
go of a limited but manageable existence, the fear of losing control, being torn apart, or
disintegrating.

Self

Maslow defines the self as an individual’s inner core or inherent nature—one’s tastes,
values, and goals. Understanding one’s inner nature and acting in accordance with it is
essential to actualizing the self. Maslow approaches understanding the self by studying
those individuals most in tune with their own natures, those who provide the best examples
of self-expression or self-actualization. However, he does not discuss the self as a specific
structure within the personality.

Therapist

Maslow believed a good therapist is like an older brother or sister, someone who offers care
and love. But more than this, Maslow proposed the model of the Taoist helper, a person
who offers assistance without interference. A familiar example is a good coach who works
with the athlete’s natural style to strengthen and improve it. A skillful coach
does not try to force all athletes into the same mold. Good parents are like Taoist helpers
when they resist doing everything for their child. The child develops best by means of
guidance, not interference.

Although Maslow underwent psychoanalysis for several years and received informal training
in psychotherapy, his interests always revolved around research and writing rather than the
actual practice of psychotherapy.

Maslow (1987) did make an important distinction between what he called basic needs
therapy, designed to help people meet primary needs such as safety, belonging, love, and
respect, and insight therapy, which is a profound, long-term process of growth in self-
understanding.

Maslow viewed therapy as a way of satisfying the frustrated basic needs for love and
esteem in virtually everyone who seeks psychological help. He argued (1970) that warm
human relationships can provide much of the same support found in therapy.

Good therapists should love and care for the being or essence of the people they work with.
Maslow (1971) wrote that those who seek to change or manipulate others lack this essential
attitude. For example, he argued that a true dog lover would never crop the animal’s ears or
tail, and one who really loves flowers would not cut or twist them to make fancy floral
arrangements.

UNIT 2: CARL ROGERS

[Refer Lindzey]

Carl Rogers, the fourth of six children, was born on January 8, 1902, in Oak Park, Illinois, into
a prosperous and strict fundamentalist Protestant home. He created and fostered client-
centered therapy, pioneered the encounter-group movement, was one of the founders of
humanistic psychology, and was the pivotal member of the first person-centered groups
working to resolve international political conflicts.

INTELLECTUAL ANTECEDENTS

Protestant Thought

Dewey and Kilpatrick

MAJOR CONCEPTS

The Field of Experience

There is a field of experience unique to each individual. This field contains “all that is going
on within the envelope of the organism at any given moment which is potentially available
to awareness” (1959, p. 197). It includes events, perceptions, and sensations of which a
person is unaware but could recognize if he or she focused on these inputs. It is a private,
personal world that may or may not correspond to observed, objective reality.

This field of experience is selective, subjective, and incomplete (Van Belle, 1980). It is
bounded by psychological limitations (what we are willing to be aware of) and biological
limitations (what we are able to be aware of).

The Self as a Process

Within the field of experience is the self. The self is an unstable, changing entity: an
organized, consistent gestalt, constantly forming and reforming as situations change.
Rogers’s self is a process, a system that is always shifting. The self or self-concept, then,
is a person’s understanding of himself or herself, based on past experience, present inputs,
and future expectancies (Evans, 1975).

The Ideal Self

The ideal self is “the self-concept which the individual would most like to possess, upon
which he places the highest value for himself” (Rogers, 1959, p. 200). The ideal self is a
model toward which a person can strive. Like the self, it is a shifting structure, constantly
undergoing redefinition. If one’s ideal self differs significantly from the actual self, the
person may be uncomfortable and dissatisfied, and may experience neurotic difficulties.

The ideal self can become an obstacle to personal health when it differs greatly from the
real self. People who suffer from such a discrepancy often are unwilling to see the
difference between ideals and acts.

Self-Actualizing Tendency

The self-actualizing tendency is a part of human nature. Moreover, this urge is not limited
to human beings but is part of the process of all living things: It is the urge evident in all
organic and human life—to expand, extend, become autonomous, develop, and mature. It is
the tendency to express and activate all the capacities of the organism to the extent that
such activation enhances the organism or the self (Rogers, 1961, p. 35). Rogers concludes
that in each of us lies an inherent drive toward being as competent and capable as we are
biologically able to be.

Personal Power

The application of the person-centered approach to society, which Rogers calls personal
power, is concerned with “the locus of decision-making power: who makes the decisions
which, consciously or unconsciously, regulate or control the thoughts, feelings, or behavior
of others or oneself” (1978). Rogers assumed that each of us, if given the opportunity, has
an enormous capacity to use our personal power correctly and beneficially.

Congruence and Incongruence

He defines the term congruence as the degree of accuracy between experience,
communication, and awareness. A high degree of congruence means that communication
(what one is expressing), experience (what is occurring), and awareness (what one is
noticing) are all nearly equal. One’s observations and those of an external observer would
be consistent in a situation that has high congruence.

Small children exhibit high congruence. They express their feelings so readily and
completely that experience, communication, and awareness are much the same for them.

Incongruence occurs when differences emerge between awareness, experience, and
communication. Incongruence between awareness and experience is called repression or
denial. The person simply is unaware of what he or she is doing. Most psychotherapy works
on this aspect of incongruence, helping people become more aware of their actions,
thoughts, and attitudes as these behaviors affect the clients themselves and others. A
person who exhibits this kind of incongruence may be perceived by others as deceitful,
inauthentic, or dishonest.

Incongruence may be experienced as tension, anxiety, or, in more extreme circumstances,
disorientation and confusion. Incongruence occurs when a person is unaware of these
conflicts, does not understand them, and therefore cannot begin to resolve or balance
them.

DYNAMICS

Psychological Growth

Rogers is convinced that innate tendencies toward health and growth are facilitated by interpersonal
relationships in which at least one member is free enough from incongruence to be in touch
with his or her own self-correcting center. Self-acceptance is a prerequisite to an easier and
more genuine acceptance of others. In turn, being accepted by another leads to a greater
willingness to accept oneself. The last necessary element is empathic understanding
(Rogers, 1984), the ability to accurately sense the feelings of others. This self-correcting and
self-enhancing cycle helps people overcome obstacles and facilitates psychological growth.

Obstacles to Growth

Conditions of Worth

Behaviors or attitudes that deny some aspect of the self are called conditions of worth.
Such conditions are considered necessary for gaining a sense of worth and for obtaining love.
Conditions of worth inhibit not only behavior but also maturation and awareness; they lead
to incongruence and eventually to less personal awareness.

These conditions become the basic obstacles to accurate perception and realistic thinking.
They are selective blinders and filters a child uses to help ensure a supply of love from
parents and others.

False Self-image

As the child matures, the problem may persist. In order to support the false self-image, a
person continues to distort experiences—the greater the distortion, the greater the chance
for mistakes and the creation of additional problems. The behaviors, errors, and confusion
that accumulate are manifestations of the initial distortions. The situation feeds back on
itself. Each experience of incongruence between the self and reality leads to increased
imbalance, which in turn leads to increased defensiveness, shutting off experiences and
creating new occasions for incongruence.

THE FULLY FUNCTIONING PERSON

The fully functioning person is a person who is completely aware of his or her ongoing self.

“The fully functioning person” is synonymous with optimal psychological adjustment,
optimal psychological maturity, complete congruence, complete openness to experience. . . .
Since some of these terms sound somewhat static, as though such a person “had arrived,” it
should be pointed out that all the characteristics of such a person are process characteristics.
The fully functioning person would be a person-in-process, a person continually changing.
(Rogers, 1959, p. 235)

The fully functioning person has several distinct characteristics, the first of which is an
openness to experience. There is little or no use of the early warning signals that restrict
awareness. The person is continually moving away from defensiveness and toward direct
experience.

A second characteristic is living in the present —fully realizing each moment. This ongoing,
direct engagement with reality allows “the self and personality [to] emerge from
experience, rather than experience being translated or twisted to fit a preconceived self-
structure” (1961, pp. 188–189). An individual is capable of restructuring his or her
responses as experience allows or suggests new possibilities.

A final characteristic is trusting in one’s inner urgings and intuitive judgments, an
ever-increasing trust in one’s capacity to make decisions. Rogers suggests that the fully
functioning person will make mistakes through incorrect information, not incorrect
processing or misperceptions.

PERSON-CENTERED THERAPY

[Refer Schultz and Lindzey]

UNIT 3: ROLLO MAY

Rollo May was born in Ada, Ohio, in 1909, into a family characterized by intense marital
conflicts.

[Refer Ryckman; pg no: 477- 497]

UNIT 4: VIKTOR FRANKL

[Refer pdf and word doc]


MODULE 5: TRAIT AND COGNITIVE PERSPECTIVE

UNIT 1: GORDON ALLPORT: TRAIT CONCEPT, CHARACTERISTICS OF HEALTHY PERSONALITY

[Refer Schultz, pg no: 245- 254]

UNIT 2: RAYMOND CATTELL

[Refer Schultz, pg no: 266- 277]

UNIT 3: HANS EYSENCK

[Refer Schultz, pg no: 278]

UNIT 4: BIG 5/ PERSONALITY TYPE THEORY

[Refer Schultz, pg no: 282]

UNIT 5: GEORGE KELLY

[Refer Ryckman, pg no: 386- 415]

Personal construct theory approaches understanding others by attempting to step inside
their world and speculate how the world might appear from that vantage point. George
Alexander Kelly, an only child, was born April 28, 1905, on a farm near Perth, Kansas, a
small town south of Wichita.

INTELLECTUAL ANTECEDENTS

Pragmatism and John Dewey

The pragmatic philosophy and psychology of John Dewey (1859–1952) exerted the strongest
single influence on the development of personal construct theory. Dewey’s pragmatism ties
directly to Kelly’s intention to create a psychology of action and use.

Existential-Phenomenological Psychology
Both Butt (1997) and Holland (1970) have argued convincingly that personal construct
theory is a variety of existential phenomenology, despite Kelly’s repeated protest that his
theory could not be subsumed by any other position.

Korzybski and Moreno

Kelly is deeply indebted to the semantic theory of Alfred Korzybski and to Jacob Moreno’s
psychodrama as a therapeutic method. For Korzybski (1933, 1943), “Suffering and
unhappiness result from a disturbance in the relationship between something in the world
and its semantic, linguistic referents in the person” (Stewart & Barry, 1991). Kelly
combined this idea with Moreno’s (1923, 1937) idea of helping people by involving them in
a personal play about their own lives that is cast by a director and then performed on a
formal stage.

MAJOR CONCEPTS

Personal construct theory takes the position that each theory of personality and
psychotherapy must explicate the philosophical assumptions made in building that theory.
The basic philosophical position for personal construct theory is known as constructive
alternativism. Constructive alternativism is the idea that, while there is only one true
reality, reality is always experienced from one or another perspective, or alternative
construction.

Although there exists a real world external to our perceptions of it, we, as individuals, come
to know that world by placing our own interpretation upon it. The world does not
automatically and directly reveal itself to us; we must strike up a relationship with it. Only
through the relationship we form with the world do we gain the knowledge we need to
progress. We are responsible for the type of knowledge we have of the world in which we
live. Personal construct theory involves the additional assumption that knowledge of the
world is unitary, and over the long haul we will know what things are really like. At some
point in the far distant future, it will eventually be clear which conception of the world we
should accept, which conception is veridical, or genuine.

Personal Construct Systems: Basic Characteristics

First, Kelly took the fundamental position in developing construct theory that we
understand ourselves and others by becoming aware of what we anticipate will happen in
our lives. Second, we anticipate things, thereby offering a construction of them, by looking
for something similar, yet a little different, from what we already know. Third, this
understanding is structured in a binary way, best described as a dichotomy.

These statements of the theory give information about what the person is and how we are
to approach an understanding of this person. First, we are to understand the person as an
organized whole. The person cannot, then, be examined in part functions such as memory,
cognition, perception, emotion, sensations, learning, and so on; nor can we see the person
simply as a part of a social group. Rather, we recognize the person in his or her own right as
the focus of study, an individual to be understood in his or her own terms. The unit of
analysis is the personal construct, and we consider the person as if he or she were
structured psychologically as a system of personal constructs. Using the concept of a
personal construct, the clinician approaches the person according to the meaning
dimensions the person imposes upon the world in order to make the world interpretable.
Another way of characterizing this approach is to say that the concern is with the person’s
worldview, particularly in the area of interpersonal relations.

This understanding of the person’s worldview applies to both the client and the professional
psychologist. The theory is designed to be a reflexive theory: the way the client is explained
can likewise explain the therapist as he or she creates this explanation. Any explanation
applied to the client must likewise be applied to the person who offers the explanation.

Organizational Structure of Construct Systems

Personal constructs as single units are related to one another. First, the principle of
ordination states that constructs are organized in a hierarchical fashion. Second, the
concept of fragmentation states that the total construct system contains a certain amount
of internal contradiction. Third, the term range states that constructs cover only a limited
amount of material. Finally, the concept of modulation concerns how permeable
constructs are, that is, how open they are to modification.

Process and Function of Construct Systems

Even though construct systems have a definite form (structure), they are always in the
process of changing. For the person who is seen as constantly “in process,” the problem of
psychological importance is to understand the direction in which he or she moves. First,
Kelly’s notion of choice means choosing, within one’s own constructions, that alternative
which offers the best way for us either to extend our understanding of a matter further or
to define more carefully the matter before us. Second, Kelly’s idea of experience involves
one’s ability to continuously reinterpret (reconstrue) the events so that life continues to
have currency in an ever-changing world.

MODULE 6: EASTERN APPROACHES AND NARROW BAND THEORIES

UNIT 1: INDIAN PERSPECTIVE OF 4 STAGES OF LIFE

[Refer Frager, pg no: 329 -351]

UNIT 2: TRIGUNA THEORY, YOGA

TRIGUNA THEORY

Personality happens to be one of the most extensively studied constructs of modern
psychology. Prior to the development of modern psychology and personality theories, the
concept of personality was extensively discussed in India. Many personality theories were
also developed as a result of this. In India, the religious-philosophical traditions of Hinduism,
Buddhism and Jainism constituted the core of ancient Indian intellectual traditions and
contributed significantly in conceptualizing person and personality.

One major theory of personality that developed in ancient India was the Triguna theory. In
the literal sense, the word triguna is a combination of two words: “tri,” meaning three, and
“guna,” meaning a mental attribute of the individual. Etymologically, “guna” is a Sanskrit
term denoting a cord, string, or thread. It also refers to the numerous strands that make up
a rope.

The religious-philosophical texts of ancient India use the terms svabhava (disposition) and
prakriti to characterize a person. The term prakriti denotes the fundamental human nature,
which consists of three attributes called triguna. The three mental attributes are described
below.

TRIGUNA THEORY

• Sattva (clarity or light)
• Rajas (activity)
• Tamas (inertia)
a) Sattva
Sattva is characterized by balance, peace, equanimity, and qualities such as
cleanliness, truthfulness, dutifulness, detachment, discipline, contentment, and
staunch determination. Sattva is considered the most spiritual. One of the goals of
Yoga practice is to increase the sattvic element in the individual, which supports the
process of Self-realization. Pure sattva is a plan that remains unrealized.
b) Rajas

Rajas is intense, dynamic, and passionate, and is marked by agitation, anxiety, and
nervousness. Attributes of Rajas include intense activity, desire for sense
gratification, little interest in spiritual elevation, envy of others, and materialistic
mentality. Pure rajas is energy without direction or goal. It is considered an
intermediary between the sattva and tamas gunas.

c) Tamas
It is manifested in dullness, lethargy, fatigue, and even depression. Qualities
associated with Tamas include mental imbalance, anger, ignorance, arrogance,
sleepiness, religious neglect, jealousy, fear, wariness and helplessness. Pure tamas
alone is inert, dead matter.

Every individual balances these three qualities, although most people are dominated by one
of them. The three gunas are always found together, like three strands of intertwined rope.
In the average person, rajas and tamas dominate, with relatively little highly developed
awareness, or sattva. Emotions (rajas) and bodily drives (tamas) distort the focus and clarity
of pure sattvic experience. The goal of yoga practice is to decrease rajas and tamas, and
increase and intensify sattvic awareness. It is stated that a person with dominant rajas and
tamas cannot become a perfect yogi. Thus, gunas are mutually supporting, mutually
productive, mutually consorting and mutually existing attributes (Gupta, 2003).

The detailed description of triguna is provided in the Maitri Upanishad. Charaka (300 BC)
and Sushruta (800 BC) have together described seven types of personality based on sattvic
guna, six types based on rajasic guna, and three types based on tamasic guna.
The most elaborate and extensive description of triguna is documented in Chapters
14, 17, and 18 of the Bhagavad Gita. In the Gita, living entities are considered combinations of
the material and spiritual nature. The material nature comprises three modes─

i. Goodness (sattva)
ii. Passion (rajas)
iii. Ignorance (tamas)

Coming in contact with material nature, the eternal living entity becomes conditioned by
these three modes. Goodness conditions one to happiness, passion prompts one to fruitive
action, and ignorance binds one to madness. These three modes are mutually interactive
and struggle to dominate an entity.

Varaha Mihira (a great astronomer, mathematician, and astrologer of ancient India)
discusses triguna in Chapter 22 of the Brihat-Samhita. He held that the three gunas deal
with three different kinds of temperament. Predominance of any one guna or combination
of the gunas leads to idiosyncratic behaviour. Sattva guna denotes good temper. A sattvic
person is merciful, firm-minded, strong and sincere. Rajas guna refers to passionate temper.
A rajasic person is a poet, learned in various arts, performs sacrificial rites, is bold and
courageous. A person with tamasic guna has a dark temper, is deceitful, ignorant, idle, angry
and sleepy. Based on the predominance of a guna, Varaha Mihira proposed a classification
of seven prakriti─

i. Sattvic
ii. Rajasic
iii. Tamasic
iv. Sattvic-rajasic
v. Sattvic-tamasic
vi. Rajasic-tamasic
vii. Sattvic-rajasic-tamasic

The construct of triguna is perhaps one of the most extensively studied indigenous
personality theories. A number of personality scales have been developed to assess triguna.
The scales are:

• Guna Inventory (Uma, Lakshmi & Parameshwaran, 1971).
• A Temperament Schedule (Singh, 1972).
• Inventory Based on Gita Personality Typology (Das, 1987).
• Personality Inventory based on Triguna (Pathak, Bhatt & Sharma, 1992).
• Inertia, Activity, Stability Rating Scale (Mathew, 1995).
• Sattva, Rajas, Tamas Inventory (Marutham, Balodhi & Mishra, 1998).
• Gita Personality Inventory (Wolf, 1998).
• Guna Scale (Bhal & Debnath, 2006).
• Mysore Triguna Scale (Shilpa & Murthy, 2012).

Thus triguna theory is a composite typological framework of personality that indicates the
temperament, mental make-up and interaction pattern of a person.

YOGA

[Refer Frager, pg no: 331- 342]

UNIT 3: BUDDHISM
[Refer Frager, pg no: 357- 375 and 379- 380]

UNIT 4: SUFISM

[Refer Frager, pg no: 383- 398 and 400- 401]

UNIT 5: LIMITED DOMAIN THEORIES: AUTHORITARIAN PERSONALITY, FEMINIST PSYCHOLOGY (BRIEF)

AUTHORITARIAN PERSONALITY

[Print the pdf]

FEMINIST PSYCHOLOGY

[Refer Schultz, pg no: 170- 173]
