#1 Explain Schema Theory With Reference To One Study
Schema theory is a concept in cognitive psychology that illustrates how knowledge is organized
within the human mind and how this organization affects our understanding of the world. It
originated with the pioneering work of Jean Piaget in the early 20th century and was further
elaborated on by other psychologists such as Frederic Bartlett. Schemas influence how
information is encoded, stored, and retrieved. Acting as mental frameworks, they guide how we
perceive, remember, and make sense of our surroundings. A study that demonstrates schema
theory is that of Brewer and Treyens, who conducted a classic investigation into the role of
schemas in memory. Their study aimed to explore how schemas influence the encoding and
retrieval of episodic memory. They gathered 86 university psychology students and placed them
in a room designed to resemble an office, complete with typical office items like a typewriter and
paper, as well as some unusual objects like a skull. Participants were asked to wait in the room
while the experimenter ostensibly checked on the previous participant, unaware that the study had already begun.
Later, they were asked to recall items from the office. Results revealed that participants were
more likely to remember objects congruent with their schema of an office, while incongruent
items were often forgotten. Participants also tended to alter their memory of objects to fit their
schema, such as remembering a pad of yellow paper on the desk rather than on a chair. When
asked to select items from a list, participants were more likely to recognize incongruent items,
but they also showed a higher rate of falsely identifying schema-congruent items
not actually present in the room. This suggests that schemas not only influence the encoding
and retrieval of memories but also impact how individuals perceive and interpret their
surroundings.
#2 Explain the use of one research method in one study of one cognitive process.
One way researchers study the reliability of cognitive processes such as memory is through laboratory
experiments. In these experiments, researchers manipulate an independent variable to see its effect on
a dependent variable, typically collecting quantitative data. These experiments help
us understand cause-and-effect relationships by showing what happens when certain factors
are changed. Laboratory experiments control extraneous variables in order to test the
researchers' hypothesis. One example of this is the study by
Loftus and Palmer. Their goal was to conduct a study on memory reconstruction using a
laboratory experiment. They wanted to see if changing the words in a question could change
how fast people thought a car was going in a video of a crash. Their hypothesis was that using
strong words like "smashed" would make participants think the car was going faster. They
showed participants videos of car crashes and then asked them the critical question: to
estimate how fast the cars were going when they hit each other. The only difference between
conditions was the verb used in the question: the word “hit” was replaced with “collided,”
“bumped,” “contacted,” or “smashed.” The results confirmed their hypothesis: participants who
heard stronger words like “smashed” gave higher speed estimates. Lab experiments, like theirs,
controlled variables to establish cause-and-effect relationships and increase objectivity.
However, they may sacrifice ecological validity due to their artificial settings, which limits how
well the findings can be generalized to everyday life.
#3 Explain reconstructive memory with reference to one study.
Reconstructive memory is the concept that our memories can be influenced by information we
encounter after an event. Instead of simply replaying memories, our brains actively rebuild them.
It suggests that memory is an active, reconstructive process rather than passive storage and recall. This
process relies on our existing knowledge and beliefs, which can sometimes lead to errors or
changes in our memories. One example of this is the study by Loftus and Palmer. Their goal was
to conduct a study on memory reconstruction using a laboratory experiment. They wanted to
see if changing the words in a question could change how fast people thought a car was going
in a video of a crash. Their hypothesis was that using strong words like "smashed" would make
participants think the car was going faster. They showed participants videos of car crashes and
then asked them the critical question: to estimate how fast the cars were going when they hit
each other. The only difference between conditions was the verb used in the question: the word
“hit” was replaced with “collided,” “bumped,” “contacted,” or “smashed.” The results confirmed
their hypothesis: participants who heard stronger words like “smashed” gave higher speed estimates.
These findings indicate that participants' schemas of violent car crashes influenced how they
encoded and recalled the video. This supports the idea of reconstructive memory because it
suggests that our memories can be blended with information we encounter later. In this case, people's
memories of the crash were influenced by what they already believed about violent accidents.
This study supports the idea that memory isn't just about recalling facts but is instead a process
in which our past experiences and beliefs can change how we remember things, even to the point
of creating false memories.
#4 Explain the multi-store model of memory with reference to one study.
The multi-store memory model suggests that our memory is divided into three separate stores:
sensory memory, short-term memory, and long-term memory. According to this model, information
moves from sensory memory to short-term memory, and then to long-term memory. When we pay
attention to something, it moves from sensory memory to short-term memory. It stays there until we
either forget it or transfer it to long-term memory by rehearsing it. When we remember something, we are moving that
information from long-term memory back into short-term memory so that it can be used. One
aspect of this model is the serial position effect, which consists of the primacy and the recency
effect. These state that when we remember a list of things, the first and last items are easier to
remember. Glanzer and Cunitz conducted a study in 1966 to test this. The experiment used
around 50 male participants in a repeated measures design. Participants were shown lists of
words and asked to recall them through free recall under different conditions. There were three
conditions: one where the
participants would immediately recall the list, the second where they would recall the list after
doing a filler activity for 10 seconds, and the third where they would do the filler activity for 30
seconds. They found that when people had to remember the list right away, they remembered
both the first and last words well. But when they had to wait and do a task before remembering,
they only remembered the first words well. This supports the idea that short-term and long-term
memory are different, because the way people remembered the list changed depending on the
conditions. The first words were remembered because they had been rehearsed and transferred into
long-term memory (the primacy effect), while the last words were remembered because they were still
held in short-term memory (the recency effect). When people had to do the filler task before recalling,
the task displaced the last words from short-term memory, so the recency effect
disappeared. This study helps explain why short-term and
long-term memory are different, supporting the multi-store memory model.
#5 Explain one bias in thinking and decision making with reference to one study
One bias in thinking and decision-making is anchoring bias, which occurs when we rely too
heavily on the first piece of information we receive when making decisions. This bias can be
explained through the dual-processing model of thinking and decision-making. According to this
model, we have two ways of processing information: system one, which is fast and automatic, and
system two, which is slower and more deliberate. System one is quick and relies on past
experiences, but it can also make mistakes because it is not careful. System two is more careful
and considers consequences, but it takes more effort to use. People often rely on system one
because it is easier, and when we use it, we often take shortcuts in our thinking called
heuristics. Anchoring bias is one example of a bias that arises from this kind of shortcut
thinking. One study exploring anchoring bias is that of Englich and Mussweiler. They conducted a
study to see how a prosecutor's suggestion for sentencing affects judges' decisions. They had
young trial judges read a case about an alleged rape and then asked them, via a questionnaire,
what they thought the sentence should be. Half of the judges were told the
prosecutor recommended a 34-month sentence, while the other half were told it was a 12-month
sentence. The judges who heard the higher number suggested longer sentences compared to
those who heard the lower number. The results of the study support anchoring bias, as it shows
that the judges who were told the prosecutor demanded a higher sentence (high anchor) chose
significantly higher sentences on average than those who were told the prosecutor demanded a
lower sentence (low anchor). This is because they were using system one thinking and relied on
the prosecutor's demand as an anchor. This affected their decision-making and supports the
existence of anchoring bias as well as the dual-processing model.
#6 Explain the Working Memory Model with reference to one study.
The working memory model offers a different view from the multi-store memory model by
suggesting that short-term memory isn't just one temporary storage place, but rather it's made
up of different parts that work together actively. These parts allow us to do things like reason
verbally, understand text, do mental math, and other complex tasks. In this model, short-term
memory has a main controller called the central executive, which controls attention and
oversees the other components: the phonological loop (which stores verbal and auditory
information), the visuospatial sketchpad (which stores visual and spatial information), and the
episodic buffer. Together, these parts make up working memory. Landry and Bartling (2011) studied the working memory
model by looking at how saying numbers out loud affected people's ability to remember a list of
letters. Participants were randomly split into two groups: the control group performed no
concurrent task, while the experimental group performed an articulatory suppression task,
saying numbers out loud while viewing the list. The control group viewed the list for five
seconds and then waited another five seconds before attempting to write the letters in the
correct order. The experimental group did the same thing while repeating “1” and “2” out loud at
a rate of two numbers per second the entire time. The procedure was repeated 10 times for both groups.
The group that said numbers out loud performed much worse. This supports the working
memory model because it shows that when the phonological loop is overloaded by saying
numbers, it is harder to remember the letters. This fits with the model's idea that when one
component of working memory is overloaded, memory performance suffers.
This study gives evidence for the working memory model.