
Psychology 1XX3 Notes Form Perception II Mar 12, 2010

Feature Detectors:
The brain uses a division of labour, with each region along the visual pathway
processing relatively specific information and then passing it on.
Form recognition follows a similar strategy.

Magno and Parvo Cells:


First, magno and parvo cells in the retina transduce the light stimulus into a neural
impulse.
Recall that magno cells are found mainly in the periphery of the retina, and are
used for detecting changes in brightness, as well as motion and depth.
Parvo cells, on the other hand, are found throughout the retina, and are important
for detecting colour, pattern, and form.
These ganglion cells, with their small receptive fields, are the crucial first step to
object recognition. From the retina, the axons of these cells exit the eye via the
optic nerve, travel to the LGN, and end up in the primary visual cortex in the
occipital lobe.
What is most striking is that cells here are very particular about what will make
them fire, and these cells are called feature detectors.

Hodgkin and Huxley:


In 1952, Hodgkin and Huxley recorded the electrical activity of an individual
squid neuron, and this paved the way for other researchers to use the same
technique to see how individual neurons respond to specific stimuli.

Lettvin et al:
For instance, in 1959, Lettvin and colleagues discovered a neuron in the optic
nerve of a frog that responded only to moving black dots, and they called these
cells "bug detectors".
Hubel and Wiesel:
Hubel and Wiesel spent years extending this work in their studies of cells in the
visual cortex of cats and monkeys, eventually earning the Nobel Prize in 1981.
In 1962, Hubel and Wiesel began their exploration of the visual cortex
by trying to learn what types of stimuli individual cortical cells responded to.
They did this by putting microelectrodes in the cortex of a cat to record the
electrical activity of individual neurons as the cat was shown different types of
visual stimuli, such as flashes of lights.
The problem was that they weren't getting much response from the neurons, until
one day when they presented the cat with a slide that had a crack in it. When the
line that was projected from that crack moved across the cat's visual field, the
neuron started to fire like crazy!
This was a light bulb moment for Hubel and Wiesel, who realized that neurons
must respond to stimuli that are more complex than diffuse flashes of light.
They began using lines of different orientations and thicknesses that moved in
different directions, and they found that each neuron is very specific about what
will make it fire the most. These cells fire maximally to stimuli of a certain shape,
size, position, and movement, and this defines the receptive field for that cell.

Simple Cells:
Defn of Simple Cell: Responds maximally to a bar of a certain length and
orientation in a particular region of the retina.
For example, this simple cell responds the most to a horizontal bar, but if that
same bar is moved outside that particular region and/or changes orientation, then
the cell will be inhibited and actually fire less than baseline. (See image below.)

So the receptive field for a simple cell is organized in an opponent fashion,
making it sensitive to the location of the bar within the receptive field.
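
(Not part of the lecture.) A minimal sketch of how this opponent receptive field could be modelled, with all weights and stimuli made up purely for illustration: the cell sums its input weighted by an excitatory strip flanked by inhibitory regions, so a horizontal bar in the right place drives it strongly, a shifted bar pushes it below baseline, and a vertical bar barely moves it.

import numpy as np

# Hypothetical receptive field: an excitatory horizontal strip ("on" region)
# flanked by inhibitory ("off") regions, as in the opponent organization above.
receptive_field = np.array([
    [-1, -1, -1, -1, -1],
    [ 2,  2,  2,  2,  2],   # excitatory strip: prefers a horizontal bar here
    [-1, -1, -1, -1, -1],
])

def response(stimulus):
    # Weighted sum of the stimulus over the receptive field:
    # positive = excitation, negative = inhibition (below baseline).
    return float(np.sum(stimulus * receptive_field))

bar_in_place = np.array([[0] * 5, [1] * 5, [0] * 5])     # horizontal bar on the strip
bar_shifted  = np.array([[1] * 5, [0] * 5, [0] * 5])     # same bar, wrong position
bar_vertical = np.zeros((3, 5)); bar_vertical[:, 2] = 1  # wrong orientation

print(response(bar_in_place))   # 10.0 -> fires maximally
print(response(bar_shifted))    # -5.0 -> inhibited, below baseline
print(response(bar_vertical))   # 0.0  -> little response to a vertical bar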

Complex Cell:
Defn of Complex cell: Responds maximally to a bar of a certain length and
orientation, regardless of where the bar is located within the receptive field.
Unlike the simple cell, a complex cell does not care where in its receptive field
the bar falls, and it will even continue to fire if the bar is moving within
the receptive field.
Some complex cells do care about the direction of this movement, such as the cell
in this figure that fires the most when the stimulus is oriented at a certain angle
and moving in a particular direction.
See image on next page.
Hypercomplex Cell:
Defn of Hypercomplex cell: Responds maximally to a bar of a particular
orientation that ends at specific points within the receptive field.
For example, this hypercomplex cell fires the most to a horizontal bar of light that
appears anywhere in the "on" region of the receptive field, but gives only a weak
response if the bar touches the "off" region.
So these cells have an inhibitory region at one end of the receptive field, making
them sensitive to the length of the bar.
These three types of cells should give you some idea about how specifically tuned
our visual cortical cells can be.
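
(Also not from the lecture.) One common computational reading of the complex cell's position invariance, again with made-up numbers, is that it pools the responses of several simple cells that prefer the same orientation at different positions, for example by taking the maximum, so the bar's exact location within the receptive field no longer matters:

import numpy as np

def simple_cell_response(stimulus, preferred_row):
    # Hypothetical simple cell preferring a horizontal bar at one specific row.
    rf = -np.ones_like(stimulus, dtype=float)
    rf[preferred_row, :] = 2.0   # excitatory strip at this cell's preferred row
    return float(np.sum(stimulus * rf))

def complex_cell_response(stimulus):
    # Pool (take the max) over simple cells tuned to the same orientation at
    # every row, so the response is the same wherever the bar appears.
    return max(simple_cell_response(stimulus, r) for r in range(stimulus.shape[0]))

bar_top = np.zeros((4, 5)); bar_top[0, :] = 1
bar_bottom = np.zeros((4, 5)); bar_bottom[3, :] = 1

print(complex_cell_response(bar_top))     # 10.0
print(complex_cell_response(bar_bottom))  # 10.0 -> same response, different position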

Topographical Organization:
The layout of the visual scene is preserved in the visual cortex.
Neighbouring objects in your visual field are processed by neighbouring areas of
your brain, but this mapping from visual field to brain is not exact, because the
largest amount of cortex is devoted to processing information from the central
part of the visual field, which projects onto the fovea.
Nevertheless, each region of the cortex receives some input from a small piece of
the visual field, and within each region, there are cells that analyze specific
features of the scene.
For a particular part of the visual field, there are neurons that fire maximally if
there is something in the scene that has a line of a certain orientation, length, and
movement; other neurons respond maximally if there is something in that tiny
portion of the visual scene that is a specific colour; other neurons respond most
when there is a line that moves in a certain direction.
Clusters of cells in the region of cortex right beside this one do the same
analysis for the neighbouring part of the visual scene.
An important benefit of this parallel processing strategy is speed. (See image below.)
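
(An illustration, not from the lecture.) The patch-wise, parallel idea can be sketched as tiling the visual field into small regions and running the same small set of feature analyses on every region independently; the image, patch size, and "features" below are invented for the sketch:

import numpy as np

def analyse_patch(patch):
    # Toy "feature detectors" applied to one small piece of the visual field.
    return {
        "brightness":       float(patch.mean()),
        "vertical_edges":   float(np.abs(np.diff(patch, axis=1)).sum()),
        "horizontal_edges": float(np.abs(np.diff(patch, axis=0)).sum()),
    }

image = np.random.rand(8, 8)   # stand-in for the whole visual field
patch = 4                      # each cortical "region" sees a 4x4 piece

# Neighbouring patches are analysed by neighbouring "regions"; because each
# patch is analysed independently, all of them could run at the same time.
feature_maps = {
    (row, col): analyse_patch(image[row:row + patch, col:col + patch])
    for row in range(0, image.shape[0], patch)
    for col in range(0, image.shape[1], patch)
}
print(feature_maps[(0, 0)])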

Ventral Stream:
Combining Information in the Extrastriate Cortex:
The processing of visual input in the primary visual cortex involves specific cells
responding to relatively specific features from a small portion of the visual field.
But for the visual scene to make any sense, this information has to be combined to
form a meaningful whole.
Subregions in Extrastriate:
This combination begins in the extrastriate cortex, also known as visual
association cortex, which surrounds the primary visual cortex.
The extrastriate cortex has multiple subregions, each of which receives a different
type of information from the primary visual cortex about the visual scene.
For example, one subregion of the extrastriate cortex will receive information
about the colours in the scene, another about any movement in the scene, and
another about different line orientations in the scene.
It is in the extrastriate cortex that the information begins to be segregated into
two streams, according to the type of information that is processed.
One stream is the dorsal stream, also known as the "where" stream, which
processes where objects are located in the visual scene and how they are moving
within that scene.
The dorsal stream takes information from the primary visual cortex to the parietal
cortex, which processes spatial information.
The other stream is the ventral stream, also known as the "what" stream because
it processes information about what the object is, including form and colour.
The ventral stream takes information from the primary visual cortex and sends it
to the temporal cortex, where all the bits of feature information come together.

Columns in the Temporal Cortex:


The temporal cortex is arranged in vertical columns that are oriented
perpendicularly to the surface of the cortex.
Neurons in the temporal cortex respond to very specific stimuli that are much
more complex than the stimuli to which the neurons in the primary visual cortex
respond.
These stimuli include images like hands, faces, apples or chairs.
Within each cortical column, there are five layers of neurons, with each layer
responding to complex stimuli that come from the same category.
However, each layer responds to slightly different features within that category.
For example, one column of neurons might respond best to apples, and layer 1
might prefer red apples, layer 2 might fire maximally to green apples, layer 3 to
yellow apples, layer 4 to small apples, and layer 5 to big apples.
See image on next page.
Although this coding at the neuronal level can seem quite specific, every object is
not coded by a specific neuron.
In fact, an object is represented by unique activity patterns across many cells in
several different brain areas.
The individual cells are just one component of the overall representation, and
even these cells will respond to a range of stimuli.

Development of Pattern/Object/Face Recognition:


Infant visual development proceeds relatively quickly in the first few months
of life. But perception is quite different from simply being able to sense incoming
stimuli, and even if an infant has the necessary equipment to see an object, that
does not automatically mean that they can perceive patterns, objects, and faces in
the same way that we do.
Are we born with the ability to detect patterns and objects, or do we have to learn
about edges and other signals of object boundaries?
Many researchers have used the preferential looking method to determine what
kinds of patterns infants can perceive by measuring which of two patterns the
infants look at the most.

Infant Pattern Recognition:


It turns out that infants prefer to look at patterns rather than plain stimuli. When
presented with different patterns, infants prefer the ones that have a lot of
contrast, with sharp boundaries between light and dark regions.
Infants will look the longest at the most complex stimuli that they are able to
perceive.
Infants Prefer Patterns:
For example, if presented with a checkerboard pattern, a newborn will prefer to
look at the pattern with bigger squares; if the squares are too small, the newborn's
poor visual acuity will make it look just like a uniform grey surface, which isn't
very interesting.
By 2 months of age, when the infant's visual acuity has improved, they will now
prefer to look at a smaller-squared checkerboard because it is more complex.

Infant Object Recognition:


An infant's initially poor visual acuity is obviously an issue for perceiving objects
as well as patterns.
In fact, some researchers believe that because infants under 2 months of age see
so poorly, and look at such a limited part of the object, they may be unable to
perceive whole forms at all.
For instance, if you show a young infant a triangle or a star and measure where
they look, they will tend to stare at one corner and not at the entire shape, whereas
infants over 2 months of age are beginning to focus on the entire shape.
This shows that newborns are naturally attracted to certain key features of a
stimulus, like angles and edges, but not necessarily the object as a whole.

Three Month Old Infants and Partial Figures:


By 3 months, infants can perceive a whole form when given only parts of the
form.
For instance, if you're shown this picture of four circles with right-angle wedges
missing, you can't help but see a square in the middle, even though the square
isn't really there.
Infants can see this too by 3 months, because if you habituate them to a picture of
a real square and then show them this four-circle square, they will stare less at the
four-circle square than they would at a different four-circle shape.
The Trouble with Overlapping Objects:
Being able to distinguish two objects that are touching or overlapping is more
difficult, because it requires the ability to use cues like pattern and colour to tell
which parts belong to one object and which parts belong to another.
Young infants cannot use cues like colour or texture to tell one object apart from
another until they are almost 5 months old, especially if the objects are standing
still or if they move together.
But if one object moves independently of the other, infants as young as 3 months
old know that there are two separate objects and not a single larger object.

Perceptual Constancies in Infancy:


When an infant is finally able to perceive an entire form, he has to be able to
recognize that form as being the same under different viewing conditions; that
is, he must have some sense of perceptual constancy.
Imagine if looking at the same object under different lighting conditions, distances,
or angles made you perceive it as a novel object each time. It would be very
difficult, not to mention tiring, to make sense of the world.
Some studies suggest that infants are starting to get a handle on brightness, colour,
and shape constancy by as young as 4 months of age.

Granrud's Size Constancy Study:


In one study, Granrud tested infants of various ages for size constancy to see if an
infant understood that an object does not change in size when it is viewed at
different distances.
Infants were shown a teddy bear at a specific distance and were then shown a
second teddy bear at a farther distance. The second bear was either the exact same
bear, which would of course produce a smaller retinal image than the closer bear
at the first viewing, or a larger bear that would produce the exact same retinal
image as the first bear. (See image below.)
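
(A worked example, not taken from the study itself; the bear sizes and distances are invented.) The retinal-image reasoning is just the geometry of visual angle, which shrinks roughly in proportion to viewing distance, so a bear twice as large seen from twice as far subtends the same angle as the original bear seen up close:

import math

def visual_angle_deg(object_size_m, distance_m):
    # Visual angle subtended by an object, in degrees.
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

print(visual_angle_deg(0.3, 1.0))  # same bear, near:           ~17.1 degrees
print(visual_angle_deg(0.3, 2.0))  # same bear, twice as far:   ~8.6 degrees (smaller retinal image)
print(visual_angle_deg(0.6, 2.0))  # bigger bear, twice as far: ~17.1 degrees (same retinal image as the near bear)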

Infants between 4 and 5 months of age treated the identical bear that was viewed
at two different distances as familiar, but they stared much longer at the large bear
that was viewed from a greater distance.
This suggests that infants at this age had some sense of size constancy and
understood that an object that is farther away should produce a smaller retinal
image.
Naturally, size constancy is far from perfect at this age, and continues to develop
with improvements in perceiving depth, well into the school-age years.
Do we have an Innate Preference for Faces?
Some researchers have argued that even newborns prefer to look at faces over any
other pattern.
This innate preference has evolved to ensure that as infants, we orient toward
other people and not other objects in our environment. This could serve to build a
necessary social bond with our caregivers.
In these face preference studies, infants as young as 4 days old are shown
different patterns, colours, shapes, or even a scrambled face, and the infants prefer
looking at faces. Interestingly, by 2 months, infants prefer to look at attractive
faces over unattractive faces, and what an infant considers to be an attractive face
actually coincides with what adults consider to be attractive.
Furthermore, 2-month-old infants will look longer at their own mother's face than
faces of other people, and by 5 months, they can begin to detect different
emotional expressions, such as happiness or sadness.
All of these studies suggest that we are born with a readiness to perceive and
prefer face stimuli compared to other stimuli.

However, other studies have found that infants show no preference for faces over
other complex stimuli, and if there is a face preference, it emerges gradually from
all the early experience we have with faces.
The argument is that infants do not have a preference for faces per se, but really
have a preference for complex stimuli that have a lot of contrast between light and
dark, such as the eyes and mouth, as well as moving parts.
In fact, studies that used non-face stimuli that were matched to the face stimuli for
complexity showed that infants did not prefer the face stimuli.
Studies that tracked where an infant was looking when shown a picture of a face
revealed that infants under 2 months of age focused mainly on the outer contours
of the face, such as the hairline or chin.
It was not until the infant was over 2 months old that she looked at regions within
the face, like the eyes and mouth.

Experience and Learning:


All of these studies suggest that it is our early experience with faces that develops
our preference for them, and that at birth, we simply have a preference to look at
complex, high-contrast stimuli, whether these are faces or not.

Normal and Abnormal Visual Development:


There is a critical period early in life during which these early visual experiences
must occur to reserve these brain centers for the function of vision.
If that early experience is not there, then those brain centers may take on other
functions, and the individual will forever have abnormal vision.
One way that we know this to be true is from studies with kittens that have had
early visual deprivation of some kind.

Cat Studies Using Cylinder Environments:


For instance, one study raised kittens in a cylinder that had only vertical stripes
painted on the walls.
These kittens failed to develop proper feature detectors for horizontal stripes,
and as a result, they were unable to see horizontal edges and objects in their
environment.
Dark Deprivation in Cats:
Another study, conducted in collaboration with Dr. Deda Gillespie, found that if
1-month-old kittens were kept in the dark for as little as 3 to 4 days, then the visual
regions of the brain already began to degenerate.
If a 1-month-old kitten is left in the dark for an entire week or longer, then the
damage to the visual brain regions is severe and permanent.
These studies show how crucial early visual experience is, because without it, the
brain starts making other plans for areas that go unused for as little as 3 days!

Evidence for Sensitive Periods in Visual Development:


Having cataracts is like having a thick cloud in the lens of your eye, which allows
only diffuse light to reach the retina, and results in a complete loss of the ability to
perceive any objects, patterns, or details.
Usually you hear about older people having cataracts, but sometimes, babies are
born with them.
Cataracts are treatable by surgery to remove the cloudy lens and replace it with an
artificial lens. Babies born with cataracts have their surgeries at various ages,
providing us with a natural population to study the effects of different amounts of
early visual deprivation on visual development.

Preferential Looking Method: Right from birth, babies would rather look at
something that is patterned than at something that is plain grey.

If we show babies a card with stripes on one side, and they choose to look at that
side of the card, we know that they can see the stripes.
Over trials, we make the stripes thinner and thinner. The thinnest stripes babies
prefer over grey provide a measure of their vision, just like the smallest letter that
you can read on an eye chart.
Using techniques such as this, we can figure out what the world looks like to a
baby. We now know that babies can see right from birth and that vision improves
rapidly over the first few months of life.
By 6 months of age, acuity is 5 times better than it was at birth. Both visual
experience and maturation of the eye and brain contribute to the rapid
development.
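
(A rough numerical illustration, not from the lecture; the stripe width and viewing distance are invented.) The thinnest stripe a baby prefers over plain grey can be converted into a visual angle, and a five-fold improvement in acuity amounts to resolving stripes about one fifth as wide at the same distance:

import math

def stripe_angle_arcmin(stripe_width_cm, viewing_distance_cm):
    # Visual angle of a single stripe, in minutes of arc.
    return 60 * math.degrees(math.atan(stripe_width_cm / viewing_distance_cm))

newborn_thinnest   = 1.0                   # hypothetical thinnest stripe preferred at birth (cm)
six_month_thinnest = newborn_thinnest / 5  # acuity is roughly 5x better by 6 months

print(stripe_angle_arcmin(newborn_thinnest, 57))    # ~60 arcmin at a 57 cm viewing distance
print(stripe_angle_arcmin(six_month_thinnest, 57))  # ~12 arcmin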

Visual Agnosia and Prosopagnosia:


As we learned earlier, the perception of objects depends heavily on the extrastriate
cortex, or visual association cortex.
If you suffer damage to the primary visual cortex, you will lose vision in some
parts of your visual field, but the parts that you do see will seem normal, and you
will be able to perceive objects in those intact areas of your visual field.
It would be as if you were looking at a scene through a keyhole: you would be
able to see and perceive everything normally within the boundaries of the
keyhole, but everything in the scene around the keyhole would be invisible to
you.
But, if you damaged your visual association cortex, your entire visual scene
might be intact and you would probably be able to see all the objects in the scene,
but you would have a lot of difficulty in recognizing some or all of the objects.
So even though you can see everything, you wouldn't know what anything is, and
this is called visual agnosia.
Object Agnosia:
One type of visual agnosia is object agnosia, which is an inability to recognize
objects by sight.
A person suffering from object agnosia is unable to identify different objects by
sight, even though they can see those objects perfectly, have normal visual acuity,
and are able to recognize and name the objects by touch.
Occasionally the type of object that the person can't recognize is very specific.
For example, a person may not recognize different tools but can recognize
different fruits and vegetables.
Many people with object agnosia can still read, which shows that recognizing
words involves different brain mechanisms than recognizing objects.

Prosopagnosia:
Another type of visual agnosia is prosopagnosia, which occurs when a person
can recognize regular objects but cannot recognize faces.
A person with prosopagnosia will know that they are looking at a face, and they
will be able to see eyes, a nose, and a mouth, but they won't be able to put those
individual features together and perceive whose face it is, even if they're looking
in a mirror.
These people have to rely on other cues to recognize other people, like their voice,
smell, or the way that they walk.
Curiously, people with prosopagnosia can also have difficulty recognizing other
specific individual stimuli, even though they can recognize categories of objects.
For example, they would be able to recognize a dog and a car but they would not
be able to pick out their own dog or their own car.
