Extraction of nonidentity information from unfamiliar faces


Material Information

Title:
Extraction of nonidentity information from unfamiliar faces: an investigation of normal and pathological face processing
Creator:
Greve, Kevin W., 1960-
Physical Description:
xvi, 173 leaves : ill. ; 29 cm.
Publication Date:
1991
Subjects / Keywords:
Facial Expression   ( mesh )
Form Perception   ( mesh )
Pattern Recognition, Visual   ( mesh )
bibliography   ( marcgt )
non-fiction   ( marcgt )

Thesis (Ph. D.)--University of Florida, 1991.
Includes bibliographical references (leaves 164-172).
Statement of Responsibility:
by Kevin W. Greve.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
oclc - 26208756
Full Text


ACKNOWLEDGMENTS

It probably strikes most people at some point during the course of a major research project that, despite the fact that it is "your project," you could not have hoped to complete it alone. I became aware of that fact during the course of my master's thesis, so I knew going into this project that other people would play an important role in it. Looking back, however, I am still amazed at the number of people who have made contributions to this project, and I want to take this opportunity to thank them.

I have been particularly fortunate to have had Rus Bauer chair both my thesis and dissertation. My research accomplishments as a graduate student are a testament to the quality of his mentorship. His example as both a clinician and scientist has given me a goal which I may never attain. I want to thank Rus for both his guidance and friendship. Dawn Bowers, in many ways, has felt and functioned like a cochair on my dissertation and has given me tremendous guidance and encouragement throughout this project. She also happily lent me her only copy of the Florida Facial Affect Test and gave me access to all her unilateral stroke patients. It has been a pleasure working with her. Eileen Fennell and Ira Fischler, as members of my dissertation committee, have also made significant contributions to this project. Eileen has also made significant contributions to my development as a psychologist. Michael Conlon was always available when I had statistical questions and was adept at understanding my sometimes poorly worded or poorly conceptualized questions and generating straightforward and often relatively simple statistical solutions. More importantly, as a statistician who is not immersed in the psychological belief system, he kept the rest of us psychologists honest by offering insightful alternative interpretations. I don't think I could have asked for a better dissertation committee. Thank you.

Execution of this project was challenging. Many stages of development were required before I ran my first "real" subjects. Randi Lincoln was my partner for the first six months, during which we collected photographs of men and conducted all the preliminary classification research on those photographs. John Paul Abner also played a significant role in the photography portion of this project. It is also important to thank all the male students and faculty in the Department of Clinical & Health Psychology, the Health Center employees, and members of the Baptist Student Center who took time out to be photographed and became the 101 stimulus faces. Many of the subjects who were used in the preliminary classification studies were either undergraduates from the Introductory Psychology subject pool or persons who responded to newspaper ads. However, a large portion of these subjects were members of the United Church of Gainesville, who were kind enough to allow us into their church on Sunday mornings. Almost no one in the Department of Clinical and Health Psychology escaped being dragged into the lab and forced to stare into my tachistoscope during the initial pilot studies. Tracy Henderson contributed some of her free time helping me collect control data. Her help allowed me to run subjects twice as fast as I could have alone. We had a great system. To all these people, without whom this project would be no more than a proposal, thank you. I would also like to offer special thanks to L.F., our prosopagnosic, who, for the four years I have known him, has never declined to come up to Gainesville for testing. Not only is he an interesting patient and willing subject, he is a thoughtful, insightful, and kind person.

Running this project was not an inexpensive undertaking, considering the cost of photography and subject compensation. Rus Bauer paid the cost of photography out of money that could have contributed to his own professional enhancement. Ken Heilman and Dawn Bowers allowed stroke subjects to be paid from their grant, which meant that I was able to get many patients who would not have made the long trip to Gainesville without compensation. My mother, Becky Warren, also gave me a "research grant" that helped cover the cost of pilot study subjects. Finally, the American Psychological Association made a significant contribution to this research by granting me a Dissertation Research Award in 1990.

There are many people who have directly impacted me and my dissertation. But there are some whose major contribution was that of making the ongoing course of doing this dissertation less stressful and giving me energy and encouragement. My wife, Janet Burroff, is first and foremost among those people. It's hard to put into words how important it has been for me to know that she was there to talk to if things got tough. In my thesis I thanked her for tolerating "my seemingly endless blabber about this study," and thanks for that is also appropriate, although I think I didn't blabber quite as much. Karen Clark and Beth Onufrak have been my classmates for five years and my partners in crime for two and a half. We have shared a lot in that time and their company has always made me feel good. My parents, Doug Greve and Becky Warren, and my grandmother, Rebecca Musgrove, have always been tremendously supportive, always thrilled at my accomplishments. Finally, it is important to mention Danny Martin, to whom I have probably not said more than two or three sentences about the content of my dissertation. Despite this, Danny has made a contribution that is hard to measure: he has taken me fishing regularly for the past two years. When my stress level is up and I'm feeling discouraged and low on energy, there is no better therapy than fishing. In fact, there is no better therapy even when I'm feeling good.

Completing my dissertation represents the culmination of my graduate career. This has been a wonderful experience and if I had it to do again, I don't think I would do anything differently (except start fishing sooner). I couldn't have asked for better training, nor for better people to learn from and with. Thank you all.


ACKNOWLEDGMENTS ...................................... ii

LIST OF TABLES ....................................... x

LIST OF FIGURES ...................................... xii

ABSTRACT ............................................. xiv


1 INTRODUCTION ....................................... 1

Neuroanatomy of Vision ...........................
Lateralization of Face Processing ................
Special Face Processing Systems ..................
Facial Identity Processing .......................
Facial Expression Processing .....................
Cognitive Model of Face Processing ...............
Extraction of Nonobservable Attributes ........... 34
Personality Trait Attributions ................... 35
Occupational Category Attributions ............... 37
Summary .......................................... 38
Purpose of this Study ............................
Hypothesis .......................................
Direct Tests ..................................... 41
Indirect Tests ................................... 46

2 METHODS AND RESULTS ................................ 49

Methods .......................................... 49
Subjects ......................................... 49
Tests of Face Memory and Perception .............. 53
Tests of Direct Access to Face Information ....... 54
Tests of Indirect Access to Face Information ..... 57
General Procedure ................................ 59
Results .......................................... 60
Tests of Face Memory and Perception .............. 60
Tests of Direct Access to Face Information ....... 62
Tests of Indirect Access to Face Information ..... 74
Individual Performance on Expression Tasks ....... 83


3 SUMMARY AND DISCUSSION ............................. 89

Summary .......................................... 89
Discussion ....................................... 94
Identity Processing .............................. 96
Expression Processing ............................ 98
Summary .......................................... 106
Stereotype Processing ............................ 108
Conclusions ...................................... 114
Future Directions ................................ 117



Category Selection I ............................. 120
Category Selection II ............................ 122
Stimuli .......................................... 122
Participants ..................................... 122
Procedure ........................................ 123
Results .......................................... 124
Face Categorization .............................. 124
Participants ..................................... 125
Stimuli .......................................... 126
Procedure ........................................ 126
Results .......................................... 127

B PILOT STUDIES ...................................... 130

Experiment B-1 ................................... 130
Participants ..................................... 130
Stimuli .......................................... 130
Procedure ........................................ 131
Results and Discussion ........................... 133
Experiment B-2 ................................... 134
Participants ..................................... 135
Results and Discussion ........................... 135
Experiment B-3 ................................... 135
Stimuli and Procedure ............................ 136
Results and Discussion ........................... 137
Experiment B-4 ................................... 137
Participants ..................................... 137
Stimuli and Procedure ............................ 138
Results and Discussion ........................... 138
Experiment B-5 ................................... 138
Stimuli and Procedures ........................... 139
Results and Discussion ........................... 140
Experiment B-6 ................................... 142
Results .......................................... 143
Summary and Discussion ........................... 145


C STIMULUS FACES ..................................... 147

REFERENCES ........................................... 164

BIOGRAPHICAL SKETCH .................................. 173













LIST OF TABLES

Outcome Assumptions Based on a Review of the Previous Research for Each Domain of Face Information
Comparisons of Patient and Control Groups on Demographic, WAIS-R Vocabulary Score, and Time Post Injury
Demographic and Lesion Location Data for Individual Stroke Patients
Means for the Tests of Face Memory and Perception
Mean Percent Correct for the Florida Facial Affect Test Affect Discrimination, Naming, Selection, and Matching Subtests
Simple Effect and Grand Mean Ratings for the Occupational Stereotype Rating Test
Results of t-Tests Comparing L.F.'s Ratings in the Correct Versus Incorrect Conditions for Each of the Rating Tests
Simple Effect and Grand Mean Ratings for the Personality Stereotype Rating Test
Simple Effect and Grand Mean Ratings for the Identity Rating Test
Mean Difference Scores for Ratings Tests
TABLE 2-10  Reaction Time Means and Standard Deviations for Face-Occupation Category Interference
TABLE 2-11  Reaction Time Means and Standard Deviations for Face-Personality Descriptor Interference Test
TABLE 2-12  Reaction Time Means and Standard Deviations for Expression-Label Interference Test
TABLE 2-13  Reaction Time Means and Standard Deviations for Face-Identity Interference Test
TABLE 2-14  Performance of Stroke Patients on Direct and Indirect Expression Tasks
TABLE 2-15  Association of Stroke Patient Performance on Each FFAT Subtest with Performance on Expression-Label Interference Task
Observed Results of Direct Tests
Observed Results of Indirect Tests
Ranked Occupational and Personality Category Images
Weighted Frequency of Category Usage in the Set of 101 Faces
Descriptive Statistics for Face Categorization Subjects
Final Set of Occupational and Personality Stereotype Faces
Means for the Control, Congruent, and Incongruent Conditions in Experiments B-1 through B-4
Results of Experiment B-5
Mean Scores for Experiment B-6



























LIST OF FIGURES

A cognitive model of face processing showing the hypothetical location of the functional lesions in prosopagnosia and RHD
Hypothetical cognitive model of face processing
Performance on FFAT Affect Discrimination, Naming, Selection, and Matching Subtests
Mean ratings for the Correct and Incorrect conditions of the Direct Occupational Stereotype Test
Mean ratings for the Correct and Incorrect conditions of the Direct Personality Stereotype Test
Mean ratings for the Correct and Incorrect conditions of the Direct Identity Test
Mean reaction times for the Control, Congruent, and Incongruent conditions on the Face-Personality Descriptor interference task
Mean reaction times for the Control, Congruent, and Incongruent conditions on the Expression-Label interference task
Mean reaction times for the Control, Congruent, and Incongruent conditions on the Face-Identity task
Model of face processing hypothesized in Chapter 1

FIGURE C-2  Accountant ............................... 149
FIGURE C-3  Athlete .................................. 150
FIGURE C-4  Doctor ................................... 151
FIGURE C-5  Kind ..................................... 152
FIGURE C-6  Sociable ................................. 153
FIGURE C-7  Aggressive ............................... 154
FIGURE C-8  Intolerant ............................... 155
FIGURE C-9  Happy .................................... 156
FIGURE C-10 Sad ...................................... 157
FIGURE C-11 Angry .................................... 158
FIGURE C-12 Frightened ............................... 159
FIGURE C-13 John Kennedy ............................. 160
FIGURE C-14 Lyndon Johnson ........................... 161
FIGURE C-15 Bob Hope ................................. 162
FIGURE C-16 Elvis Presley ............................ 163


Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment
of the Requirements for the Degree of
Doctor of Philosophy

EXTRACTION OF NONIDENTITY INFORMATION FROM UNFAMILIAR FACES:
AN INVESTIGATION OF NORMAL AND PATHOLOGICAL FACE PROCESSING

By

Kevin W. Greve

August, 1991

Chairman: Russell M. Bauer, Ph.D.
Major Department: Clinical & Health Psychology

Human beings are normally able to accurately recognize the identity or affective expression of a face based solely on the visual features of the face. However, these abilities can be differentially disrupted in certain cases of brain injury. For instance, persons with right hemisphere cerebral lesions often cannot recognize facial expression while they remain able to recognize identity. The converse is true in prosopagnosia, which results from bilateral occipitotemporal lesions. Additionally, normals can consistently extract a wide variety of other information, such as information about personality and apparent occupation (so-called stereotype information), from a face that does not yield specific conclusions about its identity or emotional state. While the ability to make stereotype judgements has been extensively explored in the social psychology literature, nothing is known about their neurological basis, including whether these processes are significantly impaired in brain disease. This study was designed to help understand the cognitive and neurobehavioral mechanisms underlying these abilities.

A prosopagnosic and his 15 age-matched controls, and 20 single-event unilateral stroke patients (10 right, RHD; 10 left, LHD) and their 15 age-matched controls, were administered two tests of face perception and memory and both direct and indirect measures of identity recognition, expression judgement, and personality and occupational stereotype identification. Normal subjects and patients with LHD were unimpaired on all tests except the indirect occupational stereotype test. The failure of all subjects on this task was attributed to a shortcoming of the task. The RHD patients were globally impaired in expression recognition but were unimpaired on the famous faces and personality stereotype tests. Their impairment could not be explained by perceptual dysfunction and, after considering alternative explanations, this deficit was attributed to the functional destruction of expression representations in memory. The prosopagnosic was impaired on the direct occupational stereotype and famous faces tests and on the indirect expression and famous faces tests. His failure on the indirect tasks was attributed to an interaction between inadequacies of the measures and a nonconfigural, feature-based mode of face processing. It was concluded that occupational stereotype decisions were based on identity information; whether expression or identity information contributes to personality stereotype judgements remains an unresolved issue to be explored in future studies.


CHAPTER 1
INTRODUCTION

Face perception is a vitally important process for primates, including humans, and information provided by faces plays an important role in social interaction. It should come as no surprise that the ability to discriminate faces begins at a very early age, nor should it be surprising that there exist populations of cells within the primate brain that respond primarily to faces and whose patterns of responding may even differentiate among faces (Baylis, Rolls, & Leonard, 1985). With little effort we seem able to judge the sex and age of a person simply by looking at his/her face. Equally remarkable is the consistency across individuals of judgements of attractiveness (Secord, 1958) or likeability (Greve & Bauer, 1988). We are also able to categorize faces based on apparent occupation (Klatzky, Martin, & Kane, 1982a) and facial expression (Ekman, Friesen, & Ellsworth, 1972). Because of the importance of faces in everyday life, our vast experience with faces in a myriad of contexts has taught us to automatically extract from them extensive amounts of information.

There do exist, however, certain neurologically impaired individuals who have lost, or are significantly impaired in, the ability to accurately make many of the above discriminations. Persons who suffer strokes of the right cerebral hemisphere often have impaired recognition of affective facial expression, though their ability to recognize facial identity may be spared (Bowers, Bauer, Coslett, & Heilman, 1985). On the other hand, prosopagnosics, who have suffered bilateral brain impairment, may still recognize affective facial expression but have lost the ability to identify faces (Levine & Calvanio, 1989). The evidence supporting the conception of face recognition (which is based on extraction of "identity" information) and affect identification (based on extraction of information which is unrelated to the identity of the face) as independent processes is compelling.

The term "identity recognition" has been used in reference to three fairly distinct and partially dissociable abilities (Benton, 1990). First, this term has been used to refer to the ability of a person to match unfamiliar faces. The most notable example of this usage is the "Test of Facial Recognition" (Levin, Hamsher, & Benton, 1975) in which the patient must find the face within a set of six which is the same as a target face. Elements of the Benton task require processing of faces as faces in that for some items the subject must match a stimulus face with targets that are oriented or lighted differently. However, matching to sample or face discrimination tasks which ask "are these two faces the same or different people" (e.g., Bowers et al., 1985) are often used to screen for visuoperceptual impairment. This usage will be referred to as "face discrimination." A second usage of this term refers to a person's ability to indicate whether an unfamiliar face has been previously presented. Examples of such tasks include the Milner Facial Recognition Test (Milner, 1968) and Denman's (1984) Memory for Human Faces. In these tasks the subject studies a large set of unfamiliar faces and then selects the ones he/she remembers after a delay. This will be referred to as "face memory." Finally, facial recognition may refer to the ability to name or otherwise identify familiar faces (such as those of family or celebrities) as assessed, for example, by the Albert Famous Faces (Albert, Butters, & Levin, 1979). This will be referred to as "facial identity recognition" and is of major interest in this study. Warrington and James (1967) found no correlation between an unfamiliar face memory task and a famous faces task among unilateral cerebral lesion patients. Similarly, Benton (1985) found a dissociation between face discrimination and identity recognition in a


The ability to categorize unfamiliar faces in terms of nonobservable attributes such as personality characteristics and apparent occupational category membership does not rely on knowledge of the actual identity of a stimulus face but seems to depend on access (which is not necessarily conscious) to faces with known attributes. The question arises as to the role of facial identity and expression processing in the categorization of faces in terms of nonobservable attributes. The studies contained herein were designed to investigate this question.


This introduction is divided into five sections. The first discusses neuroanatomically and functionally distinct systems for processing visual stimuli. The second extends this discussion to evidence concerning the lateralization of face processing abilities. The third section reviews data supporting the existence of relatively specialized subsystems in the right hemisphere for the processing of facial identity and expression. In the fourth section a useful cognitive model is presented which is designed to explain the various processes involved in the recognition of familiar, unfamiliar, and emotional faces. The final section argues that humans are able to extract from faces information about nonobservable characteristics that does not lead to judgements about identity or affective state, and raises questions about the relationship of the former ability to the latter two.

Neuroanatomy of Vision

It is now quite clear that vision is not a single unitary phenomenon but consists of parallel and serial processes occurring in distinct anatomic pathways which deal with varied aspects of the visual stimulus. The laminar and columnar structure of striate cortex reflects the grouping of neurons according to their functional roles (Kaas, 1989). For example, there are layers which typically contain color-selective fields (IIIb), while others (IIIc) are responsive to stimulus orientation and direction. Additionally, there are the "ocular dominance columns" and "orientation columns" described by Hubel and Wiesel (1977). "Ocular dominance columns" are three-dimensional strips of cortex (not true columns) that extend vertically through all layers of striate cortex and respond to stimulation of one eye only. "Orientation columns" are similar to "ocular dominance columns" except that they are responsive to stimuli of one particular orientation only (Hubel, 1988).

Architecture reflects function not only at the cortical level, but apparently at precortical levels as well. Kaas (1989) describes three parallel visual pathways which originate in the retina. The "X" pathway plays a role in object recognition, while the "Y" pathway seems to be involved in attention and movement detection. The third system, "W", is poorly understood but seems to interact with the other two, especially "X", and modulate and enhance their neuronal firing.

There is further, strong evidence for the extension of the "X" and "Y" pathways beyond primary visual cortex. One pathway extends, multisynaptically, from striate cortex to the inferior temporal area, later synapsing in the limbic system and ventral frontal lobe (Mishkin, Ungerleider, & Macko, 1983). This pathway, the ventral visual-limbic pathway, appears critical for object recognition (Mishkin et al., 1983) and has "emotional as well as 'mnestic' functions which are modality-specific to vision" (Bauer, 1984; p. 465).

A second system described by Mishkin et al. (1983) appears to function as the extension of the "Y" pathway. This pathway runs dorsally in the superior longitudinal fasciculus to interconnect striate cortex with the parietal lobe and continues on to synapse in the limbic system and dorso-lateral frontal lobe, thus forming the dorsal visuo-limbic pathway. This system appears important in attention, visual guidance of motor acts (Mishkin et al., 1983) and spatial localization of "drive relevant stimuli" (p. 198; Bear, 1983).
relevant stimuli" (p. 198; Bear, 1983).

The function of the ventral and dorsal visuo-limbic systems is integrated in normal object vision. Retinal inputs to the ventral system are primarily foveal, while inputs from both the fovea and retinal periphery are equally important to the dorsal system (Mishkin et al., 1983). Bauer (1984) speculated "that the kind of processing which is taking place in the dorsal system involves the deployment of attention and processing effort toward stimuli which appear significant in a cursory, preliminary or 'preattentive' analysis. The stimulus is then foveated, and the visual-discriminatory and modality-specific arousal functions of the ventral system are brought to bear on the process of overt identification" (p. 466).

In addition to the functional differences between the ventral and dorsal visual-limbic pathways, lateralized asymmetries are also an important feature of the cerebral organization of vision. Numerous studies indicate that the left hemisphere mediates supramodal processing of verbal stimuli (for a brief review, see Lezak, 1983). Of greater importance for this study is the lateralization of face processing, which is discussed in the next section.

Lateralization of Face Processing

The right hemisphere plays a major role in mediating processing of configural stimuli, of which face processing is an important example. One of the earliest reports concerning impairment in face processing was one by Quaglino and Borelli (1867 [cited in Benton, 1990]), who described impairment of facial recognition (familiar faces) after a stroke involving primarily the right hemisphere but also probably extending to the left. In later studies of patients with lateralized cerebral lesions, DeRenzi, Faglioni, and Spinnler (1968) and Benton and Van Allen (1968) demonstrated impaired face discrimination in patients with right-hemisphere lesions. DeRenzi et al. (1968) evaluated 114 patients with unilateral lesions using several tasks that involved matching of face fragments to whole faces, matching faces with different orientations, and memory for unfamiliar faces. Overall, the right hemisphere patients were impaired on these tasks.

In a similar study, Benton and Van Allen (1968) asked 37 unilateral cerebral lesion patients to indicate which face from an array of six was the same as the stimulus face. In one form of the test the target face and the stimulus face were exact matches. In the two other forms the target faces differed from the stimulus faces in either orientation or lighting angle. It was found that while the left hemisphere patients were impaired relative to the normal controls, the performance of the right hemisphere patients was significantly inferior to that of the left hemisphere patients. They concluded that "the impairment in facial recognition [face discrimination] as assessed by [these] procedures is rather closely associated with disease of the right hemisphere" (p. 358).

Milner (1968) found that among epileptic patients who had undergone brain surgery to control their seizures, the right temporal and parietal patients were more impaired than right frontal and left hemisphere patients on a task that required remembering and later recognizing unfamiliar faces. Further analysis indicated that the degree of impairment in the right temporal group was related to the amount of hippocampus removed. Milner argued that these findings reflected a visual memory disturbance, not the disruption of a system specific to face memory.

Finally, Kolb, Milner, and Taylor (1983) presented seizure surgery patients with a stimulus face and two target faces created by joining mirror images of each half of the stimulus face, and asked them which seemed most like the stimulus face. Patients with left hemisphere and right frontal lesions showed a bias for selecting the target face made from the half of the stimulus face in the left visual field, while the right temporal and right parietal patients selected faces randomly. When the stimuli were inverted the same basic pattern of results was found. Kolb et al. argued that "the posterior part of the right hemisphere is specialized for the processing of complex visual patterns, of which faces are a particularly striking example" (p. 16).

The right hemisphere also seems to play a significant

role in the processing of faces in normals. For example,

Suberi and McKeever (1977) asked subjects to discriminate

between studied faces and unstudied faces that were

presented tachistoscopically to either the left or right

visual field. Reaction time to indicate whether the face

had been studied was measured. They found a left visual

field advantage (faster reaction times) which indicated

faster processing of faces by the right hemisphere. If

emotional faces were studied, reaction times were even

faster to left visual field presentation. Ley and Bryden

(1979) demonstrated a similar left visual field advantage

using cartoon faces. They tachistoscopically presented

(85ms) an emotional cartoon face to either the right or

left visual field followed by a longer (1000 ms) central

presentation of a second face. The task was to indicate

whether the two faces had the same emotion and whether

they were the same character. The number of errors for

each discrimination task and visual field were calculated.

The results revealed significantly fewer errors on both

the emotion and character discrimination tasks for

presentations to the left visual field (right hemisphere)

again suggesting a right hemisphere superiority for

processing faces in normal subjects. Similar results were

reported by Strauss and Moscovitch (1981). These two

studies of normals present evidence for right hemisphere

superiority in processing faces, including emotional faces.

As one might expect, given the right hemisphere

superiority for processing emotional faces in normals

described above, damage to the right hemisphere also

results in impairment in the ability to comprehend the

emotional expression of faces. DeKosky, Heilman, Bowers,

and Valenstein (1980) gave right and left hemisphere

lesion patients and normal controls six tasks: 1)

discriminate whether two photographs are of the same or

different people (identity discrimination; to rule out

perceptual disturbance); 2) name the emotion on a

stimulus face; 3) choose the face bearing the designated

emotional expression; 4) indicate whether two faces bore

the same or different emotional expression; 5) name the

emotion depicted by a cartoon scene; and, 6) choose the

cartoon scene which depicts the designated emotion. The

right hemisphere patients were impaired relative to the

left hemisphere patients on all tasks except #6 (choose

the emotional scene). This suggests that the ability to

comprehend emotion in faces and visual scenes is a special

process of the right hemisphere though these abilities

were also impaired in left hemisphere disease relative to

normals. Covarying performance on the face discrimination

task resulted in the elimination of all differences

between the left and right hemisphere patients suggesting

that the greater difficulty of the right hemisphere group

on these emotional tasks may be the result of a

visuoperceptual disturbance.

The evidence seems to firmly support the conclusion

that the right hemisphere is superior to the left in

processing both emotional and nonemotional faces.

However, no evidence has yet been presented to suggest

that face processing is anything more than a special

case of complex visuospatial processing for which the

right hemisphere is particularly well suited. The next

section describes evidence which suggests that facial

identity and expression processing are supported by

mechanisms beyond visuoperceptual processes and

independent of each other.

Special Face Processing Systems

Facial Identity Processing

Prosopagnosia. Prosopagnosia is a rare
neurobehavioral syndrome characterized by the inability to

overtly recognize familiar faces encountered before and

after illness onset. Lissauer (1889; cited in Bauer,

1985) categorized agnosics (which would include

prosopagnosics) as "associative" and "apperceptive."

Apperceptive prosopagnosics are characterized by severe

perceptual disturbance and are often unable even to

recognize faces as faces. Associative prosopagnosics are

able to form a complete visual facial percept but unable

to give it meaning. In other words, they are able to

recognize a face as a face and to accurately match

unfamiliar faces (Benton, 1985) yet completely lack a

sense of familiarity when presented with a familiar face

and are unable to generate either a name for or semantic

information (e.g., occupation) about the face presented.

It is generally considered that prosopagnosia results

from bilateral lesions of visual association cortex

(Brodmann's areas 18 and 19) and the occipitotemporal

projection system although there is a persistent

contention that a single right hemisphere lesion may be

sufficient to cause prosopagnosia (see Benton, 1990).

Bauer and Trobe (1984) and Damasio, Damasio, and Van

Hoesen (1982) argue that these lesions appear to disrupt

both perceptual elaboration and visual memory. Levine

and Calvanio (1989), however, have convincingly argued

that "the perceptual and memory defects [in prosopagnosia]

are not distinct impairments in different stages of visual

recognition but instead are two aspects of the same

underlying disorder, which we call defective visual

'configural' processing" (p. 151).

To summarize their rather extensive findings, their
prosopagnosic: 1) cannot recognize the faces of live
people or photographs of famous people; 2) can match
faces, but has more trouble matching different views of
the same face; 3) cannot remember face-name pairings, but
does better in indicating which of those faces were
previously presented; 4) has trouble identifying animals
and makes errors of underspecification (i.e., bases his
decisions on single features of the stimulus); 5) can name
real objects generically but has trouble with photographs;
6) reads accurately, but slowly; 7) has trouble
identifying incomplete line drawings of objects or those
embedded in visual white noise; 8) performs adequately on
word-fragment completion, anagrams, and tasks in which
words are hidden in rows of random letters; 9) shows slow
perceptual (visual search) speed; 10) shows mixed
performance on a variety of other visuospatial tasks; and
11) has a mild multimodal memory defect but a severe
visual identification defect. These data suggest that

their prosopagnosic cannot "identify by getting an

overview of an item as a whole in a single glance" (p.

159) and echo Bauer and Trobe (1984) in their report that

"most often, the reason for his [L.F.'s] success is that a

single detail or contour is sufficient to specify the

object's identity ... visual identification has become a

'logical process rather than a visual one'" (p. 160).

This suggests that prosopagnosia is not the result of

a disruption of a specific face processing system, a

conclusion that is further supported by the finding that

the impairment in prosopagnosia is not limited to human

faces. Faust (1955; cited in Bauer, 1985) reported a

patient who became unable to discriminate among chairs

while Lhermitte and Pillon (1975; cited in Bauer, 1985)

described a patient who could not recognize specific

automobiles. Similarly, Bornstein and colleagues

(Bornstein, 1963; Bornstein, Sroka, & Munitz, 1969) have

described a birdwatcher and a farmer who became unable to

recognize birds and individual cows, respectively.

L.F. (our prosopagnosic) reports being unable to determine

the make and model of cars and to make temporal associations

to clothing and automobile styles. Damasio et al. (1982)

argue that the prosopagnosic defect involves the

discrimination of any visual stimulus from within a class

of visually similar members. The inability of

prosopagnosics to make a variety of within-class

discriminations suggests that some general disruption of

visual perception is responsible for the observed

impairments in prosopagnosia.

Despite the inability to explicitly identify familiar

faces, prosopagnosics do retain some spared access to the

facial representations (i.e., the Identity-Specific

Semantic Codes and Name Codes). Bauer (1984) demonstrated

this using autonomic measures. He constructed two sets of

facial stimuli containing 1) faces of celebrities and 2)

family members. Each face was presented for 90 seconds

while five names, one of which was the target, were read

aloud; skin conductance was measured throughout. All names

within a set were from the same semantic category. For

example, if Bing Crosby's face was presented all the

alternative names were actors or singers. When a family

member's face was presented all the alternatives were

names from the person's nuclear family. Maximum skin

conductance responses occurred to 60% of correct face-name

pairings in a prosopagnosic despite the patient's

inability to overtly identify any of the faces. Tranel

and Damasio (1985) and Bauer and Verfaellie (1988)

replicated this finding.

DeHaan, Young, and Newcombe (1987) demonstrated

preserved access to face identity information such as name

and occupation using an interference paradigm. In this

procedure, the prosopagnosic was shown a face with a

"speech bubble" extending from the mouth. Within the

"speech bubble" was a name which was to be classified as a

politician or non-politician, with reaction time (RT) as

the dependent measure. Three face-name conditions were

used: 1) "same person", in which the name presented

belonged to the face shown; 2) "related", in which the

name presented belonged to a different person from the

same occupational category as the face shown; and, 3)

"unrelated", in which the name presented belonged to a

different person from a different category. Their

prosopagnosic and controls showed the same performance

pattern: the RTs for the "same person" and "related"

conditions did not significantly differ. However, the RTs

for the "unrelated" condition were significantly longer

than those for the other two conditions. This suggests

that knowledge about the occupation of the person pictured

interfered with the politician-non-politician decision

despite the prosopagnosic's inability to overtly classify

the faces. DeHaan, Bauer, and Greve (in press) replicated

this finding with the prosopagnosic L.F. who had been the

subject of the autonomic recognition studies discussed above.

Thus, despite profound failure of memory when

confronted with tests whose instructions require reference

to a prior learning episode (direct measures; e.g.,

recognition), prosopagnosics can, under certain

circumstances, demonstrate knowledge on tests in which

facilitation or modification of performance indicates the

contents of memory without direct reference to those

contents (so-called indirect measures; cf. Johnson &

Hasher, 1987; Richardson-Klavehn & Bjork, 1988; Hintzman,

1990). This finding is relevant to discussions concerning

the nature of the hypothesized memory processes involved

in performance of these tasks. According to Reingold and

Merikle (1988) "The sensitivity of a direct discrimination

is assumed to be greater than or equal to the sensitivity

of a comparable indirect discrimination to conscious, task

relevant information" (p. 556). The implication of this

assumption is that "unconscious [or implicit; Schacter,

1987] memory processes are implicated whenever an indirect

measure shows greater sensitivity than a comparable direct

measure" (Merikle & Reingold, 1991; p. 225). Thus it can

be inferred that the normal performance of prosopagnosics

on indirect face processing tasks represents the

functioning of unconscious (or implicit) memory processes.

This supports the contention of DeHaan et al. (1987)

that prosopagnosia is the result of a failure to

consciously access intact facial representations.

Behavioral Evidence. Behavioral evidence that faces

are processed via a special system is limited but does

exist. Yin (1969) compared memory for unfamiliar faces

with memory for other classes of familiar objects which

are customarily seen in one orientation (i.e., photos of

houses, airplane silhouette drawings, and cartoon stick

figures) in both upright and inverted orientation. He

found that inversion made all the materials harder to

remember, but face memory was disproportionately impaired

by inversion. That is, while faces were easier to remember in

the upright position than other materials, they were

harder to remember than the other classes of stimuli in

the inverted position. He suggested that some "face-

specific process made the recognition of upright faces

easy, but was of little use in the recognition of all

other materials including inverted faces" (p. 397). In a

similar study using brain injured patients, Yin (1970)

compared memory for faces and houses in both upright and

inverted orientations. He found that the right posterior

patients were impaired on upright faces compared to all

other lesion groups and normals, but better on inverted

faces. This finding was attributed to a deficit specific

to normally presented faces. This type of evidence

suggests that more is involved in face processing than

visuospatial ability.

Brain Stimulation and Recording Data. The most

compelling evidence that faces are processed as a special


class of stimuli comes from studies of single cell

recording from the brains of humans and monkeys. Heit,

Smith, and Halgren (1988) implanted bilateral depth

electrodes in the medial temporal lobe of patients with

intractable seizures in an attempt to locate their seizure

focus. They found some cells in the right hippocampus

which responded to specific faces. This is consistent

with Milner's (1968) finding that the most severe face

memory defect among temporal lobe resection patients

occurred with hippocampal involvement.

Leonard, Rolls, Wilson, and Baylis (1985) found face-

selective neurons in the amygdala of monkeys. These

neurons were sensitive to two- and three-dimensional human

and monkey faces while being relatively unresponsive to

gratings, simple geometric and complex three-dimensional

stimuli, and to arousing and aversive stimuli. These

neurons responded differently to different faces and

sometimes responded to parts of faces. Baylis et al.

(1985) reported similar neurons in the middle and anterior

portion of the superior temporal sulcus. One important

feature of these neurons is that while they responded

differently to different faces, they did not respond only

to one face. What this means is that the pattern of

neuronal firing across a group of face neurons can code

many more faces than if one neuron were devoted to each

face and may represent parallel distributed processing.

Summary. The subtle perceptual defect seen in

prosopagnosia is not sufficient to rule out the

possibility of a special face processing system for

several reasons. First, the familiar face recognition

impairment in associative prosopagnosia is dissociated

from the gross visuoperceptual abilities assessed by many

of the "identity" tasks described in the laterality

section. Second, as Shallice (1988) so clearly indicates,

the simple association of impairments (in this case,

either subtle visuoperceptual difficulties or the loss of

ability to recognize bird species with face recognition

impairment) does not rule out separate subsystems. The

dissociations between performance on face tasks and other

visuoperceptual tasks discussed above seem to offer

stronger evidence in favor of a face processing system.

Facial Expression Processing

Normal subjects. The existing evidence supporting

the contention that expression processing is independent

of face discrimination and memory in normals is of two

types. The first is the finding of statistical

independence of performance on face discrimination and

memory versus expression tasks. In other words, when

variance accounted for by performance on a face memory

task is partialled out, the left visual field advantage

for facial expression task performance remains. For

example, Ley and Bryden (1979) found a left visual field

(LVF; right hemisphere) advantage for expression

processing (as described in an earlier section) even after

the performance on their face identity task was partialled

out. A similar effect was reported by Pizzamiglio,

Zoccolotti, Mammucari, and Cesaroni (1983) who had

subjects discriminate studied from unstudied faces and, in

a second task, had subjects respond to a particular

emotional expression. They found the usual left visual

field advantage for both tasks. The advantage on the

expression task remained after the performance on the

identity task had been partialled out. They firmly

concluded that "though clearly dependent on a complex

visuoperceptual process to analyze facial stimuli, the

recognition of emotion in the human face requires a

separate and independent process preferentially

lateralized to the right hemisphere" (p. 185).

The second type of evidence which Ley and Strauss

(1986) argue supports the notion of independent processes

underlying expression and face discrimination is the

finding of different patterns of lateral asymmetry for the

two types of tasks. They note that most researchers

(e.g., Ley & Bryden, 1979; Suberi & McKeever, 1977) find a

left visual field advantage on both expression and

identity tasks, but differences in the size of the left

visual field advantage between tasks suggests that several

task-related factors may be important. As noted earlier,

it is possible that the greater left visual field

advantage in processing emotional faces occurs because the

addition of facial expression results in a more spatially

complex stimulus than a face without affect and is thus

processed less effectively by the left hemisphere which

doesn't handle configural material as well as the right.

To test this hypothesis McKeever and Dixon (1981) asked

subjects to discriminate between studied faces and

unstudied faces that were presented tachistoscopically to

either the left or right visual field. During the study

phase the subjects viewed two faces with neutral

expressions with instructions indicating that the people in

the photographs were experiencing either a neutral or a very

sad emotion. This manipulation sought to add affect to

the faces without changing their spatial complexity. They

found a left visual field advantage in the emotional

condition but not in the neutral condition. They argued

that the effect of emotion on visual field superiority is

not the result of simply the greater spatial complexity of

affective faces but that strategic factors play a role.

This further supports the idea that facial expression

processing is more than just a complex visuospatial task.

Right Hemisphere Disease. Bowers et al. (1985),

using tasks similar to those of DeKosky et al. (1980),

found that right hemisphere patients were impaired

relative to the normal controls and left hemisphere

patients on all tasks. Unlike the results of DeKosky et

al. (1980), this impairment remained after partialling out

performance on the face discrimination subtest to control

for visuoperceptual ability. What is suggested is that

the right-hemisphere superiority for processing facial

expression may exist independently of its visuospatial

ability. Bowers and Heilman (1984) proposed the existence

of a "right-hemisphere iconic field, which consists

of a corpus of pictorial representations, or schemata,

[and] is assumed important for categorizing and internally

representing visual images" (p. 375). This iconic field

would contain the schema or prototypes for affective

expressions and the failure of the facial percept to

access this field, either because of disconnection or

destruction, would result in a failure of affective

expression identification.

Prosopagnosia. A number of prosopagnosic cases have

been reported who have had difficulty recognizing facial

expression (e.g., Beyn & Knyazeva, 1962; Bornstein &

Kidron, 1959; Bauer, 1982). However, the presence of

relatively intact facial expression recognition in other

cases (e.g., Bruyer et al., 1983; Cole & Perez-Cruet,

1964; DeHaan et al. 1987; Tranel, Damasio, & Damasio,

1988) indicates that impaired expression recognition is

not a necessary component of the syndrome of prosopagnosia

and supports the contention that facial expression

recognition is at least partly independent of facial

identity recognition.

Epileptic Patients. Itzhak Fried and colleagues

(Fried, Mateer, Ojemann, Wohns, & Fedio, 1982) had awake

seizure patients complete tasks measuring perception and

short-term memory for line orientation and unfamiliar

faces and identification of facial expressions during

seizure surgery while directly stimulating different

cortical areas. Stimulation of the nondominant posterior

portion of the superior temporal gyrus resulted in

impairment in perception (discrimination) and memory for

faces and line orientation. No location was found which

altered face memory alone. However, stimulation of the

posterior middle temporal gyrus resulted in impaired

labeling of facial expression only.


Two important conclusions can be drawn from the studies

reviewed above. First, the ability to recognize familiar

faces or remember unfamiliar ones and appreciate facial

expression are clinically, behaviorally, and statistically

dissociable from perceptual ability as indicated by

performance on face discrimination tasks. Second, the

ability to recognize familiar faces and remember

unfamiliar faces is likewise dissociable from the ability

to recognize facial expression. These findings support

the existence of separate functional systems for

processing facial identity and expression.

Cognitive Model of Face Processing

The processes underlying the recognition of facial

identity and facial expression are themselves made up of a

number of subprocesses. A number of attempts have been

made to describe the stages of cognitive processing

involved in the perception and identification of faces.

Baddeley (1982) described a framework for discussing face

recognition which distinguished between two subdomains of

face processing, one concerned with the features and

topography of the face (facial subdomain) and the other

concerned with their real or imagined semantic associates

(semantic subdomain). Unfamiliar faces would have some

limited access to information in the semantic subdomain

(as evidenced by the existence of facial stereotypes),

while familiar faces would "link with a broader domain of

our memory system than is the case with an unfamiliar

face" (p. 716).

The main themes of this framework, the distinction

between the processing of the physical features of a face

and its access to semantic information and the differences

between familiar and unfamiliar faces, have been amplified

and elaborated upon significantly by other British

researchers (Bruce, 1979, 1983; Bruce & Young, 1986;

Ellis, 1981, 1983; Hay & Young, 1982; Rhodes, 1985; Young,

Hay, & Ellis, 1985). The resultant model is presented in

Figure 1-1. As in Baddeley's framework, face perception

[Figure 1-1 appears here. Recoverable labels from the
diagram: Visual Input; RHD; Prosopagnosia; Expression
Analysis; Unfamiliar Faces; Familiar Faces; FRU's;
Visually-derived Semantic Codes; Identity-specific
Semantic Codes (Person Nodes); Semantic Information;
Label Codes; Identity; Name.]

Figure 1-1. A cognitive model of face processing showing
the hypothetical location of the functional lesions in
prosopagnosia (1) and RHD (2 and/or 3).

and recognition as described in this model is not a

unitary phenomenon but consists of dissociable

subprocesses which consist of more elaborate descriptions

of the processing within the facial and semantic

subdomains. The following section describes the model.

Hadyn Ellis (1986) notes, in relation to one version of

this model: "This model is a hybrid of those already in

existence and is offered as a heuristic rather than a

definitive explanation" (p. 2). This statement is true in

relation to this model as well; consequently, there

remains some disagreement about aspects of it. Areas of

disagreement are also described below.

At its earliest stage, Structural Encoding results in

a set of codes which allow the discrimination of facial

from nonfacial patterns (Ellis, 1986). One might imagine

apperceptive prosopagnosia as a breakdown early in this

stage. Ultimately, Structural Encoding produces "an

interconnected set of descriptions--some describing the

configuration of the whole face, and some describing the

details of particular features" (Bruce & Young, 1986; p.

308). These structural codes range from the relatively

concrete, "viewer-centered" descriptions like those used

in the analysis of expression to more abstract

descriptions which provide information for the Face

Recognition Units (to be described below; Bruce & Young,

1986). The less changeable, more stable internal features

are more important in the recognition of familiar faces

while both internal and external (e.g., hairstyle)

features are important for recognition of unfamiliar faces

(Ellis, Shepherd, & Davies, 1979; Endo, Takahashi, &

Maruyama, 1984). The result of structural processing is a

set of structural codes representing a presented face.

These codes are the basis of recognition and expression

analysis, discussions of which follow.

A face is allegedly recognized when there is a match

between its encoded structural representation (the product

of structural encoding) and a stored structural code which

is referred to as a "Face Recognition Unit" (FRU; Bruce &

Young, 1986; Young et al., 1986). An FRU exists for each

face known to a person and functions such that "when we

look at a face, each FRU signals the degree of resemblance

between structural codes describing the seen face and the

description stored in the recognition unit. When a

certain degree of resemblance to one of these stored

descriptions is signalled we will think that the face

seems familiar" (p. 124; Young, Hay, & Ellis, 1986).

However, Young et al. (1985) found that frequently a face

was seen as "familiar" but no other information could be

generated relating to that face. In fact, Young et al.

argue that the function of the FRU's and person identity

nodes (see below) is merely to signal how closely a

stimulus face resembles a known face, not to indicate that

it is that known face. Indicating recognition is actually

the function of an associated "cognitive" system. Thus, a

face can seem familiar and/or look like a known face, yet

not be mistaken for a known person.1

Activation of an FRU allows access to the information

contained within the Person Identity Nodes. According to

Bruce and Young (1986), the Person Identity nodes contain

"Identity-specific Semantic Codes" which describe

everything we know about a familiar person including

things like occupation, hobbies, relatives, etc. It is

activation of these codes that gives a person a real sense

that he/she has actually recognized a person. As noted

above, however, Young et al. (1985) suggest that

activation of the person identity nodes simply reflects

activation of the related FRU. Another type of semantic

code, Visually-derived Semantic Code, exists which forms

the basis of judgements about sex, age, and nonobservable

attributes like honesty and intelligence for unfamiliar

faces. These judgements appear to be consistent across

observers from the same culture (e.g., Greve & Bauer,

1988; Klatzky et al., 1982a, 1982b; Secord, 1958) which

suggests that Visually-derived Semantic codes are the

product of considerable experience with faces. These data

are discussed in greater detail in a later section.

Bruce and Young consider these two codes to be

qualitatively different since the information in the

1 It is interesting to note the similarity between the
function of the FRU and the activity of the face-selective
neurons described by Baylis et al. (1985).

Visually-derived Semantic Code is more closely tied to the

actual physical features of a particular face, while the

information contained in the Identity-specific Semantic

Codes may bear little or no relationship to the actual

physical structure of the face. However, one must

question the need for a separate set of semantic codes for

unknown faces since an unfamiliar face may resemble, to a

greater or lesser degree, known faces and thus gain access

to the semantic information about those known faces in

direct relation to the degree of resemblance. This is

basically the view of Rhodes (1985). Put simply, what

this means is that an unfamiliar person who looks a lot

like Robert Redford may be categorized as an actor in an

occupational stereotype task. This position argues that

unknown faces access the semantic information about known

faces in direct relation to the degree of resemblance

between the unknown and known faces.

On the other hand an unfamiliar face may resemble not

an individual known face, but a composite based on

experience with many faces. Activation of the composite

or prototype FRU may then allow access to the semantic

information which the contributing faces have in common.

Support for the existence of abstract prototype

representations exists. Famous faces are more easily

recognized at a second presentation than are unfamiliar

faces (Ellis, Shepherd, & Davies, 1979; Klatzky & Forrest,

1984; Yarmey, 1971) which Klatzky and Forrest (1984)

attribute to the existence of a fairly abstract

representation of the familiar faces. Klatzky et al.

(1985a) found that highly stereotypic unfamiliar faces

were more easily recognized than low stereotypic

unfamiliar faces, a finding which was also attributed to

the existence of an abstract representation for each

particular stereotype. Thus, the "visually-derived

semantic codes" may reflect the activation of special

FRU's which are created as the result of experience with

many different faces. (For discussion of the differences

between prototype and exemplar models of classification,

see Abdi, 1986, and Medin and Schaffer, 1978.)

Bruce and Young (1986) suggest that the Identity-

specific Semantic Codes and output of the Expression

Analysis system (described below) contribute information

to the production of the Visually-derived Semantic Codes

and note that "future studies may allow the separation of

'visually derived semantic codes' into distinct types,

produced by different routes" (p. 313). Secord (1958)

and Thornton (1943) present data supporting

the notion that facial expression contributes to the

Visually-derived Semantic Codes. These data will be

discussed in greater detail in a later section.

The Expression Analysis system also receives

structural code input and the product of processing is an

"Expression Code" (Expression Representation) which is

based on the shapes and postures of facial features. This

code allows faces to be categorized in terms of their

emotional expressions. The expression codes may be

thought of as part of the "right hemisphere iconic field"

described by Bowers and Heilman (1984). Activation of

both the Identity-Specific Semantic Codes and Expression

Codes allows access to Name Codes and Expression Label

Codes which then allow an appropriate name or label to be

generated. Activation of an appropriate name or

expression label can also allow access to the Person

Identity Nodes and Expression Codes, respectively. Yarmey

(1973) and Young et al. (1985) found that a face could

gain access to semantic information about a person while

the person's name remained unavailable which suggests that

the Name Codes are independent of the Identity-Specific

Semantic Codes. In other words, activation of a Person

Node does not guarantee access to names. Bowers and

Heilman (1984) and Rapcsak, Kasniac, and Rubens (1989)

both reported patients who could neither name facial

expressions nor select the expression named by the

examiner, though they could both discriminate same from

different expressions and match expressions. This

indicates that Expression Label Codes are also independent

of the Expression Codes.

At this point it is worth noting how prosopagnosia

and RHD fit into this cognitive model. It seems clear

that the FRU's are not being activated in prosopagnosia

because such activation is thought to produce a subjective

feeling of familiarity which prosopagnosics do not

experience. However, what seems equally clear is that the

structural information about familiar faces held in the

FRU's is intact because it influences performance on

indirect tasks. Similarly, structural encoding is grossly

unimpaired though subtle defects in some aspect of this

process exist. Thus, the observed impairment in

prosopagnosia appears to result from a functional

disconnection between the output of Structural Encoding

and the FRU's which prevents FRU activation; the FRU's and

the Identity-specific Semantic Codes remain intact and

support indirect task performance. The hypothesized

location of this functional lesion is indicated in Figure 1-1.


In right cerebral hemisphere disease an impairment in

the ability to recognize facial affect is commonly seen

while facial identity recognition is intact. Two

explanations for this finding are possible. First, the

facial expression processing defect could be the result of

damage to the expression codes themselves (Bowers &

Heilman, 1984). This condition is represented by a

functional lesion at #2 in Figure 1-1. Second, the

expression codes may be completely intact but disconnected

from the input of structural encoding (#3, Figure 1-1),

resulting in an expression recognition impairment roughly

analogous to the identity recognition defect seen in

prosopagnosia. Thus, the expression representations would

be unavailable to conscious access but their presence

should influence performance on indirect tasks. At this

point there are no data which would allow us to

discriminate between these two possibilities.

Extraction of Nonobservable Attributes

The previous sections have focused primarily on the

processes involved in extracting identity and expression

information from faces. However, there is a great deal of

information conveyed by a face that does not yield

specific conclusions about its identity or affective

state. Examples within the physical domain include sex,

age, race, and attractiveness. Decisions regarding these

attributes can be made relatively effortlessly and show a

high degree of agreement across judges (Dion, Berscheid, &

Walster, 1972). The physical features of a face can also

be used, in the absence of other information, to make

subjective judgements about nonobservable characteristics

of the person including psychological traits, potential

behavior, likeability, or occupational status and these

judgements are also made with a surprising degree of

consistency across observers (Dion, et al., 1972;

Goldstein, Chance, & Gilbert, 1984; Klatzky, et al.,

1982a, 1982b; Secord, 1958; Thornton, 1943).

These nonobservable attributes can be roughly divided

into two general categories. "Personality" attributes

refer to potential behavior of and quality of the social

interaction with the target person. The type of

information that goes into these inferences may be the

kind of data which drives the "first impression." This

research tends to be older and has mainly been the focus

of social psychology researchers. The second type of

nonobservable attribute refers to the occupational

category to which a particular face appears to belong.

Occupational category attributions refer, not to the

actual occupation of the person presented, but to the

category which is inferred simply on the basis of the

physical features of the face. The literature related to

this issue is newer and its purpose has been more to help

elucidate the nature of the cognitive processes involved

in face memory.

Personality Trait Attributions

In one of the earliest studies of "personality"

attribution, Thornton (1943) had subjects rate faces in

terms of kindliness, intelligence, industriousness,

honesty in money matters, dependability, and sense of

humor. He found that ratings of the same face at repeated

presentations within judges were quite consistent.

Moreover, he found that ratings of the same face by two

different groups of judges did not differ significantly.

He also noted that smiling persons tended to be rated

higher in terms of kindliness and sense of humor than the

same person not smiling. This finding suggests that

facial expression recognition plays a role in some

personality judgements.

Secord (1958) indicated that one finding repeatedly

confirmed in his work was that judges agree in attributing

certain personality impressions to faces with particular

physiognomic cues and argued that "the perceiver

selectively attends to certain aspects of the face, and

used ready-made interpretations" (p. 303). Thus, in

females the amount of lipstick related positively to

sexuality while bowed lips produced the impression of being

conceited, demanding, immoral, and receptive to the

attentions of men. Older men were seen as more

distinguished, responsible, and refined. Additionally,

darker skin was associated with higher ratings of

hostility, boorishness, unfriendliness, and lack of sense

of humor. Finally, he argued that commonly agreed-upon

facial expressions account for some portion of the

impressions which are formed in looking at a photograph.

Secord concluded that cultural factors contribute to the

attribution of personality characteristics in several

ways. "First, the culture places selective emphasis upon

certain cues; e.g., the amount of lipstick a woman wears

is more important than the shape of her ears. Second, the

culture provides ready-made categories which consist of

denotative cues and associated personality attributes, as

in age-sex roles, or in ethnic stereotypes. Finally,

various forms of facial expression have become established

as having at least partly agreed-upon meanings in our

culture" (p. 313).

Occupational Category Attributions

Klatzky et al. (1982b) demonstrated that normals can

extract occupational stereotype information from faces

that is consistent across judges. She asked students to

describe mental prototypes associated with thirteen

particular occupational categories (e.g., athlete, farmer,

watchmaker) and rate each category with regard to the

goodness of their mental prototype. Faces fitting the

prototype descriptions were then selected from various

sources. A new set of students were then presented with

each face and the thirteen occupational categories. Their

task was to indicate the three categories to which each

face most likely belonged. The results indicated that the

students could, indeed, reliably place the faces into

their designated a priori category based strictly on

features of the face.

A second study (Klatzky et al., 1982a)

demonstrated that occupational category labels could prime

a face/nonface decision using strong category exemplars as

targets. These exemplars were selected based on the data

from the study described above. Subjects were then

presented an occupational category label followed by two

halves of a face and were asked to indicate whether the

two halves were from the same face. In the Congruent

condition the two halves were from the same face and that

face was from the same category as the prime; in the

Incongruent condition, the two halves were of the same

face and that face was from a different category than the

prime. The nonfaces were composed of two halves of

different low exemplar faces. There was also a no prime

condition in which the targets were preceded by the word

"Blank." The results indicated that the prime

significantly interfered with the face/nonface decision in

the incongruent condition but did not affect the congruent

decision. Thus Klatzky et al. (1982a, 1982b) demonstrated

that normals have both direct and indirect access to

occupational category information in faces.


Two important things should be gleaned from the

studies into the attributions of personality

characteristics and occupational categories to unknown

faces. First, judges are able to make these attributions

with remarkable consistency. Second, as Secord (1958)

emphasized, these attributions are based on shared

cultural experience which has resulted in ready-made

categories for particular features or configurations of

features. The implication is that two people from the

same culture would be able to extract from a stimulus face

information which would lead to similar judgements

concerning nonobservable attributes of the face.

Conceptually, inferences about the nonobservable

attributes of an unknown person based on physical

appearance are a form of stereotype. Thus, these two

types of attributions (personality and occupation) will be

referred to hereafter as personality and occupational

stereotypes.

Purpose of this Study

Bruce and Young (1986) and Young, Hay, and Ellis

(1986) suggest that the basis of these stereotypes is the

Visually-derived Semantic Codes. As noted earlier, Bruce

and Young believe that both expression and identity-

related processes contribute to the creation of Visually-

derived Semantic Codes and that it may be possible to

separate the codes into distinct types that are a function

of different processing routes. That is, different

inferences about nonobservable attributes may rely more or

less heavily on the processes or codes involved in the

judgement of either facial identity or expression. Put

another way, if the ability to extract a particular

nonobservable attribute is based primarily on information

derived from the expression processing system, then damage

to that system should result in impairment in expression

recognition as well as the ability to make the relevant

inferences. The same scenario could be imagined for the

ability to extract nonobservable attributes that are a

function of the identity system. This logic works well

going from a model to hypotheses; however, going from real

data to a hypothetical model is more complex.

Associations between impaired abilities in a brain damaged

individual can never be conclusive evidence that the two

abilities are supported by the same system since, at the

least, a single lesion may damage two independent but

anatomically proximate systems. Behavioral dissociations

within individual patients and across types of patient

groups provide much more conclusive information about the

structure of the relevant cognitive systems, but even

inferences based on such data must be made with care

(Shallice, 1988).

The purpose of this study is to examine the

contribution of expression and identity information to the

formation of different types of Visually-derived Semantic

Codes by assessing the ability of neurologically normal

individuals, right- and left-hemisphere damaged

individuals (NHD, RHD, and LHD, respectively), and a

prosopagnosic to make subjective judgements about

personality characteristics and occupational category

based solely on the qualities of stimulus faces.2

Personality and occupational stereotypes were

selected for two reasons. First, there is a relatively

large body of literature concerning both occupational and

2 This does not mean that subjects will be expected to
guess the actual occupation or personality type of each
face. They must simply judge, based on the appearance of
the face, the occupation or personality type of which the
face is normatively considered most exemplary.

personality stereotypes. Second, both intuition and

research (e.g., Secord, 1958; Thornton, 1943) suggest that

the two types of stereotypes may be differentially

supported by the expression and identity processing

systems. Thus, we will assess functioning in the

following four domains of face processing ability: 1)

recognition of facial identity; 2) recognition of facial

expression; 3) identification of personality stereotypes;

and, 4) identification of occupational stereotypes.

As suggested by the evidence of implicit recognition

of face identity in prosopagnosia, direct measures of any

of the above domains of face processing may be

insufficient to fully evaluate the contents of memory or

the status of the memory representations. Indirect tests

may provide evidence that the contents of memory are

intact but unavailable to conscious access or that, in

fact, the representations are functionally disrupted.

Additionally, the extension of the dissociations between

affective expression and identity processing to indirect

measures would be important support for the view that

different mechanisms underlie those functions. Thus

indirect tests have also been included.

These tasks were based on Young, Ellis, Flude,

McWeeny, and Hay's (1986) name categorization interference

paradigm which formed the methodological basis for DeHaan

et al.'s (1986) experiment with a prosopagnosic. As in

DeHaan et al.'s (1986) study, Young, Ellis, Flude,

McWeeny, and Hay (1986) presented familiar faces paired

with a name in a speech bubble which either belonged to

the person shown, a different person from the same

occupational category as the person shown, or a different

person from a different occupational category. The

subjects were asked to categorize names in terms of

occupational category and vocal response latency was

measured. The presence of the face interfered with name

categorization but only when the face and name were from

different categories. Thus, with normals, the knowledge

that the person named was from a different occupational

category than the person pictured (Incongruent condition)

interfered with their ability to classify the name in

terms of occupation relative to the condition in which the

face and name were of the same person (Congruent

condition). The performance of prosopagnosics (DeHaan et

al., 1986, in press; see earlier discussion) on this task

indicates that the conscious ability to name or categorize

the faces is not necessary for normal performance.

Thus, semantic knowledge about the person pictured

influences the amount of time it takes to semantically

categorize the name with which it was presented as

indicated by a Congruent condition reaction time that is

faster than the Incongruent condition reaction time. When

this effect occurs in association with evidence that the

ability to directly categorize the face along the same

semantic dimension is compromised, it constitutes evidence

that the relevant face and semantic representations are

nonetheless intact. When a person is impaired on both

direct and indirect tasks the interpretation is more

complex.3 The inference that failure on both direct and

indirect tasks of the same ability in a head injured

patient indicates that the relevant representations are

globally unavailable is not logically tenable by itself

and can never be proven unequivocally. However, it may be

possible to generate enough circumstantial evidence to

make that interpretation viable.


Direct Tests

Based on the literature reviewed above we can make

two predictions concerning the outcome for measures of

direct access to facial identity, expression, personality

stereotype, and occupational stereotype information (see

Table 1-1). First, since NHD and LHD patients do not

3 Roediger (1990) has pointed out that poor performance
on indirect tasks can occur because the initial mode of
processing of a stimulus is different from its mode of
presentation at test. For example, when the target
information is initially processed conceptually, in terms
of its meaning, while the indirect task makes significant
use of the perceptual features of the target then
performance on the indirect task may be worse relative to
direct task performance (e.g., Jacoby, 1983). On the
other hand, performance on one indirect task can be
impaired relative to performance on another indirect task
if the stimuli used in the initial exposure are different
in form on the test (e.g., initial exposure to a picture
of a house, then test with the word "house" compared to
testing with the picture of the house; e.g., Weldon &
Roediger, 1987, experiment 4).

typically show impairment on direct tests of face

processing ability, we expect that these two groups will

perform normally on all four measures. Second, we expect

that RHD patients will be impaired in processing facial

expression and unimpaired on tests of facial identity

recognition; the converse should be true for

prosopagnosics. Thus, for the direct tests, only the

outcomes of the RHD patients and the prosopagnosic on the

personality and occupational stereotype identification

tests remain to be determined.

Table 1-1

Outcome Assumptions Based on a Review of the Previous
Research for Each Domain of Face Information

Type of Test

Subjects    Identity   Expression   Personality   Occupation

Normal         +           +             +             +

LHD            +           +             +             +

RHD            +           -             ?             ?

PA             -           +             ?             ?

Note. LHD = left hemisphere disease patients; RHD = right
hemisphere disease patients; PA = prosopagnosia.

In an attempt to be comprehensive one could generate

all the possible outcomes for the two groups on the

remaining two tests. However, doing so would likely add

little to our understanding of this issue. The

alternative is to describe and test the model(s) which

seem(s) most theoretically and intuitively sound. From a

theoretical perspective, Bruce and Young (1986) argue that

judgements concerning nonobservable attributes (e.g.,

facial stereotypes) are based on the information contained

in the Visually-derived Semantic Codes and that all face

processing systems contribute to their creation. If their

view is accurate, then intuitively it seems that the

expression processing system would be the primary

contributor of information to the Visually-derived

semantic codes underlying personality stereotype

identification while the identity processing system would

supply significantly more information for the creation of

the codes which support occupational stereotype judgments.

The model depicting these relationships is presented in

Figure 1-3. This intuitive position is partially

supported by the data of Thornton (1943) and Secord (1958)

who report that facial expression made an important (but

not the only) contribution to personality attribution.

In concrete terms this model suggests that we apply

certain personality descriptors to unknown persons based

on their facial expression. That is, because he/she is

smiling we may assume he/she is friendly or kind. On the

other hand, if he/she is frowning we may make more

negative personality inferences about them. Similarly, we

may infer that someone looks like they belong to a

particular occupational category because he/she is similar

in appearance to someone we have encountered, either in

person or through the media, who works in that job (i.e.,

call someone a laborer because he looks like your

construction worker cousin or Archie Bunker). Thus, the

model in Figure 1-3 suggests that the RHD patients will be

impaired both on judgement of facial expression and

personality while remaining unimpaired on tests of

identity recognition and occupational stereotype judgment.

The prosopagnosic, of course, should show the opposite

pattern.

Indirect Tests

To the extent that the relevant facial

representations are disrupted, performance on both the

direct and indirect measures tapping the ability to

process that type of facial stimulus will be impaired.

For example, Bowers and Heilman (1984) have suggested that

failure of right-hemisphere patients to overtly categorize

facial expression may result from destruction or

dysfunction of the facial expression representations. If

this is the case then failure on both direct and indirect

expression recognition tests should be observed. If, on

the other hand, the representations are intact, right-

hemisphere patients should perform "normally" on the

indirect expression tests yet remain impaired on the

direct measures.

A corollary of the above hypothesis is that if two

abilities (e.g., famous face recognition and stereotype

identification) are based on the same information,

performance on measures of the two abilities should be

correlated.

Figure 1-2. Hypothetical cognitive model of face processing.
(Schematic: Visual Input feeds a route for unfamiliar faces
leading to the Visually-derived Semantic Codes, which support
personality and occupation judgements, and a route for
familiar faces through the FRU's to the Identity-specific
Semantic Codes (Person Identity Nodes), semantic information,
and the Name and Label Codes.)

For example, since the disability in

prosopagnosia appears to be one of conscious access to

intact facial representations, if either or both of the

stereotype processing abilities is supported by the

identity processing system then performance on indirect

measures of that stereotype processing ability should be

unimpaired. If, on the other hand, the stereotype

processing ability requires access to expression

representations which are dysfunctional, thus impairing

performance on indirect expression tests, that stereotype

processing ability should be likewise impaired.

Consequently, we predict that the RHD patients should

remain unimpaired on the identity and occupational

stereotype tasks, and should be impaired on the

personality and expression tasks if the contents of the

Expression system are unusable, otherwise they should

perform normally. The prosopagnosic should perform

normally on all the indirect tasks.





Prosopagnosic Patient. L.F. (who has been reported

frequently; e.g., Bauer, 1982, 1984; Bauer & Trobe, 1984;

Greve & Bauer, 1989, 1990) is a 47-year-old male with 16

years of education who, in 1979, sustained bilateral

occipitotemporal hematomas as the result of a motorcycle

accident which left him with profound and stable

prosopagnosia, decreased color vision, an altitudinal

hemianopia with a left inferior congruous quadrantanopia,

and decreased emotional responsiveness to visual stimuli.

See Table 2-1 for a comparison of our prosopagnosic's age,

education, WAIS-R Vocabulary Scaled Score, and time post

injury (TPI) with the other patient and control groups.

Unilateral Stroke Patients. The stroke patients were

ten right hemisphere damaged patients (RHD; mean age = 64.0,

sd = 5.14; mean education = 12.0, sd = 3.62) and ten left

hemisphere damaged patients (LHD; mean age = 61.0, sd =

8.53; mean education = 12.6, sd = 2.75) who were

participants in ongoing neuropsychological studies at the

Gainesville Veterans Administration Medical Center. All

Table 2-1

Comparisons of Patient and Control Groups on Demographic
Variables, WAIS-R Vocabulary Score, and Time Post Injury

Group            Age           Education     Vocabulary

Young     m      43.13a        16.46a        13.60a
          sd      2.69          2.32          2.97

L.F.             47            16ab          13ab

Older     m      65.80b        14.73ab
          sd      5.02          2.81

LHD       m      61.00b        12.60b
          sd      8.54          2.76

RHD       m      64.00b        12.00b
          sd      5.14          3.62

abc Within each column, groups with the same letter are not
significantly different at p < .05.

patients had sustained a single unilateral stroke, or if a

second stroke occurred it included the area of the original

stroke. No patients with lesions of two or more areas of

the brain were included. Table 2-1 provides specific

demographic data for these subjects compared to control

subjects and Table 2-2 lists relevant demographic data and

lesion location for each stroke patient.

Normal Controls. Fifteen adults aged 40-50 (Young;

mean age = 43.13, sd = 2.69) were recruited from the

community to serve as controls for the prosopagnosic and

fifteen adults aged 60-70 (Older; mean age = 65.8, sd =

5.02), also from the community, served as controls for the




Table 2-2
Demographic and Lesion Location1 Data for Individual Stroke Patients

Left Hemisphere Disease Patients

Patient Age Ed Sex TPI F P T O

L1 52 14 F 18 + + -
L2 63 12 M 276 o o o o
L3 56 7 M 11 + + + +
L4 60 13 M 33 o o o o
L5 73 14 M 143 + + -

L6 65 15 M 11 + -
L7 74 14 M 83 + + +
L8 57 16 M 192 + + + -
L9 47 12 M 84 + + + -
L10 63 9 M 120 + + + -

Right Hemisphere Disease Patients

Patient Age Ed Sex TPI F P T O

R1 66 8 M 12 + + -
R2 68 11 M 20 + + + -
R3 55 12 M 6 o o o o
R4 63 12 M 162 + + + -
R5 67 16 M 38 + + + -

R6 67 13 M 13 + + + -
R7 55 8 M 8 + + + -
R8 65 12 M 7 o o o o
R9 64 8 M 1 + + + -
R10 70 19 M 167 + + + -

Note. TPI = time post injury; F = frontal; P = parietal; T
= temporal; O = occipital

1 Lesion location refers to the major brain structures
involved in the stroke. Size of lesion is not implied by
the number of structures involved. ('+' means that structure
contains part of the lesion; 'o' means the lesion has not
been localized beyond the hemisphere level)

stroke patients. All participants were born and raised in

the United States which ensured a relatively circumscribed

cultural base.

Group Comparisons. Individual one-way ANOVA's were

conducted to evaluate group effects for age, education,

WAIS-R Vocabulary scaled score, and TPI. A significant

Group effect was found for age (F [3,46] = 53.15, p = .0001)

with the Ryan-Einot-Gabriel-Welsch Multiple F post hoc test

(REGWF; Ryan, 1959, 1960; Einot & Gabriel, 1975; Welsch,

1977) indicating that the Young control group was, in fact,

significantly younger than the Older control group and both

stroke groups. No differences were found between the Older

control group and the two stroke groups. L.F.'s age was not

significantly different from the Young Control group.1

Significant Group differences were found for education (F

[3,46] = 6.37, p = .0011) with the REGWF indicating that the

two control groups were not significantly different from

each other, and that the Younger control group was

significantly better educated than the two stroke groups.

Educational background of the Older control group and the

two stroke groups did not differ. L.F. also did not differ

from the Young Control group. A significant group effect (F

[3,45] = 9.47, p = .0001)2 was also found for WAIS-R

1 Except where indicated, L.F.'s scores were compared to
the means of the Younger control group using 95% confidence
intervals derived from the formula: CI = m ± ts; where m =
sample mean, t = the t-value for a given alpha level divided
by 2, and s = the sample standard deviation. Confidence
intervals were calculated using the transformed data and
reconverted to original units for presentation.
2 One Older Control subject who had worked as a
psychological technician and who was familiar with the WAIS-
R was not given the Vocabulary subtest.

Vocabulary scaled scores. The REGWF indicated that the two

control groups did not differ, the Older control group and

the RHD patients did not differ, and the two stroke groups

did not differ. Again, L.F. did not differ significantly

from his control group. Finally, the two stroke groups did

not differ significantly in months since injury (t = 1.5893,

p = .1294). Nor was L.F.'s TPI significantly different from

the stroke patients.

To summarize the findings of this set of analyses, the

Younger control group and L.F. did not significantly differ

on any of the three variables. Additionally, the Older

control group did not differ significantly from the two

stroke groups on age or education, but did differ from the

LHD group on WAIS-R Vocabulary scaled score. However, the

RHD patients did not differ significantly from the LHD

patients on vocabulary score. The patient groups did not

differ in TPI. Thus the patients are well matched to their

respective control groups.
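The single-case comparison described in footnote 1 can be
sketched as follows. This is a minimal illustration, not the
analysis code actually used; the control mean (13.60) and
standard deviation (2.97) are the Young controls' Vocabulary
values, and the critical t of 2.145 assumes a two-tailed alpha
of .05 with df = 14 (n = 15).

```python
# Sketch of the confidence-interval comparison from footnote 1:
# CI = m +/- t*s, where m and s are the control group's mean and
# standard deviation and t is the two-tailed critical value.

def confidence_interval(m, s, t_crit):
    """Return the (lower, upper) bounds of the interval m +/- t*s."""
    return (m - t_crit * s, m + t_crit * s)

# Assumed values: Young controls' Vocabulary mean 13.60, sd 2.97;
# t = 2.145 for alpha = .05 (two-tailed) with df = 14.
lower, upper = confidence_interval(13.60, 2.97, 2.145)

# L.F.'s Vocabulary score of 13 falls inside this interval, so it
# would not be judged significantly different from the control mean.
print(round(lower, 2), round(upper, 2))  # 7.23 19.97
```

A score is classified as impaired only when it falls outside the
interval, which is why L.F.'s slightly lower Vocabulary score does
not differ significantly from his controls.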

Tests of Face Memory and Perception

Milner Facial Recognition Test. In the Milner Facial

Recognition Test (Milner, 1968) the subject was instructed

to study an array of 12 unfamiliar male and female faces for

45 seconds. Following a distraction period of 90 seconds

the subject was presented with an array of 25 faces which

contained the original 12. The subject's task was to select

the twelve faces he/she remembered from the original array.

Test of Facial Recognition-Short Form. The Test of

Facial Recognition-Short Form (Levin, Hamsher, & Benton,

1975) is a test of face perception, rather than memory, in

which the subject was presented with a single front view of

a face which he/she must match with one face in an array of

six faces. The first six arrays contained one view which

matched the stimulus face exactly. The arrays of the

remaining seven items contained three photographs of the

stimulus person taken under different lighting conditions or

from a different angle. For these items the subject

selected the three faces which were photographs of the same

stimulus person. The total score is the number correct

out of 54 after correction for age and education.

Identity Discrimination. Identity Discrimination is

the first subtest of the Florida Facial Affect Test (see

below) and consists of twenty vertically arranged pairs of

female faces with neutral expressions and whose hair is

covered with a surgical cap. Half of the pairs consist of

identical photographs of the same person and half consist of

photographs of two different persons. The subject's task is

to indicate whether the photographs are of the same or

different persons.

Tests of Direct Access to Face Information

Florida Facial Affect Test. The Florida Facial Affect

Test (FFAT) is part of the Florida Affect Battery-Revised

(Blonder, Bowers, & Heilman, 1991) which was designed to

assess receptive processing of emotional faces and prosody,

and consists of three parts. Part I, the Florida Facial

Affect Test, is comprised of five face perception subtests.

Subtest 1 was described above. Subtest 2 (Facial Affect

Discrimination) measures the patient's ability to

discriminate emotional facial expressions across different

persons. Twenty pairs of vertically arranged faces are

presented. The two faces in each pair are never the same

person but for half the pairs, the two people have the same

expression and for half they have different expressions.

The subject's task is to indicate whether the facial

expressions are the same or different. In Subtest 3 (Facial

Affect Naming) twenty individual faces with happy, sad,

angry, frightened, or neutral expressions are presented to the

patient who must then name the emotional expression on the

face. In Subtest 4 (Facial Affect Selection) the patient

must select from a set of five faces the one face bearing

the expression named by the examiner. Finally, in Subtest 5

(Facial Affect Matching) the patient must select the face

among a set of five which bears the same expression as a

stimulus face.

Stereotype and Identity Rating Tests. Three rating

tests were specifically designed for this study to measure

direct access to occupational and personality stereotype and

face identity information. The Occupational and Personality

Stereotype rating tests each consisted of 10 faces presented

twice, once each with its "correct" category and "incorrect"

category. The "correct" category was the one into which it

was placed most frequently by an independent sample of

subjects while the "incorrect" category was the one into

which it was placed least frequently (see Appendix A for

details). The test of face identity contained 10 famous

faces presented twice, once with its correct name and once

with the name of another person famous at about the same

time but from a different occupational category. In all

tests half of the faces were paired first with the correct

label and half with the incorrect label.

The subject's task was to rate how well each face and

its associated label (occupational category, personality

descriptor, or name, depending on the test) matched using a

9-point Likert scale. Specifically, in the Occupational

Stereotype Rating test the subjects rated how much they

thought the person shown looked like he belonged to the

associated occupational category (1 = "very much no"; 9 =

"very much yes"). In the Personality Stereotype Rating test

the subjects rated how much they thought the person shown

would be described using the personality descriptor

presented (again, 1 = "very much no"; 9 = "very much yes").

Finally, on the Identity Rating test, the subjects indicated

how confident they were that the face and name went together

(1 = "very confident no"; 9 = "very confident yes"). Thus,

if a subject was perfectly accurate, he/she would produce a

mean of 9 for the 10 "correct" face-label pairings and a mean

of 1 for the 10 "incorrect" face-label pairings. The

magnitude of the difference between the two means thus

indicated how well the subject was able to discriminate between

"correct" and "incorrect" pairings.
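To make the scoring concrete, the discrimination score just
described can be computed as in this small sketch (the ratings
below are hypothetical, not study data):

```python
# Hypothetical ratings for one subject on the 9-point scale: ten
# "correct" face-label pairings and ten "incorrect" pairings.
correct_ratings = [9, 8, 9, 7, 9, 8, 9, 9, 8, 9]
incorrect_ratings = [1, 2, 1, 3, 1, 2, 1, 1, 2, 1]

mean_correct = sum(correct_ratings) / len(correct_ratings)        # 8.5
mean_incorrect = sum(incorrect_ratings) / len(incorrect_ratings)  # 1.5

# Discrimination score: a perfectly accurate subject scores
# 9 - 1 = 8, while undifferentiated responding yields a score near 0.
discrimination = mean_correct - mean_incorrect
print(discrimination)  # 7.0
```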

Tests of Indirect Access to Face Information

The tests of indirect access to face information

consisted of four interference tasks (Young, Ellis, Flude,

McWeeny, & Hay, 1986) addressing access to information about

perceived occupational and personality category membership,

emotional facial expression, and facial identity. All four

tasks consisted of 40 trials (10 congruent, 10 incongruent,

20 control) with five practice trials preceding each test.

In the Congruent Condition, the label following the facial

photograph was consistent with the face. For example, for

the Occupation-Category and the Personality-Descriptor

Interference Tasks, the category or descriptor presented

would be the one into which the face was placed most

frequently (see Appendix A); in the Expression-Label

Interference Task, the emotional state of the stimulus

person as implied by her expression would be accurately

described by the word which followed the photograph (e.g., a

smiling face would be followed by the word "happy");

finally, in the Face-Identity Interference Task, a

photograph of a famous person would be followed by that

person's name (e.g., a photograph of Lyndon Johnson would be

followed by the name "Lyndon Johnson"). In the Incongruent

Condition, the face and label did not match. In the Control

Condition, the face was replaced with a blank gray rectangle

with the same dimensions as the face photographs. Each gray

blank was followed by one of the labels used in the relevant

test such that each label paired with a photograph appeared

at least once with a blank. Trials of each condition were

distributed in a pseudo-random order throughout the test

such that half the faces were presented in the congruent

condition first.

All interference test stimuli were aligned such that

the fixation dot, eyes of the face, and word were all at the

same vertical position on the tachistoscope card, thus

allowing central fixation without scanning during stimulus

presentation. The stimuli were presented on a Gerbrands

G1135 (T-4A) four-field tachistoscope with a G1151 Gerbrands

Automatic Card Changer and reaction time was measured with a

voice activated millisecond timer. A trial consisted of a

500ms presentation of the fixation dot, followed by a face

or blank for 500ms, after which the label (or name) was

presented for 3000ms. For the Occupation, Personality, and

Identity tasks the subject would say "yes" or "no" (based upon

criteria described below) upon the appearance of the word.

For the Expression task the word was simply read as quickly

as possible. The subjects were always instructed to focus

on the word, ignoring the facial stimuli (including the


Decisions were based on the following criteria. For

the Face-Occupation Category Interference task, subjects

were to say "yes" if the occupational category was an

athletic job (quarterback, shortstop) and "no" for any other

occupation (accountant, doctor, laborer, truck driver). For

the Face-Personality Descriptor task they said "yes" if the

word described a "bad guy" (aggressive, intolerant) and "no"

if it described anyone else (kind, sociable, shy). In the

Expression-Label Interference task, subjects simply read the

emotion label as it appeared (rather than making a decision

about it, as in the other tests). Finally, in the Face-

Identity Interference task, they responded "yes" when the

name presented was that of a politician.

General Procedure

Subjects entered the laboratory and completed the

informed consent form. They were given the instructions and

allowed to review the words for each interference task

immediately prior to the administration of each. Following

the completion of the interference tasks, the Vocabulary

subtest from the Wechsler Adult Intelligence Scale-Revised

(WAIS-R; Wechsler, 1981) was completed. The face memory and

perception tests and the tests of direct access to face

information were administered last. The following is a

summary of the order of test administration: 1) Face-

Occupation Category Interference; 2) Face-Personality

Descriptor Interference; 3) Expression-Label Interference;

4) Face-Identity Interference; 5) WAIS-R Vocabulary; 6)

Milner Facial Recognition Test; 7) Benton Test of Facial

Recognition; 8) Occupation Stereotype Rating; 9) Personality

Stereotype Rating; 10) Face Identity Rating; and, 11)

Florida Facial Affect Test. Following completion of all

tests the subjects were debriefed. All control subjects and

stroke patients were tested once. L.F. was tested four

times to ensure a more reliable evaluation.


Tests of Face Memory and Perception

The scores for the Milner Facial Recognition Test

(Milner; number of correct recognitions out of 12), Benton

Test of Facial Recognition (Benton; mean corrected long form

score), and the FFAT Identity Discrimination Subtest

(percent correct) were analyzed in separate one-way (Group)

ANOVAs. A significant group effect was found for the Milner

score (F [3,46] = 3.71, p = .0180) with the REGWF indicating

that the two stroke groups did not significantly differ, nor

were there differences among the two control groups and the

LHD group. However, the RHD group performed significantly

worse than the two control groups. L.F.'s performance on

the Milner was in the impaired range according to Milner's

3 All descriptive statistics and analyses of variance
(ANOVA) were computed using SAS Version 6 (SAS Institute,
Inc.) on a Compaq 386 personal computer. Scores represented
as proportions (Benton, Milner, and all FFAT scores) were
transformed using the arcsin square-root transformation to
stabilize the variance (Neter, Wasserman, & Kutner, 1985).
All reaction times were log transformed to reduce skewness
(Neter, Wasserman, & Kutner, 1985). All transformed
variables were reconverted to the original units for
presentation purposes.

suggested cut-offs (Milner, 1968). For the Benton,

significant differences were also found, F (3,46) = 15.18, p

= .0001. The REGWF indicated that the RHD patients

performed significantly worse than all other groups and that

the LHD patients performed worse than the Younger controls.

On the Benton, L.F. performed within the normal range

according to Benton's norms (Benton, Hamsher, Varney, &

Spreen, 1983). On the second test of face perception, FFAT

Identity Discrimination, the RHD group was significantly

worse (F [3,46] = 11.99, p = .0001) than the other three

groups and L.F. (who was 100% correct) did not differ from

the controls or the LHD group. Thus, the RHD patients were

impaired relative to the LHD and NHD subjects on all three

tests, while the prosopagnosic was only impaired on the test

of face memory. Clearly, basic face perception is a problem

for the RHD patients. See Table 2-3 for details.

Table 2-3

Means for the Tests of Face Memory and Perception

                                          FFAT Identity
Group      Milner         Benton          Discrimination

Younger    8.40 (1.50)a   48.60 (3.69)a    98 (3.16)a

L.F.       7.00 (1.00)b   43.50 (3.00)a   100 (0.00)a

Older      8.60 (1.55)a   47.07 (2.87)ab   96 (5.49)a

LHD        7.60 (1.17)a   43.80 (4.69)b    93 (6.75)a

RHD        6.80 (1.55)b   36.80 (5.13)c    78 (16.1)b

ab column means with the same letter are not significantly
different at alpha < .05.
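The variance-stabilizing transformations described in the earlier footnote (arcsine square-root for proportion scores, log for reaction times) can be sketched as follows; the sample values are illustrative only, not study data:

```python
import math

def arcsine_sqrt(p):
    """Arcsine square-root transformation of a proportion in [0, 1],
    used to stabilize the variance of proportion scores."""
    return math.asin(math.sqrt(p))

def log_transform(rt_ms):
    """Log transformation of a reaction time (in ms) to reduce skewness."""
    return math.log(rt_ms)

# Illustrative values only (not study data).
p_t = arcsine_sqrt(0.93)      # a proportion-correct score
rt_t = log_transform(950.0)   # a reaction time in milliseconds

# Reconvert to original units for presentation, as in the footnote.
p_back = math.sin(p_t) ** 2
rt_back = math.exp(rt_t)
```

Analyses are run on the transformed values; the back-transformed values are used only for reporting.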

Tests of Direct Access to Face Information

Florida Facial Affect Test. The remainder of the FFAT

subtests (Affect Discrimination, Naming, Selection, and

Matching) address affective processing and were submitted to

a 4 (Group) x 4 (Subtest) mixed-block ANOVA which revealed a

significant Group by Subtest interaction, F [9,138] = 3.06,

p = .0023. Follow-up ANOVAs and REGWFs evaluating group

differences on each subtest indicated that for Affect

Discrimination and Affect Naming the Control groups

performed significantly better than the stroke groups (F

[3,46] = 13.10, p = .0001 and F [3,46] = 8.15, p = .0002,

respectively). For both tests, L.F. did not differ

significantly from his control group. For Affect Selection,

the RHD group performed significantly worse than the LHD

group and from controls, F (3,46) = 14.44, p = .0001, and

L.F. performed significantly worse than the Younger Control

group. Finally, for Affect Matching, the RHD group was

significantly worse than the LHD group which was

significantly worse than the control groups, F (3,46) =

19.61, p = .0001. Again, L.F. did not differ from the

Younger control group.

When Subtest differences were evaluated within each

Group the following results occurred. The Young control

group scored significantly higher on Affect Selection than

on Affect Matching which was significantly higher than both

Affect Discrimination and Affect Naming (F [3,42] = 11.06, p

= .0001). For the Older control group only Affect Selection

and Affect Discrimination differed significantly, with

Affect Selection being higher (F [3,42] = 3.68, p = .0194).

For the LHD patients Affect Selection was significantly

higher than the other subtests (F [3,27] = 11.19, p =

.0001). Finally, for the RHD patients Affect Selection was

performed significantly better than Affect Discrimination

and Affect Matching and Affect Naming was performed

significantly better than Affect Matching. Refer to Table

2-4 for a summary of these findings. In general, the

prosopagnosic was unimpaired on these tests while the RHD

patients differed from all other groups on Affect Selection

Table 2-4

Mean Percent Correct for the Florida Facial Affect Test
Affect Discrimination, Naming, Selection, and Matching

Group        Discrim.   Naming    Selection   Matching

Younger  m   93.0ax     92.7ax    99.3bx      96.7
         sd   6.21       6.23      1.76        4.01

L.F.     m   87.5x      87.5x     93.8x       85.0
         sd   6.45       6.45      4.79        4.01

Older    m   90.7ax     95.3abx   97.3bx      93.0
         sd   8.42       5.81      4.17        9.0

LHD      m   79.5ay     82.5ay    94.0bx      85.0
         sd   6.43      13.79      8.10       10.0

RHD      m   71.5acy    80.5aby   83.0by      63.5
         sd  12.7        8.96     12.95       15.4

xyz column means with the same letter are not significantly
different at alpha < .05

abc row means with the same letter are not significantly
different at alpha < .05



and Affect Matching. Thus, on direct tests of expression

processing the prosopagnosic was relatively unimpaired while

the RHD patients were relatively impaired. Figure 2-1

presents these findings graphically.

Occupational Stereotype Rating Test. The mean rating

for correct pairings and the mean rating for incorrect

pairings were submitted to a 4 (Group) x 2 (Condition)

mixed-block ANOVA for each test.4 (See Table 2-5 for

overall results of this analysis and Figure 2-2 for

graphical presentation.) For the Occupation Stereotype

Rating test there were significant main effects of Group (F

[3,45] = 3.68, p = .0188) and Condition (F [1,45] = 155.40,

p = .0001) but the Group by Condition interaction was not

significant (F [3,45] = 1.99, p = .1287). Post hoc testing

indicated that, as expected, the Correct condition

yielded significantly larger values than the Incorrect

condition across groups indicating that all subjects were

able to discriminate the correct from incorrect pairings.

Additionally, the overall ratings for the Younger control

group were significantly higher than those for the Older

control group, but neither control group differed

significantly from the stroke groups. L.F.'s mean rating

for the Correct condition (5.38) was significantly lower

than that of the Younger control subjects (95% CI: 6.05 to

4 One LHD patient was unable to complete the Occupation and
Personality Stereotype Rating tests because of comprehension
difficulties, and was not included in these analyses.



Figure 2-1. Performance on the FFAT Affect Discrimination,
Naming, Selection, and Matching Subtests. * = RHD < LHD; #
= stroke patients < controls

8.83) while his mean rating for the Incorrect condition

fell within the 95% CI for the Younger control group (95%

CI: 1.49 to 6.85). In fact, both of L.F.'s scores fell

within the 95% CI for the Incorrect condition suggesting

that there was no significant difference between the two.

L.F. was assessed four times and t-tests comparing the

Correct to Incorrect condition were never significant (see

Table 2-6) at alpha < .05. Thus, only the prosopagnosic

showed impaired ability to extract occupational stereotype

information from unfamiliar faces.

Table 2-5

Simple Effect and Grand Mean Ratings for the Occupational
Stereotype Rating Test

Group          Correct   Incorrect

Younger    m   7.44x     4.17y
           sd   .65      1.25

L.F.       m   5.38y     4.83y
           sd   .31       .33

Older      m   6.67      3.40
           sd  1.14       .75

LHD        m   7.08      4.37
           sd   .91       .87

RHD        m   6.34      4.50
           sd  1.43      1.23

Grand Mean m   6.92a     4.04b
           sd  1.10      1.12

Note. The interaction in this model was not significant at
alpha < .05.

ab main effect means with the same letter are not
significantly different

xy L.F. and the Younger Control group share the same letter
within a condition when L.F.'s score falls within the 95% CI
for that condition

Personality Stereotype Rating Test. For the

Personality Rating test, the Group main effect was not

significant (F [3,45] = 2.14, p = .1081) but the Condition

main effect was significant (F [1,45] = 145.41, p = .0001);

the Group by Condition interaction approached significance

(F [3,45] = 2.56, p = .0670). Again, post hoc testing

indicated that the Correct condition was significantly


Figure 2-2. Mean ratings for the Correct and Incorrect
conditions of the Direct Occupational Stereotype test.

higher than the Incorrect condition (see Table 2-7 and

Figure 2-3). L.F.'s scores (Correct = 6.13; Incorrect =

3.8) fell within the appropriate 95% CI's for the Younger

control group (Correct: 4.40 to 8.86; Incorrect: 1.52 to

5.52). The t-tests comparing the Correct to Incorrect

condition for each of L.F.'s evaluations were significant

(see Table 2-6) at p < .05. Clearly then, all groups were

able to accurately derive personality information from

unfamiliar faces.

Identity Rating Test. For the Identity Rating test,

there was a significant Group by Condition interaction (F

[3,46] = 6.82, p = .0007). Evaluation of the interaction

Table 2-6

Results of t-Tests Comparing L.F.'s Ratings in the Correct
Versus Incorrect Conditions for Each of the Rating Tests

Occupational Stereotype Rating Test

              Correct         Incorrect
Evaluation    mean (sd)       mean (sd)

1             5.10 (1.52)     4.50 (1.72)
2             5.40 (1.35)     5.00 (1.25)
3             5.80 (1.40)     5.20 (1.40)
4             5.20 (1.40)     4.60 (1.26)

Personality Stereotype Rating Test

              Correct         Incorrect
Evaluation    mean (sd)       mean (sd)

1             6.10 (0.74)     3.70 (1.34)
2             5.70 (1.16)     3.90 (1.10)
3             6.50 (1.84)     3.70 (1.06)
4             6.20 (1.03)     3.90 (1.10)

Identity Rating Test

              Correct         Incorrect
Evaluation    mean (sd)       mean (sd)

1             4.88 (1.13)
2             4.50 (1.07)
3             5.63 (0.52)
4             5.63 (0.52)

indicated that there was a significant difference between

the correct and incorrect conditions within all the groups

(Younger: F [1,14] = 4784.75, p = .0001; Older: F [1,14] =

628.35, p = .0001; LHD: F [1,9] = 290.36, p = .0001; RHD:

F [1,9] = 48.17, p = .0001). Within the Correct condition,

the RHD patients were significantly lower than the two




Table 2-7

Simple Effect and Grand Mean Ratings for the Personality
Stereotype Rating Test

Group Correct Incorrect Grand Mean

Young m 6.63x 3.47x 5.05
sd 1.04 .91 1.87

L.F. m 6.13y 3.80y 4.96
sd .33 .12 1.26

Old m 6.69 3.64 5.16
sd .69 .94 1.75

LHD m 6.94 4.33 5.57
sd .83 .99 1.61

RHD m 6.11 4.50 5.31
sd 1.33 .91 1.38

Grand Mean m 6.60a 3.90b
sd .99 1.00

Note. Only the Condition main effect was significant.
ab condition main effect means with different letters are
significantly different at alpha < .05

xy when L.F. and the Younger Control group share the same
letter within a condition then L.F.'s score falls within the
95% CI for that condition

control groups (F [3,46] = 4.35, p = .0089) but did not

differ from the LHD patients nor did the LHD patients differ

from the control subjects. Within the Incorrect condition,

the RHD patients were significantly higher than the

remaining groups (F [3,46] = 5.09, p = .0040) which did not

differ. These results indicate that while the RHD patients

were able to discriminate correct from incorrect face name

pairings, they did not do so as well as the other subjects

Figure 2-3. Mean ratings for the Correct and Incorrect
conditions of the Direct Personality Stereotype test.

(see Table 2-8 and Figure 2-4). L.F.'s score for the

Correct condition (mean = 5.16) was less than the lower

limit of the 95% CI for the Correct condition of the Younger

control group (8.34 to 9.36) and his score for the Incorrect

condition (mean = 4.36) was larger than the upper limit of

the 95% CI for the Incorrect condition of the Younger

control group (.45 to 1.99). Of the t-tests comparing the

Correct to Incorrect condition for each of L.F.'s

evaluations, only one (#4) was significant (see Table 2-6)

at alpha < .05; given the number of analyses computed, an

alpha level of .0498 should be interpreted as non-

significant. Thus, while the RHD patients had mild

difficulty recognizing famous faces, they could do so far

more accurately than the prosopagnosic.

Comparison of Ratings Test Difference Scores. When

difference scores (i.e., the difference between mean rating

for the Correct condition and mean rating for the Incorrect

condition) for the three rating tests were analyzed together

in a 4 (Group) x 3 (Test) ANOVA significant Group (F [3,46]

Table 2-8

Simple Effect and Grand Mean Ratings for the Identity Rating
Test

Group Correct Incorrect Grand Mean

Young m 8.85ax 1.22bx 5.04
sd .24 .36 3.89

L.F. m 5.16z 4.36z 4.76
sd .56 .53 .66

Old m 8.55ax 1.46bx 5.00
sd .52 .75 3.66

LHD m 8.28axy 1.61bx 4.95
sd .73 .64 3.49

RHD m 7.67ay 2.47by 5.07
sd 1.56 1.35 3.02

Grand Mean m 8.41 1.62
sd .91 .90

abc row means sharing these letters are not significantly
different at alpha < .05

xy column means sharing these letters are not significantly
different at alpha < .05

xz when L.F. and the Younger Control group share the same
letter within a condition then L.F.'s score falls within the
95% CI for that condition

Figure 2-4. Mean ratings for the Correct and Incorrect
conditions of the Direct Identity test.

= 5.69, p = .0021) and Test (F [2,90] = 191.54, p = .0001)

main effects were found without a significant interaction (F

[6,90] = .46, p = .8366). The REGWF's indicated that the

RHD group was significantly worse at discriminating correct

versus incorrect pairings across all tests and that all

subjects were better at discriminating correct versus

incorrect face-name pairings on the Face Identity test than

on the Occupation and Personality tests. L.F. was

significantly impaired compared to his control group on both

the Occupation (L.F.'s score = .55; 95% CI = .55 to 5.99)

and Identity Rating tests (L.F.'s score = .80; 95% CI = 6.71

to 8.55). His difference score on the Personality Rating


Test (2.23) fell within the 95% CI for the Younger Control

group (1.44 to 6.88). See Table 2-9 for means and standard

deviations. These findings show non task-specific

impairment in discriminating correct from incorrect pairings

for the RHD patients while the prosopagnosic's impairment is

specific to the occupation and identity tasks.

Summary of the Direct Task Results. It was

hypothesized in Chapter 1 that the RHD patients would be

Table 2-9

Mean Difference Scores for the Rating Tests

Group          Occ     Pers    Ident

Younger    m           3.17x   7.6
           sd          1.73     .4

L.F.       m           2.33x    .8
           sd           .41     .21

Older      m           3.05    7.0
           sd          1.31    1.1

LHD        m           2.69    6.6
           sd          1.50    1.21

RHD        m           1.61    5.2
           sd          1.31    2.3

Grand Mean m   2.87a   2.72a   6.79b
           sd  1.66    1.58    1.57

ab main effect means with the same letter are not
significantly different at alpha < .05

xy when L.F. and the Younger Control group share the same
letter within a condition then L.F.'s score falls within the
95% CI for that condition

relatively impaired on the expression and personality

stereotype tasks while the prosopagnosic would be impaired

on the identity and occupation stereotype tasks. As

hypothesized, the prosopagnosic was severely impaired in his

ability to gather identity and occupational stereotype

information from faces while performing normally on the

expression and personality stereotype tasks. On the other

hand, the RHD patients, while being impaired on the

expression task, performed normally on the three other

direct tasks including the personality stereotype task.

Tests of Indirect Access to Face Information

Face-Occupation Category Interference Test. For each

Indirect Test, RTs were submitted to a 4 (Group) by 3

(Condition) mixed-block ANOVA. For this test neither the

Condition main effect (F [2,92] = .25, p = .7757) nor the

Group by Condition interaction (F [6,92] = 1.23, p = .2980)

were significant. Only the Group main effect reached

significance (F [3,92] = 12.59, p = .0001). The post hoc

test indicated that the two stroke groups had significantly

longer overall RTs than did the control groups. L.F.'s

scores fell well within their respective 95% CI's but there

was not a significant condition effect for the Younger

Control group, and L.F.'s Correct condition was his slowest.

Thus, it can be concluded that L.F. did not show an

interference effect. Table 2-10 shows means and standard

deviations for this test.

Table 2-10

Reaction Time (in milliseconds) Means and Standard
Deviations for Face-Occupation Category Interference Test
(Con = Congruent, Incon = Incongruent)

Group Control Con Incon Main Effect

Younger m 905.8 907.7 931.2 914.9a
sd 149.7 159.2 173.4 157.8

L.F. m 1001.6 1049.1 966.8 1005.8
sd 148.1 187.8 117.0 143.5

Older m 940.0 925.8 928.5 931.4a
sd 249.6 221.4 235.1 220.8

LHD m 1239.8 1239.5 1226.4 1235.2b
sd 271.3 182.5 222.3 220.3

RHD m 1346.2 1340.5 1280.6 1322.4b
sd 230.0 199.9 192.8 203.2

Main Effect m 1071.0 1066.1
sd 286.9 264.4

Note. The Condition main effect means do not include L.F.'s

ab main effect means with the same letter are not
significantly different at alpha < .05

Face-Personality Descriptor Interference Test. For

this test, the Group (F [3,92] = 21.00, p = .0001) and

Condition (F [2,92] = 31.69, p = .0001) main effects and

Group by Condition interaction (F [6,92] = 2.91, p = .0122)

were significant. Table 2-11 provides means and standard

deviations and Figure 2-5 presents the data graphically.

Because the interaction was significant, follow-ups on the

main effects were not calculated. Follow-up ANOVAs and

REGWFs were computed for each Group and Condition to

determine the source of the interaction. Within each group

Table 2-11

Reaction Time (in milliseconds) Means and Standard
Deviations for Face-Personality Descriptor Interference Test
(Con = Congruent, Incon = Incongruent)

Group          Control   Con      Incon

Main Effect m  1231.0    1216.4   1321.6
            sd  426.7     404.7    467.3

Note. The Condition main effect means do not include L.F.'s

ab row means with the same letter are not significantly
different at alpha = .05; comparisons among means without
letters were not evaluated.

there was a significant Condition effect (Younger: F [2,28]

= 5.09, p = .0130; Older: F [2,28] = 9.49, p = .0007; LHD:

F [2,18] = 18.88, p =.0001; RHD: F [2,18] = 5.73, p =

.0119). The post hoc REGWFs indicated that for the Younger

and Older Controls and the LHD patients the Congruent and

Control conditions did not differ, but the Incongruent

condition was significantly slower than the other two

conditions. For the RHD patients there was no difference

























Figure 2-5. Mean reaction times for the Congruent and
Incongruent conditions on the Face-Personality Descriptor
interference task.

between the Control and Incongruent conditions nor between

the Control and Congruent conditions, but the Congruent and

Incongruent conditions were different. The bottom line for

this analysis is that the Incongruent condition was

significantly slower than the Congruent condition for all

groups indicating that information provided by the face

interfered with the personality-related decision.

Within each condition there was a significant Group

effect at alpha = .0001 (Control: F [3,46] = 22.13;

Congruent: F [3,46] = 17.48; Incongruent: F [3,46] =

21.46). For all conditions, the two stroke groups were

significantly slower than the control groups. Because the

prosopagnosic's RTs were, expectedly, much slower than those

of the Young controls, it made no sense to use confidence

intervals built around the control group's scores to

determine if L.F. performed normally. Instead, 95% CIs were

constructed around L.F.'s mean scores (which represent the

results of four testing sessions). For L.F. on this test,

the conditions, ordered from fastest to slowest, were:

Congruent (m = 1021.5; 95% CI: 1017.9 to 1025.1), Control (m

= 1050.6; 95% CI: 1047.0 to 1054.2), and Incongruent (m =

1067.3; 95% CI: 1063.7 to 1070.9). Clearly, there is no

overlap of confidence intervals and the conditions are

ordered as hypothesized for a normal effect. Thus, like the

RHD patients, L.F. showed a normal interference effect on

this test.
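One common way to construct such an interval around the mean of a small number of testing sessions is a t-based 95% confidence interval; the sketch below assumes that approach and uses hypothetical session means, not L.F.'s actual data:

```python
import math

# Hypothetical condition mean RTs (ms) from four testing sessions
# (not L.F.'s actual data).
session_means = [1018.0, 1020.5, 1023.0, 1024.5]

def ci95(values):
    """t-based 95% confidence interval for the mean of a small sample."""
    n = len(values)
    m = sum(values) / n
    var = sum((v - m) ** 2 for v in values) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                            # standard error
    t_crit = 3.182  # two-tailed t critical value for df = 3 (n = 4)
    return m - t_crit * se, m + t_crit * se

low, high = ci95(session_means)
```

Non-overlapping intervals across the three conditions would then support the ordering of conditions reported above.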

Expression-Label Interference Test. For this test, the

Group (F [3,92] = 16.22) and Condition (F [3,92] = 8.22)

main effects were significant at alpha = .0001; the Group by

Condition interaction was not significant (F [6,92] = 0.75,

p = .6100). Table 2-12 shows means and standard deviations

for this test and Figure 2-6 shows the data graphically.

Post hoc testing indicated, again, that the stroke patients

had slower overall RTs than did the control subjects, while

the Incongruent condition was significantly slower than the

Congruent and Control conditions (which did not differ).

While the above results seem to suggest that the RHD

patients performed "normally" (i.e., the Incongruent

Table 2-12

Reaction Time (in milliseconds) Means and Standard
Deviations for Expression-Label Interference Test (Con =
Congruent, Incon = Incongruent)

Group         Con      Incon

Younger  m    638.7    654.3
         sd   102.8    123.3

L.F.     m    691.3    683.0
         sd    27.4     22.9

Older    m    659.3    676.3
         sd   105.4    112.0

LHD      m    936.8    975.0
         sd   214.4    253.8

RHD      m    912.3    911.4
         sd   124.4    152.1

Main Effect m 755.7a
         sd   210.0

Note. The Condition main effect means do not include L.F.'s

ab main effect means with the same letter are not
significantly different at alpha = .05

condition was slower than the Congruent condition) on this

task, further analysis casts doubt on this interpretation.

When the Older control group and the two stroke groups were

analyzed independently, the Incongruent condition was

significantly slower than the other two conditions for the

control subjects (F [2,28] = 9.78, p = .0006) and approached

significance for the LHD patients (F [2,18] = 2.27, p =

.1318). However, the F-test was not even close for the RHD

patients (F [2,18] = .88, p = .4315). In fact, examination















Figure 2-6. Mean reaction times for the Congruent and
Incongruent conditions on the Face-Expression Label
interference task.

of the RHD means in Table 2-12 indicates the Congruent

condition was actually slower than the Incongruent condition

for this group. Thus, despite the finding of a Condition

main effect in the overall ANOVA, it appears that the RHD

patients did not show a normal interference effect on this

task and lack of statistical power cannot account for this

failure. Likewise, L.F. was slower on the Congruent than

Incongruent condition suggesting that he, too, was unable to

indirectly extract expression information from faces.

Face-Identity Interference Test. On this test, both

main effects and the interaction were significant (Group: F




[3,92] = 21.20, p = .0001; Condition: F [2,92] = 60.56, p =

.0001; Group by Condition: F [6,92] = 2.63, p = .0213).

Follow-up ANOVAs indicated that all groups showed a

significant Condition effect (Younger: F [2,28] = 9.83, p =

.0006; Older: F [2,28] = 14.89, p = .0001; LHD: F [2,18] =

13.81, p = .0002; RHD: F [2,18] = 22.55, p = .0001). The

ordering of Conditions was identical for all groups with the

Congruent being fastest and Incongruent slowest, and Control

in between (see Table 2-13 and Figure 2-7). However, for

Table 2-13

Reaction Time (in milliseconds) Means and Standard
Deviations for Face-Identity Interference Test (Con =
Congruent, Incon = Incongruent)

Group Control Con Incon Main Effect

Younger m 962.4a 931.2a 1045.6b 979.8
sd 157.0 156.1 236.2 189.0

L.F. m 1150.9 1203.0 1199.8 1184.5
sd 31.7 30.8 37.8 39.3

Older m 1091.5a 997.8b 1155.6c 1081.6
sd 295.7 186.4 282.7 261.9
LHD m 1513.9a 1427.6a 1668.8b 1536.8
sd 329.6 298.2 436.8 361.7

RHD m 1633.3a 1394.3b 1785.6c 1604.4
sd 267.5 224.7 371.5 327.9
Grand Mean m 1245.6 1143.2 1351.3
sd 377.2 303.4 445.6

Note. The Condition main effect means do not include L.F.'s
abc means within a column with the same letter are not
significantly different at alpha = .05

Figure 2-7. Mean reaction times for the Congruent and
Incongruent conditions on the Face-Identity interference
task.
the Younger controls and LHD patients there was no

difference between the Congruent and Control conditions.

For the Older controls and RHD patients, all conditions were

significantly different. Within each condition the stroke

groups were again significantly slower than the control

groups (Control: F [3,46] = 21.36; Congruent: F [3,46] =

19.24; Incongruent: F [3,46] = 19.24) at alpha = .0001.

L.F.'s scores were ordered properly (i.e., Congruent faster

than Incongruent); however, the magnitude of the effect

was small compared to the Younger controls and relative to

his own and another prosopagnosic's performance on other






face-name interference tasks (DeHaan et al., in press;

DeHaan et al., 1987). Thus it seems that the prosopagnosic

alone failed to demonstrate a normal interference effect on

this task despite having demonstrated implicit recognition

of facial identity in several other studies (Bauer, 1984;

Greve & Bauer, 1990; DeHaan et al., in press). Issues

related to his apparent failure on this task and the

expression interference task will be discussed in Chapter 4.

Summary of Indirect Task Results. In summary, all

groups failed to show an interference effect on the

Occupation task while strong effects were observed for all

groups on the Personality task. The prosopagnosic failed to

show an interference effect on the Identity task, while the

other groups performed normally. This finding is in contrast

to that of DeHaan et al. (in press), who showed an

interference effect with this patient; it is also in

contrast to several other studies which have demonstrated

implicit recognition of facial identity in prosopagnosia.

On the Expression task, the RHD patients and the

prosopagnosic failed to show sensitivity to the expression

information provided by the faces. The implications of this

finding are discussed in the following chapter.

Individual Performance on Expression Tasks

The group data indicate associated impairments on the

direct and indirect expression tasks for the RHD patients.

The failure of the RHD patients on both types of tasks has

important implications for our understanding of the emotion

processing defect in these patients. However, as Shallice

(1988) warns, associations of deficits in groups may not

represent the performance of the individual subjects within

the group (e.g., if half the group is high on one variable and

low on the other, while the other half has the reverse

pattern, the group will be down on both variables without

any one subject being down on both). To address this issue

this section examines the pattern of individual results for

the Older control and both stroke patient groups on each of

the direct affect tests and the Expression-Label

interference task. Ninety-five percent confidence intervals

were constructed based on the Older control group mean

scores for each direct expression test and any stroke

patient falling below the lower limit was considered to have

failed that test. For the indirect expression task, if the

response latency for the Congruent condition was at all

longer than the Incongruent condition (i.e., Incongruent -

Congruent < 0) then performance was considered impaired.
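The classification rule just described can be sketched as follows; the subject labels and reaction times are hypothetical, not data from this study:

```python
# Hypothetical mean reaction times (ms) per condition for two subjects
# (not actual study data).
subjects = {
    "S1": {"congruent": 640.0, "incongruent": 690.0},
    "S2": {"congruent": 955.0, "incongruent": 930.0},
}

def impaired(rts):
    """A performance is classified as impaired when the Congruent
    condition is at all slower, i.e. Incongruent - Congruent < 0."""
    return rts["incongruent"] - rts["congruent"] < 0

classification = {name: impaired(rts) for name, rts in subjects.items()}
```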

Using this criterion there is some risk that small differences

may reflect chance variation; however, with the exception of

three patients (two LHD, one RHD), all subjects had

difference scores larger than +25ms which allows for little

ambiguity since this is much larger than the group

difference between conditions. Thus, all but three patients

either clearly showed a normal interference effect or clearly

did not. Table 2-14 shows the performances of all the