Citation
Differences in Psychophysiological Reactivity to Static and Dynamic Displays of Facial Emotion

Material Information

Title:
Differences in Psychophysiological Reactivity to Static and Dynamic Displays of Facial Emotion
Creator:
SPRINGER, UTAKA S. ( Author, Primary )
Copyright Date:
2008

Subjects

Subjects / Keywords:
Anger ( jstor )
Emotional expression ( jstor )
Face ( jstor )
Facial expressions ( jstor )
Fear ( jstor )
Galvanic skin response ( jstor )
Happiness ( jstor )
Mental stimulation ( jstor )
Reactivity ( jstor )
Startle reflex ( jstor )

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Utaka S. Springer. Permission granted to University of Florida to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
5/1/2005
Resource Identifier:
75202958 ( OCLC )


Full Text

DIFFERENCES IN PSYCHOPHYSIOLOGIC REACTIVITY TO STATIC AND
DYNAMIC DISPLAYS OF FACIAL EMOTION

By

UTAKA S. SPRINGER


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA


2005

Copyright 2005

by

Utaka S. Springer

ACKNOWLEDGMENTS

This research was supported by R01 MH62539. I am grateful to Dawn Bowers for

her patience, availability, and expertise in advising this project. I would like to thank the

members of the Cognitive Neuroscience Laboratory for their support throughout this project.

I would like to extend special thanks to Shauna Springer, Alexandra Rosas, John McGetrick,

Paul Seignourel, Lisa McTeague, and Gregg Selke.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

1 INTRODUCTION
   Perceptual Differences for Static and Dynamic Expressions
   Cognitive Studies
   Neural Systems and the Perception of Movement versus Form
   Dimensional versus Categorical Models of Emotion
      Dimensional Models of Emotion
      Categorical Models of Emotion
   Emotional Responses to Viewing Facial Expressions

2 STATEMENT OF THE PROBLEM
   Specific Aim I
   Specific Aim II

3 METHODS
   Participants
   Materials
      Collection of Facial Stimuli: Video Recording
      Selection of Facial Stimuli
      Digital Formatting of Facial Stimuli
      Dynamic Stimuli
      Final Selection of Stimuli for Psychophysiology Experiment
   Design Overview and Procedures
   Psychophysiologic Measures
      Acoustic Startle Eyeblink Reflex (ASR)
      Skin Conductance Response (SCR)
   Data Reduction of Psychophysiology Measures
   Statistical Analysis

4 RESULTS
   Hypothesis 1: Differences in Reactivity to Dynamic vs. Static Faces
      Startle Eyeblink Response
      Skin Conductance Response (SCR)
      Self-Reported Arousal
   Hypothesis 2: Emotion Modulation of Startle by Expression Categories
   Other Patterns of Emotional Modulation by Viewing Mode
      Skin Conductance Response
      Self-Reported Arousal
      Self-Reported Valence

5 DISCUSSION
   Interpretation and Relationship to Other Findings
   Methodological Issues Regarding Facial Expressions
   Other Considerations of the Present Findings
   Limitations of the Current Study
   Directions for Future Research

APPENDIX
A STATIC STIMULUS SET
B DYNAMIC STIMULUS SET

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Demographic characteristics of experimental participants
3-2 Mean (SD) recognition rates, valence, and arousal of static and dynamic face stimuli
4-1 Mean (SD) dependent variable scores by Viewing Mode
4-2 Mean (SD) dependent variable scores by Viewing Mode and Expression Category


LIST OF FIGURES

1-1 Neuroanatomic circuitry of the startle reflex
3-1 Temporal representation of dynamic and static stimuli
4-1 Startle eyeblink T-scores by expression category
4-2 Self-reported arousal by expression category
4-3 Self-reported valence by expression category

Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

DIFFERENCES IN PSYCHOPHYSIOLOGIC REACTIVITY TO STATIC AND
DYNAMIC DISPLAYS OF FACIAL EMOTION

By

Utaka S. Springer

May 2005

Chair: Dawn Bowers
Major Department: Clinical and Health Psychology

Rationale. Recent studies suggest that many neurologic and psychiatric disorders

are associated with impairments in accurately interpreting facial expressions. These

studies have typically used photographic stimuli, yet cognitive and neurobiological

research suggests that the perception of moving (dynamic) expressions is different from

the perception of static expressions. Moreover, in day-to-day interactions, humans

generally view faces while they move. This study had two aims: (1) to elucidate

differences in physiological reactivity [i.e., startle eyeblink reflex and the skin

conductance response (SCR)] while viewing static versus dynamic facial expressions,

and (2) to examine patterns of reactivity across specific facial expressions. It was

hypothesized that viewing dynamic faces would be associated with greater physiological

reactivity and that expressions of anger would be associated with potentiated startle

eyeblink responses relative to other facial expressions.

Methods. Forty young adults viewed two slideshows consisting entirely of static

or dynamic facial expressions. Expressions represented the emotions of anger, fear,

happiness, and neutrality. Psychophysiological measures included the startle eyeblink

reflex and SCR. Self-reported valence and arousal were also recorded for each stimulus.

Results. Data were analyzed using repeated measures analyses of variance. The

participants exhibited larger startle eyeblink responses while viewing dynamic versus

static facial expressions. Differences in SCR approached significance (p = .059), such

that dynamic faces tended to induce greater responses than static ones. Self-reported

arousal was not significantly different during either condition. Additionally, the startle

reflex was significantly greater for angry expressions, and comparably smaller for the

fearful, neutral, and happy expressions, across both modes of presentation. Self-reported

differences in reactivity between types of facial expressions are discussed in the context

of the psychophysiology results.

Conclusions. The current study found evidence supporting greater

psychophysiological reactivity in young adults while they viewed dynamic compared to

static facial expressions. Additionally, expressions of anger induced higher

startle responses relative to other expressions, including fear. It was concluded that angry

expressions, representing personally directed threat, induce a greater motivational

propensity to withdraw or escape. These findings highlight an important distinction

between initial stimulus processing (i.e., expressions of fear or anger) and motivated

behavior.

CHAPTER 1
INTRODUCTION

The ability to successfully interpret facial expressions is a fundamental aspect of

normal life. An immense number of configurations across the landscape of the human

face are made possible by 44 pairs of muscles anchored upon the curving surfaces of the

skull. A broad smile, a wrinkled nose, widened eyes, a wink: all convey emotional

content important for social interactions. Darwin (1872) suggested that successful

communication through nonverbal means such as facial expressions has promoted

survival of the human species. Indeed, experimental research has demonstrated that

infants develop an understanding of their mother's facial expressions rapidly and

automatically, and that they use these signals to guide their safe behavior (Field,

Woodson, Greenberg, & Cohen, 1982; Johnson, Dziurawiec, Ellis, & Morton, 1991;

Nelson & Dolgrin, 1985; Sorce, Emde, Campos, & Klinnert, 1985). The accurate

decoding of facial signals, then, can play a protective role as well as a communicative

one.

A growing body of empirical research suggests that many conditions are associated

with impaired recognition of facial expressions. A list of neurologic and psychiatric

conditions within which studies have associated impaired interpretation of facial

expressions include autism, Parkinson's disease, Huntington's disease, Alzheimer's

disease, schizophrenia, body dysmorphic disorder, attention-deficit/hyperactivity

disorder, and social phobia (Buhlmann, McNally, Etcoff, Tuschen-Caffier, & Wilhelm,

2004; Edwards, Jackson, & Pattison, 2002; Gilboa-Schechtman, Foa, & Amir, 1999; Kan,

Kawamura, Hasegawa, Mochizuki, & Nakamura, 2002; Singh et al., 1998;

Sprengelmeyer et al., 1996; Sprengelmeyer et al., 2003; Teunisse & de Gelder, 2001).

These deficits in processing facial expressions appear to exist above and beyond

disturbances in basic visual or facial identity processing and may reflect disruption of

cortical and subcortical networks for processing nonverbal affect (Bowers, Bauer, &

Heilman, 1993). In many cases, impairments in the recognition of specific facial

expressions have been discovered. For example, bilateral damage to the amygdala has

been associated with the inability to recognize fearful faces (Adolphs, Tranel, Damasio,

& Damasio, 1994).

One potential problem with these clinical studies is that they most often use static,

typically photographic, faces as stimuli. This may be problematic for two reasons. First,

human facial expressions usually consist of complex patterns of movement. They can

flicker across the face in a fleeting and subtle manner, develop slowly, or arise with

sudden intensity. The use of static stimuli in research and clinical evaluation, then, has

poor ecological validity. Second, mounting evidence suggests that there are fundamental

cognitive and neural differences between the perception of static-based and dynamic

facial expressions. These differences, which can be subdivided into evidence from

cognitive and more biologically based studies, are described in more detail in the

following sections.

The preceding highlights the need to incorporate dynamic facial expression stimuli

in the re-evaluation of conditions currently associated with facial expression processing

deficits, as argued by Kilts and colleagues (2003). This line of research would greatly

benefit from the creation of a standardized battery of dynamic expression stimuli. Before

a more ecologically valid dynamic battery can be developed, it is necessary to more

precisely characterize how normal individuals respond to different types of facial

expression stimuli. Although cognitive, behavioral, and neural systems have been

examined in comparing responses associated with static and dynamic face perception,

no studies to date have compared differences in emotional reactivity using

psychophysiologic indices of arousal and valence (i.e., startle reflex, skin conductance

response). The two major goals of the present study, then, are as follows: first, to

empirically characterize psychophysiologic differences in how people respond to

dynamic versus static emotional faces, and second, to determine whether

psychophysiologic response patterns differ when individuals view different categories of

static and dynamic facial expressions (e.g., anger, fear, or happiness).

The following sections provide the background for the current study in three parts:

(1) evidence that suggests cognitive and neurobiological differences in the perception of

static versus dynamic expressions, (2) "dimensional" and "categorical" approaches to

studying emotion, and (3) emotional responses to viewing facial expressions. Specific

hypotheses and predictions are presented in the next chapter.

Perceptual Differences for Static and Dynamic Expressions

Evidence that individuals respond differently to static and dynamic displays of

emotion comes from two major domains of research. The first major domain is cognitive

research. With regard to the present study, this refers to the study of the various internal

mental processes involved in the perception of emotions in others (i.e., recognition and

discrimination), as inferred by overt responses. The second major domain is

neurobiological research. Again, specific to the present study, this refers to the

physiological and neurological substrates involved during or after emotion perception.

The following sections review the literature from these two domains with regard to

differences in perception of static and dynamic expressions.

Cognitive Studies

Recent research suggests that facial motion influences several cognitive aspects of

face perception. First, facial motion improves recognition of familiar faces, especially in

less-than-optimal visual conditions (Burton, Wilson, Cowan, & Bruce, 1999; Lander,

Christie, & Bruce, 1999). For example, in conditions such as low lighting or blurriness,

the identity of a friend or a famous actor is more easily discerned through face perception

if the face is moving. It is less clear whether this advantage of movement is also

conferred to the recognition of unfamiliar faces (Christie & Bruce, 1998; Pike, Kemp,

Towell, & Phillips, 1997). As reviewed by O'Toole et al. (2002), there are two

prevailing hypotheses on how facial motion enhances face recognition. According to the

first, facial movement provides additional visual information that helps the viewer

assemble a three-dimensional mental construct of the face (e.g., Pike et al., 1997). A

second view is that certain movement patterns may be unique and characteristic of a

particular individual (i.e., "movement signatures"). These unique movement signatures,

such as Elvis Presley's lip curl, are thought to supplement the available structural

information of the face (e.g., Lander & Bruce, 2004). Either or both hypotheses can

account for observations that familiar individuals are more readily recognized from

dynamic than static pictures.

One question that naturally arises is whether facial motion also increases

recognition and discrimination of discrete types of emotional expressions. Like familiar

faces, emotional expressions on the face have been shown to be similar across individuals

and even across cultures (Ekman, 1973; Ekman & Friesen, 1976). Leonard and

colleagues (1991) found that categorical judgments of "happiness" during the course of a

smile occurred at the point of most rapid movement change in the actor's facial

configuration. Wehrle and colleagues (2000) reported that recognition of discrete

emotions was enhanced through the use of dynamic versus static synthetic facial stimuli.

Other research extended the findings of Wehrle et al. by finding that certain speeds of

facial expressions are optimal for recognition, depending on the specific expression type

(Kamachi et al., 2001). Altogether, these studies suggest that motion does facilitate the

recognition of facial expressions.

Some research suggests that the subjectively rated intensity of emotional displays

might also be influenced by a motion component. For example, a study by Atkinson and

colleagues (2004) suggested that the perceived intensity of emotional displays is

dependent on motion rather than on form. Participants in this study judged actors posing

full-body expressions of anger, disgust, fear, happiness, and sadness, both statically and

dynamically. Dynamic displays of emotion were judged as more intense than static ones,

both in normal lighting and in degraded lighting (i.e., in darkness with points of light

attached to the actors' joints and faces). Although this evidence suggests that dynamic

expressions of emotion are indeed perceived as more intense than static ones, research on

this topic has been sparse.

Neural Systems and the Perception of Movement versus Form

Previous research also suggests that distinct neural systems are involved in the

perception of static and dynamic faces. A large body of evidence convincingly supports

the existence of two anatomically distinct visual pathways in the cerebral cortex

(Ungerleider & Mishkin, 1982). One visual pathway is involved in motion detection

(V5) while the other visual pathway is involved in processing form or shape information

(V3, V4, inferotemporal cortex) [for review, see Zeki (1992)]. As one example of

evidence that visual form is processed relatively independently, microelectrode

recordings in the inferotemporal cortex of monkeys have shown that individual

neurons respond preferentially to simple, statically presented shapes (Tanaka, 1992).

Preferential single-cell responses to more complex types of statically presented stimuli,

such as faces, have also been shown (Desimone, 1991). An example of evidence for the

existence of a specialized "motion" pathway is provided by a fascinating case study

describing a patient with a brain lesion later found to be restricted to area V5 [Zihl et al.,

1983; as discussed in Eysenck (2000)]. This woman was adequate at locating stationary

objects by sight, she had good color discrimination, and her stereoscopic depth perception

was normal; however, her perception of motion was severely impaired. The patient

perceived visual events as if they were still photographs. People would suddenly appear

here or there, and when she poured her tea, the fluid appeared to be frozen, like a glacier.

Humphreys and colleagues (1993) described findings from two brain-impaired

patients who displayed different patterns of performance during the perception of static

and dynamic facial expressions. One patient was impaired at discriminating facial

expressions from still photographs of faces, but performed normally when asked to make

judgments of facial expressions depicted by moving dots of light. This patient had

suffered a stroke that involved the bilateral occipital lobes and extended anteriorly

towards the temporal lobes (i.e., the "form" visual pathway). The second patient was

poor at judging emotional expressions from both the static and dynamic displays despite

being relatively intact in other visual-perceptual tasks of comparable complexity. This

patient had two parietal lobe lesions, one in each cerebral hemisphere. Taken together,

the different patterns of performance from these two patients suggest dissociable neural

pathways between recognition of static and dynamic facial expressions.

Additional work with microelectrode recordings in non-human primates suggests

that static and dynamic facial stimuli are processed by visual form and visual motion

pathways, respectively, and converge at the area of the superior temporal sulcus (STS)

(Puce & Perrett, 2003). A functional imaging study indicates that the STS region

serves the same purpose in humans (Puce et al., 2003). In monkeys, specific responses

in individual neurons of the STS region have shown sensitivity to static facial details such

as eye gaze and the shape of the mouth, as well as movement-based facial details, such as

different types of facial motion (Puce & Perrett, 2003).

The amalgamation of data from biological studies indicates that static and dynamic

components of facial expressions appear to be processed by separable visual streams that

eventually converge within the region of the STS. The next section provides a

background for two major conceptual models of emotion. This information is then used

as a backdrop for the current study.

Dimensional versus Categorical Models of Emotion

Dimensional Models of Emotion

Historically, there have been two major approaches in the study of emotion. In

what is often described as a dimensional model, emotions are characterized using chiefly

two independent, bipolar dimensions (e.g., Schlosberg, 1952; Wundt, 1897). The first

dimension, "valence", has been described in different ways (i.e., pleasant to unpleasant,

positive to negative, appetitive to aversive); however, it generally refers to a range of

positive to negative feeling. The second dimension, arousal, represents a continuum

ranging from very low (e.g., calm, disinterest, or a lack of enthusiasm) to very high (e.g.,

extreme alertness, nervousness, or excitement). These two orthogonal scales create a

two-dimensional affective space, across which emotions and emotional responses might

be characterized.

Other dimensional approaches have included an additional scale in order to more

fully define the range of emotional judgments. This third scale has been variously

identified as "preparation for action", "aggression", "attention-rejection", "dominance",

and "potency", and has been helpful for differentiating emotional concepts (Averill,

1975; Bush, 1973; Heilman, 1987, February; Russell & Mehrabian, 1977; Schlosberg,

1952). For instance, fear and anger might be indistinguishable within a two-dimensional

affective space: both may be considered negative/unpleasant emotions high in arousal.

A third dimension such as dominance or action separates these two emotions in three-

dimensional affective space. Briefly, dominance refers to the range of feeling dominant

(i.e., having total power, control, and influence) to submissive (i.e., feeling a lack of

control or unable to influence a situation). This construct has been discovered

statistically through factor analytic methods based on the work of Osgood, Suci, and

Tannenbaum (1957). Action (preparation for action to non-preparation for action), on the

other hand, was proposed by Heilman [1987; from Bowers et al. (1993)]. This construct

was based on neuropsychological evidence and processing differences between the

anterior portions of the right and left hemispheres (e.g., Morris, Bradley, Bowers, Lang,

& Heilman, 1991). Thus, in the present example for differentiating fear and anger, anger

is associated with feelings of dominance or preparation for action, whereas fear is

associated with feelings of submission (lack of dominance) or a lack of action (i.e., the

"freezing" response in rats with a sudden onset of fear). In this way, then, a third

dimension can sometimes help distinguish between emotional judgments that appear

similar in two-dimensional affective space. Generally, however, the third dimension has

not been a replicable factor across studies or cultures (Russell, 1978; Russell &

Ridgeway, 1983). The present study incorporates only the dimensions of valence and

arousal.
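
To make the geometry of this account concrete, the short sketch below places fear and
anger as points in affective space. This is a minimal illustration rather than anything
from the thesis itself; the coordinate values are assumptions chosen only to show that
the two emotions coincide on valence and arousal yet separate on a third dimension.

    # Illustrative sketch of dimensional affective space. Coordinates are
    # assumed values on 1-9 scales, not normative ratings.
    from math import dist

    # (valence, arousal, dominance):
    # 1 = negative/low/submissive, 9 = positive/high/dominant
    affective_space = {
        "fear":  (2.5, 7.5, 2.0),   # unpleasant, highly arousing, submissive
        "anger": (2.5, 7.5, 7.5),   # unpleasant, highly arousing, dominant
    }

    fear, anger = affective_space["fear"], affective_space["anger"]

    # In two dimensions (valence, arousal) the emotions are indistinguishable...
    print(dist(fear[:2], anger[:2]))   # 0.0
    # ...but the third (dominance/action) dimension pulls them apart.
    print(dist(fear, anger))           # 5.5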

Emotion researchers have measured emotional valence and arousal in several ways,

including: (1) overt behaviors (e.g., EMG activity of facial expression muscles such as

corrugator or zygomatic muscles), (2) conscious thoughts or self-reports about one's

emotional experience, usually measured by ordinal scales, and (3) central and physiologic

arousal and activation, such as electrodermal activity, heart rate, and the magnitude of the

startle reflex (Bradley & Lang, 2000). All three components of emotion have been

measured reliably in laboratory settings. Among the physiological markers of emotion,

the startle eyeblink typically is used as an indicator of the valence of an emotional

response (Lang, Bradley, & Cuthbert, 1990). The startle reflex is an automatic

withdrawal response to a sudden, intense stimulus, such as a flash of light or a loud burst

of noise. More intense eyeblink responses, measured from electrodes over the orbicularis

oculi muscles, have been found in association with negative/aversive emotional material

relative to neutral material. Less intense responses have been found for

positive/appetitive material, relative to neutral material. Palm sweat, or SCR, is another

physiological marker of emotion and typically is used as an indicator of sympathetic

arousal (Bradley & Lang, 2000). Higher SCR has been shown to be associated with

higher self-reported emotional arousal, relatively independent of valence (e.g., Lang,

Greenwald, Bradley, & Hamm, 1993).

Categorical Models of Emotion

A second major approach to the study of emotion posits that emotions are actually

represented by basic, fundamental categories (e.g., Darwin, 1872; Izard, 1994). Support

for the discrete emotions view comes from two major lines of evidence: cross-cultural

studies and neurobiological findings [although cognitive studies have also been

conducted, e.g., Young et al. (1997)]. With regard to the former line of evidence, Darwin

(1872) argued that specific emotional states are evidenced by specific, categorical

patterns of facial expressions. He suggested that these expressions contain universal

configurations that are displayed by people throughout the world. Ekman and Friesen

(1976) developed this idea further and created an atlas describing the precise muscular

configurations associated with each of six basic emotional expressions (e.g., surprise,

fear, disgust, anger, happiness, and sadness). In a cross-cultural study, Ekman (1972)

found that members of a preliterate tribe in the highlands of New Guinea were able to

recognize the meaning of these expressions with a high degree of accuracy. Further,

photographs of tribal members who had been asked to pose various emotions were shown

to college students in the United States. The college students were able to recognize the

meanings of the New Guineans' emotions, also with a high degree of accuracy.

Additional evidence supporting the "categories of emotion" conceptualization is

derived from the neurobiological literature. For instance, electrical stimulation of highly

specific regions of the brain has been associated with distinct emotional states. Hess and

Brugger [1943; from Oatley & Jenkins (1996)] discovered that angry behavior in cats,

dubbed "sham rage" (Cannon, 1931), were elicited with direct stimulation of the

hypothalamus. Fearful behavior and autonomic changes have been induced (both in rats

and humans) with stimulation of the amygdala, an almond-shaped limbic structure within

the anterior temporal lobe. These changes include subjective feelings of fear and anxiety

as well as freezing, increased heart rate, and increased levels of stress hormones [for

review, see Davis & Whalen (2001)]. Positive feelings have also been elicited with direct

stimulation of a specific neural area. Okun and colleagues (2004) described a patient

exuding smiles and feelings of euphoria in association with deep brain stimulation of the

nucleus accumbens region. These studies of electrical stimulation in highly focal areas in

the brain appear to lend credence to the hypothesis that emotions can be categorized into

discrete subtypes.

The case for categorical emotions has been further bolstered with evidence that

different emotional states have been associated with characteristic psychophysiologic

responses. Several studies conducted by Ekman, Levenson, and Friesen (Ekman,

Levenson, & Friesen, 1983; Levenson, Carstensen, Friesen, & Ekman, 1991; Levenson,

Ekman, & Friesen, 1990) involved participants reliving emotional memories and/or

receiving coaching to reconstruct their facial muscles to precisely match the

configurations associated with Ekman's six major emotions (Ekman & Friesen, 1976).

The results of these studies indicated that the response pattern from several indices of

autonomic nervous system activity (specifically, heart rate, finger temperature, skin

conductance, and somatic activity) could reliably distinguish between positive and

negative emotions, and even among negative emotions of disgust, fear, and anger (Ekman

et al., 1983; Levenson et al., 1991; Levenson et al., 1990). Sadness was associated with a

distinctive, but less reliable pattern. Other researchers also have described characteristic

psychophysiologic response patterns associated with discrete emotions (Roberts &

Weerts, 1982; Schwartz, Weinberger, & Singer, 1981).

Emotional Responses to Viewing Facial Expressions

Emotion-specific psychophysiologic responses have been elicited in individuals

viewing facial displays of different types of emotions. For instance, Balaban and

colleagues (1995) presented photographic slides of angry, neutral, and happy facial

expressions to 5-month-old infants. During the presentation of each slide, a brief

acoustic noise burst was presented to elicit the eyeblink component of the startle reflex.

Angry expressions were associated with significantly stronger startle responses than

happy expressions, suggesting that at least in babies, positive and negative facial

expressions could emotionally modulate the startle reflex. This phenomenon was

explored in a recent study using an adult sample, but with the addition of fearful

expressions as a category (Bowers et al., 2002). Thirty-six young adults viewed static

images of faces displaying angry, fearful, happy, and neutral expressions. Acoustic startle

probes elicited the eyeblink reflex during the presentation of each emotional face.

Similar to Balaban's (1995) study, responses to angry faces were associated with

significantly stronger startle reflexes than responses to other types of expressions.

Startle eyeblinks during the presentation of neutral, happy, and fearful expressions did

not significantly differ in this study.

The observations that fear expressions failed to prime or enhance startle reactivity

seem counterintuitive for two reasons (Bowers et al., 2002). First, many studies have

indicated that the amygdala appears to play a role in danger detection and processing

fearful material. Stimulation of the amygdala induces auras of fear (Gloor, Olivier,

Quesney, Andermann, & Horowitz, 1982), while bilateral removal or damage of the

amygdala is characterized by behavioral placidity and blunted fear for threatening

material (Adolphs et al., 1994; Kluver & Bucy, 1937). A few studies have even

suggested that the amygdala is particularly important for identification of fearful facial

expressions (Adolphs et al., 1994; J. S. Morris et al., 1998). A second reason why the

null effect of facial fear to startle probes seems counterintuitive is derived from the

amygdala's role in the startle reflex. Davis and colleagues mapped the neural circuitry of

the startle reflex using an animal model [see Figure 1-1; for a review, see Davis (1992)].

Their work has shown that through direct neural projections, the amygdala serves to

amplify the startle circuitry in the brainstem under conditions of fear and aversion. In

light of this research, the finding that fearful faces exerted no significant modulation

effects on the startle circuitry (Bowers et al., 2002) does appear counterintuitive, at least

from an initial standpoint.

[Figure 1-1 diagram: stimulus input projects through sensory thalamus and cortex to the
lateral and central nuclei of the amygdala; amygdala outputs reach the lateral
hypothalamus (autonomic responses: HR, BP), the dorsal central gray (fight/flight), the
ventral central gray, and the nucleus reticularis pontis caudalis (potentiated startle).]

Figure 1-1. Neuroanatomic circuitry of the startle reflex (adapted from Lang et al., 1997)


The authors, however, provided a plausible explanation for this result (Bowers et

al., 2002). They underscored the importance of the amygdala's role in priming the

subcortical startle circuitry during threat-motivated behavior. Angry faces represent

personally directed threat, and, as demonstrated by the relatively robust startle response

they found, induce a motivational propensity to withdraw or escape from that threat.

Fearful faces, on the other hand, reflect potential threat to the actor, rather than to the

perceiver. It is perhaps unsurprising in this light that fearful faces exerted significantly

less potentiation of the startle reflex. The "preparation for action" dimension (Heilman,

1987) might account for this difference between responses to fearful and angry faces:

perhaps the perception of fear in another face involves less propensity or motivation to

act than personally directed threat. Regardless of the interpretation, these findings

suggest that different types of emotional facial expressions are associated with different,

unique patterns of reactivity as measured by the startle reflex (also referred to as "emotional

modulation of the startle reflex"). The question remains as to whether the pattern of

startle reflex responses while viewing different facial expressions is different when

viewing dynamic versus static emotional facial expressions. This has only been

evaluated previously for static facial expressions, but not for dynamic ones. It seems

reasonable to hypothesize that the two patterns of modulation will be similar, as both

dynamic and static visual information must travel from their separate pathways to

converge on the cortical region thought to support the extraction of meaning (the STS).

Across emotions, the question also remains as to whether overall differences in

physiologic reactivity exist. These questions are tested empirically in the present study.

CHAPTER 2
STATEMENT OF THE PROBLEM

Historically, the characterization of expression perception impairments in

neurologic and psychiatric populations has been largely based on research using static

face stimuli. The preceding literature suggests this may be problematic, as fundamental

cognitive and neurobiological differences exist in the perception of static and dynamic

displays of facial emotion. A long-term goal is to develop a battery of dynamic face

stimuli that would enable investigators and clinicians to better evaluate facial expression

interpretation in neurologic and psychiatric conditions. Before this battery can be

developed, however, an initial step must be taken to characterize differences and

similarities in the perception of static and dynamic expressions. To date, no study has

used psychophysiological methods to investigate this question.

This study investigates the emotional responses that occur in individuals as a result

of perceiving the emotions of others via facial expressions. The two major aims of the

present study are to empirically determine in normal, healthy adults (1) whether dynamic

versus static faces induce greater psychophysiologic reactivity and self-reported arousal

and (2) whether reactions to specific types of facial expressions (e.g., anger, fear,

happiness) resolve into distinct patterns of emotional modulation based on the mode of

presentation (i.e., static, dynamic). To examine these aims, normal individuals were

shown a series of static or dynamically presented facial expressions (fear, anger, happy,

neutral) while psychophysiologic measures (skin conductance, startle eyeblink) were

simultaneously acquired. Following presentation of each facial stimulus, subjective

ratings of valence and arousal were obtained. Thus, the primary dependent variables

included: (a) skin conductance as a measure of psychophysiologic arousal; (b)

startle eyeblink as a measure of valence; and (c) subjective ratings of valence and arousal.

Specific Aim I

To test the hypothesis that dynamically presented emotional faces will induce

greater psychophysiologic reactivity and self-reported arousal than statically presented

faces. Based on the reviewed literature, it is hypothesized that the perception of dynamic

facial expressions will be associated with greater overall physiological reactivity than

will the perception of static facial expressions. This hypothesis is based on evidence

suggesting that dynamic displays of emotion are judged as more intense, as well as the

fact that the perception of motion in facial expressions appears to provide more visual

information to the viewer, such as three-dimensional structure or "movement signatures".

The following specific predictions are made: (a) the skin conductance response will be

significantly larger when subjects view dynamic than static faces; (b) overall startle

magnitude will be greater when subjects view dynamic versus static faces; and (c)

subjective ratings of arousal will be significantly greater for dynamic versus statically

presented faces.

Specific Aim II

To test the hypothesis that the pattern of physiologic reactivity (i.e., emotional

modulation) to discrete facial emotions (i.e., fear, anger, happiness, neutral) will be

similar for both static and dynamically presented facial expressions. Based on

preliminary findings from our laboratory, we expected that anger expressions would

induce greater reactivity (as indexed by the startle eyeblink reflex) than fear,

happiness, or neutral expressions. We hypothesized that this pattern of emotion

modulation will be similar for both static and dynamic expressions, since both modes of

presentation presumably gain access to neural systems that underlie interpretation of

emotional meaning. The following specific predictions are made: (a) for both static and

dynamic modes of presentation, the startle response (as indexed by T-scores) for anger

expressions will be significantly larger than those for fear, happy, and neutral ones, while

magnitudes for fear, happy, and neutral expressions will not be significantly different

from each other.

CHAPTER 3
METHODS

Participants

Participants consisted of 51 (27 females, 24 males) healthy, right-handed adults

recruited from the University of Florida campus. Exclusion criteria included: (1) a

history of significant neurologic trauma or disorder, (2) a history of any psychiatric or

mood disorder, (3) a current prescription for mood or anxiety-altering medication, (4) a

history of learning disability, and (5) clinical elevations on the Beck Depression

Inventory (BDI) (Beck, 1978) or the State-Trait Anxiety Inventory (STAI) (Spielberger,

1983). Participants gave written informed consent according to university and federal

regulations. All participants who completed the research protocol received $25.

Eleven of the 51 subjects were excluded from the final data analyses. They

included 8 subjects whose psychophysiology data were corrupted due to excessive

artifact and/or absence of measurable blink responses. The data from 3 subjects were not

analyzed due to clinical elevations on mood questionnaires [BDI (N=2; scores of 36 and

20); STAI (N=1; State score = 56, Trait score = 61)].

Demographic variables for the remaining 40 participants are given in Table 3-1. As

shown, subjects ranged in age from 18 to 43 years (M=22.6, SD=4.3) and had 12 to 20

years of education (M=15.3, SD=1.7). BDI scores ranged from 0 to 9 (M=3.8, SD=2.9),

STAI-State scores ranged from 20 to 46 (M=29.2, SD=6.9), and STAI-Trait scores

ranged from 21 to 47 (M=31.0, SD=6.9). The racial representation was 52.5% Caucasian,

17.5% African American, 12.5% Hispanic/Latino, 12.5% Asian, 2.5% Native American,

and 2.5% Multiracial.

Table 3-1
Demographic characteristics of experimental participants
Measure      Mean (SD)     Range
Age          22.6 (4.3)    18-43
Education    15.3 (1.7)    12-20
GPA          3.48 (0.49)   2.70-3.96
BDI          3.8 (2.9)     0-9
STAI-State   29.2 (6.9)    20-46
STAI-Trait   31.0 (6.9)    21-47
Note. BDI = Beck Depression Inventory; GPA = Grade
Point Average; STAI = State-Trait Anxiety Inventory.


Materials

Static and dynamic versions of angry, fearful, happy, and neutral facial expressions

from 12 "untrained" actors (6 males, 6 females) were used as stimuli in this study. These

emotions were chosen based on previous findings from our laboratory (Bowers et al.,

2002). The following sections describe the procedure used for eliciting, recording, and

digitally standardizing these stimuli.

Collection of Facial Stimuli: Video Recording

The stimulus set for the present study was originally drawn from 15 University of

Florida graduate students (Clinical and Health Psychology) and undergraduates who were

asked to pose various facial expressions. These untrained actors ranged in age from 19 to

32 years and represented Caucasian, African American, Hispanic, and Asian ethnicities.

All provided informed consent to allow their faces to be used as stimuli in research

studies.

The videorecording session took place in the Cognitive Neuroscience Laboratory,

where the actor sat comfortably in a chair in front of a continuously recording black-and-

white Pulnix videocamera. The camera was connected to a Sony videorecorder and

located approximately 2 meters in front of the actor. The visual field of the videocamera

was adjusted to include only the face of the actor. A Polaris light meter was used to

uniformly balance the incident light upon the actor's left and right sides to within 1 lux

of brightness. To minimize differences in head position and angle between captured

facial expressions, the actor's head was held in one position by a rigid immobilization

cushion (Med-Tec, Inc.) during the entirety of the recording session. Prior to the start of

videorecording, the experimenter verified that the actor was comfortable and that the

cushion did not obstruct the view of the actor's face.

A standardized format was followed for eliciting the facial expressions. The actor

was asked to pose 6 emotional expressions (i.e., anger, disgust, fear, happiness, sadness,

and neutral) and to make each expression intense enough so that others could easily

decipher the intended emotion. For 'neutral', the actor was told to look into the camera

lens with a relaxed expression and blink once. Before each expression type was

recorded, visual examples from Ekman & Friesen's Pictures of Facial Affect (Ekman &

Friesen, 1976) and Bowers and colleagues' Florida Affect Battery (Bowers, Blonder, &

Heilman, 1992) were shown to the actor. At least three trials were recorded for each of

the six expression types.

Selection of Facial Stimuli

Once all the face stimuli were recorded, three naive raters from the Cognitive

Neuroscience Laboratory reviewed all trials of each expression made by the 15 actors.

The purpose of this review was to select the most easily identifiable exemplar from each

emotion category (anger, disgust, fear, happiness, sadness, neutral) that was free of

artifact (blinking, head movement) and most closely matched the stimuli from the Ekman

series (Ekman & Friesen, 1976) and the Florida Affect Battery (Bowers et al., 1992).

Selection was based on consensus by the three raters. The expressions from 3 actors (2

female, 1 male) were discarded due to movement artifact, occurrence of eyeblinks, and

lack of consensus regarding at least half of the intended expression types. This resulted

in 72 selected expressions (6 expressions x 12 actors) stored in videotape format.

Digital Formatting of Facial Stimuli

Each of the videotaped facial expressions was digitally formatted and

standardized. Dynamic versions were created first. Each previously selected expression

(the best exemplar from each emotion category) was digitally captured onto a PC using a

FlashBus MV Pro framegrabber (Integral Technologies) and VideoSavant 4.0 (IO

Industries) software. The resulting digital "movie clips" (videosegments) consisted of a

5.0-second sequence of 150 digitized images or frames (30 frames per second). Each

segment began with the actor's face in a neutral pose that then moved to peak expression.

The temporal sequence of each stimulus was standardized such that the first visible

movement of the face (the start of each expression) occurred at 1.5 seconds and that the

peak intensity was visible and unchanging for at least 3.0 seconds at the end of the

videosegment. To standardize the point of the observer's gaze at the onset of each

stimulus, 30 frames (1 s) of a white crosshairs over a black background were inserted

before the first frame of the videosegment, such that the crosshairs marked the point of

intersection over each actor's nose. In total, each final, processed videosegment

consisted of 180 frames (6.0 seconds). All videosegments were stored in 8-bit greyscale

(256 levels) with a resolution of 640 x 480 pixels and exported to a digital MPEG movie

file (Moving Picture Experts Group) to comprise the dynamic set of face stimuli.

Unmoving, or static, correlates of these stimuli were then created using the

frame representing the peak intensity of each facial expression. "Peak intensity" was

defined as the last visible frame in the dynamic expression sequence of frames. This

frame was multiplied to create a sequence of 150 identical frames (5.0 seconds). As with

the dynamic stimuli, 1.0 second of crosshairs was inserted into the sequence prior to the

first frame. The digital specifications of this stimulus set were identical to that of the

dynamic stimulus set. Figure 3-1 graphically compares the content and timing of

both versions of these stimuli.

Dynamic Stimuli

Segment      Crosshairs   Neutral      Moving Expression   Peak Expression
Seconds      0-1.0        1.0-2.5      2.5-3.0             3.0-6.0
Frame No.    0-30         30-75        75-90               90-180

Static Stimuli

Segment      Crosshairs   Peak Expression
Seconds      0-1.0        1.0-6.0
Frame No.    0-30         30-180

Figure 3-1. Temporal representation of dynamic and static stimuli by time (s) and frame
number. Each stimulus frame rate is 30 frames / s.
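
As a cross-check of the timing in Figure 3-1, the sketch below rebuilds both frame
sequences in code. It is a minimal reconstruction assuming only the figure's numbers
(30 frames/s, 180 frames total); the segment labels are illustrative.

    # Frame timelines implied by Figure 3-1 (30 frames/s, 180 frames = 6.0 s).
    FPS = 30

    def dynamic_timeline():
        """Per-frame labels for a dynamic stimulus."""
        return (["crosshairs"] * 30   # 0.0-1.0 s
                + ["neutral"] * 45    # 1.0-2.5 s
                + ["moving"] * 15     # 2.5-3.0 s (expression unfolds)
                + ["peak"] * 90)      # 3.0-6.0 s (held at peak intensity)

    def static_timeline():
        """Static correlate: the peak frame repeated for the full 5 s."""
        return ["crosshairs"] * 30 + ["peak"] * 150

    assert len(dynamic_timeline()) == len(static_timeline()) == 6 * FPS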


After dynamic and static digital versions of the facial stimuli were created, an

independent group of 21 naive individuals rated each face according to emotion category,

valence, and arousal. Table 3-2 provides the overall mean ratings for each emotion

category by viewing mode (static or dynamic). Ratings by individual actor are given in

Appendixes A (static) and B (dynamic).

Table 3-2
Mean (SD) recognition rates, valence, and arousal of static and dynamic face stimuli
Measure     Anger        Disgust      Fear        Happiness   Neutral     Sadness
Dynamic Faces (n = 12)
% Correct   78.2 (16.7)  79.0 (17.5)  94.4 (6.5)  99.6 (1.4)  92.0 (4.2)  93.5 (10.0)
Valence     3.34 (.40)   3.58 (.43)   4.12 (.29)  7.23 (.39)  4.68 (.65)  3.51 (.52)
Arousal     5.28 (.38)   5.19 (.56)   6.00 (.47)  6.00 (.51)  3.63 (.50)  4.55 (.64)
Static Faces (n = 12)
% Correct   68.2 (21.3)  77.4 (16.6)  95.2 (5.0)  99.2 (1.9)  89.3 (8.1)  91.3 (11.0)
Valence     3.04 (.39)   3.39 (.55)   3.60 (.41)  7.18 (.52)  4.95 (.41)  3.45 (.40)
Arousal     5.13 (.61)   5.31 (.64)   5.96 (.53)  5.84 (.56)  3.26 (.39)  4.48 (.56)


Final Selection of Stimuli for Psychophysiology Experiment

The emotional categories of anger, fear, happiness, and neutral were selected for

the present study based on previous results from our laboratory (Bowers et al., 2002).

Thus, the final set of stimuli used in the present study consisted of static and dynamic

versions of 12 actors' (6 female, 6 male) facial expressions representing these four

emotion categories. The total number of facial stimuli was 96 (i.e., 48 dynamic, 48

static).

Design Overview and Procedures

Each subject participated in two experimental conditions, one involving dynamic

face stimuli and the other involving static face stimuli. During both conditions,

psychophysiologic data (i.e., skin conductance, startle eyeblink responses) were collected

along with the participant's ratings of each face stimulus according to valence

(unpleasantness to pleasantness) and arousal. There was a 5-minute rest interval between

the two conditions. Half the participants viewed the dynamic faces first, whereas the

remaining viewed the static faces first. The order of these conditions was randomized but

counterbalanced across subjects.

Testing took place within the Cognitive Neuroscience Lab of the McKnight Brain

Institute at the University of Florida. Informed consent was obtained according to

University and Federal regulations. Prior to beginning the experiment, the participant

completed several questionnaires including a demographic form, the BDI, the STAI, and

a payment form. The skin from both hands and areas under each eye were cleaned and

dried thoroughly. A pair of 3 mm Ag/AgCl sensory electrodes was filled with a

conducting gel (Medical Associates, Inc., Stock # TD-40) and attached adjacently over

the bottom arc of each orbicularis oculi muscle via lightly adhesive electrode collars.

Two 12 mm Ag/AgCl sensory electrodes were filled with conducting gel (K-Y Brand

Jelly, McNeil-PPC, Inc.) and were attached adjacently via electrode collars on the thenar

and hypothenar surfaces of each palm.

Throughout testing, the participant sat in a reclining chair in a dimly lit sound-

attenuated 12' x 12' room with copper-mediated electric shielding. An initial period was

used to calibrate the palmar electrodes and to familiarize the participant with the startle

probes. The lights were dimmed, and twelve 95-dB white noise bursts were presented to

the subject via stereo Telephonics (TD-591c) headphones. The noise bursts were

presented at a rate of about once per 30 seconds.

After the initial calibration period, the participant was given instructions about the

experimental protocol. They were told they would see different emotional faces, one face

per trial, and were asked to carefully watch each face and ignore the brief noises that

would be heard over the headphones. During each trial, the dynamic or static face stimuli

were presented on a 21" PC monitor, positioned 1 meter directly in front of the

participant. Each face stimulus was shown for six seconds on the monitor. While

viewing the face stimulus, the participant heard a white noise burst (95 db, 50 ms) that

was delivered via headphones. The white noise startle probes were randomly presented

at 4200 ms, 5000 ms, or 5800 ms after the onset of the face stimulus.

At the end of each trial, the participant was asked to rate each face stimulus along

the dimensions of valence and arousal. The ratings took place approximately six seconds

following the offset of the face stimulus, when a Self-Assessment Manikin (SAM;

Bradley & Lang, 1994) was shown on the computer monitor. Valence ratings ranged

from extremely positive, pleasant, or good (9) to extremely negative, unpleasant, or bad

(1). Arousal ratings ranged from extremely excited, nervous, or active (9) to extremely

calm, disinterested, or unenthusiastic (1). The participant reported their valence and

arousal ratings out loud, and their responses were recorded by an experimenter in the next

room, listening via a baby monitor. A new trial began 6 to 8 seconds after the ratings

were made.

Each experimental condition (i.e., dynamic, static) consisted of 48 trials that were

divided into 6 blocks of 8 trials each. A different actor represented each trial within a

given block. Half were males, and half females. One male actor and one female actor

represented each of four emotions (neutral, happiness, anger, fear) to total the 8 trials per

block. To reduce habituation of the startle reflex over the course of the experiment, 8

trials representing male and female versions of each expression category did not contain a

startle probe. These trials were spread evenly throughout each slideshow.









Following administration of both slideshows, the experimenter removed all

electrodes from the participant, who was then debriefed on the purpose of the experiment,

thanked, and released.

Psychophysiologic Measures

Acoustic Startle Eyeblink Reflex (ASR)

Startle eye blinks were measured via EMG activity from the orbicularis oculi

muscle beneath each eye. It was used as a dependent measure because of its

sensitivity to valence, with larger startle eyeblinks associated with negative/aversive

emotional states and smaller eyeblinks associated with positive emotional states (Lang,

Bradley, & Cuthbert, 1990). The raw EMG signal was amplified, and frequencies below 90 Hz and above 1000 Hz were filtered out using a Coulbourn bioamplifier. Amplification of acoustic startle was set at 30,000, with post-experimental multiplication to equate gain

factors (Bradley et al., 1990). The raw signal was then rectified and integrated using a

Coulbourn Contour Following Integrator with a time constant of 10 ms. Digital sampling

began at 20 Hz 3 s prior to stimulus onset. The sampling rate increased to 1000 Hz 50 ms

prior to the onset of the startle probe and continued at this rate for 250 ms after probe

onset. Sampling then resumed at 20 Hz until 2 s after stimulus offset. The startle data

were reduced off-line using custom software which evaluates trials for unstable baseline

and which scores each trial for amplitude in arbitrary A-D units and onset latency in

milliseconds. The program yields measures of startle response magnitude in arbitrary A-

D units that express responses during positive, neutral, and negative materials on the

same scale.
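For readers who want the signal chain in computational terms, a rough offline equivalent of the analog processing described above is sketched below, assuming a raw EMG trace sampled well above 2 kHz. The contour-following integrator is approximated as a first-order leaky integrator with a 10-ms time constant; this models the general behavior, not the specific Coulbourn hardware.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def process_emg(raw: np.ndarray, fs: float, tau: float = 0.010) -> np.ndarray:
    """Band-pass (90-1000 Hz), full-wave rectify, then smooth (tau = 10 ms)."""
    nyq = fs / 2.0
    b, a = butter(2, [90 / nyq, 1000 / nyq], btype="bandpass")
    rectified = np.abs(filtfilt(b, a, raw))        # filter, then rectify
    alpha = 1.0 - np.exp(-1.0 / (fs * tau))        # leaky-integrator coefficient
    smoothed = np.empty_like(rectified)
    acc = 0.0
    for i, x in enumerate(rectified):              # first-order RC smoothing
        acc += alpha * (x - acc)
        smoothed[i] = acc
    return smoothed
```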









Skin Conductance Response (SCR)

The SCR was measured from electrodes attached to the palms with adhesive

collars. This measure was used because it is an index of sympathetic arousal, correlates

with self-reports of emotional arousal, and is relatively independent of valence (Bradley

& Lang, 2000). Skin conductance data were sampled at 20 Hz using two Coulbourn Isolated Skin Conductance couplers in DC mode (a constant-voltage system in which 0.5 V is passed across the palm during recording). The SC couplers output to a

Scientific Solutions A/D board integrated within a custom PC. The skin conductance

response (SCR) was defined as the difference between the peak conductance during the

6-second viewing period and the mean conductance achieved during the last pre-stimulus

second, derived independently for each hand. SCR was represented in microsiemens

(µS).
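The SCR definition above reduces to a peak-minus-baseline computation per hand. A minimal sketch, assuming a conductance trace already sampled at 20 Hz and a known stimulus-onset index (names are illustrative):

```python
import numpy as np

FS = 20  # skin conductance sampling rate (Hz)

def scr(trace: np.ndarray, onset: int) -> float:
    """Peak conductance in the 6-s viewing window minus the mean of the
    last pre-stimulus second; trace is in microsiemens (µS)."""
    baseline = trace[onset - FS:onset].mean()    # final pre-stimulus second
    peak = trace[onset:onset + 6 * FS].max()     # 6-s stimulus period
    return peak - baseline

# The per-trial value used in analysis is the mean of the two hands'
# SCRs, unless one palm's record is unusable.
```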

Data Reduction of Psychophysiology Measures

After the collection of the psychophysiologic data, the eyeblink and skin

conductance data were reduced using custom condensing software. For startle eyeblink,

data from trials without startle probes and the initial two practice trials were excluded

from the statistical analyses. Trials whose physiological data contained obvious artifacts were also removed. For the remaining data, the peak magnitude of the EMG

activity elicited by each startle probe within the recorded time window was measured

(peak minus baseline, in microvolts). Peak startle magnitudes were averaged for both eyes into a composite score when data from both eyes were available. If data from only one eye were available, those data were used in place of the composite score. Peak startle

magnitudes were additionally translated into T-scores, which were then averaged for each

expression type (i.e., happy, neutral, fear, and anger) and mode of presentation (i.e., static









and dynamic stimuli). For both startle magnitudes and T-scores, the four expression

categories were represented by no fewer than four trials each.
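A compact sketch of this reduction step follows, assuming per-trial peak magnitudes for each eye with unusable trials stored as NaN. Averaging with a NaN-aware mean implements the fall-back to a single eye, and the T-score conversion (mean 50, SD 10, computed within participant) puts all participants on a common scale.

```python
import numpy as np

def composite_peaks(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Average the two eyes per trial; if one eye is NaN, the other is used."""
    return np.nanmean(np.vstack([left, right]), axis=0)

def to_t_scores(peaks: np.ndarray) -> np.ndarray:
    """Convert one participant's peak magnitudes to T-scores (M = 50, SD = 10)."""
    z = (peaks - np.nanmean(peaks)) / np.nanstd(peaks)
    return 50 + 10 * z
```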

For the skin conductance response, condensing consisted of measuring the peak

magnitude of change relative to baseline activity at the start of each trial. Again, trials

whose physiological data contained obvious artifacts were removed. The magnitude

of change for each trial was measured and averaged for both hands, unless the data from

one of the palms contained excessive artifact. In these cases, the data from the other hand

were used in place of the composite data.

Statistical Analysis

Separate analyses were conducted for startle-blink, skin conductance, SAM

Valence ratings, and SAM Arousal ratings. Repeated-measures ANOVAs with adjusted degrees of freedom (Greenhouse-Geisser correction) were used, with a between-subjects factor of Order of Slideshows (dynamic, then static; static, then dynamic) and within-subjects factors of Expression Category (anger, fear, neutral, happiness) and Viewing Mode (dynamic, static). Analyses corresponding to a priori predictions were conducted

using planned contrasts (Helmert) between the four expression categories. A significance

level of alpha = 0.05 was used for all analyses.
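As an illustration of this design, the within-subjects portion of the model can be run with an off-the-shelf repeated-measures ANOVA routine. The sketch below uses synthetic long-format data and the pingouin package (an assumption; any tool that applies the Greenhouse-Geisser correction would do). The full model in the thesis also crosses Viewing Mode (within subjects) with Order of Slideshows (between subjects), which pingouin's mixed_anova handles.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed available

# Synthetic data: 40 subjects x 4 expression categories, long format.
rng = np.random.default_rng(0)
rows = [{"subject": s, "expression": e, "dv": rng.normal(50, 10)}
        for s in range(40)
        for e in ["anger", "fear", "neutral", "happy"]]
df = pd.DataFrame(rows)

# correction=True reports Greenhouse-Geisser-adjusted p-values when
# Mauchly's test indicates a sphericity violation.
aov = pg.rm_anova(data=df, dv="dv", within="expression",
                  subject="subject", correction=True)
print(aov.round(3))
```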

We predicted three changes corresponding to indices of greater psychophysiologic

reactivity to dynamic expressions versus static expressions. These indices were: (1)

greater magnitude of the startle reflex, (2) greater percent change in skin conductance,

and (3) higher self-reported SAM arousal ratings during perception of dynamic facial

expressions. Additionally, we predicted that the pattern of T-scores for both dynamic and

static facial expressions would show emotional modulation to the four different

categories of facial expressions incorporated in the experimental study. That is, startle








reflexes measured during the perception of anger would be larger than

those measured during the perception of fear, neutral, and happy expressions. Startle

responses measured during the perception of facial expressions represented by the latter

three emotional categories would not be appreciably different. Finally, this pattern of

modulation would not be significantly different between static and dynamic viewing

modes.














CHAPTER 4
RESULTS

The primary dependent measures were the acoustic startle eyeblink response

(ASR), the skin conductance response (SCR), and self-reported arousal from the Self-

Assessment Manikin (arousal). As previously described, the ASR was quantified by

measuring the change in EMG activity (mV) following the onset of the startle probes

(i.e., peak minus baseline EMG). The SCR was calculated as the difference between the peak conductance in microsiemens (µS) during the 6-second period of stimulus

presentation and the mean level of conductance during a 1-s period immediately prior to

the onset of the stimulus. Finally, self-reported arousal encompassed a range of 1 to 9,

with higher numbers representing greater arousal levels. Table 4-1 gives the means and

standard deviations of each of these dependent variables by viewing mode.

Table 4-1
Mean (SD) dependent variable scores by Viewing Mode

                      Viewing Mode
Measure      Dynamic            Static
ASR-M        .0062 (.0054)      .0048 (.0043)
SCR          .314 (.514)        .172 (.275)
Arousal      5.27 (.535)        5.30 (.628)

Note. ASR-M = Acoustic Startle Eyeblink Response, Magnitude (mV); SCR = Skin Conductance Response (µS); Arousal = Self-Assessment Manikin, Arousal Scale (1-9).


Hypothesis 1: Differences in Reactivity to Dynamic vs. Static Faces

An initial set of analyses addressed the first hypothesis and investigated whether

psychophysiologic reactivity (startle eyeblink, SCR) and/or self-reported arousal differed









during the perception of dynamic versus static emotional faces. The results of the

analyses for each of the three dependent variables are described below.

Startle Eyeblink Response

The first analysis examined whether the overall size of the startle eyeblink

responses differed when participants viewed dynamic versus static facial expressions. A

repeated-measures ANOVA was conducted using Viewing Mode (dynamic, static) as the

within-subjects factor and Order of Presentations (dynamic then static, or static then dynamic) as the between-subjects factor.1 The results of the ANOVA revealed a significant main effect for Viewing Mode [F(1, 38) = 9.003, p = .005, ηp² = .192, power = .832]. As shown in Table 4-1, startle eyeblink responses were greater during dynamic versus static expressions. The main effect of Order of Presentations was not significant [F(1, 38) = 1.175, p = .285, ηp² = .030, power = .185], nor was the Viewing Mode X Order of Presentations interaction [F(1, 38) = .895, p = .350, ηp² = .023, power = .152].

Skin Conductance Response (SCR)

The second analysis examined whether the perception of the different types of

facial emotions induced different SCR patterns between modes of viewing. A repeated-measures ANOVA was conducted with Viewing Mode (dynamic, static) and Expression Category (anger, fear, happy, neutral) as the within-subjects factors and Order of Presentations (dynamic first, static first) as the between-subjects factor. The results of the ANOVA revealed that the main effect of Viewing Mode approached significance [F(1, 35) = 3.796, p = .059, ηp² = .098, power = .474], such that SCR tended to be larger


1 Expression Category was not used as a factor in this analysis. Examination of emotional effects on startle
eyeblink is traditionally done using T-scores as the dependent variable rather than raw magnitude. Raw
startle magnitude is more appropriate as an index of reactivity, whereas T-scores are more appropriate for
examining patterns of emotional effects on startle.









when participants viewed dynamic versus static faces (see Table 4-1). No other main effects or interactions reached trend level or significance {Order of Presentations [F(1, 35) = .511, p = .479, ηp² = .014, power = .107]; Viewing Mode X Order of Presentations [F(1, 35) = 1.559, p = .220, ηp² = .043, power = .229]; Expression Category X Order of Presentations [F(1.832, 64.114) = .942, p = .423, ηp² = .026, power = .251]}.

Self-Reported Arousal

The third analysis examined whether self-reported arousal ratings differed when

participants viewed static versus dynamic facial expressions. Again, a 2 (Viewing Mode)

X 4 (Expression Category) X 2 (Order of Presentations) repeated-measures ANOVA was conducted. The results of this ANOVA revealed that no main effects or interactions were significant: {Viewing Mode [F(1, 38) = .072, p = .789, ηp² = .002, power = .058]; Order of Presentations [F(1, 38) = 2.912, p = .096, ηp² = .071, power = .384]; Viewing Mode X Order of Presentations [F(1, 38) = .479, p = .493, ηp² = .012, power = .104]}. The effects related to Expression Category are described in the next section.

In summary, viewing dynamic facial stimuli was associated with significantly

larger acoustic startle eyeblink responses and a tendency (trend, p = .059) for larger skin

conductance responses than viewing static stimuli. There was no significant difference in

self-reported arousal ratings between dynamic and static stimuli.

Hypothesis 2: Emotion Modulation of Startle by Expression Categories

An additional set of analyses addressed the second hypothesis, investigating

emotional modulation of the startle eyeblink response via distinct categories of facial

expressions (i.e., anger, fear, neutral, and happy). Because of individual variability in the

size of basic eyeblink responses, the startle magnitude scores for each individual were

converted to T-scores on a trial-by-trial basis. These T-scores were analyzed in a









repeated-measures 4 (Expression Category: anger, fear, neutral, happy) X 2 (Viewing

Mode: dynamic, static) X 2 (Order of Presentations: dynamic then static, or static then dynamic) ANOVA. Table 4-2 gives the means and standard deviations of these scores and other dependent variables by Viewing Mode and Expression Category.

Table 4-2
Mean (SD) dependent variable scores by Viewing Mode and Expression Category

                                     Expression Category
Viewing Mode  Measure   Anger            Fear             Neutral          Happy
Dynamic       ASR-M     .0053 (.0052)    .0049 (.0046)    .0045 (.0037)    .0046 (.0042)
              ASR-T     51.06 (3.43)     49.47 (3.01)     49.77 (3.47)     49.68 (3.14)
              SCR       .1751 (.2890)    .1489 (.2420)    .1825 (.3271)    .1768 (.3402)
              Valence   3.10 (.89)       3.44 (.99)       4.76 (.54)       7.19 (.84)
              Arousal   5.39 (1.05)      6.43 (.98)       3.41 (1.33)      5.96 (.88)
Static        ASR-M     .0066 (.0061)    .0059 (.0051)    .0061 (.0051)    .0061 (.0057)
              ASR-T     50.99 (3.79)     49.43 (3.92)     49.57 (4.30)     49.88 (3.21)
              SCR       .3247 (.5200)    .3583 (.8070)    .2515 (.3911)    .3212 (.5457)
              Valence   3.17 (1.00)      3.65 (1.21)      4.69 (.84)       7.17 (.84)
              Arousal   5.51 (1.05)      6.35 (.95)       3.29 (1.36)      5.95 (.87)

Note. ASR-M = Acoustic Startle Response, Magnitude (mV); ASR-T = Acoustic Startle Response, T-score; SCR = Skin Conductance Response (µS); Valence = Self-Assessment Manikin, Valence Scale (1-9); Arousal = Self-Assessment Manikin, Arousal Scale (1-9).


The main effect of Expression Category approached but did not reach significance [F(3, 117) = 2.208, p = .091, ηp² = .055, power = .548]. No other main effects or interactions reached trend level or significance {Viewing Mode: [F(1, 114) = .228, p = .636, ηp² = .006, power = .075]; Order of Presentations: [F(1, 38) = .336, p = .566, ηp² = .009, power = .087]; Viewing Mode X Order of Presentations: [F(1, 38) = .457, p = .503, ηp² = .012, power = .101]; Expression Category X Order of Presentations: [F(3, 114) = .596, p = .619, ηp² = .015, power = .171]; Expression Category X Viewing Mode: [F(3, 114) = .037, p = .991, ηp² = .001, power = .056]; Expression Category X Viewing Mode X Order of Presentations: [F(3, 114) = .728, p = .537, ηp² = .019, power = .201]}.

The a priori predictions regarding the expected pattern of emotion modulation of the startle response [i.e., Anger > (Fear = Neutrality = Happiness)] warranted a series of planned comparisons (Helmert) on Expression Category. Results of these comparisons revealed that: (a) startle responses were significantly different for faces of anger than for the other expressions [F(1, 38) = 8.217, p = .007, ηp² = .178, power = .798]; and (b) there were no significant differences among the remaining expressions [Fear vs. (Neutral and Happy): F(1, 38) = .208, p = .651, ηp² = .005, power = .073; Neutral vs. Happy: F(1, 38) = .022, p = .882, ηp² = .001, power = .052]. Figure 4-1 graphically displays the pattern of startle reactivity T-scores among the four expression categories.


Figure 4-1. Startle eyeblink T-scores by expression category [A > (F = N = H)].



To summarize these results, viewing angry facial expressions was associated with

significantly larger acoustic startle eyeblink responses than other types of facial









expressions (i.e., fear, neutral, and happy), and the responses between the other

expressions were not significantly different from each other. Additionally, the non-

significant Expression Category X Viewing Mode interaction (p = .991) indicates that this

response pattern was similar for both static and dynamic facial expressions.

Other Patterns of Emotional Modulation by Viewing Mode

The response pattern among different expression categories was also examined for

SCR and self-reported arousal, as well as self-reported valence. Like arousal, valence

was measured on a scale of 1-9, with higher numbers representing greater positive

feeling, pleasure, or appetitiveness, and lower numbers representing greater negative

feeling, displeasure, or aversiveness. For all three variables, the analyses were separate

3-way (4 x 2 x 2) repeated-measures analyses of variance, using the within-subjects factors of Expression Category (anger, fear, neutral, happy) and Viewing Mode (dynamic, static), and the between-subjects factor of Order of Presentations (dynamic then static, or static then dynamic). For SCR and arousal, these analyses were described in the preceding section ("Hypothesis 1: Differences in Reactivity to Dynamic vs. Static Faces"). As such, for

these two measures, this section provides only the results for the Expression Category

main effect and associated interactions. The results for self-reported valence, however,

are provided in full, as this is a novel analysis. Table 4-2 gives the means and standard

deviations for each dependent variable by Viewing Mode and Expression Category.

Skin Conductance Response

For the skin conductance response, the main effect of Expression Category and all

associated interactions were non-significant: Expression Category [F(1.832, 64.114) = .306, p = .821, ηp² = .009, power = .107]; Expression Category X Viewing Mode [F(2.012, 70.431) = 1.345, p = .264, ηp² = .037, power = .349];2 Expression Category X Viewing Mode X Order of Presentations [F(2.012, 70.431) = 1.341, p = .265, ηp² = .037, power = .348]. Thus, differences in SCR for discrete expressions were not found.

Self-Reported Arousal

For self-reported arousal, the main effect of Expression Category was significant

[F(2.144, 81.487) = 81.836, p < .001, ηp² = .683, power = 1.000],3 indicating that arousal ratings differed while viewing different types of facial expressions. The results of Bonferroni-corrected post-hoc comparisons are displayed graphically in Figure 4-2. Fearful faces (M = 6.39, SD = .91) were associated with significantly higher (p < .001) intensity ratings than angry faces (M = 5.45, SD = .96), which were in turn rated as higher (p < .001) in intensity than neutral faces (M = 3.35, SD = 1.22). Differences in intensity ratings associated with happy faces (M = 5.96, SD = .76) approached significance when compared to fearful (p = .082) and angry (p = .082) faces, and happy faces were rated as significantly higher (p < .001) in intensity than neutral faces.















2 Mauchly's test was significant for both Expression Category [W = .273, χ²(5) = 43.762, p < .001] and the Expression Category X Viewing Mode interaction [W = .451, χ²(5) = 26.850, p < .001]; thus, degrees of freedom for these effects were adjusted using the Greenhouse-Geisser method.

3 Mauchly's test was significant for both Expression Category [W = .507, χ²(5) = 24.965, p < .001] and the Expression Category X Viewing Mode interaction [W = .403, χ²(5) = 33.335, p < .001]; thus, degrees of freedom for these effects were adjusted using the Greenhouse-Geisser method.













Figure 4-2. Self-reported arousal by expression category (F > A > N; H > N).



Self-Reported Valence

The final analysis explored the pattern of self-reported valence ratings for each of

the facial emotion subtypes and viewing modes. The results of the ANOVA revealed a

significant effect for Expression Category [F(2.153, 81.822) = 205.467, p < .001, ηp² = .844, power = 1.00],4 indicating that valence ratings differed according to expression category. Bonferroni-corrected pairwise comparisons among the four facial expression types indicated that faces of happiness (M = 7.18, SD = .78) were rated as significantly more pleasant than neutral faces (M = 4.73, SD = .59; p < .001), fearful faces (M = 3.54, SD = 1.03; p < .001), and angry faces (M = 3.14, SD = .84; p < .001). Additionally, neutral faces were rated as significantly more pleasant than fearful (p < .001) or angry

4 A significant Mauchly's test for Expression Category [W = .566, χ²(5) = 20.903, p = .001] and the Expression Category X Viewing Mode interaction [W = .504, χ²(5) = 25.146, p < .001] necessitated the use of Greenhouse-Geisser adjusted degrees of freedom.









faces (p < .001). Finally, angry faces were rated as significantly more negative than fearful faces (p = .014). This pattern is displayed graphically in Figure 4-3. No other main effects or interactions reached trend level or significance {Viewing Mode: [F(1, 38) = .646, p = .426, ηp² = .017, power = .123]; Order of Presentations: [F(1, 38) = 1.375, p = .248, ηp² = .035, power = .208]; Viewing Mode X Order of Presentations: [F(1, 38) = .047, p = .829, ηp² = .001, power = .055]; Expression Category X Order of Presentations: [F(2.153, 81.822) = 1.037, p = .363, ηp² = .027, power = .233]; Expression Category X Viewing Mode: [F(2.015, 76.554) = .933, p = .398, ηp² = .024, power = .207]; Expression Category X Viewing Mode X Order of Presentations: [F(2.015, 76.554) = 1.435, p = .244, ηp² = .036, power = .300]}.


Figure 4-3. Self-reported valence by expression category (H > N > F > A).



To summarize, these analyses revealed that skin conductance responses for different categories of emotional expressions did not differ from one another. By









contrast, both self-report measures did distinguish among the emotion categories. With

regard to self-reported arousal, fearful faces were rated highest, significantly more so than

anger faces, which were in turn rated as significantly more arousing than neutral ones.

The difference in arousal between happy and angry faces, as well as between happy and

fearful ones, approached but did not reach significance (p = .082, p = .082, respectively).

Happy faces were, however, rated as significantly more arousing than neutral ones. For

self-reported valence, each expression category was rated as significantly different from

the others, such that angry expressions were rated as most negative, followed by fearful,

neutral, and then happy faces.














CHAPTER 5
DISCUSSION

The present study examined two hypotheses. The first was that the perception of

dynamic versus static faces would be associated with greater physiological reactivity in

normal, healthy adults. Specifically, it was predicted that individuals would exhibit

significantly stronger startle eyeblink reflexes, higher skin conductance responses (SCR),

and higher levels of self-reported arousal when viewing dynamic expressions. These

predictions were based on evidence from previous research suggesting that movement in

facial expression (a) provides more visual information to the viewer, (b) increases

recognition of and discrimination between specific types of emotion, and (c) may make

the facial expressions appear more intense.

The second hypothesis was that the perception of different categories of facial

expressions would be associated with a distinct pattern of emotional modulation, and that

this pattern would not be different for static and dynamic faces. In other words, it was

hypothesized that the level of physiological reactivity while viewing facial expressions

would be dependent on the type of expression viewed, regardless of the viewing mode.

Specifically, the prediction was that normal adults would have increased startle eyeblink

responses during the perception of angry faces, and that responses to fearful, happy, and

neutral faces would not be significantly different from each other. Moreover, it was

predicted that this pattern of responses would be similar for both statically and dynamically

presented expressions.









The first hypothesis was partially supported by the data. Participants exhibited larger startle eyeblink responses while viewing dynamic

versus static facial expressions. Differences in SCR while viewing the expressions in

these two modes reached trend level (p = .059), such that dynamic faces tended to induce

greater responses than static ones. Self-reported arousal was not significantly different

during either condition. Thus, the perception of moving emotional faces versus still

pictures was associated with greater startle eyeblink responses, but not SCR or self-

reported arousal.

The second hypothesis was supported by the data. That is, the startle reflex was

significantly greater for angry faces, and comparably smaller for the fearful, neutral, and

happy faces. The data suggested that this pattern of emotional modulation was similar

during both static and dynamic viewing conditions.

In summary, participants demonstrated greater psychophysiological reactivity to

dynamic faces compared to static faces, as indexed by the startle eyeblink response, and

partially by SCR. Participants did not, on the other hand, report differences in perceived

arousal. Emotional modulation of the startle response was similar for both modes of

presentation, such that angry faces induced greater negative or aversive responses in the

participants than did happy, neutral, and fearful faces.

Interpretation and Relationship to Other Findings

The finding that viewing faces of anger increased the strength of the

startle eyeblink reflex is consistent with other results. Currently, only two other studies

are known that measured the magnitude of this reflex during the perception of different

facial emotions. Balaban and colleagues (1995) conducted one of these studies. They

measured the size of startle eyeblinks in 5-month-old infants viewing photographic slides









of happy, neutral, and angry faces. Their results were similar to those of the current

study, in that the magnitudes of startle eyeblinks measured in the infants were augmented

while they viewed faces of anger versus faces of happiness.

The other study was conducted by Bowers and colleagues (2002). Similar to the

present experiment, participants were young adults (n = 36) who viewed facial

expressions of anger, fear, neutral, and happiness. These stimuli, however, consisted

solely of static photographs and were sampled from standardized batteries (The Florida

Affect Battery: Bowers et al., 1992; Pictures of Facial Affect: Ekman & Friesen, 1976).

The startle eyeblink responses that were measured while viewing these pictures reflected

the pattern produced in the present study: greater negative or aversive responses were

associated with angry faces than happy, neutral, or fearful faces. Responses to happy,

neutral, and fearful faces yielded relatively reduced responses and were not different

from each other in magnitude.

The augmentation of the startle reflex during the perception of angry versus other

emotional faces appears to be a robust phenomenon for several reasons. First, the

findings from the present study were similar to those of previous studies (Balaban et al.,

1995; Bowers et al., 2002). Second, this pattern of emotional modulation was replicated

using a different set of facial stimuli. Thus, the previous findings were not restricted to

faces from specific sources. Third, the counterbalanced design of the present study

minimized the possibility that the anger effect was due to some imbalance of factors other

than the portrayed facial emotion. Within each experimental condition, for example, both

genders and each actor were equally represented within each expression category.









Although the current results were made more convincing for these reasons, the

implication that the startle circuitry is not enhanced in response to fearful expressions

was unexpected for several reasons. The amygdala has been widely implicated in states

of fear and processing fearful material (Davis & Whalen, 2001; Gloor et al., 1982; Klüver & Bucy, 1937), and some investigators have even directly implicated the amygdala in processing facial expressions of fear (Adolphs et al., 1994; Morris et al., 1998). Additionally, the work of Davis and colleagues (Davis, 1992) uncovered direct

neural projections from the amygdala to the subcortical startle circuitry, which have been

shown to prime the startle mechanism under fearful or aversive conditions.

This body of research suggests that fearful expressions might potentiate the startle

reflex relative to other types of facial expressions; however, Bowers and colleagues'

study (2002) as well as the present one provide evidence that suggests otherwise. No

other studies are known to have directly compared startle reactivity patterns among

fearful and other emotionally expressive faces. Additionally, imaging and lesion studies

have shown mixed results with respect to the role of the amygdala and the processing of

fearful and angry faces per se. For instance, Sprengelmeyer and colleagues (1998)

showed no fMRI activation in the amygdala in response to fearful relative to neutral

faces. Young and colleagues (1995) attributed a deficit in recognition of fear faces to

bilateral amygdala damage, but much of the surrounding neural tissue was also

damaged.

So, how might one account for the relatively reduced startle response to fearful

faces? Bowers and colleagues (2002) provided a plausible explanation, implicating the

role of motivated behavior [i.e., Heilman's (1987) preparation for action scale] on these










results. As previously described, angry faces represent personally directed threat, and, as

might be reflected by the increased startle found in the present study, induce a

motivational propensity to withdraw or escape from that threat. Fearful expressions, on

the other hand, reflect some potential environmental threat to the actor, rather than to the

observer. Thus, this would reflect less motivational propensity for action and might

account for the reduced startle response.

Methodological Issues Regarding Facial Expressions

Before discussing the implications of this study more broadly, several

methodological issues must be addressed that potentially influenced the present findings.

The first relates to the reliability of the facial expression stimuli in depicting specific

emotions. Anger was the emotion that elicited the greatest startle response overall. At

the same time, anger facial expressions were least accurately categorized by a group of

independent naive raters (see Table 3-2).5 Whether there is a connection

between these findings is unclear, particularly since the emotions that the raters viewed

included a wider variety of options (i.e., 6 expressions) than those viewed by the

participants in this study (4 expressions). For example, the raters were shown facial

expressions of anger, disgust, fear, sad, happiness and neutral. Their accuracy in



5 A 2 (Viewing Mode: dynamic, static) X 6 (Expression Category: anger, disgust, fear, happy, neutral, sad)
repeated-measures ANOVA was conducted with an alpha criterion of .05 and Bonferroni-corrected post-
hoc comparisons. Results showed that dynamic expressions (M = .89, SD = .06) were rated significantly
more accurately than static expressions (M = .87, SD = .07). Additionally, Expression Category was found
to be significant, but not the interaction between Expression Category and Viewing Mode. Specific to the
emotion categories used in the present study, it was also found that happy faces were rated significantly
more accurately (M = .99, SD = .01) than neutral (M = .91, SD = .06) and angry (M = .73, SD = .18) faces,
while fear (M = .95, SD = .05) recognition rates were not significantly different from the other three.
Comparing each emotion across viewing modes, only anger was rated significantly more accurately in
dynamic (M = .78, SD = .17), versus static (M = .68, SD = .21), modes, while the advantage for dynamic
neutral faces (M = .92, SD = .04) over static versions (M = .89, SD = .08) only approached significance (p
= .055). A static version of an emotional expression was never rated significantly more accurately than its
dynamic version.









identifying anger expression was around 78%. When errors were made, they typically

(i.e., 95% of the time) judged the anger expressions as being 'disgust.' In the

psychophysiology study, the participants were shown only four expressions. It seems

unlikely that participants in the psychophysiology study easily confused anger, fear,

happiness, and neutral expressions. However, this could be addressed by examining the

ratings that were made by the psychophysiology participants.

Nevertheless, elevated startle reactivity for facial expressions that were less reliably

categorized might occur for two reasons: (1) differences in attention between

relatively poorly and accurately recognized stimuli, and (2) differences in perceived

arousal levels between relatively poorly and accurately recognized stimuli.

Regarding attention, previous researchers have suggested that visual attention

inhibits the startle response when the modalities between the startle probe and stimulus of

interest are mismatched (e.g., Ornitz, 1996). In this case, acoustic startle probes were

used in conjunction with visual stimuli. Since anger was associated with the strongest

startle reflexes, it was not likely inhibited. Thus, attention was probably not a mediating

factor between lower recognition rates and this effect. Regarding arousal, researchers

such as Cuthbert and colleagues (1996) indicated that potentiation of the startle response

occurs with more arousing stimuli when the stimuli are of negative valence. Anger was

rated as the most negatively valenced, significantly more so than fear. Happy was rated

most positively. Since anger was rated most negatively, the only way arousal could have

been an influencing factor on anger's potentiated startle response was if anger was more

arousing than the other two expressions. However, it was rated as significantly less

arousing than both fear and happiness.









To conclude, it seems unlikely that ambiguity of the angry facial expressions

significantly contributed to the current findings. However, examination of ratings made

by the participants themselves might better clarify the extent to which anger expressions

were less accurately categorized than other expressions.

Other Considerations of the Present Findings

One explanation for the failure to uncover more robust findings using the skin

conductance response might relate to several of this measure's attributes. First, although

SCR can be a useful measure of emotional arousal, it does have considerable limitations.

It is estimated that 15-20% of healthy individuals are skin conductance "non-

responders"; some individuals do not exhibit a discernable difference in this response to

different categories of emotional stimuli, while others exhibit very weak responses

overall (Bradley & Lang, 2000; O'Gorman, 1990). Moreover, the sensitive electrical

signal that records SCR is vulnerable to the effects of idle, unconscious motor activity,

especially considering that the electrodes are positioned on the palms of both hands.

Because participants sat alone during these recordings, it was impossible to determine

whether they followed instructions for keeping still. These factors suggest that the

potential for interference during the course of the two slideshows in the present study is

not insignificant and may have contributed to the null SCR findings, both for reactivity

across emotions, and response differences between emotions. As such, this study

uncovered only weak evidence that dynamic faces induced stronger skin conductance

responses than static faces; only a trend towards significance was found. A significant

difference might have emerged with more statistical power (dynamic: power = .47).

Numerically, dynamic faces were associated with larger mean SCR values (.314) than









static faces (.172). Therefore, a larger sample size would be required to increase our

confidence about the actual relationship of SCR for these two visual modes.
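For a sense of scale, a back-of-the-envelope power computation is sketched below. The effect size is purely illustrative (it is not estimated from the thesis data); the point is only that a paired comparison with a small effect needs a sample far larger than 40.

```python
from statsmodels.stats.power import TTestPower

# Sample size for a paired (within-subjects) dynamic vs. static comparison
# at alpha = .05 and 80% power, assuming an illustrative Cohen's d of 0.3.
n = TTestPower().solve_power(effect_size=0.3, alpha=0.05, power=0.80)
print(f"Approximately {n:.0f} participants would be needed")  # ~90
```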

Several explanations might account for the finding that self-reported arousal ratings

were not significantly different for static and dynamic expressions (contrary to one

prediction in the current study). First, it is possible that the similar ratings between these

two experimental conditions were the product of an insensitive scale. The choice

between integers ranging only from 1 to 9 may have prohibited sufficient response

variability for drawing out differences between viewing modes. Also, it is possible that

subjects rated each expression in arousal relative to the expressions immediately

preceding the currently rated one, and failed to consider their responses relative to the

previously seen presentation. If this were the case, the viewed expressions might have

been rated in arousal relative to the average score within the current presentation, and the

means of arousal ratings from both presentations would be virtually identical.

Limitations of the Current Study

It is important to acknowledge some of the limitations of the current study. One

limitation is that the specific interactions between participant and actor variables of

gender, race, and attractiveness were not analyzed. It is likely that the emotional

response of a given individual to a specific face is dependent upon these factors due to

the individual's unique experiences. In addition, the meaning of some facial expressions

may be ambiguous when they are viewed in isolation. Depending on the current

situation, for instance, a smile might communicate any number of messages, including

contentment, peer acceptance, sexual arousal, relief, mischief, or even contempt (i.e., a

smirk). Taken together, averaging potentially variable responses due to highly specific

interactions with non-expressive facial features or varying interpretations of facial stimuli









between subjects might have contributed to certain non-significant effects, or created

artificial ones.

Secondly, the facial expression stimuli may have been perceived as somewhat

artificial, which potentially reduced the overall emotional responses (and consequently,

physiologic reactivity). The actors were recorded using black and white video with their

heads surrounded on either side with an immobilization cushion. In addition, despite

some pre-training, the actors deliberately posed the facial expressions; these were not the

product of authentic emotion per se. Previous research has determined that emotion-

driven and posed expressions are mediated by different neural mechanisms and muscular

response patterns (Monrad-Krohn, 1924; for review, see Rinn, 1984). It is likely that

some expressions might have been correctly recognized by emotional category, but not

necessarily believed as having an emotional origin. The extent to which emotional

reactivity is associated with perceiving genuine versus posed emotion in others remains

the topic of future research. It is reasonable to conjecture, however, that based on

everyday social interactions, the perception of posed expressions would be less

emotionally arousing and would therefore be associated with reduced emotional

reactivity.

Directions for Future Research

There are many avenues for future research. Further investigation into the effects

of and interactions between factors of gender, race, age, and attractiveness and the

characterization of these effects on patterns of startle modulation is warranted. The

effects of these factors would need to be determined to clearly dissociate expression-

specific differences in emotion perception. One of these factors may be implicated as

being more influential than facial expressivity in physiological reactivity to facial stimuli.









Further, the use of more genuine, spontaneous expressions as stimuli might be considered

as a way to introduce greater levels of emotional arousal into studies of social emotion perception. Greater ecological validity might be gained via this route, as well as through the use of color stimuli and actors given free range of head movement.

Also, patterns of startle modulation to facial expressions should be further studied

over different age groups to help uncover the development of emotional recognition and

social cognition over the lifespan. This is especially warranted given the difference in the

findings of the present study (i.e., increased startle response to anger with attenuated

responses being associated with fearful, happy, and neutral expressions) in relation to

those of Balaban's (1995) study, which tested infants. In that study, fearful expressions yielded significantly greater responses than neutral ones, and neutral ones yielded greater responses than happy ones. Continued research with different age groups would help

disentangle the ontogenetic responsiveness to the meaning conveyed through facial

emotional signals and help determine the reliability of these few studies that have been

conducted.

To conclude, despite the limitations of the current study, dynamic and static faces

appear to elicit qualitatively different psychophysiological responses; specifically,

dynamic faces induce greater startle eyeblink responses than static versions. This

observation has not been previously described in the literature. Because they appear to

differentially influence motivational systems, these two types of stimuli cannot be treated

interchangeably. The results of this and future studies will likely play an important role

in the development of a dynamic facial affect battery and help to characterize more








precisely the social cognition impairments in certain neurologic, psychiatric, and brain-injured populations.

















APPENDIX A
STATIC STIMULUS SET

Actor Measure Anger Disgust Fear Happiness Neutrality Sadness
Male 1 % Recognition 47.6 66.7 90.5 100 100 85.7
Valence M (SD) 3.0 (1.6) 3.9 (1.5) 4.4 (1.7) 7.4 (1.3) 5.2 (0.9) 3.7 (1.2)
Arousal M (SD) 5.5 (1.4) 5.4 (1.7) 5.8 (1.5) 6.3 (1.3) 3.5 (1.8) 4.6 (1.4)
Male 2 % Recognition 90.5 85.7 100 100 90.5 95.2
Valence 2.8 (1.3) 3.5(1.1) 4.5 (1.8) 7.2 (1.4) 4.2 (1.2) 2.6 (1.3)
Arousal 5.1(2.1) 5.0 (1.9) 6.8 (1.7) 5.7 (1.7) 3.7 (1.8) 5.0 (1.8)
Male 3 % Recognition 71.4 81 90.5 100 100 -
Valence 3.2 (1.5) 3.2 (0.9) 4.2 (1.7) 7.3 (0.9) 4.7 (1.4) -
Arousal 5.2 (2.0) 5.1 (1.7) 6.3 (1.5) 5.9 (1.6) 3.7 (1.9) -
Male 4 % Recognition 57.1 71.4 85.7 100 95.2 95.2
Valence 3.3 (1.5) 3.6 (1.7) 3.8(1.6) 7.0 (2.2) 4.6 (0.7) 3.1 (1.2)
Arousal 5.4 (1.4) 5.5 (1.2) 6.0 (0.8) 6.7 (1.4) 3.3 (1.7) 4.5 (1.6)
Male 5 % Recognition 57.1 76.2 95.2 95.2 81 100
Valence 4.1 (1.2) 4.6 (0.8) 4.5 (1.2) 7.0 (1.3) 5.4 (1.2) 4.1 (1.3)
Arousal 4.6(1.3) 4.0 (1.6) 5.5 (1.4) 5.4 (1.7) 3.9 (1.8) 4.1(1.7)
Male 6 % Recognition 71.4 61.9 95.2 100 90.5 76.2
Valence 3.1 (1.6) 3.0 (1.8) 3.6 (1.6) 6.9 (1.3) 4.6 (1.7) 3.5 (1.5)
Arousal 5.1(1.6) 6.1(2.3) 5.8(1.6) 5.3 (2.1) 3.9 (2.2) 5.3 (1.3)
Female 1 % Recognition 61.9 76.2 100 100 85.7 90.5
Valence 3.3 (1.5) 3.3 (1.6) 3.9(1.7) 6.7(1.1) 4.5 (1.3) 2.9 (1.2)
Arousal 6.1(1.8) 5.3 (2.0) 6.3 (1.9) 6.0 (1.3) 3.4 (1.6) 4.7 (1.6)
Female 2 % Recognition 28.6 100 100 100 76.2 66.7
Valence 3.2 (1.6) 3.5 (1.0) 3.9(1.5) 7.1 (1.1) 3.3 (1.3) 4.4 (1.0)
Arousal 5.5 (1.5) 4.7 (1.4) 5.9(1.9) 5.8 (1.7) 2.8 (1.6) 2.9 (1.6)
Female 3 % Recognition 95.2 71.4 95.2 100 90.5 100
Valence 3.9 (1.0) 3.6 (2.0) 4.0(1.1) 7.7 (1.3) 4.4 (1.0) 3.4 (1.5)
Arousal 5.0(1.5) 6.0 (1.7) 5.5 (1.2) 6.4 (1.5) 3.5 (1.8) 4.8 (1.5)
Female 4 % Recognition 95.2 100 100 100 95.2 100
Valence 2.9 (1.4) 3.7(1.3) 4.3(1.1) 7.1 (0.9) 4.8 (0.5) 3.7 (1.4)
Arousal 5.6 (2.3) 5.5 (1.9) 5.9(1.7) 5.9 (2.0) 3.3 (1.7) 4.6 (1.2)
Female 5 % Recognition 90.5 95.2 100 95.2 90.5 95.2
Valence 3.8 (1.7) 3.3 (1.0) 4.1 (1.8) 7.2(1.1) 4.5(1.1) 3.7 (1.2)
Arousal 5.5 (1.7) 5.2 (1.3) 7.0 (1.5) 5.7 (1.5) 4.1(1.9) 4.8 (1.5)
Female 6 % Recognition 52.4 42.9 90.5 100 76.2 100
Valence 3.5 (1.6) 3.9 (1.4) 4.1 (1.1) 8.1 (0.9) 5.9(1.1) 3.7(1.1)
Arousal 5.0(1.5) 4.9 (1.8) 5.6 (1.8) 7.1(2.0) 4.8 (2.4) 5.1(1.6)
Note. The sad expression for male 3 was not created because of videotape corruption.


















APPENDIX B
DYNAMIC STIMULUS SET


Actor Measure Anger Disgust Fear Happiness Neutrality Sadness
Male 1 % Recognition 76.2 52.4 90.5 100 95.2 95.2
Valence M (SD) 2.9 (1.2) 4.1 (1.3) 4.5 (1.9) 7.5 (0.9) 5.4 (0.7) 4.1 (1.1)
Arousal M (SD) 5.7 (2.0) 5.1 (1.5) 6.1 (2.0) 6.1 (1.4) 3.2 (2.1) 3.4 (1.9)
Male 2 % Recognition 95.2 85.7 100 100 95.2 100
Valence 3.2 (1.3) 3.7 (1.1) 3.6 (1.9) 7.0 (1.0) 4.9 (0.7) 3.1 (1.4)
Arousal 4.0 (1.3) 4.6 (2.1) 6.3 (2.1) 5.6 (1.6) 3.1 (1.9) 4.9 (1.6)
Male 3 % Recognition 71.4 85.7 95.2 100 95.2 -
Valence 2.9 (1.1) 3.1 (0.8) 3.7 (1.6) 6.5 (1.2) 4.7 (0.9) -
Arousal 5.3 (1.5) 4.8 (1.9) 6.2 (1.4) 5.4 (1.4) 3.2 (2.0) -
Male 4 % Recognition 95.2 85.7 90.5 100 90.5 100
Valence 3.6 (0.9) 3.3 (1.7) 4.0 (1.8) 6.9 (2.1) 5.0 (0.9) 3.3 (1.0)
Arousal 4.5 (1.3) 5.8 (1.5) 5.9 (1.9) 6.4 (1.8) 3.6 (2.6) 4.4 (1.4)
Male 5 % Recognition 71.4 52.4 95.2 100 85.7 100
Valence 3.2 (1.4) 4.1 (0.9) 3.8 (1.6) 6.9 (1.1) 4.9 (0.4) 3.2 (1.3)
Arousal 5.2 (1.3) 4.5 (1.9) 5.8 (1.5) 5.2 (2.0) 3.1 (1.9) 4.7 (1.7)
Male 6 % Recognition 66.7 85.7 100 95.2 95.2 90.5
Valence 3.0 (0.8) 2.9 (1.5) 4.1 (1.2) 6.9 (1.7) 4.8 (0.7) 3.3 (1.5)
Arousal 5.4 (1.8) 5.9 (1.5) 4.6 (2.2) 5.8 (2.1) 2.9 (2.0) 5.1 (2.0)
Female 1 % Recognition 57.1 57.1 100 100 95.2 85.7
Valence 2.7 (1.6) 2.1 (1.1) 3.2 (1.3) 6.9 (1.5) 4.5 (1.3) 3.1 (0.9)
Arousal 5.7 (2.0) 5.8 (2.1) 6.3 (1.6) 5.9 (0.9) 3.3 (2.1) 4.8 (1.3)
Female 2 % Recognition 52.4 100 100 100 85.7 66.7
Valence 2.6 (1.3) 3.6 (0.9) 3.4 (1.5) 7.3 (1.2) 4.3 (0.9) 4.2 (0.9)
Arousal 5.1 (2.0) 4.4 (1.8) 5.8 (1.7) 5.5 (1.6) 2.8 (1.7) 3.5 (2.2)
Female 3 % Recognition 100 81 80.1 100 90.5 100
Valence 3.5 (1.3) 3.7 (2.1) 3.1 (1.1) 7.9 (1.2) 4.9 (0.5) 3.2 (1.0)
Arousal 4.3 (1.8) 6.4 (1.9) 5.6 (1.8) 6.8 (1.9) 3.3 (2.0) 4.6 (1.4)
Female 4 % Recognition 100 100 95.2 100 95.2 100
Valence 2.3 (1.1) 3.4 (2.2) 3.5 (1.3) 7.3 (1.4) 5.1 (0.7) 3.1 (1.0)
Arousal 6.1 (1.9) 5.5 (1.8) 6.1 (1.6) 5.9 (1.8) 3.0 (1.9) 4.9 (1.1)
Female 5 % Recognition 85.7 95.2 100 100 95.2 95.2
Valence 3.4 (1.7) 3.3 (1.0) 3.2 (1.8) 6.9 (1.6) 5.14 3.6 (1.6)
Arousal 5.0 (2.0) 5.4 (1.8) 6.8 (2.0) 4.9 (1.8) 3.2 (2.1) 4.2 (1.4)
Female 6 % Recognition 66.7 66.7 85.7 100 85.7 95.2
Valence 3.2 (1.3) 3.5 (1.3) 3.3 (1.3) 8.3 (0.9) 6.0 (1.0) 3.5 (1.1)
Arousal 5.1 (1.5) 5.6 (1.3) 6.1 (1.7) 6.7 (2.1) 4.3 (2.2) 4.8 (2.1)


Note. The sad expression for male 3 was not created because of videotape corruption.















LIST OF REFERENCES


Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994). Impaired recognition of
emotion in facial expressions following bilateral damage to the human amygdala.
Nature, 372(6507), 669-672.

Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion
perception from dynamic and static body expressions in point-light and full-light
displays. Perception, 33(6), 717-746.

Averill, J. R. (1975). A semantic atlas of emotional concepts. JSAS Catalogue of Selected
Documents in Psychology, 5, 330. (Ms. No. 421).

Balaban, M. T. (1995). Affective influences on startle in five-month-old infants: reactions
to facial expressions of emotion. Child Development, 66(1), 28-36.

Beck, A. T. (1978). Depression inventory. Philadelphia: Center for Cognitive Therapy.

Bowers, D., Bauer, R., & Heilman, K. M. (1993). The Nonverbal Affect Lexicon:
theoretical perspectives from neuropsychological studies of affect perception.
Neuropsychology, 7(4), 433-444.

Bowers, D., Blonder, L. X., & Heilman, K. M. (1992). Florida Affect Battery. University
of Florida.

Bowers, D., Parkinson, B., Gober, T., Bauer, M. C., White, E., & Bongiolatti, S. (2002,
November). Two faces of emotion: patterns of startle modulation depend on facial
expressions and on knowledge of evil. Poster presented at the Society for
Neuroscience, Orlando, FL.

Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: the Self-Assessment Manikin
and the Semantic Differential. Journal of Behavioral Therapy and Experimental
Psychiatry, 25(1), 49-59.

Bradley, M. M., & Lang, P. J. (2000). Measuring emotion: behavior, feeling, and
physiology. In R. D. Lane & L. Nadel (Eds.), Cognitive Neuroscience of Emotion
(pp. 242-276). New York: Oxford University.

Buhlmann, U., McNally, R. J., Etcoff, N. L., Tuschen-Caffier, B., & Wilhelm, S. (2004).
Emotion recognition deficits in body dysmorphic disorder. Journal of Psychiatric
Research, 38(2), 201-206.








Burton, A. M., Wilson, S., Cowan, M., & Bruce, V. (1999). Face recognition in poor-
quality video: evidence from security surveillance. Psychological Science, 10(3),
243-248.

Bush, L. E., II. (1973). Individual differences in multidimensional scaling of adjectives
denoting feelings. Journal of Personality and Social Psychology, 25, 50-57.

Cannon, W. B. (1931). Again the James-Lange and the thalamic theories of emotion.
Psychological Review, 38, 281-295.

Christie, F., & Bruce, V. (1998). The role of dynamic information in the recognition of
unfamiliar faces. Memory and Cognition, 26(4), 780-790.

Cuthbert, B. N., Bradley, M. M., & Lang, P. J. (1996). Probing picture perception:
activation and emotion. Psychophysiology, 33(2), 103-111.

Darwin, C. (1872). The expression of the emotions in man and animals. Chicago:
University of Chicago Press.

Davis, M. (1992). The role of the amygdala in fear-potentiated startle: implications for
animal models of anxiety. Trends in Pharmacological Science, 13(1), 35-41.

Davis, M., & Whalen, P. J. (2001). The amygdala: vigilance and emotion. Molecular Psychiatry, 6(1), 13-34.

DeSimone, R. (1991). Face-selective cells in the temporal cortex of monkeys. Journal of
Cognitive Neuroscience, 3, 1-8.

Edwards, J., Jackson, H. J., & Pattison, P. E. (2002). Emotion recognition via facial
expression and affective prosody in schizophrenia: a methodological review.
Clinical Psychology Review, 22(6), 789-832.

Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. In
J. Cole (Ed.), Nebraska symposium on motivation, 1971 (pp. 207-283). Lincoln,
NE: University of Nebraska Press.

Ekman, P. (1973). Darwin and facial expression; a century of research in review. New
York: Academic Press.

Ekman, P. (1980). The face of man: expressions of universal emotions in a New Guinea
village. New York: Garland STPM Press.

Ekman, P. (1982). Emotion in the human face (2nd ed.). New York: Cambridge
University Press. Editions de la Maison des Sciences de l'Homme.

Ekman, P., & Davidson, R. J. (1994). The nature of emotion: fundamental questions.
New York: Oxford University Press.









Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto: Consulting
Psychologists Press.

Ekman, P., Levenson, R. W., & Friesen, W. V. (1983). Autonomic nervous system
activity distinguishes among emotions. Science, 221(4616), 1208-1210.

Ekman, P., & Rosenberg, E. L. (1997). What the face reveals: basic and applied studies
of spontaneous expression using the facial action coding system (FACS). New
York: Oxford University Press.

Eysenck, M. W., & Keane, M. (2000). Cognitive Psychology: A Student's Handbook.
Philadelphia: Taylor & Francis.

Field, T. M., Woodson, R., Greenberg, R., & Cohen, D. (1982). Discrimination and
imitation of facial expression by neonates. Science, 218(4568), 179-181.

Gilboa-Schechtman, E., Foa, E. B., & Amir, N. (1999). Attentional biases for facial
expressions in social phobia: the face-in-the-crowd paradigm. Cognition and
Emotion, 13(3), 305-318.

Gloor, P., Olivier, A., Quesney, L. F., Andermann, F., & Horowitz, S. (1982). The role of
the limbic system in experiential phenomena of temporal lobe epilepsy. Annals of
Neurology, 12(2), 129-144.

Hargrave, R., Maddock, R. J., & Stone, V. (2002). Impaired recognition of facial
expressions of emotion in Alzheimer's disease. Journal of Neuropsychiatry and
Clinical Neurosciences, 14(1), 64-71.

Hariri, A. R., Tessitore, A., Mattay, V. S., Fera, F., & Weinberger, D. R. (2001). The amygdala response to emotional stimuli: a comparison of faces and scenes. Neuroimage, 17, 317-323.

Heilman, K. M. (1987, February). Syndromes of facial affect processing. Paper presented
at the International Neuropsychological Society, Washington, DC.

Hess, W. R., & Brugger, M. (1943). Subcortical center of the affective defense reaction.
In K. Akert (Ed.), Biological order and brain organization: selected works of W. R.
Hess (pp. 183-202). Berlin: Springer-Verlag.

Humphreys, G. W., Donnelly, N., & Riddoch, M. J. (1993). Expression is computed
separately from facial identity, and it is computed separately for moving and static
faces: neuropsychological evidence. Neuropsychologia, 31(2), 173-181.

Izard, C. E. (1994). Innate and universal facial expressions: evidence from developmental
and cross-cultural research. Psychological Bulletin, 115(2), 288-299.

Johnson, M. H., Dziurawiec, S., Ellis, H., & Morton, J. (1991). Newborns' preferential
tracking of face-like stimuli and its subsequent decline. Cognition, 40(1-2), 1-19.









Kamachi, M., Bruce, V., Mukaida, S., Gyoba, J., Yoshikawa, S., & Akamatsu, S. (2001).
Dynamic properties influence the perception of facial expressions. Perception,
30(7), 875-887.

Kan, Y., Kawamura, M., Hasegawa, Y., Mochizuki, S., & Nakamura, K. (2002).
Recognition of emotion from facial, prosodic and written verbal stimuli in
Parkinson's disease. Cortex, 38(4), 623-630.

Kilts, C. D., Egan, G., Gideon, D. A., Ely, T. D., & Hoffman, J. M. (2003). Dissociable
neural pathways are involved in the recognition of emotion in static and dynamic
facial expressions. Neuroimage, 18(1), 156-168.

Klüver, H., & Bucy, P. C. (1937). "Psychic blindness" and other symptoms following bilateral temporal lobectomy. American Journal of Physiology, 119, 352-353.

Kohler, C. G., Bilker, W., Hagendoorn, M., Gur, R. E., & Gur, R. C. (2000). Emotion
recognition deficit in schizophrenia: association with symptomatology and
cognition. Biological Psychiatry, 48(2), 127-136.

Lander, K., & Bruce, V. (2004). Repetition priming from moving faces. Memory and
Cognition, 32(4), 640-647.

Lander, K., Christie, F., & Bruce, V. (1999). The role of movement in the recognition of
famous faces. Memory and Cognition, 27(6), 974-985.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1990). Emotion, attention, and the startle
reflex. Psychological Review, 97(3), 377-395.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1997). Motivated attention: affect,
activation and action. In P. J. Lang, R. F. Simons & M. T. Balaban (Eds.), Attention
and orienting: sensory and motivational processes. Hillsdale, NJ: Lawrence
Erlbaum.

Lang, P. J., Bradley, M. M., Cuthbert, B. N., & Patrick, C. J. (1993). Emotion and
psychopathology: a startle probe analysis. Progress in Experimental Personality and Psychopathology Research, 16, 163-199.

Lang, P. J., Greenwald, M. K., Bradley, M. M., & Hamm, A. O. (1993). Looking at
pictures: affective, facial, visceral, and behavioral reactions. Psychophysiology, 30,
261-273.

Leonard, C., Voeller, K. K. S., & Kuldau, J. M. (1991). When's a smile a smile? Or how
to detect a message by digitizing the signal. Psychological Science, 2, 166-172.

Levenson, R. W., Carstensen, L. L., Friesen, W. V., & Ekman, P. (1991). Emotion,
physiology, and expression in old age. Psychology and Aging, 6(1), 28-35.









Levenson, R. W., Ekman, P., & Friesen, W. V. (1990). Voluntary facial action generates
emotion-specific autonomic nervous system activity. Psychophysiology, 27(4),
363-384.

Monrad-Krohn, G. H. (1924). On the dissociation of voluntary and emotional innervation
in facial paralysis of central origin. Brain, 47, 22-35.

Morris, J. S., Friston, K. J., Buchel, C., Frith, C. D., Young, A. W., Calder, A. J., et al.
(1998). A neuromodulatory role for the human amygdala in processing emotional
facial expressions. Brain, 121 (Pt 1), 47-57.

Morris, M., Bradley, M. M., Bowers, D., Lang, P. J., & Heilman, K. M. (1991). Valence
specific hypoarousal following right temporal lobectomy [Abstract]. Journal of
Clinical and Experimental Neuropsychology, 14, 105.

Nelson, C. A., & Dolgrin, K. G. (1985). The generalized discrimination of facial
expressions by seven-month-old infants. Child Development, 56, 58-61.

Oatley, K., & Jenkins, J. M. (1996). Understanding emotions. Cambridge: Blackwell
Publishers.

O'Gorman, J. G. (1990). Individual differences in the orienting response: nonresponding
in nonclinical samples. Pavlov Journal of Biological Science, 25(3), 104-108;
discussion 109-110.

Okun, M. S., Bowers, D., Springer, U., Shapira, N., Malone, D., Rezai, A., Nuttin, B.,
Heilman, K. M., Morecraft, R., Rasmussen, S., Greenberg, B., Foote, K.,
Goodman, W. (2004). What's in a "smile?" Intra-operative observations of
contralateral smiles induced by deep brain stimulation. Neurocase, 10(4), 271-279.

Ornitz, E. M., Russell, A. T., Yuan, H., & Liu, M. (1996). Autonomic,
electroencephalographic, and myogenic activity accompanying startle and its
habituation during mid-childhood. Psychophysiology, 33(5), 507-513.

Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning.
Chicago: University of Illinois Press.

O'Toole, A. J., Roark, D. A., & Abdi, H. (2002). Recognizing moving faces: a
psychological and neural synthesis. Trends in Cognitive Science, 6(6), 261-266.

Pike, G. E., Kemp, R. I., Towell, N. A., & Phillips, K. C. (1997). Recognizing moving
faces: the relative contribution of motion and perspective view information. Visual
Cognition, 4(4), 409-438.

Puce, A., & Perrett, D. (2003). Electrophysiology and brain imaging of biological
motion. Philosophical Transactions of the Royal Society of London. Series B,
Biological Sciences, 358(1431), 435-445.

Puce, A., Syngeniotis, A., Thompson, J. C., Abbott, D. F., Wheaton, K. J., & Castiello,
U. (2003). The human temporal lobe integrates facial form and motion: evidence
from fMRI and ERP studies. Neuroimage, 19(3), 861-869.

Rinn, W. E. (1984). The neuropsychology of facial expression: a review of the
neurological and psychological mechanisms for producing facial expressions.
Psychological Bulletin, 95(1), 52-77.

Roberts, R. J., & Weerts, T. C. (1982). Cardiovascular responding during anger and fear
imagery. Psychological Reports, 50(1), 219-230.

Rosen, J. B., & Davis, M. (1988). Enhancement of the acoustic startle by electrical
stimulation of the amygdala. Behavioral Neuroscience, 102(2), 195-202.

Russell, J. A. (1978). Evidence of convergent validity on the dimensions of affect.
Journal of Personality and Social Psychology, 36, 1152-1168.

Russell, J. A., & Mehrabian, A. (1977). Evidence for a three-factor theory of emotions.
Journal of Research in Personality, 11, 273-294.

Russell, J. A., & Ridgeway, D. (1983). Dimensions underlying children's emotion
concepts. Developmental Psychology, 19, 785-804.

Schlosberg, H. (1952). The description of facial expressions in terms of two dimensions.
Journal of Experimental Psychology, 44(4), 229-237.

Schwartz, G. E., Weinberger, D. A., & Singer, J. A. (1981). Cardiovascular
differentiation of happiness, sadness, anger, and fear following imagery and
exercise. Psychosomatic Medicine, 43(4), 343-364.

Singh, S. D., Ellis, C. R., Winton, A. S., Singh, N. N., Leung, J. P., & Oswald, D. P.
(1998). Recognition of facial expressions of emotion by children with attention-
deficit hyperactivity disorder. Behavior Modification, 22(2), 128-142.

Sorce, J., Emde, R., Campos, J., & Klinnert, M. (1985). Maternal emotional signaling: its
effect on the visual cliff behavior of 1-year-olds. Developmental Psychology, 21(1),
195-200.

Spielberger, C. D. (1983). State-Trait Anxiety Inventory. Palo Alto, CA: Mind Garden.

Sprengelmeyer, R., Rausch, M., Eysel, U. T., & Przuntek, H. (1998). Neural structures
associated with recognition of facial expressions of basic emotions. Proceedings of
the Royal Society of London. Series B, Biological Sciences, 265(1409), 1927-1931.

Sprengelmeyer, R., Young, A. W., Calder, A. J., Karnat, A., Lange, H., Homberg, V., et
al. (1996). Loss of disgust. Perception of faces and emotions in Huntington's
disease. Brain, 119 (Pt 5), 1647-1665.

Sprengelmeyer, R., Young, A. W., Mahn, K., Schroeder, U., Woitalla, D., Buttner, T., et
al. (2003). Facial expression recognition in people with medicated and unmedicated
Parkinson's disease. Neuropsychologia, 41(8), 1047-1057.

Tanaka, K. (1992). Inferotemporal cortex and higher visual functions. Current Opinion in
Neurobiology, 2, 502-505.

Teunisse, J. P., & de Gelder, B. (2001). Impaired categorical perception of facial
expressions in high-functioning adolescents with autism. Neuropsychology,
Development, and Cognition. Section C, Child Neuropsychology, 7(1), 1-14.

Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M.
A. Goodale & R. J. W. Mansfield (Eds.), Analysis of Visual Behavior (pp. 549-
586). Cambridge: MIT Press.

Walker, D. L., & Davis, M. (2002). Quantifying fear potentiated startle using absolute
versus proportional increase scoring methods: implications for the neurocircuitry of
fear and anxiety. Psychopharmacology, 164, 318-328.

Wehrle, T., Kaiser, S., Schmidt, S., & Scherer, K. R. (2000). Studying the dynamics of
emotional expression using synthesized facial muscle movements. Journal of
Personality and Social Psychology, 78(1), 105-119.

Wundt, W. (1897). Outlines of psychology (C. H. Judd, Trans.). New York: Gustav E.
Stechert.

Young, A. W., Aggleton, J. P., Hellawell, D. J., Johnson, M., Broks, P., & Hanley, J. R.
(1995). Face processing impairments after amygdalotomy. Brain, 118 (Pt 1), 15-24.

Young, A. W., Rowland, D., Calder, A. J., Etcoff, N. L., Seth, A., & Perrett, D. I. (1997).
Facial expression megamix: tests of dimensional and category accounts of emotion
recognition. Cognition, 63(3), 271-313.

Zeki, S. (1992). The visual image in mind and brain. Scientific American, 267(3), 68-76.

Zihl, J., von Cramon, D., & Mai, N. (1983). Selective disturbance of movement vision
after bilateral brain damage. Brain, 106 (Pt 2), 313-340.

BIOGRAPHICAL SKETCH

Utaka Springer was born in Menomonie, WI, and received his B.S. in biology from
Harvard University. After gaining research experience in cognitive neuroscience at the
McKnight Brain Institute in Gainesville, FL, he entered the doctoral program in clinical
psychology at the University of Florida, specializing in neuropsychology.




Full Text

PAGE 1

DIFFERENCES IN PSYCHOPHYSIOLOGIC REACTIVITY TO STATIC AND DYNAMIC DISPLAYS OF FACIAL EMOTION By UTAKA S. SPRINGER A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLOR IDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE UNIVERSITY OF FLORIDA 2005

PAGE 2

Copyright 2005 by Utaka S. Springer

PAGE 3

ACKNOWLEDGMENTS This research was supported by R01 MH62539. I am grateful to Dawn Bowers for her patience, availability, and expertise in advising this project. I would like to thank the members of the Cognitive Neuroscience Laboratory for their support throughout this project. I would like to extend special thanks to Shauna Springer, Alexandra Rosas, John McGetrick, Paul Seignourel, Lisa McTeague, and Gregg Selke. iii

PAGE 4

TABLE OF CONTENTS page ACKNOWLEDGMENTS.................................................................................................iii LIST OF TABLES.............................................................................................................vi LIST OF FIGURES..........................................................................................................vii ABSTRACT.....................................................................................................................viii 1 INTRODUCTION........................................................................................................1 Perceptual Differences for Static and Dynamic Expressions.......................................3 Cognitive Studies...................................................................................................4 Neural Systems and the Perception of Movement versus Form............................5 Dimensional versus Categorical Models of Emotion...................................................7 Dimensional Models of Emotion...........................................................................7 Categorical Models of Emotion...........................................................................10 Emotional Responses to Viewing Facial Expressions................................................12 2 STATEMENT OF THE PROBLEM..........................................................................15 Specific Aim I.............................................................................................................16 Specific Aim II...........................................................................................................16 3 METHODS.................................................................................................................18 Participants.................................................................................................................18 Materials.....................................................................................................................19 Collection of Facial Stimuli: Video Recording..................................................19 Selection of Facial Stimuli..................................................................................20 Digital Formatting of Facial Stimuli...................................................................21 Dynamic Stimuli.........................................................................................................22 Final Selection of Stimuli for Psychophysiology Experiment............................23 Design Overview and Procedures...............................................................................23 Psychophysiologic Measures......................................................................................26 Acoustic Startle Eyeblink Reflex (ASR).............................................................26 Skin Conductance Response (SCR)....................................................................27 Data Reduction of Psychophysiology Measures........................................................27 Statistical Analysis......................................................................................................28 iv

PAGE 5

4 RESULTS...................................................................................................................30 Hypothesis 1: Differences in Reactivity to Dynamic vs. Static Faces.......................30 Startle Eyeblink Response...................................................................................31 Skin Conductance Response (SCR)....................................................................31 Self-Reported Arousal.........................................................................................32 Hypothesis 2: Emotion Modulation of Startle by Expression Categories..................32 Other Patterns of Emotional Modulation by Viewing Mode......................................35 Skin Conductance Response................................................................................35 Self-Reported Arousal.........................................................................................36 Self-Reported Valence.........................................................................................37 5 DISCUSSION.............................................................................................................40 Interpretation and Relationship to Other Findings.....................................................41 Methodological Issues Regarding Facial Expressions...............................................44 Other Considerations of the Present Findings............................................................46 Limitations of the Current Study................................................................................47 Directions for Future Research...................................................................................48 APPENDIX A STATIC STIMULUS SET.........................................................................................51 B DYNAMIC STIMULUS SET....................................................................................52 LIST OF REFERENCES...................................................................................................53 BIOGRAPHICAL SKETCH.............................................................................................60 v

PAGE 6

LIST OF TABLES Table page 3-1 Demographic characteristics of experimental participants......................................19 3-2 Mean (SD) recognition rates, valence, and arousal of static and dynamic face stimuli.......................................................................................................................23 4-1 Mean (SD) dependent variable scores by Viewing Mode.........................................30 4-2 Mean (SD) dependent variable scores by Viewing Mode and Expression Category...................................................................................................................33 vi

PAGE 7

LIST OF FIGURES Figure page 1-1 Neuroanatomic circuitry of the startle reflex...........................................................13 3-1 Temporal representation of dynamic and static stimuli...........................................22 4-1 Startle eyeblink T-scores by expression category....................................................34 4-2 Self-reported arousal by expression category..........................................................37 4-3 Self-reported valence by expression category..........................................................38 vii

PAGE 8

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science DIFFERENCES IN PSYCHOPHYSIOLOGIC REACTIVITY TO STATIC AND DYNAMIC DISPLAYS OF FACIAL EMOTION By Utaka S. Springer May 2005 Chair: Dawn Bowers Major Department: Clinical and Health Psychology Rationale. Recent studies suggest that many neurologic and psychiatric disorders are associated with impairments in accurately interpreting facial expressions. These studies have typically used photographic stimuli, yet cognitive and neurobiological research suggests that the perception of moving (dynamic) expressions is different from the perception of static expressions. Moreover, in day-to-day interactions, humans generally view faces while they move. This study had two aims: (1) to elucidate differences in physiological reactivity [i.e., startle eyeblink reflex and the skin conductance response (SCR)] while viewing static versus dynamic facial expressions, and (2) to examine patterns of reactivity across specific facial expressions. It was hypothesized that viewing dynamic faces would be associated with greater physiological reactivity and that expressions of anger would be associated with potentiated startle eyeblink responses relative to other facial expressions. viii

PAGE 9

Methods. Forty young adults viewed two slideshows consisting entirely of static or dynamic facial expressions. Expressions represented the emotions of anger, fear, happiness, and neutrality. Psychophysiological measures included the startle eyeblink reflex and SCR. Self-reported valence and arousal were also recorded for each stimulus. Results. Data were analyzed using repeated measures analyses of variance. The participants exhibited larger startle eyeblink responses while viewing dynamic versus static facial expressions. Differences in SCR approached significance (p = .059), such that dynamic faces tended to induce greater responses than static ones. Self-reported arousal was not significantly different during either condition. Additionally, the startle reflex was significantly greater for angry expressions, and comparably smaller for the fearful, neutral, and happy expressions, across both modes of presentation. Self-reported differences in reactivity between types of facial expressions are discussed in the context of the psychophysiology results. Conclusions. The current study found evidence supporting greater psychophysiological reactivity in young adults while they viewed dynamic compared to static facial expressions. Additionally, expressions of anger induced relatively higher startle responses relative to other expressions, including fear. It was concluded that angry expressions, representing personally directed threat, induce a greater motivational propensity to withdraw or escape. These findings highlight an important distinction between initial stimulus processing (i.e., expressions of fear or anger) and motivated behavior. ix

PAGE 10

CHAPTER 1 INTRODUCTION The ability to successfully interpret facial expressions is a fundamental aspect of normal life. An immense number of configurations across the landscape of the human face are made possible by 44 pairs of muscles anchored upon the curving surfaces of the skull. A broad smile, a wrinkled nose, widened eyes, a wink all convey emotional content important for social interactions. Darwin (1872) suggested that successful communication through nonverbal means such as facial expressions has promoted survival of the human species. Indeed, experimental research has demonstrated that infants develop an understanding of their mothers facial expressions rapidly and automatically, and that they use these signals to guide their safe behavior (Field, Woodson, Greenberg, & Cohen, 1982; Johnson, Dziurawiec, Ellis, & Morton, 1991; Nelson & Dolgrin, 1985; Sorce, Emde, Campos, & Klinnert, 1985). The accurate decoding of facial signals, then, can play a protective role as well as a communicative one. A growing body of empirical research suggests that many conditions are associated with impaired recognition of facial expressions. A list of neurologic and psychiatric conditions within which studies have associated impaired interpretation of facial expressions include autism, Parkinsons disease, Huntingtons disease, Alzheimers disease, schizophrenia, body dysmorphic disorder, attention-deficit/hyperactivity disorder, and social phobia (Buhlmann, McNally, Etcoff, Tuschen-Caffier, & Wilhelm, 2004; Edwards, Jackson, & Pattison, 2002; Gilboa-Schechtman, Foa, & Amir, 1999; Kan, 1

PAGE 11

2 Kawamura, Hasegawa, Mochizuki, & Nakamura, 2002; Singh et al., 1998; Sprengelmeyer et al., 1996; Sprengelmeyer et al., 2003; Teunisse & de Gelder, 2001). These deficits in processing facial expressions appear to exist above and beyond disturbances in basic visual or facial identify processing and may reflect disruption of cortical and subcortical networks for processing nonverbal affect (Bowers, Bauer, & Heilman, 1993). In many cases, impairments in the recognition of specific facial expressions have been discovered. For example, bilateral damage to the amygdala has been associated with the inability to recognize fearful faces (Adolphs, Tranel, Damasio, & Damasio, 1994). One potential problem with these clinical studies is that they most often use static, typically photographic, faces as stimuli. This may be problematic for two reasons. First, human facial expressions usually consist of complex patterns of movement. They can flicker across the face in a fleeting and subtle manner, develop slowly, or arise with sudden intensity. The use of static stimuli in research and clinical evaluation, then, has poor ecological validity. Second, mounting evidence suggests that there are fundamental cognitive and neural differences between the perception of static-based and dynamic facial expressions. These differences, which can be subdivided into evidence from cognitive and more biologically based studies, are described in more detail in the following sections. The preceding highlights the need to incorporate dynamic facial expression stimuli in the re-evaluation of conditions currently associated with facial expression processing deficits, as argued by Kilts and colleagues (2003). This line of research would greatly benefit from the creation of a standardized battery of dynamic expression stimuli. Before

PAGE 12

3 a more ecologically valid dynamic battery can be developed, it is necessary to more precisely characterize how normal individuals respond to different types of facial expression stimuli. Although cognitive, behavioral, and neural systems have been examined in the comparing responses associated with static and dynamic face perception, no studies to date have compared differences in emotional reactivity using psychophysiologic indices of arousal and valence (i.e., startle reflex, skin conductance response). The two major goals of the present study, then, are as follows: first, to empirically characterize psychophysiologic differences in how people respond to dynamic versus static emotional faces, and second, to determine whether psychophysiologic response patterns differ when individuals view different categories of static and dynamic facial expressions (e.g., anger, fear, or happiness). The following sections provide the background for the current study in three parts: (1) evidence that suggests cognitive and neurobiological differences in the perception of static versus dynamic expressions, (2) dimensional and categorical approaches to studying emotion, and (3) emotional responses to viewing facial expressions. Specific hypotheses and predictions are presented in the next chapter. Perceptual Differences for Static and Dynamic Expressions Evidence that individuals respond differently to static and dynamic displays of emotion comes from two major domains of research. The first major domain is cognitive research. With regard to the present study, this refers to the study of the various internal mental processes involved in the perception of emotions in others (i.e., recognition and discrimination), as inferred by overt responses. The second major domain is neurobiological research. Again, specific to the present study, this refers to the physiological and neurological substrates involved during or after emotion perception.

PAGE 13

4 The following sections review the literature from these two domains with regard to differences in perception of static and dynamic expressions. Cognitive Studies Recent research suggests that facial motion influences several cognitive aspects of face perception. First, facial motion improves recognition of familiar faces, especially in less-than-optimal visual conditions (Burton, Wilson, Cowan, & Bruce, 1999; Lander, Christie, & Bruce, 1999). For example, in conditions such as low lighting or blurriness, the identity of a friend or a famous actor is more easily discerned through face perception if the face is moving. It is less clear whether this advantage of movement is also conferred to the recognition of unfamiliar faces (Christie & Bruce, 1998; Pike, Kemp, Towell, & Phillips, 1997). As reviewed by OToole et al. (2002), there are two prevailing hypotheses on how facial motion enhances face recognition. According to the first, facial movement provides additional visual information that helps the viewer assemble a three-dimensional mental construct of the face (e.g., Pike et al., 1997). A second view is that certain movement patterns may be unique and characteristic of a particular individual (i.e., movement signatures). These unique movement signatures, such as Elvis Presleys lip curl, are thought to supplement the available structural information of the face (e.g., Lander & Bruce, 2004). Either or both hypotheses can account for observations that familiar individuals are more readily recognized from dynamic than static pictures. One question that naturally arises is whether facial motion also increases recognition and discrimination of discrete types of emotional expressions. Like familiar faces, emotional expressions on the face have been shown to be similar across individuals and even across cultures (Ekman, 1973; Ekman & Friesen, 1976). Leonard and

PAGE 14

5 colleagues (1991) found that categorical judgments of happiness during the course of a smile occurred at the point of most rapid movement change in the actors facial configuration. Werhle and colleagues (2000) reported that recognition of discrete emotions was enhanced through the use of dynamic versus static synthetic facial stimuli. Other research extended the findings of Werhle et al. by finding that certain speeds of facial expressions are optimal for recognition, depending on the specific expression type (Kamachi et al., 2001). Altogether, these studies suggest that motion does facilitate the recognition of facial expressions. Some research suggests that the subjectively rated intensity of emotional displays might also be influenced by a motion component. For example, a study by Atkinson and colleagues (2004) suggested that the perceived intensity of emotional displays is dependent on motion rather than on form. Participants in this study judged actors posing full-body expressions of anger, disgust, fear, happiness, and sadness, both statically and dynamically. Dynamic displays of emotion were judged as more intense than static ones, both in normal lighting and in degraded lighting (i.e., in darkness with points of light attached to the actors joints and faces). Although this evidence suggests that dynamic expressions of emotion are indeed perceived as more intense than static ones, research on this topic has been sparse. Neural Systems and the Perception of Movement versus Form Previous research also suggests that distinct neural systems are involved in the perception of static and dynamic faces. A large body of evidence convincingly supports the existence of two anatomically distinct visual pathways in the cerebral cortex (Ungerleider & Mishkin, 1982). One visual pathway is involved in motion detection (V5) while the other visual pathway is involved in processing form or shape information

PAGE 15

6 (V3, V4, inferotemporal cortex) [for review, see Zeki (1992)]. As one example of evidence that visual form is processed relatively independently, microelectrode recordings of individual neurons in the inferotemporal cortex of monkeys have been shown to respond preferentially to simple, statically presented shapes (Tanaka, 1992). Preferential single-cell responses to more complex types of statically presented stimuli, such as faces, have also been shown (DeSimone, 1991). An example of evidence for the existence of a specialized motion pathway is provided by a fascinating case study describing a patient with a brain lesion later found to be restricted to area V5 [Zihl et al., 1983; as discussed in Eysenck (2000)]. This woman was adequate at locating stationary objects by sight, she had good color discrimination, and her stereoscopic depth perception was normal; however, her perception of motion was severely impaired. The patient perceived visual events as if they were still photographs. People would suddenly appear here or there, and when she poured her tea, the fluid appeared to be frozen, like a glacier. Humphreys and colleagues (1993) described findings from two brain-impaired patients who displayed different patterns of performance during the perception of static and dynamic facial expressions. One patient was impaired at discriminating facial expressions from still photographs of faces, but performed normally when asked to make judgments of facial expressions depicted by moving dots of light. This patient had suffered a stroke that involved the bilateral occipital lobes and extended anteriorly towards the temporal lobes (i.e., the form visual pathway). The second patient was poor at judging emotional expressions from both the static and dynamic displays despite being relatively intact in other visual-perceptual tasks of comparable complexity. This patient had two parietal lobe lesions, one in each cerebral hemisphere. Taken together,

PAGE 16

7 the different patterns of performance from these two patients suggest dissociable neural pathways between recognition of static and dynamic facial expressions. Additional work with microelectrode recordings in non-human primates suggests that static and dynamic facial stimuli are processed by visual form and visual motion pathways, respectively, and converge at the area of the superior temporal sulcus (STS) (Puce & Perrett, 2003). A functional imaging study indicates that the STS region performs the same purpose in humans (Puce et al., 2003). In monkeys, specific responses in individual neurons of the STS region have shown sensitivity to static facial details such as eye gaze and the shape of the mouth, as well as movement-based facial details, such as different types of facial motion (Puce & Perrett, 2003). The amalgamation of data from biological studies indicates that static and dynamic components of facial expressions appear to be processed by separable visual streams that eventually converge within the region of the STS. The next section provides a background for two major conceptual models of emotion. This information is then used as a backdrop for the current study. Dimensional versus Categorical Models of Emotion Dimensional Models of Emotion Historically, there have been two major approaches in the study of emotion. In what is often described as a dimensional model, emotions are characterized using chiefly two independent, bipolar dimensions (e.g., Schlosberg, 1952; Wundt, 1897). The first dimension, valence, has been described in different ways (i.e., pleasant to unpleasant, positive to negative, appetitive to aversive); however, it generally refers to a range of positive to negative feeling. The second dimension, arousal, represents a continuum ranging from very low (e.g., calm, disinterest, or a lack of enthusiasm) to very high (e.g.,

PAGE 17

8 extreme alertness, nervousness, or excitement). These two orthogonal scales create a two-dimensional affective space, across which emotions and emotional responses might be characterized. Other dimensional approaches have included an additional scale in order to more fully define the range of emotional judgments. This third scale has been variously identified as preparation for action, aggression, attention-rejection, dominance, and potency, and has been helpful for differentiating emotional concepts (Averill, 1975; Bush, 1973; Heilman, 1987, February; Russell & Mehrabian, 1977; Schlosberg, 1952). For instance, fear and anger might be indistinguishable within a two-dimensional affective space both may be considered negative/unpleasant emotions high in arousal. A third dimension such as dominance or action separates these two emotions in three-dimensional affective space. Briefly, dominance refers to the range of feeling dominant (i.e., having total power, control, and influence) to submissive (i.e., feeling a lack of control or unable to influence a situation). This construct has been discovered statistically through factor analytic methods based on the work of Osgood, Suci, and Tannenbaum (1957). Action (preparation for action to non-preparation for action), on the other hand, was proposed by Heilman [1987; from Bowers et al. (1993)]. This construct was based on neuropsychological evidence and processing differences between the anterior portions of the right and left hemispheres (e.g., Morris, Bradley, Bowers, Lang, & Heilman, 1991). Thus, in the present example for differentiating fear and anger, anger is associated with feelings of dominance or preparation for action, whereas fear is associated with feelings of submission (lack of dominance) or a lack of action (i.e., the freezing response in rats with a sudden onset of fear). In this way, then, a third

PAGE 18

9 dimension can sometimes help distinguish between emotional judgments that appear similar in two-dimensional affective space. Generally, however, the third dimension has not been a replicable factor across studies or cultures (Russell, 1978; Russell & Ridgeway, 1983). The present study incorporates only the dimensions of valence and arousal. Emotion researchers have measured emotional valence and arousal in several ways, including: (1) overt behaviors (e.g., EMG activity of facial expression muscles such as corrugator or zygomatic muscles), (2) conscious thoughts or self-reports about ones emotional experience, usually measured by ordinal scales, and (3) central and physiologic arousal and activation, such as electrodermal activity, heart rate, and the magnitude of the startle reflex (Bradley & Lang, 2000). All three components of emotion have been measured reliably in laboratory settings. Among the physiological markers of emotion, the startle eyeblink typically is used as an indicator of the valence of an emotional response (Lang, Bradley, & Cuthbert, 1990). The startle reflex is an automatic withdrawal response to a sudden, intense stimulus, such as a flash of light or a loud burst of noise. More intense eyeblink responses, measured from electrodes over the orbicularis oculi muscles, have been found in association with negative/aversive emotional material relative to neutral material. Less intense responses have been found for positive/appetitive material, relative to neutral material. Palm sweat, or SCR, is another physiological marker of emotion and typically is used as an indicator of sympathetic arousal (Bradley & Lang, 2000). Higher SCR has been shown to be associated with higher self-reported emotional arousal, relatively independent of valence (e.g., Lang, Greenwald, Bradley, & Hamm, 1993).

PAGE 19

10 Categorical Models of Emotion A second major approach to the study of emotion posits that emotions are actually represented by basic, fundamental categories (e.g., Darwin, 1872; Izard, 1994). Support for the discrete emotions view comes from two major lines of evidence: cross-cultural studies and neurobiological findings [although cognitive studies have also been conducted, e.g., Young et al. (1997)]. With regard to the former line of evidence, Darwin (1872) argued that specific emotional states are evidenced by specific, categorical patterns of facial expressions. He suggested that these expressions contain universal configurations that are displayed by people throughout the world. Ekman and Friesen (1976) developed this idea further and created an atlas describing the precise muscular configurations associated with each of six basic emotional expressions (e.g., surprise, fear, disgust, anger, happiness, and sadness). In a cross-cultural study, Ekman (1972) found that members of a preliterate tribe in the highlands of New Guinea were able to recognize the meaning of these expressions with a high degree of accuracy. Further, photographs of tribal members who had been asked to pose various emotions were shown to college students in the United States. The college students were able to recognize the meanings of the New Guineans emotions, also with a high degree of accuracy. Additional evidence supporting the categories of emotion conceptualization is derived from the neurobiological literature. For instance, electrical stimulation of highly specific regions of the brain has been associated with distinct emotional states. Hess and Brgger [1943; from Oatley & Jenkins (1996)] discovered that angry behavior in cats, dubbed sham rage (Cannon, 1931), were elicited with direct stimulation of the hypothalamus. Fearful behavior and autonomic changes have been induced (both in rats and humans) with stimulation of the amygdala, an almond-shaped limbic structure within

PAGE 20

11 the anterior temporal lobe. These changes include subjective feelings of fear and anxiety as well as freezing, increased heart rate, and increased levels of stress hormones [for review, see Davis & Whalen (2001)]. Positive feelings have also been elicited with direct stimulation of a specific neural area. Okun and colleagues (2004) described a patient exuding smiles and feelings of euphoria in association with deep brain stimulation of the nucleus accumbens region. These studies of electrical stimulation in highly focal areas in the brain appear to lend credence to the hypothesis that emotions can be categorized into discrete subtypes. The case for categorical emotions has been further bolstered with evidence that different emotional states have been associated with characteristic psychophysiologic responses. Several studies conducted by Ekman, Levenson, and Friesen (Ekman, Levenson, & Friesen, 1983; Levenson, Carstensen, Friesen, & Ekman, 1991; Levenson, Ekman, & Friesen, 1990) involved participants reliving emotional memories and/or receiving coaching to reconstruct their facial muscles to precisely match the configurations associated with Ekmans six major emotions (Ekman & Friesen, 1976). The results of these studies indicated that the response pattern from several indices of autonomic nervous system activity (specifically, heart rate, finger temperature, skin conductance, and somatic activity) could reliably distinguish between positive and negative emotions, and even among negative emotions of disgust, fear, and anger (Ekman et al., 1983; Levenson et al., 1991; Levenson et al., 1990). Sadness was associated with a distinctive, but less reliable pattern. Other researchers also have described characteristic psychophysiologic response patterns associated with discrete emotions (Roberts & Weerts, 1982; Schwartz, Weinberger, & Singer, 1981).

PAGE 21

12 Emotional Responses to Viewing Facial Expressions Emotion-specific psychophysiologic responses have been elicited in individuals viewing facial displays of different types of emotions. For instance, Balaban and colleagues (1995) presented photographic slides of angry, neutral, and happy facial expressions to 5-month-old infants. During the presentation of each slide, a brief acoustic noise burst was presented to elicit the eyeblink component of the startle reflex. Angry expressions were associated with significantly stronger startle responses than happy expressions, suggesting that at least in babies, positive and negative facial expressions could emotionally modulate the startle reflex. This phenomenon was explored in a recent study using an adult sample, but with the addition of fearful expressions as a category (Bowers et al., 2002). Thirty-six young adults viewed static images of faces displaying anger, fear, happy, and neutral expressions. Acoustic startle probes elicited the eyeblink reflex during the presentation of each emotional face. Similar to Balabans (1995) study, responses to angry faces were associated with significantly stronger startle reflexes than responses to other types of expressions. Startle eyeblinks during the presentation of neutral, happy, and fearful expressions did not significantly differ in this study. The observations that fear expressions failed to prime or enhance startle reactivity seem counterintuitive for two reasons (Bowers et al., 2002). First, many studies have indicated that the amygdala appears to play a role in danger detection and processing fearful material. Stimulation of the amygdala induces auras of fear (Gloor, Olivier, Quesney, Andermann, & Horowitz, 1982), while bilateral removal or damage of the amygdala is characterized by behavioral placidity and blunted fear for threatening material (Adolphs et al., 1994; Klver & Bucy, 1937). A few studies have even

PAGE 22

13 suggested that the amygdala is particularly important for identification of fearful facial expressions (Adolphs et al., 1994; J. S. Morris et al., 1998). A second reason why the null effect of facial fear to startle probes seems counterintuitive is derived from the amygdalas role in the startle reflex. Davis and colleagues mapped the neural circuitry of the startle reflex using an animal model [see Figure 1-1; for a review, see Davis (1992)]. Their work has shown that through direct neural projections, the amygdala serves to amplify the startle circuitry in the brainstem under conditions of fear and aversion. In light of this research, the finding that fearful faces exerted no significant modulation effects on the startle circuitry (Bowers et al., 2002) does appear counterintuitive, at least from an initial standpoint. Stimulus Input Sensory Cortex Sensory Thalamus N ucleus Reticularis Pontis Caudalis Potentiated Startle Ventral Central Gray (Freezing) Lateral Region Hypothalamus Autonomic NS (HR, BP) Dorsal Central Gray (Fight/Flight) Lateral Central Nucleus Nucleus Amygdala Figure 1-1. Neuroanatomic circuitry of the startle reflex (adapted from Lang et al., 1997) The authors, however, provided a plausible explanation for this result (Bowers et al., 2002). They underscored the importance of the amygdalas role in priming the subcortical startle circuitry during threat-motivated behavior. Angry faces represent personally directed threat, and, as demonstrated by the relatively robust startle response they found, induce a motivational propensity to withdraw or escape from that threat. Fearful faces, on the other hand, reflect potential threat to the actor, rather than to the perceiver. It is perhaps unsurprising in this light that fearful faces exerted significantly

PAGE 23

14 less potentiation of the startle reflex. The preparation for action dimension (Heilman, 1987) might account for this difference between responses to fearful and angry faces perhaps the perception of fear in another face involves less propensity or motivation to act than personally directed threat. Regardless of the interpretation, these findings suggest that different types of emotional facial expressions are associated with different, unique patterns of reactivity as measured by the startle reflex (also referred as emotional modulation of the startle reflex). The question remains as to whether the pattern of startle reflex responses while viewing different facial expressions is different when viewing dynamic versus static emotional facial expressions. This has only been evaluated previously for static facial expressions, but not for dynamic ones. It seems reasonable to hypothesize that the two patterns of modulation will be similar, as both dynamic and static visual information must travel from their separate pathways to converge on the area of the cortex that enables one to apply meaning (STS area of the cortex). Across emotions, the question also remains as to whether overall differences in physiologic reactivity exist. These questions are tested empirically in the present study.

PAGE 24

CHAPTER 2 STATEMENT OF THE PROBLEM Historically, the characterization of expression perception impairments in neurologic and psychiatric populations has been largely based on research using static face stimuli. The preceding literature suggests this may be problematic, as fundamental cognitive and neurobiological differences exist in the perception of static and dynamic displays of facial emotion. A long-term goal is to develop a battery of dynamic face stimuli that would enable investigators and clinicians to better evaluate facial expression interpretation in neurologic and psychiatric conditions. Before this battery can be developed, however, an initial step must be taken to characterize differences and similarities in the perception of static and dynamic expressions. To date, no study has used psychophysiological methods to investigate this question. This study investigates the emotional responses that occur in individuals as a result of perceiving the emotions of others via facial expressions. The two major aims of the present study are to empirically determine in normal, healthy adults (1) whether dynamic versus static faces induce greater psychophysiologic reactivity and self-reported arousal and (2) whether reactions to specific types of facial expressions (e.g., anger, fear, happiness) resolve into distinct patterns of emotional modulation based on the mode of presentation (i.e., static, dynamic). To examine these aims, normal individuals were shown a series of static or dynamically presented facial expressions (fear, anger, happy, neutral) while psychophysiologic measures (skin conductance, startle eyeblink) were simultaneously acquired. Following presentation of each facial stimulus, subjective 15

PAGE 25

16 ratings of valence and arousal were obtained. Thus, the primary dependent variables were included: (a) skin conductance as a measure of psychophysiologic arousal; (b) startle eyeblink as a measure of valence; and (c) subjective ratings of valence and arousal. Specific Aim I To test the hypothesis that dynamically presented emotional faces will induce greater psychophysiologic reactivity and self-reported arousal than statically presented faces. Based on the reviewed literature, it is hypothesized that the perception of dynamic facial expressions will be associated with greater overall physiological reactivity than will the perception of static facial expressions. This hypothesis is based on evidence suggesting that dynamic displays of emotion are judged as more intense, as well as the fact that the perception of motion in facial expressions appears to provide more visual information to the viewer, such as three-dimensional structure or movement signatures. The following specific predictions are made: (a) the skin conductance response will be significantly larger when subjects view dynamic than static faces; (b) overall startle magnitude will be greater when subjects view dynamic versus static faces; and (c) subjective ratings of arousal will be significantly greater for dynamic versus statically presented faces. Specific Aim II To test the hypothesis that the pattern of physiologic reactivity (i.e., emotional modulation) to discrete facial emotions (i.e., fear, anger, happiness, neutral) will be similar for both static and dynamically presented facial expressions. Based on preliminary findings from our laboratory, we expected that anger expressions would induce heightened reactivity (as indexed by the startle eyeblink reflex) than fear, happiness, or neutral expressions. We hypothesized that this pattern of emotion

PAGE 26

17 modulation will be similar for both static and dynamic expressions, since both modes of presentation presumably gain access to neural systems that underlie interpretation of emotional meaning. The following specific predictions are made: (a) for both static and dynamic modes of presentation, the startle response (as indexed by T-scores) for anger expressions will be significantly larger than those for fear, happy, and neutral ones, while magnitudes for fear, happy, and neutral expressions will not be significantly different from each other.

PAGE 27

CHAPTER 3 METHODS Participants Participants consisted of 51 (27 females, 24 males) healthy, right-handed adults recruited from the University of Florida campus. Exclusion criteria included: (1) a history of significant neurologic trauma or disorder, (2) a history of any psychiatric or mood disorder, (3) a current prescription for mood or anxiety-altering medication, (4) a history of learning disability, and (5) clinical elevations on the Beck Depression Inventory (BDI) (Beck, 1978) or the State-Trait Anxiety Inventory (STAI) (Spielberger, 1983). Participants gave written informed consent according to university and federal regulations. All participants who completed the research protocol received $25. Eleven of the 51 subjects were excluded from the final data analyses. They included 8 subjects whose psychophysiology data were corrupted due to excessive artifact and/or absence of measurable blink responses. The data from 3 subjects were not analyzed due to clinical elevations on mood questionnaires [BDI (N=2; scores of 36 and 20); STAI (N=1; State score = 56, Trait score = 61)]. Demographic variables for the remaining 40 participants are given in Table 3-1. As shown, subjects ranged in age from 18 to 43 years (M=22.6, SD=4.3) and had 12 to 20 years of education (M=15.3, SD=1.7). BDI scores ranged from 0 to 9 (M=3.8, SD=2.9), STAI-State scores ranged from 20 to 46 (M=29.2, SD=6.9), and STAI-Trait scores ranged from 21 to 47 (M=31.0, SD=6.9). The racial representation was 52.5% Caucasian, 18

PAGE 28

19 17.5% African American, 12.5% Hispanic/Latino, 12.5% Asian, 2.5% Native American, and 2.5% Multiracial. Table 3-1 Demographic characteristics of experimental participants Measure Mean (SD) Range Age 22.6 (4.3) 18 43 Education 15.3 (1.7) 20-Dec GPA 3.48 (0.49) 2.70 3.96 BDI 3.8 (2.9) 0 9 STAI-State 29.2 (6.9) 20 46 STAI-Trait 31.0 (6.9) 2147 Note. BDI = Beck Depression Inventory; GPA = Grade Point Average; STAI = State-Trait Anxiety Inventory. Materials Static and dynamic versions of angry, fearful, happy, and neutral facial expressions from 12 untrained actors (6 males, 6 females) were used as stimuli in this study. These emotions were chosen based on previous findings from our laboratory (Bowers et al., 2002). The following sections describe the procedure used for eliciting, recording, and digitally standardizing these stimuli. Collection of Facial Stimuli: Video Recording The stimulus set for the present study was originally drawn from 15 University of Florida graduate students (Clinical and Health Psychology) and undergraduates who were asked to pose various facial expressions. These untrained actors ranged in age from 19 to 32 years and represented Caucasian, African American, Hispanic, and Asian ethnicities. All provided informed consent to allow their faces to be used as stimuli in research studies.

PAGE 29

20 The videorecording session took place in the Cognitive Neuroscience Laboratory, where the actor sat comfortably in a chair in front of a continuously recording black-and-white Pulnix videocamera. The camera was connected to a Sony videorecorder and located approximately 2 meters in front of the actor. The visual field of the videocamera was adjusted to include only the face of the actor. A Polaris light meter was used to uniformly balance the incident light upon the patients left and right sides to within 1 lux of brightness. To minimize differences in head position and angle between captured facial expressions, the actors head was held in one position by a rigid immobilization cushion (Med-Tec, Inc.) during the entirety of the recording session. Prior to the start of videorecording, the experimenter verified that the actor was comfortable and that the cushion did not obstruct the view of the actors face. A standardized format was followed for eliciting the facial expressions. The actor was asked to pose 6 emotional expressions (i.e., anger, disgust, fear, happiness, sadness, and neutral) and to make each expression intense enough so that others could easily decipher the intended emotion. For neutral, the actor was told to look into the camera lens with a relaxed expression and blink once. Before each expression type was recorded, visual examples from Ekman & Friesens Pictures of Facial Affect (Ekman & Friesen, 1976) and Bowers and colleagues Florida Affect Battery (Bowers, Blonder, & Heilman, 1992) were shown to the actor. At least three trials were recorded for each of the six expression types. Selection of Facial Stimuli Once all the face stimuli were recorded, three nave raters from the Cognitive Neuroscience Laboratory reviewed all trials of each expression made by the 15 actors. The purpose of this review was to select the most easily identifiable exemplar from each

PAGE 30

21 emotion category (anger, disgust, fear, happiness, sadness, neutral) that was free of artifact (blinking, head movement) and most closely matched the stimuli from the Ekman series (Ekman & Friesen, 1976) and the Florida Affect Battery (Bowers et al., 1992). Selection was based on consensus by the three raters. The expressions from 3 actors (2 female, 1 male) were discarded due to movement artifact, occurrence of eyeblinks, and lack of consensus regarding at least half of the intended expression types. This resulted in 72 selected expressions (6 expressions x 12 actors) stored in videotape format. Digital Formatting of Facial Stimuli Each of the videotaped facial expressions were digitally formatted and standardized. Dynamic versions were created first. Each previously selected expression (the best exemplar from each emotion category) was digitally captured onto a PC using a FlashBus MV Pro framegrabber (Integral Technologies) and VideoSavant 4.0 (IO Industries) software. The resulting digital movie clips (videosegments) consisted of a 5.0-second sequence of 150 digitized images or frames (30 frames per second). Each segment began with the actors face in a neutral pose that then moved to peak expression. The temporal sequence of each stimulus was standardized such that the first visible movement of the face (the start of each expression) occurred at 1.5 seconds and that the peak intensity was visible and unchanging for at least 3.0 seconds at the end of the videosegment. To standardize the point of the observers gaze at the onset of each stimulus, 30 frames (1 s) of a white crosshairs over a black background were inserted before the first frame of the videosegment, such that the crosshairs marked the point of intersection over each actors nose. In total, each final, processed videosegment consisted of 180 frames (6.0 seconds). All videosegments were stored in 16-bit greyscale

PAGE 31

22 (256 levels) with a resolution of 640 x 480 pixels and exported to a digital MPEG movie file (Moving Picture Experts Group) to comprise the dynamic set of face stimuli. Unmoving, or static correlates of these stimuli were then created by using the frame representing the peak intensity of each facial expression. Peak intensity was defined as the last visible frame in the dynamic expression sequence of frames. This frame was multiplied to create a sequence of 150 identical frames (5.0 seconds). As with the dynamic stimuli, 1.0 second of crosshairs was inserted into the sequence prior to the first frame. The digital specifications of this stimulus set were identical to that of the dynamic stimulus set. Figure 3-1 graphically compares the content and timing of the both versions of these stimuli. Dynamic Stimuli Image Crosshairs Neutral Moving Peak Expression Expression Expression Seconds 0 1.0 2.5 ~3.0 6.0 Frame No. 0 30 75 90 180 Static Stimuli Image Crosshairs Peak Expression Seconds 0 1.0 6.0 Frame No. 0 30 180 Figure 3-1. Temporal representation of dynamic and static stimuli by time (s) and frame number. Each stimulus frame rate is 30 frames / s. After dynamic and static digital versions of the facial stimuli were created, an independent group of 21 nave individuals rated each face according to emotion category, valence, and arousal. Table 3-2 provides the overall mean ratings for each emotion

PAGE 32

23 category by viewing mode (static or dynamic). Ratings by individual actor are given in Appendixes A (static) and B (dynamic). Table 3-2 Mean (SD) recognition rates, valence, and arousal of static and dynamic face stimuli Measure Anger Disgust Fear Happiness Neutral Sadness Dynamic Faces (n = 12) % Correct 78.2 (16.7) 79.0 (17.5) 94.4 (6.5) 99.6 (1.4) 92.0 (4.2) 93.5 (10.0) Valence 3.34 (.40) 3.58 (.43) 4.12 (.29) 7.23 (.39) 4.68 (.65) 3.51 (.52) Arousal 5.28 (.38) 5.19 (.56) 6.00 (.47) 6.00 (.51) 3.63 (.50) 4.55 (.64) Static Faces (n = 12) % Correct 68.2 (21.3) 77.4 (16.6) 95.2 (5.0) 99.2 (1.9) 89.3 (8.1) 91.3 (11.0) Valence 3.04 (.39) 3.39 (.55) 3.60 (.41) 7.18 (.52) 4.95 (.41) 3.45 (.40) Arousal 5.13 (.61) 5.31 (.64) 5.96 (.53) 5.84 (.56) 3.26 (.39) 4.48 (.56) Final Selection of Stimuli for Psychophysiology Experiment The emotional categories of anger, fear, happiness, and neutral were selected for the present study based on previous results from our laboratory (Bowers et al., 2002). Thus, the final set of stimuli used in the present study consisted of static and dynamic versions of 12 actors (6 female, and 6 male) facial expressions representing these four emotion categories. The total number of facial stimuli was 96 (i.e., 48 dynamic, 48 static). Design Overview and Procedures Each subject participated in two experimental conditions, one involving dynamic face stimuli and the other involving static face stimuli. During both conditions, psychophysiologic data (i.e., skin conductance, startle eyeblink responses) were collected along with the participants ratings of each face stimulus according to valence (unpleasantness to pleasantness) and arousal. There was a 5-minute rest interval between the two conditions. Half the participants viewed the dynamic faces first, whereas the

PAGE 33

24 remaining viewed the static faces first. The order of these conditions was randomized but counterbalanced across subjects. Testing took place within the Cognitive Neuroscience Lab of the McKnight Brain Institute at the University of Florida. Informed consent was obtained according to University and Federal regulations. Prior to beginning the experiment, the participant completed several questionnaires including a demographic form, the BDI, the STAI, and a payment form. The skin from both hands and areas under each eye were cleaned and dried thoroughly. A pair of 3 mm Ag/AgCl sensory electrodes was filled with a conducting gel (Medical Associates, Inc., Stock # TD-40) and attached adjacently over the bottom arc of each orbicularis oculi muscle via lightly adhesive electrode collars. Two 12 mm Ag/AgCl sensory electrodes were filled with conducting gel (K-Y Brand Jelly, McNeil-PPC, Inc.) and were attached adjacently via electrode collars on the thenar and hypothenar surfaces of each palm. Throughout testing, the participant sat in a reclining chair in a dimly lit sound-attenuated 12 x 12 room with copper-mediated electric shielding. An initial period was used to calibrate the palmar electrodes and to familiarize the participant with the startle probes. The lights were dimmed, and twelve 95-dB white noise bursts were presented to the subject via stereo Telephonics (TD-591c) headphones. The noise bursts were presented at a rate of about once per 30 seconds. After the initial calibration period, the participant was given instructions about the experimental protocol. They were told they would see different emotional faces, one face per trial, and were asked to carefully watch each face and ignore the brief noises that would be heard over the headphones. During each trial, the dynamic or static face stimuli
were presented on a 21" PC monitor positioned 1 meter directly in front of the participant. Each face stimulus was shown for six seconds. While viewing the face stimulus, the participant heard a white noise burst (95 dB, 50 ms) delivered via headphones. The white noise startle probes were presented randomly at 4200 ms, 5000 ms, or 5800 ms after the onset of the face stimulus.

At the end of each trial, the participant was asked to rate the face stimulus along the dimensions of valence and arousal. The ratings took place approximately six seconds following the offset of the face stimulus, when a Self-Assessment Manikin (SAM; Bradley & Lang, 1994) was shown on the computer monitor. Valence ratings ranged from extremely positive, pleasant, or good (9) to extremely negative, unpleasant, or bad (1). Arousal ratings ranged from extremely excited, nervous, or active (9) to extremely calm, disinterested, or unenthusiastic (1). The participant reported the valence and arousal ratings out loud, and the responses were recorded by an experimenter in the next room, listening via a baby monitor. A new trial began 6 to 8 seconds after the ratings were made.

Each experimental condition (i.e., dynamic, static) consisted of 48 trials that were divided into 6 blocks of 8 trials each. A different actor was presented on each trial within a given block; half were male and half female. One male actor and one female actor represented each of the four emotions (neutral, happiness, anger, fear), totaling 8 trials per block. To reduce habituation of the startle reflex over the course of the experiment, 8 trials representing male and female versions of each expression category did not contain a startle probe. These trials were spread evenly throughout each slideshow.
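To make this trial structure concrete, the sketch below shows one way such a 48-trial condition could be assembled. It is an illustrative reconstruction in Python, not the software actually used in the study; the function names and the exact placement of the probe-free trials are assumptions.

```python
import random

EMOTIONS = ["neutral", "happiness", "anger", "fear"]
GENDERS = ["male", "female"]
PROBE_ONSETS_MS = [4200, 5000, 5800]   # probe delay after face onset
N_BLOCKS = 6

def build_condition():
    """48 trials: 6 blocks of 8, each emotion portrayed once per block by a
    male and once by a female actor; 8 probe-free trials spread over blocks."""
    trials = []
    for block in range(N_BLOCKS):
        block_trials = [{"block": block, "emotion": e, "gender": g,
                         "probe_ms": random.choice(PROBE_ONSETS_MS)}
                        for e in EMOTIONS for g in GENDERS]
        random.shuffle(block_trials)       # a different actor on every trial
        trials.extend(block_trials)
    # one probe-free trial per (emotion, gender) combination, assigned to
    # successive blocks so they are spread evenly through the slideshow
    combos = [(e, g) for e in EMOTIONS for g in GENDERS]
    random.shuffle(combos)
    for i, (e, g) in enumerate(combos):
        for t in trials:
            if t["block"] == i % N_BLOCKS and (t["emotion"], t["gender"]) == (e, g):
                t["probe_ms"] = None       # omitting the probe reduces habituation
                break
    return trials
```

Counterbalancing would then amount to half the participants receiving the dynamic slideshow built this way first, and half the static one.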
Following administration of both slideshows, the experimenter removed all electrodes from the participant, who was then debriefed on the purpose of the experiment, thanked, and released.

Psychophysiologic Measures

Acoustic Startle Eyeblink Reflex (ASR)

Startle eyeblinks were measured via EMG activity from the orbicularis oculi muscle beneath each eye. This measure was used as a dependent measure because of its sensitivity to valence, with larger startle eyeblinks associated with negative/aversive emotional states and smaller eyeblinks associated with positive emotional states (Lang, Bradley, & Cuthbert, 1990). The raw EMG signal was amplified, and frequencies below 90 Hz and above 1000 Hz were filtered out using a Coulbourn bioamplifier. Amplification of acoustic startle was set at 30,000, with post-experimental multiplication to equate gain factors (Bradley et al., 1990). The raw signal was then rectified and integrated using a Coulbourn Contour Following Integrator with a time constant of 10 ms. Digital sampling began at 20 Hz 3 s prior to stimulus onset. The sampling rate increased to 1000 Hz 50 ms prior to the onset of the startle probe and continued at this rate for 250 ms after probe onset. Sampling then resumed at 20 Hz until 2 s after stimulus offset. The startle data were reduced off-line using custom software that evaluates trials for unstable baseline and scores each trial for amplitude in arbitrary A-D units and onset latency in milliseconds. The program yields measures of startle response magnitude in arbitrary A-D units that express responses during positive, neutral, and negative materials on the same scale.
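The reduction just described (rectified, contour-integrated EMG scored as peak minus baseline) can be sketched as follows. This is a minimal numpy illustration under the stated 10-ms time constant and 1000-Hz peri-probe sampling, not the laboratory's custom software; approximating the analog contour-following integrator with first-order exponential smoothing, and reporting the peak time as the latency, are simplifying assumptions.

```python
import numpy as np

FS = 1000                  # sampling rate around the probe (Hz)
PRE_MS, POST_MS = 50, 250  # recorded window relative to probe onset

def score_startle(emg_segment):
    """Score one probed trial. emg_segment: raw EMG sampled at FS, covering
    PRE_MS before to POST_MS after probe onset. Returns (magnitude, latency_ms)."""
    rectified = np.abs(np.asarray(emg_segment, dtype=float))
    alpha = 1.0 / (FS * 0.010)             # 10 ms integrator time constant
    smoothed = np.empty_like(rectified)
    acc = rectified[0]
    for i, x in enumerate(rectified):      # first-order exponential smoothing
        acc += alpha * (x - acc)
        smoothed[i] = acc
    n_pre = PRE_MS * FS // 1000
    baseline = smoothed[:n_pre].mean()     # pre-probe activity
    post = smoothed[n_pre:]
    peak = int(np.argmax(post))
    return post[peak] - baseline, peak * 1000.0 / FS
```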
Skin Conductance Response (SCR)

The SCR was measured from electrodes attached to the palms with adhesive collars. This measure was used because it is an index of sympathetic arousal, correlates with self-reports of emotional arousal, and is relatively independent of valence (Bradley & Lang, 2000). Skin conductance data were sampled at 20 Hz using two Coulbourn Isolated Skin Conductance couplers in DC mode (a constant-voltage system in which 0.5 V is passed across the palm during recording). The SC couplers output to a Scientific Solutions A/D board integrated within a custom PC. The skin conductance response (SCR) was defined as the difference between the peak conductance during the 6-second viewing period and the mean conductance during the last pre-stimulus second, derived independently for each hand. SCR was expressed in microsiemens (μS).

Data Reduction of Psychophysiology Measures

After the collection of the psychophysiologic data, the eyeblink and skin conductance data were reduced using custom condensing software. For startle eyeblink, data from trials without startle probes and from the initial two practice trials were excluded from the statistical analyses. Trials containing obvious physiological artifacts were also removed. For the remaining data, the peak magnitude of the EMG activity elicited by each startle probe within the recorded time window was measured (peak minus baseline, in microvolts). Peak startle magnitudes were averaged across both eyes into a composite score when data from both eyes were available; if data from only one eye were available, those data were used in place of the composite score. Peak startle magnitudes were additionally translated into T-scores, which were then averaged for each expression type (i.e., happy, neutral, fear, and anger) and mode of presentation (i.e., static
and dynamic stimuli). For both startle magnitudes and T-scores, each of the four expression categories was represented by no fewer than four trials.

For the skin conductance response, condensing consisted of measuring the peak magnitude of change relative to baseline activity at the start of each trial. Again, trials containing obvious physiological artifacts were removed. The magnitude of change for each trial was measured and averaged across both hands, unless the data from one of the palms contained excessive artifact; in those cases, the data from the other hand were used in place of the composite.

Statistical Analysis

Separate analyses were conducted for startle blink, skin conductance, SAM valence ratings, and SAM arousal ratings. Repeated-measures ANOVAs with adjusted degrees of freedom (Greenhouse-Geisser correction) were used, with a between-subjects factor of Order of Slideshows (dynamic then static; static then dynamic) and within-subjects factors of Expression Category (anger, fear, neutral, happiness) and Viewing Mode (dynamic, static). Analyses corresponding to a priori predictions were conducted using planned (Helmert) contrasts between the four expression categories. A significance level of alpha = .05 was used for all analyses.

We predicted three indices of greater psychophysiologic reactivity to dynamic versus static expressions: (1) greater magnitude of the startle reflex, (2) greater percent change in skin conductance, and (3) higher self-reported SAM arousal ratings during perception of dynamic facial expressions. Additionally, we predicted that the pattern of T-scores for both dynamic and static facial expressions would show emotional modulation across the four categories of facial expressions incorporated in the study. That is, startle
reflexes measured during the perception of anger would be larger than those measured during the perception of fear, neutral, and happy expressions, and startle responses measured during the perception of the latter three emotional categories would not be appreciably different from one another. Finally, this pattern of modulation would not differ significantly between static and dynamic viewing modes.
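For concreteness, the SCR definition given under Psychophysiologic Measures (peak conductance during the 6-s viewing period minus the mean conductance over the last pre-stimulus second, computed per hand and then averaged) reduces to a few lines of array arithmetic. The sketch below is an assumed implementation, with the 3-s pre-stimulus recording window taken from the sampling description above.

```python
import numpy as np

FS = 20  # skin conductance sampling rate (Hz)

def scr_for_trial(conductance, onset_s=3.0, dur_s=6.0):
    """conductance: 1-D array (microsiemens) for one palm, sampled at FS and
    beginning 3 s before stimulus onset. Returns peak change from baseline."""
    onset = int(onset_s * FS)
    baseline = conductance[onset - FS:onset].mean()   # last pre-stimulus second
    window = conductance[onset:onset + int(dur_s * FS)]
    return window.max() - baseline

def scr_composite(left, right, left_ok=True, right_ok=True):
    """Average both palms unless one record is artifact-laden (as described
    under Data Reduction); returns NaN if neither hand is usable."""
    vals = [scr_for_trial(h) for h, ok in ((left, left_ok), (right, right_ok)) if ok]
    return float(np.mean(vals)) if vals else float("nan")
```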
CHAPTER 4
RESULTS

The primary dependent measures were the acoustic startle eyeblink response (ASR), the skin conductance response (SCR), and self-reported arousal from the Self-Assessment Manikin (arousal). As previously described, the ASR was quantified by measuring the change in EMG activity (mV) following the onset of the startle probes (i.e., peak minus baseline EMG). The SCR was calculated as the difference between the peak conductance in microsiemens (μS) during the 6-second period of stimulus presentation and the mean level of conductance during the 1-s period immediately prior to the onset of the stimulus. Finally, self-reported arousal encompassed a range of 1 to 9, with higher numbers representing greater arousal. Table 4-1 gives the means and standard deviations of each of these dependent variables by viewing mode.

Table 4-1. Mean (SD) dependent variable scores by Viewing Mode

Measure   Dynamic         Static
ASR-M     .0062 (.0054)   .0048 (.0043)
SCR       .314 (.514)     .172 (.275)
Arousal   5.27 (.535)     5.30 (.628)

Note. ASR-M = Acoustic Startle Eyeblink Response, Magnitude (mV); SCR = Skin Conductance Response (μS); Arousal = Self-Assessment Manikin, Arousal Scale (1-9).

Hypothesis 1: Differences in Reactivity to Dynamic vs. Static Faces

An initial set of analyses addressed the first hypothesis and investigated whether psychophysiologic reactivity (startle eyeblink, SCR) and/or self-reported arousal differed
during the perception of dynamic versus static emotional faces. The results of the analyses for each of the three dependent variables are described below.

Startle Eyeblink Response

The first analysis examined whether the overall size of the startle eyeblink responses differed when participants viewed dynamic versus static facial expressions. A repeated-measures ANOVA was conducted using Viewing Mode (dynamic, static) as the within-subjects factor and Order of Presentation (dynamic then static, or static then dynamic) as the between-subjects factor.¹ The ANOVA revealed a significant main effect of Viewing Mode [F(1, 38) = 9.003, p = .005, ηp² = .192, power = .832]. As shown in Table 4-1, startle eyeblink responses were greater during dynamic than static expressions. The main effect of Order of Presentation was not significant [F(1, 38) = 1.175, p = .285, ηp² = .030, power = .185], nor was the Viewing Mode X Order of Presentation interaction [F(1, 38) = .895, p = .350, ηp² = .023, power = .152].

Skin Conductance Response (SCR)

The second analysis examined whether the perception of the different types of facial emotions induced different SCR patterns between modes of viewing. A repeated-measures ANOVA was conducted with Viewing Mode (dynamic, static) and Expression Category (anger, fear, happy, neutral) as the within-subjects factors and Order of Presentations (dynamic first, static first) as the between-subjects factor. The ANOVA revealed that the main effect of Viewing Mode approached significance [F(1, 35) = 3.796, p = .059, ηp² = .098, power = .474], such that SCR tended to be larger when participants viewed dynamic versus static faces (see Table 4-1).

¹ Expression Category was not used as a factor in this analysis. Examination of emotional effects on startle eyeblink is traditionally done using T-scores as the dependent variable rather than raw magnitude. Raw startle magnitude is more appropriate as an index of reactivity, whereas T-scores are more appropriate for examining patterns of emotional effects on startle.
No other main effects or interactions reached trend level or significance: Order of Presentations [F(1, 35) = .511, p = .479, ηp² = .014, power = .107]; Viewing Mode X Order of Presentations [F(1, 35) = 1.559, p = .220, ηp² = .043, power = .229]; Expression Category X Order of Presentations [F(1.832, 64.114) = .942, p = .423, ηp² = .026, power = .251].

Self-Reported Arousal

The third analysis examined whether self-reported arousal ratings differed when participants viewed static versus dynamic facial expressions. Again, a 2 (Viewing Mode) X 4 (Expression Category) X 2 (Order of Presentation) repeated-measures ANOVA was conducted. This ANOVA revealed no significant main effects or interactions: Viewing Mode [F(1, 38) = .072, p = .789, ηp² = .002, power = .058]; Order of Presentations [F(1, 38) = 2.912, p = .096, ηp² = .071, power = .384]; Viewing Mode X Order of Presentations [F(1, 38) = .479, p = .493, ηp² = .012, power = .104]. The effects related to Expression Category are described in the next section.

In summary, viewing dynamic facial stimuli was associated with significantly larger acoustic startle eyeblink responses and a tendency (trend, p = .059) toward larger skin conductance responses than viewing static stimuli. There was no significant difference in self-reported arousal ratings between dynamic and static stimuli.

Hypothesis 2: Emotion Modulation of Startle by Expression Categories

An additional set of analyses addressed the second hypothesis, investigating emotional modulation of the startle eyeblink response across distinct categories of facial expressions (i.e., anger, fear, neutral, and happy). Because of individual variability in the size of basic eyeblink responses, the startle magnitude scores for each individual were converted to T-scores on a trial-by-trial basis. These T-scores were analyzed in a
repeated-measures 4 (Expression Category: anger, fear, neutral, happy) X 2 (Viewing Mode: dynamic, static) X 2 (Order of Presentations: dynamic then static, or static then dynamic) ANOVA. Table 4-2 gives the means and standard deviations of these scores and the other dependent variables by Viewing Mode and Expression Category.

Table 4-2. Mean (SD) dependent variable scores by Viewing Mode and Expression Category

Dynamic
Measure   Anger           Fear            Neutral         Happy
ASR-M     .0053 (.0052)   .0049 (.0046)   .0045 (.0037)   .0046 (.0042)
ASR-T     51.06 (3.43)    49.47 (3.01)    49.77 (3.47)    49.68 (3.14)
SCR       .1751 (.2890)   .1489 (.2420)   .1825 (.3271)   .1768 (.3402)
Valence   3.10 (.89)      3.44 (.99)      4.76 (.54)      7.19 (.84)
Arousal   5.39 (1.05)     6.43 (.98)      3.41 (1.33)     5.96 (.88)

Static
Measure   Anger           Fear            Neutral         Happy
ASR-M     .0066 (.0061)   .0059 (.0051)   .0061 (.0051)   .0061 (.0057)
ASR-T     50.99 (3.79)    49.43 (3.92)    49.57 (4.30)    49.88 (3.21)
SCR       .3247 (.5200)   .3583 (.8070)   .2515 (.3911)   .3212 (.5457)
Valence   3.17 (1.00)     3.65 (1.21)     4.69 (.84)      7.17 (.84)
Arousal   5.51 (1.05)     6.35 (.95)      3.29 (1.36)     5.95 (.87)

Note. ASR-M = Acoustic Startle Response, Magnitude (mV); ASR-T = Acoustic Startle Response (T-scores); SCR = Skin Conductance Response (μS); Valence = Self-Assessment Manikin, Valence Scale (1-9); Arousal = Self-Assessment Manikin, Arousal Scale (1-9).

The main effect of Expression Category approached but did not reach significance [F(3, 117) = 2.208, p = .091, ηp² = .055, power = .548]. No other main effects or interactions reached trend level or significance: Viewing Mode [F(1, 114) = .228, p = .636, ηp² = .006, power = .075]; Order of Presentations [F(1, 38) = .336, p = .566, ηp² = .009, power = .087]; Viewing Mode X Order of Presentations [F(1, 38) = .457, p = .503, ηp² = .012, power = .101]; Expression Category X Order of Presentations [F(3, 114) = .596, p = .619, ηp² = .015, power = .171]; Expression Category X Viewing Mode [F(3, 114) = .037, p = .991, ηp² = .001, power = .056]; Expression Category X
Viewing Mode X Order of Presentations [F(3, 114) = .728, p = .537, ηp² = .019, power = .201].

The a priori predictions regarding the expected pattern of emotion modulation of the startle response [i.e., Anger > (Fear = Neutrality = Happiness)] warranted a series of planned (Helmert) comparisons on Expression Category. These comparisons revealed that: (a) startle responses for faces of anger were significantly different from those for the other expressions [F(1, 38) = 8.217, p = .007, ηp² = .178, power = .798]; and (b) there were no significant differences among the remaining emotional expressions [i.e., Fear = (Neutral and Happy): F(1, 38) = .208, p = .651, ηp² = .005, power = .073; Neutral = Happy: F(1, 38) = .022, p = .882, ηp² = .001, power = .052]. Figure 4-1 graphically displays the pattern of startle reactivity (T-scores) among the four expression categories.

[Figure 4-1. Startle eyeblink T-scores by expression category: A > (F = N = H). Y-axis: Startle Eyeblink Response (T-scores), 46-54; X-axis: Expression Category (Anger, Fear, Neutrality, Happiness).]

To summarize these results, viewing angry facial expressions was associated with significantly larger acoustic startle eyeblink responses than the other types of facial
expressions (i.e., fear, neutral, and happy), and the responses to the other expressions were not significantly different from each other. Additionally, the non-significant Expression Category X Viewing Mode interaction (p = .991) indicates that this response pattern was similar for static and dynamic facial expressions.

Other Patterns of Emotional Modulation by Viewing Mode

The response pattern among the different expression categories was also examined for SCR and self-reported arousal, as well as for self-reported valence. Like arousal, valence was measured on a scale of 1-9, with higher numbers representing greater positive feeling, pleasure, or appetitiveness, and lower numbers representing greater negative feeling, displeasure, or aversiveness. For all three variables, the analyses were separate three-way (4 X 2 X 2) repeated-measures analyses of variance, using the within-subjects factors of Expression Category (anger, fear, neutral, happy) and Viewing Mode (dynamic, static) and the between-subjects factor of Order of Presentations (dynamic then static, or static then dynamic). For SCR and arousal, these analyses were described in the preceding section (Differences in Reactivity to Dynamic vs. Static Faces); for these two measures, this section therefore provides only the results for the Expression Category main effect and associated interactions. The results for self-reported valence are provided in full, as this is a novel analysis. Table 4-2 gives the means and standard deviations for each dependent variable by Viewing Mode and Expression Category.

Skin Conductance Response

For the skin conductance response, the main effect of Expression Category and all associated interactions were non-significant: Expression Category [F(1.832, 64.114) = .306, p = .821, ηp² = .009, power = .107]; Expression Category X Viewing Mode
[F(2.012, 70.431) = 1.345, p = .264, ηp² = .037, power = .349];² Expression Category X Viewing Mode X Order of Presentations [F(2.012, 70.431) = 1.341, p = .265, ηp² = .037, power = .348]. Thus, differences in SCR for discrete expressions were not found.

Self-Reported Arousal

For self-reported arousal, the main effect of Expression Category was significant [F(2.144, 81.487) = 81.836, p < .001, ηp² = .683, power = 1.000],³ indicating that arousal ratings differed while viewing different types of facial expressions. The results of Bonferroni-corrected post-hoc comparisons are shown graphically in Figure 4-2. Fearful faces (M = 6.39, SD = .91) were associated with significantly higher (p < .001) intensity ratings than angry faces (M = 5.45, SD = .96), which were in turn rated as higher (p < .001) in intensity than neutral faces (M = 3.35, SD = 1.22). Differences in intensity ratings associated with happy faces (M = 5.96, SD = .76) approached significance when compared with fearful (p = .082) and angry (p = .082) faces, and happy faces were rated as significantly higher (p < .001) than neutral faces.

² Mauchly's test was significant for both Expression Category [W = .273, χ²(5) = 43.762, p < .001] and the Expression Category X Viewing Mode interaction [W = .451, χ²(5) = 26.850, p < .001]; thus, degrees of freedom for these effects were adjusted using the Greenhouse-Geisser method.

³ Mauchly's test was significant for both Expression Category [W = .507, χ²(5) = 24.965, p < .001] and the Expression Category X Viewing Mode interaction [W = .403, χ²(5) = 33.335, p < .001]; thus, degrees of freedom for these effects were adjusted using the Greenhouse-Geisser method.
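The sphericity handling reported in these footnotes (Mauchly's test followed by Greenhouse-Geisser adjustment of the degrees of freedom) is standard in repeated-measures software. As a hedged illustration only, a one-way version of such an analysis could be run with the pingouin package (an assumed tool choice, not the software used in this study); the synthetic data frame exists purely to make the example self-contained.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed stats package

# synthetic long-format data: one row per subject x expression category
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(40), 4),
    "emotion": np.tile(["anger", "fear", "neutral", "happy"], 40),
    "arousal": rng.normal(5, 1, size=160),
})

# correction=True reports Mauchly's W and the Greenhouse-Geisser
# corrected p-value alongside the uncorrected one
aov = pg.rm_anova(data=df, dv="arousal", within="emotion",
                  subject="subject", correction=True)
print(aov)
```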
[Figure 4-2. Self-reported arousal by expression category: F > A > N; H > N. Y-axis: Arousal (1-9); X-axis: Expression Category (Anger, Fear, Neutrality, Happiness).]

Self-Reported Valence

The final analysis explored the pattern of self-reported valence ratings for each of the facial emotion subtypes and viewing modes. The ANOVA revealed a significant effect of Expression Category [F(2.153, 81.822) = 205.467, p < .001, ηp² = .844, power = 1.00],⁴ indicating that valence ratings differed according to expression category. Bonferroni-corrected pairwise comparisons among the four facial expression types indicated that faces of happiness (M = 7.18, SD = .78) were rated as significantly more pleasant than neutral faces (M = 4.73, SD = .59; p < .001), fearful faces (M = 3.54, SD = 1.03; p < .001), and angry faces (M = 3.14, SD = .84; p < .001). Additionally, neutral faces were rated as significantly more pleasant than fearful (p < .001) or angry faces (p < .001).

⁴ A significant Mauchly's test for Expression Category [W = .566, χ²(5) = 20.903, p = .001] and the Expression Category X Viewing Mode interaction [W = .504, χ²(5) = 25.146, p < .001] necessitated the use of Greenhouse-Geisser adjusted degrees of freedom.
Finally, angry faces were rated as significantly more negative than fearful faces (p = .014). This pattern is displayed graphically in Figure 4-3. No other main effects or interactions reached trend level or significance: Viewing Mode [F(1, 38) = .646, p = .426, ηp² = .017, power = .123]; Order of Presentations [F(1, 38) = 1.375, p = .248, ηp² = .035, power = .208]; Viewing Mode X Order of Presentations [F(1, 38) = .047, p = .829, ηp² = .001, power = .055]; Expression Category X Order of Presentations [F(2.153, 81.822) = 1.037, p = .363, ηp² = .027, power = .233]; Expression Category X Viewing Mode [F(2.015, 76.554) = .933, p = .398, ηp² = .024, power = .207]; Expression Category X Viewing Mode X Order of Presentations [F(2.015, 76.554) = 1.435, p = .244, ηp² = .036, power = .300].

[Figure 4-3. Self-reported valence by expression category: H > N > F > A. Y-axis: Valence (1-9); X-axis: Expression Category (Anger, Fear, Neutrality, Happiness).]

To summarize, these analyses revealed that the skin conductance responses for the different categories of emotional expressions did not differ from one another. By
contrast, both self-report measures did distinguish among the emotion categories. With regard to self-reported arousal, fearful faces were rated highest, significantly more so than angry faces, which were in turn rated as significantly more arousing than neutral ones. The differences in arousal between happy and angry faces, as well as between happy and fearful ones, approached but did not reach significance (p = .082 in both cases). Happy faces were, however, rated as significantly more arousing than neutral ones. For self-reported valence, each expression category was rated as significantly different from the others, such that angry expressions were rated as most negative, followed by fearful, neutral, and then happy faces.
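As a final illustration, the trial-wise T-score conversion used for the startle analyses in this chapter is a simple within-subject standardization to a mean of 50 and an SD of 10. A minimal sketch, assuming the conventional z-based definition of a T-score:

```python
import numpy as np

def to_t_scores(magnitudes):
    """Convert one participant's raw startle magnitudes (all probed trials)
    to T-scores: within-subject mean 50, SD 10."""
    m = np.asarray(magnitudes, dtype=float)
    z = (m - m.mean()) / m.std(ddof=1)
    return 50 + 10 * z

# category means are then taken over the T-scored trials, e.g.
# anger_T = to_t_scores(all_magnitudes)[anger_trial_idx].mean()
```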
CHAPTER 5
DISCUSSION

The present study examined two hypotheses. The first was that the perception of dynamic versus static faces would be associated with greater physiological reactivity in normal, healthy adults. Specifically, it was predicted that individuals would exhibit significantly stronger startle eyeblink reflexes, higher skin conductance responses (SCR), and higher levels of self-reported arousal when viewing dynamic expressions. These predictions were based on evidence from previous research suggesting that movement in facial expression (a) provides more visual information to the viewer, (b) increases recognition of and discrimination between specific types of emotion, and (c) may make the facial expressions appear more intense.

The second hypothesis was that the perception of different categories of facial expressions would be associated with a distinct pattern of emotional modulation, and that this pattern would not differ for static and dynamic faces. In other words, it was hypothesized that the level of physiological reactivity while viewing facial expressions would depend on the type of expression viewed, regardless of the viewing mode. Specifically, the prediction was that normal adults would show increased startle eyeblink responses during the perception of angry faces, and that responses to fearful, happy, and neutral faces would not be significantly different from each other. Moreover, it was predicted that this pattern of responses would be similar for both statically and dynamically presented expressions.
The first hypothesis was partially supported by the data. The participants tested exhibited larger startle eyeblink responses while viewing dynamic versus static facial expressions. Differences in SCR while viewing the expressions in these two modes reached trend level (p = .059), such that dynamic faces tended to induce greater responses than static ones. Self-reported arousal was not significantly different between conditions. Thus, the perception of moving emotional faces versus still pictures was associated with greater startle eyeblink responses, but not with greater SCR or self-reported arousal.

The second hypothesis was supported by the data. That is, the startle reflex was significantly greater for angry faces, and comparably smaller for fearful, neutral, and happy faces. The data suggested that this pattern of emotional modulation was similar during both static and dynamic viewing conditions.

In summary, participants demonstrated greater psychophysiological reactivity to dynamic faces than to static faces, as indexed by the startle eyeblink response and, partially, by SCR. Participants did not, on the other hand, report differences in perceived arousal. Emotional modulation of the startle response was similar for both modes of presentation, such that angry faces induced greater negative or aversive responses in the participants than did happy, neutral, and fearful faces.

Interpretation and Relationship to Other Findings

The finding that viewing faces of anger increased the strength of the startle eyeblink reflex is consistent with other results. Currently, only two other studies are known to have measured the magnitude of this reflex during the perception of different facial emotions. Balaban (1995) conducted one of these studies, measuring the size of startle eyeblinks in 5-month-old infants viewing photographic slides
of happy, neutral, and angry faces. Her results were similar to those of the current study, in that the magnitudes of startle eyeblinks measured in the infants were augmented while they viewed faces of anger versus faces of happiness.

The other study was conducted by Bowers and colleagues (2002). Similar to the present experiment, participants were young adults (n = 36) who viewed facial expressions of anger, fear, neutrality, and happiness. These stimuli, however, consisted solely of static photographs and were sampled from standardized batteries (the Florida Affect Battery: Bowers et al., 1992; Pictures of Facial Affect: Ekman & Friesen, 1976). The startle eyeblink responses measured while viewing these pictures reflected the pattern produced in the present study: greater negative or aversive responses were associated with angry faces than with happy, neutral, or fearful faces. Happy, neutral, and fearful faces yielded relatively reduced responses that did not differ from each other in magnitude.

The augmentation of the startle reflex during the perception of angry versus other emotional faces appears to be a robust phenomenon for several reasons. First, the findings of the present study were similar to those of previous studies (Balaban, 1995; Bowers et al., 2002). Second, this pattern of emotional modulation was replicated using a different set of facial stimuli; thus, the previous findings were not restricted to faces from specific sources. Third, the counterbalanced design of the present study minimized the possibility that the anger effect was due to some imbalance of factors other than the portrayed facial emotion. Within each experimental condition, for example, both genders and each actor were equally represented within each expression category.
Although the current results were made more convincing for these reasons, the implication that the startle circuitry is not enhanced in response to fearful expressions was unexpected for several reasons. The amygdala has been widely implicated in states of fear and in processing fearful material (Davis & Whalen, 2001; Gloor et al., 1982; Klüver & Bucy, 1937), and some investigators have directly implicated the amygdala in processing facial expressions of fear (Adolphs et al., 1994; Morris et al., 1998). Additionally, the work of Davis and colleagues (Davis, 1992) uncovered direct neural projections from the amygdala to the subcortical startle circuitry, which have been shown to prime the startle mechanism under fearful or aversive conditions. This body of research suggests that fearful expressions might potentiate the startle reflex relative to other types of facial expressions; however, the study by Bowers and colleagues (2002), as well as the present one, provides evidence that suggests otherwise. No other studies are known to have directly compared startle reactivity patterns among fearful and other emotionally expressive faces.

Additionally, imaging and lesion studies have shown mixed results with respect to the role of the amygdala in the processing of fearful and angry faces per se. For instance, Sprengelmeyer and colleagues (1998) showed no fMRI activation in the amygdala in response to fearful relative to neutral faces. Young and colleagues (1995) attributed a deficit in recognition of fearful faces to bilateral amygdala damage, but much of the surrounding neural tissue was also damaged.

So, how might one account for the relatively reduced startle response to fearful faces? Bowers and colleagues (2002) provided a plausible explanation, implicating the role of motivated behavior [i.e., Heilman's (1987) preparation-for-action scale] in these
results. As previously described, angry faces represent personally directed threat and, as might be reflected by the increased startle found in the present study, induce a motivational propensity to withdraw or escape from that threat. Fearful expressions, on the other hand, reflect some potential environmental threat to the actor, rather than to the observer. This would entail less motivational propensity for action and might account for the reduced startle response.

Methodological Issues Regarding Facial Expressions

Before discussing the implications of this study more broadly, several methodological issues that potentially influenced the present findings must be addressed. The first relates to the reliability of the facial expression stimuli in depicting specific emotions. Anger was the emotion that elicited the greatest startle response overall. At the same time, angry facial expressions were the least accurately categorized by a group of independent naïve raters (see Table 3-2).⁵ Whether there is a connection between these findings is unclear, particularly since the raters chose among a wider variety of options (i.e., 6 expressions) than did the participants in this study (4 expressions). For example, the raters were shown facial expressions of anger, disgust, fear, sadness, happiness, and neutrality. Their accuracy in identifying anger expressions was around 78%.

⁵ A 2 (Viewing Mode: dynamic, static) X 6 (Expression Category: anger, disgust, fear, happy, neutral, sad) repeated-measures ANOVA was conducted with an alpha criterion of .05 and Bonferroni-corrected post-hoc comparisons. Results showed that dynamic expressions (M = .89, SD = .06) were rated significantly more accurately than static expressions (M = .87, SD = .07). Additionally, the effect of Expression Category was significant, but the interaction between Expression Category and Viewing Mode was not. Specific to the emotion categories used in the present study, happy faces were rated significantly more accurately (M = .99, SD = .01) than neutral (M = .91, SD = .06) and angry (M = .73, SD = .18) faces, while fear (M = .95, SD = .05) recognition rates were not significantly different from those of the other three. Comparing each emotion across viewing modes, only anger was rated significantly more accurately in dynamic (M = .78, SD = .17) versus static (M = .68, SD = .21) mode, while the advantage for dynamic neutral faces (M = .92, SD = .04) over static versions (M = .89, SD = .08) only approached significance (p = .055). A static version of an emotional expression was never rated significantly more accurately than its dynamic version.
When errors were made, raters typically (i.e., 95% of the time) judged the anger expressions to be disgust. In the psychophysiology study, the participants were shown only four expressions, and it seems unlikely that they easily confused anger, fear, happiness, and neutral expressions. This could, however, be addressed by examining the ratings made by the psychophysiology participants. Nevertheless, elevated startle reactivity for facial expressions that were less reliably categorized might occur for several reasons: (1) differences in attention between relatively poorly and accurately recognized stimuli, and (2) differences in perceived arousal between relatively poorly and accurately recognized stimuli.

Regarding attention, previous researchers have suggested that visual attention inhibits the startle response when the modalities of the startle probe and the stimulus of interest are mismatched (e.g., Ornitz et al., 1996). In this case, acoustic startle probes were used in conjunction with visual stimuli. Since anger was associated with the strongest startle reflexes, the response was not likely inhibited; thus, attention was probably not a mediating factor between lower recognition rates and this effect.

Regarding arousal, researchers such as Cuthbert and colleagues (1996) indicated that potentiation of the startle response occurs with more arousing stimuli when the stimuli are of negative valence. Anger was rated as the most negatively valenced expression, significantly more so than fear, while happiness was rated most positively. Since anger was rated most negatively, the only way arousal could have influenced anger's potentiated startle response was if anger were more arousing than the other two expressions. However, it was rated as significantly less arousing than both fear and happiness.
To conclude, it seems unlikely that ambiguity of the angry facial expressions contributed significantly to the current findings. However, examination of ratings made by the participants themselves might better clarify the extent to which anger expressions were less accurately categorized than other expressions.

Other Considerations of the Present Findings

One explanation for the failure to uncover more robust findings using the skin conductance response might relate to several of this measure's attributes. First, although SCR can be a useful measure of emotional arousal, it has considerable limitations. It is estimated that 15-20% of healthy individuals are skin conductance non-responders; some individuals do not exhibit a discernible difference in this response to different categories of emotional stimuli, while others exhibit very weak responses overall (Bradley & Lang, 2000; O'Gorman, 1990). Moreover, the sensitive electrical signal that records SCR is vulnerable to the effects of idle, unconscious motor activity, especially considering that the electrodes are positioned on the palms of both hands. Because participants sat alone during these recordings, it was impossible to determine whether they followed the instructions to keep still. These factors suggest that the potential for interference over the course of the two slideshows was not insignificant and may have contributed to the null SCR findings, both for reactivity across viewing modes and for response differences between emotions. As such, this study uncovered only weak evidence that dynamic faces induce stronger skin conductance responses than static faces; only a trend toward significance was found. A significant difference might have emerged with more statistical power (dynamic: power = .47). Numerically, dynamic faces were associated with larger mean SCR values (.314 μS) than
static faces (.172 μS). Therefore, a larger sample size would be required to increase confidence about the actual relationship of SCR to these two visual modes.

Several explanations might account for the finding that self-reported arousal ratings were not significantly different for static and dynamic expressions (contrary to one prediction of the current study). First, it is possible that the similar ratings between these two experimental conditions were the product of an insensitive scale. The choice between integers ranging only from 1 to 9 may have prohibited sufficient response variability for drawing out differences between viewing modes. Also, it is possible that subjects rated each expression's arousal relative to the expressions immediately preceding it, and failed to consider their responses relative to the previously seen presentation. If this were the case, the viewed expressions would have been rated in arousal relative to the average score within the current presentation, and the mean arousal ratings from the two presentations would be virtually identical.

Limitations of the Current Study

It is important to acknowledge some of the limitations of the current study. One limitation is that the specific interactions between participant and actor variables of gender, race, and attractiveness were not analyzed. It is likely that the emotional response of a given individual to a specific face depends upon these factors through the individual's unique experiences. In addition, the meaning of some facial expressions may be ambiguous when they are viewed in isolation. Depending on the situation, for instance, a smile might communicate any number of messages, including contentment, peer acceptance, sexual arousal, relief, mischief, or even contempt (i.e., a smirk). Taken together, averaging potentially variable responses due to highly specific interactions with non-expressive facial features or to varying interpretations of facial stimuli
between subjects might have contributed to certain non-significant effects, or created artificial ones.

Secondly, the facial expression stimuli may have been perceived as somewhat artificial, which potentially reduced the overall emotional responses (and consequently, physiologic reactivity). The actors were recorded using black-and-white video with their heads surrounded on either side by an immobilization cushion. In addition, despite some pre-training, the actors deliberately posed the facial expressions; these were not the product of authentic emotion per se. Previous research has determined that emotion-driven and posed expressions are mediated by different neural mechanisms and muscular response patterns (Monrad-Krohn, 1924; for review, see Rinn, 1984). It is likely that some expressions were correctly recognized by emotional category but not necessarily believed to have an emotional origin. The extent to which emotional reactivity is associated with perceiving genuine versus posed emotion in others remains a topic for future research. It is reasonable to conjecture, however, based on everyday social interactions, that the perception of posed expressions would be less emotionally arousing and would therefore be associated with reduced emotional reactivity.

Directions for Future Research

There are many avenues for future research. Further investigation into the effects of, and interactions between, factors of gender, race, age, and attractiveness, and the characterization of their effects on patterns of startle modulation, is warranted. The effects of these factors would need to be determined to clearly dissociate expression-specific differences in emotion perception. One of these factors may prove more influential than facial expressivity in physiological reactivity to facial stimuli.
Further, the use of more genuine, spontaneous expressions as stimuli might be considered as a way to introduce greater levels of emotional arousal into studies of social emotion perception. Greater ecological validity might be gained via this route, as well as through the use of color stimuli and actors given free range of head movement. Also, patterns of startle modulation to facial expressions should be studied across different age groups to help uncover the development of emotional recognition and social cognition over the lifespan. This is especially warranted given the difference between the findings of the present study (i.e., increased startle response to anger, with attenuated responses associated with fearful, happy, and neutral expressions) and those of Balaban's (1995) study of infants, in which fearful expressions yielded significantly greater responses than neutral ones, and neutral ones yielded greater responses than happy ones. Continued research with different age groups would help disentangle the ontogenetic responsiveness to the meaning conveyed through facial emotional signals and help determine the reliability of the few studies that have been conducted.

To conclude, despite the limitations of the current study, dynamic and static faces appear to elicit qualitatively different psychophysiological responses; specifically, dynamic faces induce greater startle eyeblink responses than static versions. This observation has not previously been described in the literature. Because they appear to differentially influence motivational systems, these two types of stimuli cannot be treated interchangeably. The results of this and future studies will likely play an important role in the development of a dynamic facial affect battery and aid in efforts to delineate more
precisely the social cognition impairments in certain neurologic, psychiatric, and brain-injured populations.
APPENDIX A
STATIC STIMULUS SET

Actor     Measure        Anger      Disgust    Fear       Happiness  Neutrality  Sadness
Male 1    % Recognition  47.6       66.7       90.5       100        100         85.7
          Valence        3.0 (1.6)  3.9 (1.5)  4.4 (1.7)  7.4 (1.3)  5.2 (0.9)   3.7 (1.2)
          Arousal        5.5 (1.4)  5.4 (1.7)  5.8 (1.5)  6.3 (1.3)  3.5 (1.8)   4.6 (1.4)
Male 2    % Recognition  90.5       85.7       100        100        90.5        95.2
          Valence        2.8 (1.3)  3.5 (1.1)  4.5 (1.8)  7.2 (1.4)  4.2 (1.2)   2.6 (1.3)
          Arousal        5.1 (2.1)  5.0 (1.9)  6.8 (1.7)  5.7 (1.7)  3.7 (1.8)   5.0 (1.8)
Male 3    % Recognition  71.4       81         90.5       100        100         n/a
          Valence        3.2 (1.5)  3.2 (0.9)  4.2 (1.7)  7.3 (0.9)  4.7 (1.4)   n/a
          Arousal        5.2 (2.0)  5.1 (1.7)  6.3 (1.5)  5.9 (1.6)  3.7 (1.9)   n/a
Male 4    % Recognition  57.1       71.4       85.7       100        95.2        95.2
          Valence        3.3 (1.5)  3.6 (1.7)  3.8 (1.6)  7.0 (2.2)  4.6 (0.7)   3.1 (1.2)
          Arousal        5.4 (1.4)  5.5 (1.2)  6.0 (0.8)  6.7 (1.4)  3.3 (1.7)   4.5 (1.6)
Male 5    % Recognition  57.1       76.2       95.2       95.2       81          100
          Valence        4.1 (1.2)  4.6 (0.8)  4.5 (1.2)  7.0 (1.3)  5.4 (1.2)   4.1 (1.3)
          Arousal        4.6 (1.3)  4.0 (1.6)  5.5 (1.4)  5.4 (1.7)  3.9 (1.8)   4.1 (1.7)
Male 6    % Recognition  71.4       61.9       95.2       100        90.5        76.2
          Valence        3.1 (1.6)  3.0 (1.8)  3.6 (1.6)  6.9 (1.3)  4.6 (1.7)   3.5 (1.5)
          Arousal        5.1 (1.6)  6.1 (2.3)  5.8 (1.6)  5.3 (2.1)  3.9 (2.2)   5.3 (1.3)
Female 1  % Recognition  61.9       76.2       100        100        85.7        90.5
          Valence        3.3 (1.5)  3.3 (1.6)  3.9 (1.7)  6.7 (1.1)  4.5 (1.3)   2.9 (1.2)
          Arousal        6.1 (1.8)  5.3 (2.0)  6.3 (1.9)  6.0 (1.3)  3.4 (1.6)   4.7 (1.6)
Female 2  % Recognition  28.6       100        100        100        76.2        66.7
          Valence        3.2 (1.6)  3.5 (1.0)  3.9 (1.5)  7.1 (1.1)  3.3 (1.3)   4.4 (1.0)
          Arousal        5.5 (1.5)  4.7 (1.4)  5.9 (1.9)  5.8 (1.7)  2.8 (1.6)   2.9 (1.6)
Female 3  % Recognition  95.2       71.4       95.2       100        90.5        100
          Valence        3.9 (1.0)  3.6 (2.0)  4.0 (1.1)  7.7 (1.3)  4.4 (1.0)   3.4 (1.5)
          Arousal        5.0 (1.5)  6.0 (1.7)  5.5 (1.2)  6.4 (1.5)  3.5 (1.8)   4.8 (1.5)
Female 4  % Recognition  95.2       100        100        100        95.2        100
          Valence        2.9 (1.4)  3.7 (1.3)  4.3 (1.1)  7.1 (0.9)  4.8 (0.5)   3.7 (1.4)
          Arousal        5.6 (2.3)  5.5 (1.9)  5.9 (1.7)  5.9 (2.0)  3.3 (1.7)   4.6 (1.2)
Female 5  % Recognition  90.5       95.2       100        95.2       90.5        95.2
          Valence        3.8 (1.7)  3.3 (1.0)  4.1 (1.8)  7.2 (1.1)  4.5 (1.1)   3.7 (1.2)
          Arousal        5.5 (1.7)  5.2 (1.3)  7.0 (1.5)  5.7 (1.5)  4.1 (1.9)   4.8 (1.5)
Female 6  % Recognition  52.4       42.9       90.5       100        76.2        100
          Valence        3.5 (1.6)  3.9 (1.4)  4.1 (1.1)  8.1 (0.9)  5.9 (1.1)   3.7 (1.1)
          Arousal        5.0 (1.5)  4.9 (1.8)  5.6 (1.8)  7.1 (2.0)  4.8 (2.4)   5.1 (1.6)

Note. Valence and arousal values are M (SD). The sad expression for Male 3 was not created because of videotape corruption.
APPENDIX B
DYNAMIC STIMULUS SET

Actor     Measure        Anger      Disgust    Fear       Happiness  Neutrality  Sadness
Male 1    % Recognition  76.2       52.4       90.5       100        95.2        95.2
          Valence        2.9 (1.2)  4.1 (1.3)  4.5 (1.9)  7.5 (0.9)  5.4 (0.7)   4.1 (1.1)
          Arousal        5.7 (2.0)  5.1 (1.5)  6.1 (2.0)  6.1 (1.4)  3.2 (2.1)   3.4 (1.9)
Male 2    % Recognition  95.2       85.7       100        100        95.2        100
          Valence        3.2 (1.3)  3.7 (1.1)  3.6 (1.9)  7.0 (1.0)  4.9 (0.7)   3.1 (1.4)
          Arousal        4.0 (1.3)  4.6 (2.1)  6.3 (2.1)  5.6 (1.6)  3.1 (1.9)   4.9 (1.6)
Male 3    % Recognition  71.4       85.7       95.2       100        95.2        n/a
          Valence        2.9 (1.1)  3.1 (0.8)  3.7 (1.6)  6.5 (1.2)  4.7 (0.9)   n/a
          Arousal        5.3 (1.5)  4.8 (1.9)  6.2 (1.4)  5.4 (1.4)  3.2 (2.0)   n/a
Male 4    % Recognition  95.2       85.7       90.5       100        90.5        100
          Valence        3.6 (0.9)  3.3 (1.7)  4.0 (1.8)  6.9 (2.1)  5.0 (0.9)   3.3 (1.0)
          Arousal        4.5 (1.3)  5.8 (1.5)  5.9 (1.9)  6.4 (1.8)  3.6 (2.6)   4.4 (1.4)
Male 5    % Recognition  71.4       52.4       95.2       100        85.7        100
          Valence        3.2 (1.4)  4.1 (0.9)  3.8 (1.6)  6.9 (1.1)  4.9 (0.4)   3.2 (1.3)
          Arousal        5.2 (1.3)  4.5 (1.9)  5.8 (1.5)  5.2 (2.0)  3.1 (1.9)   4.7 (1.7)
Male 6    % Recognition  66.7       85.7       100        95.2       95.2        90.5
          Valence        3.0 (0.8)  2.9 (1.5)  4.1 (1.2)  6.9 (1.7)  4.8 (0.7)   3.3 (1.5)
          Arousal        5.4 (1.8)  5.9 (1.5)  4.6 (2.2)  5.8 (2.1)  2.9 (2.0)   5.1 (2.0)
Female 1  % Recognition  57.1       57.1       100        100        95.2        85.7
          Valence        2.7 (1.6)  2.1 (1.1)  3.2 (1.3)  6.9 (1.5)  4.5 (1.3)   3.1 (0.9)
          Arousal        5.7 (2.0)  5.8 (2.1)  6.3 (1.6)  5.9 (0.9)  3.3 (2.1)   4.8 (1.3)
Female 2  % Recognition  52.4       100        100        100        85.7        66.7
          Valence        2.6 (1.3)  3.6 (0.9)  3.4 (1.5)  7.3 (1.2)  4.3 (0.9)   4.2 (0.9)
          Arousal        5.1 (2.0)  4.4 (1.8)  5.8 (1.7)  5.5 (1.6)  2.8 (1.7)   3.5 (2.2)
Female 3  % Recognition  100        81         80.1       100        90.5        100
          Valence        3.5 (1.3)  3.7 (2.1)  3.1 (1.1)  7.9 (1.2)  4.9 (0.5)   3.2 (1.0)
          Arousal        4.3 (1.8)  6.4 (1.9)  5.6 (1.8)  6.8 (1.9)  3.3 (2.0)   4.6 (1.4)
Female 4  % Recognition  100        100        95.2       100        95.2        100
          Valence        2.3 (1.1)  3.4 (2.2)  3.5 (1.3)  7.3 (1.4)  5.1 (0.7)   3.1 (1.0)
          Arousal        6.1 (1.9)  5.5 (1.8)  6.1 (1.6)  5.9 (1.8)  3.0 (1.9)   4.9 (1.1)
Female 5  % Recognition  85.7       95.2       100        100        95.2        95.2
          Valence        3.4 (1.7)  3.3 (1.0)  3.2 (1.8)  6.9 (1.6)  5.14        3.6 (1.6)
          Arousal        5.0 (2.0)  5.4 (1.8)  6.8 (2.0)  4.9 (1.8)  3.2 (2.1)   4.2 (1.4)
Female 6  % Recognition  66.7       66.7       85.7       100        85.7        95.2
          Valence        3.2 (1.3)  3.5 (1.3)  3.3 (1.3)  8.3 (0.9)  6.0 (1.0)   3.5 (1.1)
          Arousal        5.1 (1.5)  5.6 (1.3)  6.1 (1.7)  6.7 (2.1)  4.3 (2.2)   4.8 (2.1)

Note. Valence and arousal values are M (SD); the SD for Female 5's neutral valence was not available in the source. The sad expression for Male 3 was not created because of videotape corruption.
LIST OF REFERENCES

Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372(6507), 669-672.

Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception, 33(6), 717-746.

Averill, J. R. (1975). A semantic atlas of emotional concepts. JSAS Catalogue of Selected Documents in Psychology, 5, 330. (Ms. No. 421).

Balaban, M. T. (1995). Affective influences on startle in five-month-old infants: reactions to facial expressions of emotion. Child Development, 66(1), 28-36.

Beck, A. T. (1978). Depression inventory. Philadelphia: Center for Cognitive Therapy.

Bowers, D., Bauer, R., & Heilman, K. M. (1993). The Nonverbal Affect Lexicon: theoretical perspectives from neuropsychological studies of affect perception. Neuropsychology, 7(4), 433-444.

Bowers, D., Blonder, L. X., & Heilman, K. M. (1992). Florida Affect Battery. University of Florida.

Bowers, D., Parkinson, B., Gober, T., Bauer, M. C., White, E., & Bongiolatti, S. (2002, November). Two faces of emotion: patterns of startle modulation depend on facial expressions and on knowledge of evil. Poster presented at the Society for Neuroscience, Orlando, FL.

Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: the Self-Assessment Manikin and the Semantic Differential. Journal of Behavioral Therapy and Experimental Psychiatry, 25(1), 49-59.

Bradley, M. M., & Lang, P. J. (2000). Measuring emotion: behavior, feeling, and physiology. In R. D. Lane & L. Nadel (Eds.), Cognitive neuroscience of emotion (pp. 242-276). New York: Oxford University Press.

Buhlmann, U., McNally, R. J., Etcoff, N. L., Tuschen-Caffier, B., & Wilhelm, S. (2004). Emotion recognition deficits in body dysmorphic disorder. Journal of Psychiatric Research, 38(2), 201-206.
Burton, A. M., Wilson, S., Cowan, M., & Bruce, V. (1999). Face recognition in poor-quality video: evidence from security surveillance. Psychological Science, 10(3), 243-248.

Bush, L. E., II. (1973). Individual differences in multidimensional scaling of adjectives denoting feelings. Journal of Personality and Social Psychology, 25, 50-57.

Cannon, W. B. (1931). Again the James-Lange and the thalamic theories of emotion. Psychological Review, 38, 281-295.

Christie, F., & Bruce, V. (1998). The role of dynamic information in the recognition of unfamiliar faces. Memory and Cognition, 26(4), 780-790.

Cuthbert, B. N., Bradley, M. M., & Lang, P. J. (1996). Probing picture perception: activation and emotion. Psychophysiology, 33(2), 103-111.

Darwin, C. (1872). The expression of the emotions in man and animals. Chicago: University of Chicago Press.

Davis, M. (1992). The role of the amygdala in fear-potentiated startle: implications for animal models of anxiety. Trends in Pharmacological Science, 13(1), 35-41.

Davis, M., & Whalen, P. J. (2001). The amygdala: vigilance and emotion. Molecular Psychiatry, 6(1), 13-34.

DeSimone, R. (1991). Face-selective cells in the temporal cortex of monkeys. Journal of Cognitive Neuroscience, 3, 1-8.

Edwards, J., Jackson, H. J., & Pattison, P. E. (2002). Emotion recognition via facial expression and affective prosody in schizophrenia: a methodological review. Clinical Psychology Review, 22(6), 789-832.

Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. In J. Cole (Ed.), Nebraska symposium on motivation, 1971 (pp. 207-283). Lincoln, NE: University of Nebraska Press.

Ekman, P. (1973). Darwin and facial expression: a century of research in review. New York: Academic Press.

Ekman, P. (1980). The face of man: expressions of universal emotions in a New Guinea village. New York: Garland STPM Press.

Ekman, P. (1982). Emotion in the human face (2nd ed.). New York: Cambridge University Press; Editions de la Maison des Sciences de l'Homme.

Ekman, P., & Davidson, R. J. (1994). The nature of emotion: fundamental questions. New York: Oxford University Press.
Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto: Consulting Psychologists Press.

Ekman, P., Levenson, R. W., & Friesen, W. V. (1983). Autonomic nervous system activity distinguishes among emotions. Science, 221(4616), 1208-1210.

Ekman, P., & Rosenberg, E. L. (1997). What the face reveals: basic and applied studies of spontaneous expression using the facial action coding system (FACS). New York: Oxford University Press.

Eysenck, M. W., & Keane, M. (2000). Cognitive psychology: a student's handbook. Philadelphia: Taylor & Francis.

Field, T. M., Woodson, R., Greenberg, R., & Cohen, D. (1982). Discrimination and imitation of facial expression by neonates. Science, 218(4568), 179-181.

Gilboa-Schechtman, E., Foa, E. B., & Amir, N. (1999). Attentional biases for facial expressions in social phobia: the face-in-the-crowd paradigm. Cognition and Emotion, 13(3), 305-318.

Gloor, P., Olivier, A., Quesney, L. F., Andermann, F., & Horowitz, S. (1982). The role of the limbic system in experiential phenomena of temporal lobe epilepsy. Annals of Neurology, 12(2), 129-144.

Hargrave, R., Maddock, R. J., & Stone, V. (2002). Impaired recognition of facial expressions of emotion in Alzheimer's disease. Journal of Neuropsychiatry and Clinical Neurosciences, 14(1), 64-71.

Hariri, A. R., Tessitore, A., Mattay, A., Frea, F., & Weinberger, D. (2001). The amygdala response to emotional stimuli: a comparison of faces and scenes. Neuroimage, 17, 317-323.

Heilman, K. M. (1987, February). Syndromes of facial affect processing. Paper presented at the International Neuropsychological Society, Washington, DC.

Hess, W. R., & Brugger, M. (1943). Subcortical center of the affective defense reaction. In K. Akert (Ed.), Biological order and brain organization: selected works of W. R. Hess (pp. 183-202). Berlin: Springer-Verlag.

Humphreys, G. W., Donnelly, N., & Riddoch, M. J. (1993). Expression is computed separately from facial identity, and it is computed separately for moving and static faces: neuropsychological evidence. Neuropsychologia, 31(2), 173-181.

Izard, C. E. (1994). Innate and universal facial expressions: evidence from developmental and cross-cultural research. Psychological Bulletin, 115(2), 288-299.

Johnson, M. H., Dziurawiec, S., Ellis, H., & Morton, J. (1991). Newborns' preferential tracking of face-like stimuli and its subsequent decline. Cognition, 40(1-2), 1-19.
Kamachi, M., Bruce, V., Mukaida, S., Gyoba, J., Yoshikawa, S., & Akamatsu, S. (2001). Dynamic properties influence the perception of facial expressions. Perception, 30(7), 875-887.

Kan, Y., Kawamura, M., Hasegawa, Y., Mochizuki, S., & Nakamura, K. (2002). Recognition of emotion from facial, prosodic and written verbal stimuli in Parkinson's disease. Cortex, 38(4), 623-630.

Kilts, C. D., Egan, G., Gideon, D. A., Ely, T. D., & Hoffman, J. M. (2003). Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. Neuroimage, 18(1), 156-168.

Klüver, H., & Bucy, P. C. (1937). "Psychic blindness" and other symptoms following bilateral temporal lobectomy. American Journal of Physiology, 119, 352-353.

Kohler, C. G., Bilker, W., Hagendoorn, M., Gur, R. E., & Gur, R. C. (2000). Emotion recognition deficit in schizophrenia: association with symptomatology and cognition. Biological Psychiatry, 48(2), 127-136.

Lander, K., & Bruce, V. (2004). Repetition priming from moving faces. Memory and Cognition, 32(4), 640-647.

Lander, K., Christie, F., & Bruce, V. (1999). The role of movement in the recognition of famous faces. Memory and Cognition, 27(6), 974-985.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1990). Emotion, attention, and the startle reflex. Psychological Review, 97(3), 377-395.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1997). Motivated attention: affect, activation and action. In P. J. Lang, R. F. Simons & M. T. Balaban (Eds.), Attention and orienting: sensory and motivational processes. Hillsdale, NJ: Lawrence Erlbaum.

Lang, P. J., Bradley, M. M., Cuthbert, B. N., & Patrick, C. J. (1993). Emotion and psychopathology: a startle probe analysis. Progress in Experimental, Personality, and Psychopathological Research, 16, 163-199.

Lang, P. J., Greenwald, M. K., Bradley, M. M., & Hamm, A. O. (1993). Looking at pictures: affective, facial, visceral, and behavioral reactions. Psychophysiology, 30, 261-273.

Leonard, C., Voeller, K. K. S., & Kuldau, J. M. (1991). When's a smile a smile? Or how to detect a message by digitizing the signal. Psychological Science, 2, 166-172.

Levenson, R. W., Carstensen, L. L., Friesen, W. V., & Ekman, P. (1991). Emotion, physiology, and expression in old age. Psychology and Aging, 6(1), 28-35.
Levenson, R. W., Ekman, P., & Friesen, W. V. (1990). Voluntary facial action generates emotion-specific autonomic nervous system activity. Psychophysiology, 27(4), 363-384.

Monrad-Krohn, G. H. (1924). On the dissociation of voluntary and emotional innervation in facial paralysis of central origin. Brain, 47, 22-35.

Morris, J. S., Friston, K. J., Buchel, C., Frith, C. D., Young, A. W., Calder, A. J., et al. (1998). A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain, 121(Pt 1), 47-57.

Morris, M., Bradley, M. M., Bowers, D., Lang, P. J., & Heilman, K. M. (1991). Valence specific hypoarousal following right temporal lobectomy [Abstract]. Journal of Clinical and Experimental Neuropsychology, 14, 105.

Nelson, C. A., & Dolgrin, K. G. (1985). The generalized discrimination of facial expressions by seven-month-old infants. Child Development, 56, 58-61.

Oatley, K., & Jenkins, J. M. (1996). Understanding emotions. Cambridge: Blackwell Publishers.

O'Gorman, J. G. (1990). Individual differences in the orienting response: nonresponding in nonclinical samples. Pavlov Journal of Biological Science, 25(3), 104-108; discussion 109-110.

Okun, M. S., Bowers, D., Springer, U., Shapira, N., Malone, D., Rezai, A., Nuttin, B., Heilman, K. M., Morecraft, R., Rasmussen, S., Greenberg, B., Foote, K., & Goodman, W. (2004). What's in a "smile?" Intra-operative observations of contralateral smiles induced by deep brain stimulation. Neurocase, 10(4), 271-279.

Ornitz, E. M., Russell, A. T., Yuan, H., & Liu, M. (1996). Autonomic, electroencephalographic, and myogenic activity accompanying startle and its habituation during mid-childhood. Psychophysiology, 33(5), 507-513.

Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning. Chicago: University of Illinois Press.

O'Toole, A. J., Roark, D. A., & Abdi, H. (2002). Recognizing moving faces: a psychological and neural synthesis. Trends in Cognitive Science, 6(6), 261-266.

Pike, G. E., Kemp, R. I., Towell, N. A., & Phillips, K. C. (1997). Recognizing moving faces: the relative contribution of motion and perspective view information. Visual Cognition, 4(4), 409-438.

Puce, A., & Perrett, D. (2003). Electrophysiology and brain imaging of biological motion. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 358(1431), 435-445.


Puce, A., Syngeniotis, A., Thompson, J. C., Abbott, D. F., Wheaton, K. J., & Castiello, U. (2003). The human temporal lobe integrates facial form and motion: evidence from fMRI and ERP studies. Neuroimage, 19(3), 861-869.
Rinn, W. E. (1984). The neuropsychology of facial expression: a review of the neurological and psychological mechanisms for producing facial expressions. Psychological Bulletin, 95(1), 52-77.
Roberts, R. J., & Weerts, T. C. (1982). Cardiovascular responding during anger and fear imagery. Psychological Reports, 50(1), 219-230.
Rosen, J. B., & Davis, M. (1988). Enhancement of the acoustic startle by electrical stimulation of the amygdala. Behavioral Neuroscience, 102(2), 195-202.
Russell, J. A. (1978). Evidence of convergent validity on the dimensions of affect. Journal of Personality and Social Psychology, 36, 1152-1168.
Russell, J. A., & Mehrabian, A. (1977). Evidence for a three-factor theory of emotions. Journal of Research in Personality, 11, 273-294.
Russell, J. A., & Ridgeway, D. (1983). Dimensions underlying children's emotion concepts. Developmental Psychology, 19, 785-804.
Schlosberg, H. (1952). The description of facial expressions in terms of two dimensions. Journal of Experimental Psychology, 44(4), 229-237.
Schwartz, G. E., Weinberger, D. A., & Singer, J. A. (1981). Cardiovascular differentiation of happiness, sadness, anger, and fear following imagery and exercise. Psychosomatic Medicine, 43(4), 343-364.
Singh, S. D., Ellis, C. R., Winton, A. S., Singh, N. N., Leung, J. P., & Oswald, D. P. (1998). Recognition of facial expressions of emotion by children with attention-deficit hyperactivity disorder. Behavior Modification, 22(2), 128-142.
Sorce, J., Emde, R., Campos, J., & Klinnert, M. (1985). Maternal emotional signaling: its effect on the visual cliff behavior of 1-year-olds. Developmental Psychology, 21(1), 195-200.
Spielberger, C. D. (1983). State-Trait Anxiety Inventory. Palo Alto, CA: Mind Garden.
Sprengelmeyer, R., Rausch, M., Eysel, U. T., & Przuntek, H. (1998). Neural structures associated with recognition of facial expressions of basic emotions. Proceedings of the Royal Society of London. Series B, Biological Sciences, 265(1409), 1927-1931.
Sprengelmeyer, R., Young, A. W., Calder, A. J., Karnat, A., Lange, H., Homberg, V., et al. (1996). Loss of disgust: perception of faces and emotions in Huntington's disease. Brain, 119(Pt 5), 1647-1665.


Sprengelmeyer, R., Young, A. W., Mahn, K., Schroeder, U., Woitalla, D., Buttner, T., et al. (2003). Facial expression recognition in people with medicated and unmedicated Parkinson's disease. Neuropsychologia, 41(8), 1047-1057.
Tanaka, K. (1992). Inferotemporal cortex and higher visual functions. Current Opinion in Neurobiology, 2, 502-505.
Teunisse, J. P., & de Gelder, B. (2001). Impaired categorical perception of facial expressions in high-functioning adolescents with autism. Neuropsychology, Development, and Cognition. Section C, Child Neuropsychology, 7(1), 1-14.
Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behavior (pp. 549-586). Cambridge: MIT Press.
Walker, D. L., & Davis, M. (2002). Quantifying fear potentiated startle using absolute versus proportional increase scoring methods: implications for the neurocircuitry of fear and anxiety. Psychopharmacology, 164, 318-328.
Wehrle, T., Kaiser, S., Schmidt, S., & Scherer, K. R. (2000). Studying the dynamics of emotional expression using synthesized facial muscle movements. Journal of Personality and Social Psychology, 78(1), 105-119.
Wundt, W. (1897). Outlines of psychology (C. H. Judd, Trans.). New York: Gustav E. Stechert.
Young, A. W., Aggleton, J. P., Hellawell, D. J., Johnson, M., Broks, P., & Hanley, J. R. (1995). Face processing impairments after amygdalotomy. Brain, 118(Pt 1), 15-24.
Young, A. W., Rowland, D., Calder, A. J., Etcoff, N. L., Seth, A., & Perrett, D. I. (1997). Facial expression megamix: tests of dimensional and category accounts of emotion recognition. Cognition, 63(3), 271-313.
Zeki, S. (1992). The visual image in mind and brain. Scientific American, 267(3), 68-76.
Zihl, J., von Cramon, D., & Mai, N. (1983). Selective disturbance of movement vision after bilateral brain damage. Brain, 106(Pt 2), 313-340.


BIOGRAPHICAL SKETCH

Utaka Springer was born in Menomonie, WI, and received his B.S. in biology from Harvard University. After gaining research experience in cognitive neuroscience at the McKnight Brain Institute in Gainesville, FL, he entered the doctoral program in clinical psychology at the University of Florida, specializing in neuropsychology.