Assessment of facial expression asymmetries utilizing digitized image analysis and impressionistic ratings


Material Information

Assessment of facial expression asymmetries utilizing digitized image analysis and impressionistic ratings
Physical Description:
vi, 182 leaves ; 29 cm.
Creator:
Richardson, Charles K., 1963-
Publication Date:
1996

Subjects / Keywords:
Research   ( mesh )
Facial Expression   ( mesh )
Emotions   ( mesh )
Models, Psychological   ( mesh )
Image Processing, Computer-Assisted   ( mesh )
Image Interpretation, Computer-Assisted   ( mesh )
Data Collection   ( mesh )
Data Interpretation, Statistical   ( mesh )
Department of Clinical and Health Psychology thesis Ph.D   ( mesh )
Dissertations, Academic -- College of Health Professions -- Department of Clinical and Health Psychology -- UF   ( mesh )
bibliography   ( marcgt )
non-fiction   ( marcgt )


Thesis (Ph.D.)--University of Florida, 1996.
Bibliography: leaves 167-181.
Statement of Responsibility:
by Charles K. Richardson.
General Note:

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002287251
oclc - 49346816
notis - ALP0404
System ID:

Full Text









ABSTRACT ................................................ iv

INTRODUCTION ............................................ 1

Defining Emotion ...................................... 1
Neuropsychology of Emotion............................. 6
Evaluation of Nonverbal Emotional Signals............. 7
Valence Hypothesis .................................... 11
Expression of Emotional Signals........................ 14
Methodological Issues in Facial Expression Research... 15
Production of Facial Expressions....................... 24
Spontaneous Expressions in Normals: Laterality
Studies.............................................. 25
Spontaneous Expressions in Brain-Impaired Subjects.... 28
Posed Expressions in Brain-Impaired Subjects.......... 30
Posed Expressions in Normals............................ 31
Morphological Variation................................. 37
Methodological Differences............................ 41

GENERAL ISSUES ADDRESSED ................................ 56

HYPOTHESES............................................... 64

METHOD .................................................. 66

Overview ............................................. 66
Subjects.............................................. 66
Stimulus Generation................................. 67
Stimulus Preparation.................................. 69
Equipment............................................ 70
Research Assistants................................... 72
Digitization of Pixel Intensities...................... 74
Subjective Ratings .................................... 75
Raters................................................ 76
Ratings Procedure ..................................... 78

RESULTS ............................................... 79

Laterality of Emotional Expressions based on Entropy
Score................................................ 80
Laterality of Emotional Expressions based on Asymmetry
Score................................................ 82

Influence of Lighting on Expressive Asymmetries....... 85
Emotional Expression Laterality with Correction for
Lighting............................................. 87
Emotional Laterality based on Adjusted Asymmetry Score 91
Asymmetry Scores for Additional Emotional Expressions 94
Nonemotional Laterality based on Adjusted Entropy
Score.............................................. 96
Nonemotional Laterality based on Adjusted Asymmetry
Score ............................................... 100
Adjusted Entropy Scores of Additional Expressions..... 101
Adjusted Asymmetry Scores of Additional Expressions... 105
Association between Emotional and Non-emotional
Expressions........................................ 108
Non-parametric Tests of Laterality..................... 109
Subjective Ratings of Expressive Asymmetries.......... 110
Effect of Lighting Bias on Subjective Ratings......... 111
Agreement between Digitized Data and Subjective
Ratings............................................. 112
Vertical Focus of Subjective Ratings................... 113
Effect of Training on Subjective Ratings.............. 115

DISCUSSION .............................................. 118

Asymmetries in the Lower Face: Valence Dependent..... 122
Asymmetries in the Upper Face: Right Upper Bias...... 129
Relationship between Digitized Data and Subjective
Ratings .............................................. 141
Methodological Issues and the Subjective Ratings...... 144

CONCLUSION AND FUTURE DIRECTIONS......................... 150

Evidence Supporting the Valence Hypothesis............ 151
Evidence Supporting the Right Hemisphere Hypothesis... 153
Evidence Supporting the Facial Mobility Hypothesis.... 154
Multiple Factors Contributing to Expressive Biases.... 156
Methodological Issues and Future Directions........... 158

APPENDIX................................................. 162

REFERENCES............................................... 166

BIOGRAPHICAL SKETCH...................................... 181


Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy



Charles K. Richardson

December 1996

Chair: Dawn Bowers
Cochair: Russell M. Bauer
Major Department: Clinical and Health Psychology

Facial expressions constitute a prominent aspect of emotional behavior.
Prior research has generally found that the left hemiface is more
expressive than the right hemiface when posing emotional expressions.
Three major proposals account for expressive asymmetries: the Right
Hemisphere, Valence, and Facial Mobility hypotheses. Research reporting
expressive asymmetries has primarily relied upon subjective ratings,
while studies failing to replicate these findings have generally
utilized more objective measurement systems.

In an effort to find an "objective" method which has the sensitivity to
uncover possible facial asymmetries, digitized image analysis has been
proposed as an alternative approach. In this study, digitized image
analysis was used to scrutinize videotaped images of posed facial
expressions. Facial movement was inferred from changes in pixel
intensities during facial expressions. The degree of movement for each
hemiface was measured during emotional and nonemotional expressions.
Subjective ratings were also obtained and compared with the digitized
image analysis.
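The measurement principle described above, inferring movement from
frame-to-frame changes in pixel intensities summed separately for each
hemiface, can be sketched in a few lines. This is a minimal
illustration under stated assumptions, not the dissertation's actual
procedure: the function names, the simple absolute-difference measure,
and the (L - R)/(L + R) laterality index are illustrative stand-ins for
the study's entropy and asymmetry scores, which are defined in its
Method section.

```python
import numpy as np

def hemiface_movement(frame_a, frame_b, midline=None):
    """Estimate per-hemiface movement from two grayscale frames.

    Movement is inferred from changes in pixel intensities between
    successive frames; summing absolute differences over each half of
    the image is an illustrative stand-in for the study's scores.
    Assumes the face is roughly centered and the image midline
    approximates the facial midline.
    """
    diff = np.abs(frame_b.astype(float) - frame_a.astype(float))
    mid = diff.shape[1] // 2 if midline is None else midline
    # The viewer's left half of the image is the poser's right hemiface.
    right_hemiface = diff[:, :mid].sum()
    left_hemiface = diff[:, mid:].sum()
    return left_hemiface, right_hemiface

def laterality_index(left, right):
    """(L - R) / (L + R): positive values indicate a left-hemiface bias."""
    total = left + right
    return 0.0 if total == 0 else (left - right) / total
```

In practice such an index would be computed over every consecutive
frame pair of the videotaped expression and aggregated, so that a
single signed value summarizes the expressive bias of each trial.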

The primary questions of interest were whether hemiface asymmetries in
dynamic expressions would be detectable using digitized image analysis
and to what degree subjective ratings are consistent with the digitized
data. Other major questions of interest were whether expressive biases
conform to our current knowledge of neuroanatomy and which of the major
competing hypotheses would be most consistent with the findings of the
digitized analysis.

The results indicated the presence of hemifacial expression asymmetries
using digitized image analysis. The pattern of lateral biases differed
depending upon the portion of the face examined. In the lower face,
negative expressions were left-biased and the positive expression
displayed a right-sided trend. The nonemotional expressions did not
suggest a clear pattern of asymmetry. In the upper face, nearly all of
the emotional and nonemotional expressions were right-biased.
Expressive asymmetries were more consistent with the Facial Mobility
hypothesis in the upper face and the Valence hypothesis in the lower
face. In contrast, the subjective ratings indicated that no emotional
expression was asymmetrical. There were significant methodological
weaknesses associated with the subjective ratings in this study. The
discussion focuses on the implications of the digitized data.


Defining Emotion

The study of emotion has been an integral aspect of psychology and
philosophy since the beginnings of these disciplines. One of the most
enduring contributions of the founding father of American psychology,
William James, is his theory of emotions. Despite the attention emotion
has received, a consensus has yet to be developed regarding the
definition of emotion. Most definitions tend to include four major
components: physiological arousal, motor activity, subjective feeling
and an evaluative capacity (Borod, 1993; Bowers, Bauer and Heilman,
1993). Fehr and Russell (1984) noted, however, that attempts to
incorporate these four components within a definition have failed to
produce one which precludes other cognitive processes, such as
attitudes or motives. Consequently, they concluded that a classical
definition of emotion is likely unattainable.

Despite the difficulty of delineating an airtight definition of
emotion, vigorous scientific investigation has continued. In general,
the question has been addressed by focusing on one or more of the
aforementioned components of physiological arousal, motor activity,
subjective feeling and evaluative ability. While there may be
substantial agreement concerning these features, the role of cognition
is hotly debated. For example, Zajonc (1980) has proposed a distinct
emotional system that does not require cognitive processes to be
activated. He further asserts that affective reactions occur more
rapidly than the more analytical, cognitive processes.

The ideas of Zajonc and other similarly minded theorists share some
aspects of the James (1892) view of emotions, which emphasizes the role
of primitive perceptual analysis leading to autonomic and motor
responses. Zajonc's views (1980), however, differ from those of James
(1892) in that subjective emotional experience is posited to occur
almost simultaneously with the rapid affective perceptual system. James
(1892), on the other hand, argued that subjective feeling developed
after one sensed a change in one's autonomic system.

In this regard, James' (1892) theory is more consistent with the
proposals that favor a cognitive role in emotional experience
(Schachter and Singer, 1962; Lazarus, 1990). They suggest that there
cannot be emotion without cognition, and thus, a separate entity of
emotion is not possible. For instance, Schachter and Singer (1962)
asserted that cognitive attributions must exist prior to the subjective
feeling of emotion, and that it is through the feedback process between
cognition, physiological arousal and motor activity that the subjective
feeling of emotion is experienced.

In support of their proposition, they reported an experiment in which
subjects injected with epinephrine experienced emotion only if an
emotional stimulus was present in the environment. They argued that
one's physiological arousal must be attributed to an emotional stimulus
in order for the subjective experience of emotion to develop (Schachter
and Singer, 1962). Consistent with this viewpoint is evidence
suggesting that cognitive decisions are made more rapidly than ones
requiring an affective response. In a review of a series of studies,
Feyereisen (1989) concluded that when examining a set of stimuli,
subjects are slower at making affective decisions than cognitive ones,
a conclusion which contradicts Zajonc's (1980) position that the
affective system operates more rapidly than the cognitive system.

On the other hand, there are several lines of evidence that are
inconsistent with the cognitive perspective on emotion. For example,
several studies have described seizure patients who experience the
sensation of an emotional state, usually fear, at the initial stages of
a seizure (Strauss, 1989). The fact that these patients experience
other emotions besides fear suggests that this phenomenon is not the
result of their sensing an impending seizure. Furthermore, seizure
patients who have had their amygdala electrically stimulated have also
described various emotional sensations (Halgren, Walter, Cherlow and
Crandall, 1978; Gloor, Olivier, Quesney, Anderman and Horowitz, 1982).
In addition, single cell recordings in the amygdala of primates have
found that certain cells are particularly sensitive to emotional
stimuli, or more specifically, emotional faces (Leonard, Rolls and
Wilson, 1985). This might provide a link between the evaluation and the
experience of emotion (LeDoux, 1989). Taken together, these findings
suggest that there are neurons which are specialized in producing
subjective emotional states and that cortical involvement may not be
necessary for the experience of emotion.

Lazarus (1990) has asserted that the conflict between the cognitive and
non-cognitive viewpoints of emotion originates in a simplistic
understanding of cognition. He disputes the commonly held notion that
cognitions are solely the function of conscious processes. He argues,
instead, that cognitions include even basic mental operations that are
calculated without the awareness of the individual (Lazarus, 1990;
Buck, 1993). In defense of his position, there exists substantial
evidence of a dissociation between implicit and explicit learning on
semantic memory tasks (Schacter, 1987; Squire, 1987). A dissociation of
implicit and explicit memory indicates that one can develop a cognitive
representation of a stimulus and not be aware of it.



Buck (1993) contends, however, that if one accepts the broad view of
cognition advanced by Lazarus (1990), then it becomes increasingly
difficult to distinguish cognition from perception even at the most
elemental levels. If cognition encompasses nearly all forms of
perception, then there is little that separates the cognitive and
non-cognitive viewpoints (Buck, 1993). Consequently, one may wonder
whether the debate on the definition of emotion is actually a debate on
the meaning of cognition (Buck, 1993).

With this in mind, it is conceivable that the cognitive/perceptual
component of emotion is basically equivalent to the evaluative
component of emotion. Accepting the view that emotion consists of
several components has been the predominant opinion among psychologists
and has been associated with more fruitful research by allowing
scientists to focus on particular elements, rather than grapple with
all aspects of emotion. The focus of this research endeavor is on the
motor activity component of emotion as it involves the production of
emotional facial expressions.

The motor activity component of emotion, as well as emotion in general,
has both biological and psychological dimensions. As a result, some
researchers have approached the question of emotion from a more
psychological or cognitive perspective while others have addressed it
from a more biological viewpoint. Because neuropsychology attempts to
integrate these perspectives, research from a neuropsychological
approach may prove useful in improving our understanding of emotion
(Heller, 1990).

Neuropsychology of Emotion

An assumption of the neuropsychological approach is that the central
nervous system is organized such that specific anatomical entities
control distinctive aspects of an organism's response repertoire. These
specialized operations include cognition, attention, emotion, arousal,
etc. An overwhelming array of evidence has accumulated such that this
assumption is no longer seriously debated within the neuroscientific
community (Heller, 1990). One of the basic anatomical divisions of the
primate brain is that of the left and right hemispheres. It is well
established that the two hemispheres each have capabilities for which
they are undoubtedly specialized (McGlone, 1980; Bradshaw and
Nettleton, 1981; Heilman and Valenstein, 1993). A wide body of evidence
indicates that the left hemisphere is involved in the processing and
production of verbal information. Many nonverbal processes, however,
appear to be under the control of the right hemisphere. The most
notable example is that of interpreting and generating spatial
configurations and relationships (McGlone, 1980; Bradshaw and
Nettleton, 1981; Benton and Tranel, 1993).

In the past 25 years, research has suggested an additional
specialization of the right hemisphere involving emotion. As previously
mentioned, there are several components associated with emotions,
including physiological arousal, motor activity, subjective feeling and
a perceptual/cognitive function. Neuropsychological research indicates
that these elements may be dissociated and that support for a right
hemisphere advantage depends upon the component studied (Tucker, 1993).
Scientific inquiries involving the evaluative (perception/cognition)
and expressive (motor activity) aspects of emotion have suggested a
greater contribution of the right hemisphere system (Bowers and
Heilman, 1984; Borod, 1993; Heilman, Bowers and Valenstein, 1993).
Consequently, the right hemisphere has been hypothesized to be
specially equipped to analyze and interpret emotional stimuli (e.g.
facial expression, tone of voice) as well as express these same
signals. Results consistent with a right hemisphere specialization have
more often been associated with the evaluation or interpretation of
emotion, whereas the evidence for a similar specialization in the
production or expression of emotion is less conclusive (Bryden and Ley,
1983; Borod and Koff, 1984; Bruyer, 1986).

Evaluation of Nonverbal Emotional Signals

Two distinct approaches have been used to make inferences about brain
involvement in the processing of nonverbal emotional stimuli. The first
approach has relied on evidence obtained from brain-impaired subjects,
and the second has utilized normal subjects participating in tasks in
which stimuli are presented to a specific hemisphere. Two early studies
using the former approach were conducted by Heilman and colleagues
(Heilman, Scholes and Watson, 1975; Tucker, Heilman and Watson, 1977).
They introduced propositionally neutral sentences to right hemisphere
diseased (RHD) subjects, left hemisphere diseased (LHD) subjects and
normal controls. The prosody of the sentences, however, varied between
four different emotions. RHD subjects were found to be impaired in
their ability to identify the emotional prosody. Ross (1981) also
reported that RHD subjects were impaired in their ability to comprehend
emotional prosody.

Weintraub, Mesulam and Kramer (1981), on the other hand, proposed that
RHD difficulty processing emotionally intoned speech was the result of
a general prosodic defect, rather than one specific to emotional
prosody. Consistent with the view of Weintraub et al. (1981), Van
Lancker and colleagues have reported that RHD subjects have difficulty
discriminating and recognizing familiar voices (Van Lancker and
Kreiman, 1988; Van Lancker, Kreiman and Cummings, 1989). Heilman,
Bowers, Speedie and Coslett (1984) also found that both LHDs and RHDs
were disrupted in their ability to understand nonemotional prosody;
however, RHDs were significantly more impaired than LHDs in their
ability to comprehend emotional prosody. Their findings, which were
corroborated by Bowers, Coslett, Bauer, Speedie and Heilman (1987),
suggest that a general prosodic defect may be the consequence of
cortical damage to either hemisphere, yet RHD specifically enhances the
impact of this defect for processing emotional prosody.

Further research with focal lesion patients has indicated that a right
hemisphere specialization for processing affective stimuli is not
limited to the auditory channel. Dekosky, Heilman, Bowers and
Valenstein (1980) assessed RHD and LHD subjects' capacity to name,
select and discriminate emotional faces. They found that although LHD
subjects had difficulty with some tasks, RHD subjects were more
impaired. In addition, Cicone, Wapner and Gardner (1980) also reported
that RHDs had difficulty processing facial emotions. To rule out the
possibility that an RHD impairment in emotional face recognition was
secondary to a general defect in visuo-spatial processing, Bowers,
Bauer, Coslett and Heilman (1985) conducted a further study of facial
affect processing. They statistically equated RHDs, LHDs and controls
on a measure of visuo-spatial ability and found that RHD patients
remained impaired (relative to LHDs and controls) in their ability to
distinguish the emotional category of a facial expression. Their
findings suggest that there is a right hemisphere advantage for
processing emotional faces in addition to a specialization for
spatial-configuration analysis.

Along with studies of brain damaged subjects, additional research has
been conducted using normals. Through dichotic or monoaural listening
tasks (Safer and Leventhal, 1977; Ley and Bryden, 1982) and
tachistoscopic studies (Ley and Bryden, 1979; Safer, 1981; Strauss and
Moscovitch, 1981), further evidence has accumulated that is consistent
with a right hemisphere advantage in the processing of nonverbal
affective signals. In general, these studies have supported a right
hemisphere specialization; however, there has been some discrepancy in
the findings.

For example, Lavadas, Umilta and Ricci-Bitti (1980) found that only
women displayed a right hemisphere/left visual field (LVF) advantage
for naming emotional faces. Safer (1981) also found sex differences in
visual field superiority for matching emotions, although it was males,
rather than females, who displayed the LVF advantage. In a review of
the literature, Bryden and Ley (1983) reported that of the
investigations finding a gender effect for lateralization, neither
gender was consistently associated with greater lateralization for
processing visual, affective stimuli.
In addition to possible sex effects, there is some evidence that the
valence of the target emotion impacts the visual field advantage. For
example, Reuter-Lorenz and Davidson (1981) found a LVF advantage for
positive (happy) affective material and a right visual field advantage
for negative (sad) expressions. Other researchers, however, have not
reported an association between visual field superiority and emotional
valence (Duda and Brown, 1984; Bryson, McLaren, Wadden and MacLean,
1991). Thus, neither the sex of the subject nor the valence of the
emotion was consistently associated with visual field advantages. Taken
together, the majority of studies using normals and brain-impaired
subjects across several channels of perception are consistent with a
right hemisphere specialization in the evaluation of nonverbal
affective signals.

Valence Hypothesis

The results of the Reuter-Lorenz and Davidson (1981) study were
inconsistent with most research in this area, yet they were compatible
with an alternative hypothesis involving a hemispheric specialization
for emotion. Proponents of this, the Valence hypothesis, originally
proposed that the left hemisphere is specialized for positive emotions
and the right hemisphere for negative ones (Davidson, Schwartz, Saron,
Bennett and Goleman, 1979). Support for the Valence hypothesis is based
both on clinical observations of brain-impaired patients (Gainotti,
1972) as well as on electroencephalograms (EEG) from either depressed
patients (Robinson and Szetela, 1981) or normals undergoing mood
induction (Davidson et al., 1979). In addition, subjects who have had
one hemisphere of the brain anesthetized (via sodium amytal injection
into the carotid artery) have responded in a manner consistent with the
Valence hypothesis (Lee, Loring, Meador and Flanigin, 1987).

Gainotti (1972) reported that the affect of LHD patients appeared to be
more depressed and the affect of RHD patients seemed to be either
neutral or euphoric. Robinson and coworkers (Robinson and Szetela,
1981; Robinson, Kubos, Rao and Price, 1984) have found that post-stroke
depression is more often associated with left hemisphere lesions and
that the severity of the depression increases with greater damage
concentrated in the anterior portion (i.e. left frontal). In regard to
depressed patients, EEG data indicate that there is a greater increase
in right versus left frontal activity (Schaffer, Davidson and Saron,
1983). Furthermore, Perris (1975) has found that the severity of the
depression is correlated with the activity level in the right frontal
region. Similarly, normals receiving mood induction display greater
right hemisphere activity in the sad or depressed condition and greater
left hemisphere activity in the positive condition (Davidson et al.,
1979). Finally, the affective demeanor of those whose left hemisphere
has been anesthetized is generally negative (anxious and frightened),
while that of those whose right hemisphere has been anesthetized is
generally neutral or euphoric (Terzian, 1964; Lee et al., 1987).

Because research supportive of the Valence hypothesis has generally
been associated with the experience of emotion and research consistent
with the RH hypothesis has frequently been connected to the evaluation
of emotion, Davidson has proposed a model which accounts for the
discrepant findings between the evaluation and the experience of
emotion (Fox and Davidson, 1984; Davidson, 1993). His model not only
distinguishes between these components of emotion, but it also
challenges the core assumptions of the Valence hypothesis. He argues
that what appear to be differences in emotional valence may actually be
differences in a more fundamental system. He proposes that this system
is based upon tendencies to approach and withdraw, such that the left
hemisphere is involved with approach behaviors and the right hemisphere
is associated with avoidant behaviors.

Whether affective valence or approach/withdrawal is more fundamental
remains debatable, as does the Valence hypothesis in general. There
have been other challenges to the Valence hypothesis. For example,
Heilman and Watson (1989) have proposed an alternative model based on a
right hemispheric specialization for both arousal and the evaluation of
nonverbal stimuli and a left hemispheric specialization for the
interpretation of verbal stimuli. Their model suggests that hemispheric
activation as measured by EEG is related to whether a stimulus is
arousing and requires verbal or nonverbal processing. Thus, it is the
arousing aspects of an affectively negative experience, rather than its
emotionality, that is associated with heightened RH activity.

Expression of Emotional Signals

Hemispheric specializations may not be limited to the evaluation of
nonverbal emotional signals. A hemispheric advantage for the production
of emotion may also exist. As previously reviewed (see section on
evaluation of nonverbal emotional signals), there are several different
channels through which emotional expression can occur. Hand movements
or body posture can convey an emotional message. Facial expressions are
also a rich source of affective information, as is the intonation of
speech. Consequently, many of these channels have been utilized to
explore hemispheric advantages in the expression of emotion.

For example, Tucker et al. (1977) investigated the performance of brain
injured patients on a task which requested affectively intoned speech.
The results indicated that the speech of RHD subjects was particularly
flat and that production was severely impaired as compared to controls
and LHD patients. Similarly, Ross (1981) reported that RHD patients
were impaired in their ability to convey the emotional meaning of a
sentence through intonation. Borod, Welkowitz et al. (1990) as well as
Shapiro and Danly (1985) have also found that RHDs display deficits in
prosodic expressivity. Cancelliere and Kertesz (1990), however, have
reported results inconsistent with the majority of the research on
prosodic expression in focal lesion patients. In response, Borod (1993)
noted that Cancelliere and Kertesz (1990) studied patients with acute
lesions, which may have contributed to their anomalous findings. In
general, research indicates that the right hemisphere may be critical
for the production of affective prosody.

Further analysis has focused on the facial channel of emotional
expression. A substantial collection of studies during the past three
decades has explored a possible right hemisphere advantage in the
production of voluntary and spontaneous facial expressions. The results
have generally been inconsistent in their support of a right hemisphere
specialization for the production of facial emotions. The rest of the
introduction will be devoted to the exploration of this body of
research and its seemingly conflictual findings.

Methodological Issues in Facial Expression Research

Prior to reviewing this literature, however, the basic set-up of this
research will be described. The typical experiment begins with subjects
producing facial expressions in response to a verbal command (e.g. show
me how you would look if happy). Subjects may be asked to generate a
facial expression that matches the emotional content of a cartoon
drawing (e.g. a person receiving an award) or a verbal description of
an emotional scene (e.g. the university has just informed you that your
daughter has made straight A's). In addition, subjects may be requested
to mimic the facial expression of another person in a photograph. The
room is usually well lit so that investigators are able to videotape or
photograph the subjects' facial expressions. Subjects may or may not be
informed that they are being videotaped or photographed. Facial
expressions produced in these conditions are considered posed or
voluntary.

In other designs, subjects may view film clips or recall memories from
their past while their spontaneous reactions are unobtrusively
recorded. The key aspect of these elicitation procedures is that
subjects are never asked to make facial expressions. Thus, the
expressions in these conditions are considered to be spontaneous or
involuntary. The distinction between voluntary/posed and
involuntary/spontaneous is important since different neural circuitries
are thought to underlie each (for review, see Rinn, 1984).

In brief, the neural pathways responsible for spontaneous expressions
are extrapyramidal and involve the subcortex, possibly originating in
the thalamus or hypothalamus (Poeck, 1969). The neural pathways
associated with posed expressions, on the other hand, are pyramidal and
receive both direct and indirect input from the motor cortex (Noback
and Demerest, 1975; Brodal, 1981). Evidence for a dissociation between
these expressive systems comes from patients with motor cortex lesions
who cannot produce voluntary emotional expressions contralateral to the
lesion, yet do display spontaneous emotional expressions without
asymmetry (Monrad-Krohn, 1924; Rinn, 1984). In addition, patients with
Parkinson's disease appear to have flat affect lacking a spontaneous
emotional quality, yet can mimic emotional expressions on command
(Kahn, 1964; Rinn, 1984; Pizzamiglio, Caltagirone and Zoccolotti,
1989). Further evidence of a dissociation can be found in patients
receiving facial nerve surgery and in patients suffering from
pathological laughing or crying (Poeck, 1969; Rinn, 1984; Pizzamiglio
et al., 1989).

Because pyramidal and extrapyramidal tracts differ to

the degree in which they innervate the ipsilateral and

contralateral sides of the face, it is critical that

researchers differentiate between the two systems (Kahn,

1964; Rinn, 1984; Thompson, 1985). Neuroanatomical evidence

suggests that there is contralateral innervation associated

with posed expressions (Crosby and Dejong, 1963; Kahn, 1964;

Rinn, 1984), although the pattern of innervation differs

between the upper one-third of the face and the lower two-

thirds of the face. According to Kuypers (1958), there is a

bilateral pattern of innervation of the forehead and brow

with an apparent equal distribution of contralateral and

ipsilateral fibers. The lower portion of the face is

believed to be primarily innervated by fibers from the

contralateral hemisphere. The innervation pattern related

to spontaneous emotional facial expressions appears to be

bilateral, although this may be debatable (Thompson, 1985;

Pizzamiglio et al., 1989). Thus, inferences about

hemispheric cortical activity based upon facial asymmetries

are more consistent with neuroanatomical evidence for posed

expressions than spontaneous ones.

Once a facial expression is elicited, whether

spontaneous or posed, a variety of methodological

differences exist in terms of how or what portions of the

face are captured for eventual measurement and rating of

emotionality. Some studies have utilized the whole face in

its natural form and asked raters to judge which side is

more expressive (Borod, Caron and Koff, 1981; Borod and

Koff, 1983). The whole face image represents the most

ecologically valid representation, since it is a

reproduction of an actual face. The disadvantage of

presenting a normal image of the whole face, however, is the

lack of control for possible visual field preferences of the

raters. In other words, one side of the face may receive

greater attention or superior processing due to the visual

field advantages of the raters. Therefore, ratings favoring

one side of the face may reflect an asymmetry in the visual

processing of emotional stimuli, rather than an expressive

asymmetry of the posers. Studies have indicated that there


is a left visual field bias for processing emotional facial

expressions (Gilbert and Balkan, 1973; Bennett, Delmonico

and Bond, 1987).

To adjust for visual field advantages, researchers have

created mirror reversals of emotional expressions (Borod,

Koff, Lorch, Nicholas & Welkowitz, 1988; Moreno, Borod,

Welkowitz and Alpert, 1990). A mirror reversal is the

actual whole face except that each side of the

face is repositioned to the opposite side. If used alone, a

mirror reversal is also susceptible to a rater's visual

field bias. However, if an equal number of normal whole

face images and mirror reversals are presented, a visual

field advantage is negated since the bias is distributed

equally to each side of the face.
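The balancing logic described above can be sketched with toy image arrays, in which the values merely stand in for pixel intensities (a minimal illustration, not any study's actual procedure):

```python
import numpy as np

def mirror_reversal(face):
    """Flip a face image left-to-right so that each hemiface
    appears on the opposite side."""
    return face[:, ::-1]

def balanced_stimulus_set(faces):
    """Pair every normal image with its mirror reversal so that
    any rater visual-field bias is distributed equally to both
    hemifaces across the whole stimulus set."""
    stimuli = []
    for face in faces:
        stimuli.append(("normal", face))
        stimuli.append(("mirrored", mirror_reversal(face)))
    return stimuli

# toy 2x4 "image": bright left hemiface, dark right hemiface
face = np.array([[9, 9, 1, 1],
                 [9, 9, 1, 1]])
stimuli = balanced_stimulus_set([face])
```

Because each hemiface appears equally often on the rater's left and right, any visual field advantage cancels out across the full set of judgments.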

Another approach has been to present each hemiface (one

side of the face) separately. Investigations which present

each hemiface alone reduce the distracting effect of the

other hemiface, yet these studies sacrifice the real world

impression of a whole face. Other alternative methods of

presentation include the creation of hemiface chimerics

(Campbell, 1978; Levy, Heller, Banich and Burton, 1983) and

hemiface composites (Sackeim, Gur and Saucy, 1978; Moreno et

al., 1990). Hemiface chimerics are constructed by creating

a mirror reversal of one side of the face (e.g. left

hemiface) prior to the initiation of an expression and

combining it with the same hemiface at the peak of the


expression. In effect, a rater would see one half of a face

making an expression while the other half remains in a

neutral pose (both halves originate from the same side of

the face). The advantage of this procedure is that the

degree of facial movement for a particular hemiface can be

compared to its initiation point. The disadvantage is that

the expression has little similarity to a naturally produced whole-face expression.


Hemiface composites are similar to hemiface chimerics

in that one side of the face is copied and repositioned to

the other half of the face forming a whole face constructed

from only one half of the face. They are different in that

each side is an exact duplication of the other half.

Hemiface composites have the advantage of reducing

distraction effects of the competing side and increasing the

sensitivity to small differences by effectively "doubling

the hemiface." On the other hand, composite images are

frequently described as bizarre in appearance, and thus, may

skew raters' judgments due to a lack of ecological validity.
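The two constructions just described can be sketched with toy image arrays (assuming the facial midline falls at the center column; the values stand in for pixel intensities):

```python
import numpy as np

def hemiface_composite(face, side="left"):
    """Whole face built from one hemiface and its mirror image,
    in the style of Sackeim et al. (1978).  Assumes the facial
    midline falls at the center column of the image array."""
    half = face.shape[1] // 2
    if side == "left":
        hemi = face[:, :half]
        return np.hstack([hemi, hemi[:, ::-1]])
    hemi = face[:, half:]
    return np.hstack([hemi[:, ::-1], hemi])

def hemiface_chimeric(neutral, peak, side="left"):
    """One hemiface at the peak of an expression joined to the
    mirror image of the same hemiface at rest, so both halves
    originate from the same side of the face."""
    half = peak.shape[1] // 2
    if side == "left":
        return np.hstack([peak[:, :half], neutral[:, :half][:, ::-1]])
    return np.hstack([neutral[:, half:][:, ::-1], peak[:, half:]])

# toy 2x4 "images"; peak is simply a brightened neutral face
neutral = np.array([[1, 2, 3, 4],
                    [5, 6, 7, 8]])
peak = neutral + 10
left_composite = hemiface_composite(neutral, "left")
left_chimeric = hemiface_chimeric(neutral, peak, "left")
```

In the composite, each half is an exact mirror of the other; in the chimeric, one half is at peak while its mirrored twin remains neutral.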

The choice between the aforementioned approaches has

generally varied as a function of the subject population.

Whole face ratings and mirror reversals have been the

preferred choice with brain-impaired subjects. Hemiface

composites are generally not used with brain-impaired

subjects because of the frequent occurrence of a hemiface

paresis (partial paralysis of one side of the face)

contralateral to the side of the lesion. In addition,

bilateral innervation of the face for spontaneous expression

and partial bilateral innervation of the face for posed

expressions suggests that a reduction in facial

expressiveness should be at least partially distributed to

both sides of the face. With normal subjects, any of the

above means of presentation can and have been utilized with

the advantages and disadvantages previously discussed.

The next issue is the manner of recording and

presentation. Much of the research on facial expressions

has relied upon still images, regardless of whether the

initial images were videotaped or photographed. A few

studies by Borod and colleagues have presented videotaped

facial images in a dynamic fashion (Borod et al., 1981;

Borod and Koff, 1983; Borod, Koff and White, 1983).

Another research issue involves the number and valence

of the target expressions. The most commonly elicited

emotional expressions include happy, sad, angry, fear,

disgust and surprise. Some studies have elicited all of

these emotions (Sackeim et al., 1978; Braun et al., 1988),

while most have targeted only a few (Campbell, 1978; Sirota

and Schwartz, 1982; Dopson, Beckwith, Tucker and Bullard-

Bates, 1984). Several studies have elicited only one target

emotion (Heller and Levy, 1981; Wylie and Goodale, 1988).

For investigations relying on an impressionistic

judgment of hemiface differences, the dimensions of the

rating scale are an important variable. Some studies have

requested judgments of facial asymmetry (muscle movement

bias to one side of the face: Borod and Caron, 1980; Borod

and Koff, 1983; Borod et al., 1983) or level of intensity

(either undefined: Sackeim et al., 1978; or defined as

muscle movement: Borod et al., 1981; Moreno et al., 1990) of

an expression. Two forms of accuracy have been utilized and

defined in terms of whether the subjects' expression matches

the target emotion (category accuracy) or is consistent with

the valence of the target emotion (Kent, Borod, Koff,

Welkowitz and Alpert, 1986; Borod et al., 1990). Other

rating dimension scales such as appropriateness (Borod,

Koff, Lorch and Nicholas, 1986; Caltagirone et al., 1989;

Weddell, Miller and Trevarthen, 1990) and expressivity

(Dopson et al., 1984; Mammucari et al., 1989) have been

utilized by some, yet were undefined. Appropriateness of an

expression in some studies, however, has been defined as its

consistency with the emotional character of the elicitor

stimuli (Borod et al., 1988; Martin, Borod, Alpert, Brozgold

and Welkowitz, 1990). Finally, Blonder, Burns, Bowers,

Moore and Heilman (1993) had raters judge expressivity with

anchor referents on a seven-point Likert scale.
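The category-versus-valence distinction described above amounts to two different scoring rules. A minimal sketch in Python (the emotion labels and valence mapping are illustrative assumptions, not drawn from any cited study):

```python
# Illustrative valence mapping; assigning surprise to the positive
# category is an assumption made for this example only.
VALENCE = {"happy": "positive", "surprise": "positive",
           "sad": "negative", "angry": "negative",
           "fear": "negative", "disgust": "negative"}

def category_accuracy(produced, target):
    """Scores a hit only when the produced expression matches
    the target emotion exactly."""
    return produced == target

def valence_accuracy(produced, target):
    """Scores a hit when the produced expression merely shares
    the valence of the target emotion."""
    return VALENCE[produced] == VALENCE[target]
```

Under these rules, a poser who produces sadness when anger was requested fails category accuracy yet passes valence accuracy.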

For the analysis of the facial expressions, approaches

have varied considerably across studies and basically fall

into two distinct categories--one involving holistic,

impressionistic ("subjective") ratings by trained and

untrained judges and the other involving more "objective"

methods. There have been an assortment of techniques

utilized for both subjective and objective methods. For

example, Borod and colleagues have relied upon judges who

rate the level of intensity (muscle movement) or asymmetry

(muscle movement bias to one side of the face) of a given

emotional facial expression (Borod and Caron, 1980; Borod et

al., 1981; Borod and Koff, 1983; Moreno et al., 1990). She

and other investigators have also had their judges rate the

appropriateness or determine the accuracy of an expression

(Borod et al., 1988; Weddell et al., 1990; Borod et al.,

1990; Richardson, Bowers, Eyeler and Heilman, 1992). Many

of the studies have instructed the raters to utilize Likert-

type scales for the emotional dimension of interest (Sackeim

et al., 1978; Borod et al., 1981; Moreno et al., 1990).

Other studies have used a forced choice procedure in which

the rater must choose which of two images appears more

intense, asymmetrical, etc. (Campbell, 1978; Rubin and

Rubin, 1980). All of these methods are subjective and they

emphasize the judges' holistic impression of the facial expression.


In contrast to these holistic emotional rating

approaches, there have been several attempts at a more

objective analysis of facial expressions. Both Izard and

Ekman have developed methods in which discrete muscle

movements of the face are measured by trained raters (Ekman

and Friesen, 1978; Thompson, 1985). Each of these systems

requires its raters to undergo intensive training to detect

isolated muscle contractions of the entire face or discrete

regions of the face and to estimate the intensity of the

contraction. Thus, a quantified measure of facial activity

can be estimated by summing the intensity scores from each

discrete facial unit. The Facial Action Coding System

(FACS) developed by Ekman is more widely used and is

considered more comprehensive than the Maximally

Discriminative Facial Movement Coding system (MAX) created

by Izard (Rinn, 1984; Thompson, 1985).
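The quantification step described above, summing intensity codes across discrete facial units, can be sketched as follows; the action-unit names and intensity codes are invented for illustration and are not actual FACS data:

```python
# Hypothetical per-hemiface action-unit intensity codes
# (1 = trace contraction, 5 = maximum); invented values.
left_units = {"brow_raise": 3, "lip_corner_pull": 4, "cheek_raise": 2}
right_units = {"brow_raise": 3, "lip_corner_pull": 2, "cheek_raise": 1}

def activity_score(units):
    """Total facial activity: the sum of intensity codes across
    all scored action units."""
    return sum(units.values())

left_total = activity_score(left_units)
right_total = activity_score(right_units)
asymmetry = left_total - right_total  # positive => left-biased
```

Summing per hemiface rather than across the whole face yields a simple difference score that indexes the direction and size of any expressive asymmetry.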

An additional approach involves surface

electromyographic (EMG) recordings of the face to obtain a

more objective measure of facial activity (Schwartz, Ahern

and Brown, 1979; Sirota and Schwartz, 1982). Wylie and

Goodale (1988) have employed yet another approach. In their

case, they utilized digital image analysis in which changes

in spatial positioning of several facial markers were

calculated. The markers were placed on the nose, cheeks and

each side of a subject's lips. The difference in marker

position from the beginning until the peak of an expression

was calculated to provide a measure of facial activity.
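The marker-displacement measure of Wylie and Goodale reduces to a Euclidean distance between each marker's position at expression onset and at its peak. A minimal sketch with hypothetical pixel coordinates:

```python
import math

def displacement(p_start, p_peak):
    """Euclidean distance a facial marker travels between the
    start and the peak of an expression."""
    return math.hypot(p_peak[0] - p_start[0], p_peak[1] - p_start[1])

# Hypothetical (x, y) pixel coordinates for each hemiface's
# lip-corner marker at expression onset and peak; invented values.
left_start, left_peak = (40, 100), (34, 92)
right_start, right_peak = (80, 100), (84, 95)

left_move = displacement(left_start, left_peak)
right_move = displacement(right_start, right_peak)
# the larger value indicates the more active hemiface
```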

Production of Facial Expressions

Research on emotional facial expressions is generally

divided into studies which explore spontaneous (involuntary)

expressions and studies which examine posed (voluntary)

expressions. Despite the bilateral innervation associated

with involuntary emotional expressions (Thompson, 1985;

Pizzamiglio et al., 1989), researchers have studied

spontaneous expressions in order to make inferences about

hemispheric specialization (Dopson et al., 1984; Borod et

al., 1990; Blonder et al., 1993). The approach to these

studies has generally included a comparison of right and

left brain diseased subjects with normal controls or a

comparison of left and right hemiface activity for non-

impaired subjects.

In the case of hemiface activity, differences are

thought to be the result of superior processing and greater

involvement of the contralateral hemisphere. The assumption

is that the fibers innervating the face are crossed, and

therefore, it is the contralateral hemisphere that is

responsible for hemiface activity. With regard to

spontaneous expressions, however, this supposition may be

incorrect. As mentioned previously, there is evidence that

the innervation pattern associated with spontaneous

expressions is actually bilateral in nature (Thompson, 1985;

Pizzamiglio et al., 1989).

Spontaneous Expressions in Normals: Laterality Studies

Research assessing facial asymmetries during

spontaneous emotional expression has generally found that

the left hemiface is more active and expresses emotions more

intensely than the right. For example, Borod et al. (1983)


conducted a study in which positive and negative slides were

shown to subjects to elicit emotional facial expressions.

Three trained judges rated the asymmetries of dynamically

presented whole faces. Results indicated that male subjects

displayed a significant left-sided asymmetry for all

expressions, while females were considered to be left-biased

for negative emotions only. A sex by valence interaction

has been reported by Borod in several studies of posed and

spontaneous emotional facial expressions (Borod and Caron,

1980; Borod et al., 1983; Borod et al., 1990).

There have been other studies that have reported a

left-sided bias during involuntary emotional expressions.

For example, both Moscovitch and Olds (1982) and Dopson et

al. (1984) instructed subjects to describe emotional

experiences and had judges rate their expressivity.

Moscovitch and Olds (1982) videotaped the expressions and

asked judges to rate the quantity of expressive movements

for each hemiface, while Dopson et al. (1984) photographed

subjects, created hemiface composites, and then had judges

rate the quality of the expressions on a seven-point Likert

scale. Both studies suggested a left hemiface bias for

expressivity. In a similar manner, Wylie and Goodale (1988)

reported a left-sided asymmetry for spontaneous emotional

expressions. In their study, they utilized digitized image

analysis to measure changes in the spatial position of

facial markers. The distance the facial markers moved

during an expression was considered the dependent variable.

Not all research, however, has found a left hemiface

bias. For instance, Hager and Ekman (1985) employing FACS,

a system which rates individual muscle movements, found that

spontaneous smiles and startle reactions were symmetrical.

In addition, Rothbart, Taylor and Tucker (1988) examined the

whole face of infants during emotional expressions. They

reported that the still images from videotapes displayed a

right-sided, rather than left-sided bias. Furthermore,

several studies have suggested that the left hemiface is

more expressive for negative emotions and the right hemiface

is at least as intense as the left hemiface for positive

emotions (Schwartz et al., 1979; Sirota and Schwartz, 1982;

Borod et al., 1983; Hartley, Coxon and Spencer, 1987).

In sum, the research on spontaneous expressions

indicates that expressive asymmetries generally exist,

although the direction of the bias may depend upon the

valence of the emotion. Due to the bilateral pattern of

innervation associated with involuntary expressions

(Thompson, 1985; Pizzamiglio et al., 1989), it is unclear

whether these results are indicative of a hemispheric

specialization for emotional production. Because

expressivity is assessed across the entire face in studies

of brain-damaged subjects, the problem of bilateral

innervation does not arise there; an investigation of

spontaneous expressions in these patients might therefore be

more relevant for testing the RH and Valence hypotheses.

Spontaneous Expressions in Brain-Impaired Subjects

As with the study of spontaneous expressions in

normals, research with brain-impaired patients is generally,

but not conclusively, consistent with the RH hypothesis.

For example, Buck and Duffy (1980) instructed eight

untrained raters to examine a videotape of subjects'

responses to affectively-laden slides. They reported that

in comparison with left brain-damaged (LBD) and normal

control (NC) subjects, right brain-damaged subjects (RBDs)

were less accurate and less expressive. Several studies by

Borod and colleagues

also indicate that subjects with right brain disease are

impaired in either their appropriateness (consistency with

emotional character of elicitor stimulus), intensity (muscle

movement) or responsivity (occurrence of an expression) of

their emotional facial expressions (Borod, Koff, Lorch and

Nicholas, 1985; Borod et al., 1988; Martin et al., 1990).

Furthermore, in a study of RBDs, LBDs and NCs, Blonder et

al. (1993) reported RBD impairment for the expression of

positive, but not negative emotion. They measured

expressivity during natural conversation using a seven point

anchored (with emotional facial referents) likert scale.

Not all studies, however, have concluded that RBDs are

less impaired in their expressivity than LBDs. For example,

Mammucari et al. (1988) employed FACS as well as subjective

ratings of emotional expressivity and found that emotionally


evocative films elicited similar levels of expressivity for

RBDs and LBDs. On the other hand, a study by Weddell,

Trevarthen and Miller (1988) which also utilized both

subjective raters and FACS found a dissociation between the

measures. In their study of RBDs and LBDs, subjective

raters indicated that RBDs displayed less intense negative

expressions to a frustrating card-sorting task, yet the

groups did not differ from one another based on FACS. In

both the Weddell et al. (1988) and the Mammucari et al.

(1988) studies, impressionistic ratings of whole face

dynamic images were employed.

Taken together, research on the spontaneous expressions

of brain-impaired subjects does not provide clear support

for the RH hypothesis. It is noteworthy that most of the

research consistent with the RH hypothesis has been

conducted by Borod and colleagues, while the negative

findings have occurred in other laboratories. Furthermore,

the results of the Blonder et al. (1993) study which appear

to support the RH hypothesis are also consistent with the

view that the lower expressivity of RBD subjects is caused

by an arousal deficit (Heilman and Watson, 1989; Heilman,

Bowers and Valenstein, 1993). Consequently, the question

remains debatable as to whether a hemispheric specialization

for emotion is the source of lowered expressivity among

brain-impaired subjects.

Posed Expressions in Brain-Impaired Subjects

Posed emotional expressions by brain-impaired subjects

have been generally, but not conclusively, consistent with

the RH hypothesis. For example, Borod and colleagues have

presented several studies which suggest that RBDs are less

accurate and less expressive in their production of

emotional faces as compared to LBDs and NCs (Borod et al.,

1986; Borod, St. Clair et al., 1990; Borod, Welkowitz et

al., 1990). In one of the more recent articles, trained

judges examined the facial asymmetry of subjects posing

angry and happy expressions (Borod, St. Clair et al., 1990).

Using both whole images and mirror reversals, they found

that there was a smaller level of asymmetry for RBD

subjects. Because less expressivity would be associated

with less asymmetry due to the contrast between the paretic

and nonparetic sides of the face, they concluded that the

right hemisphere was intimately involved in the production

of emotion. In another study, Richardson et al. (1992)

instructed judges to view dynamic whole images of LBDs, RBDs

and NCs posing facial expressions. They rated the accuracy

of five facial expressions across a variety of elicitation

procedures and found that RBDs were particularly impaired

when nonverbal means were used to elicit the posed

expression. Weddell et al. (1990) employed FACS and

impressionistic ratings in their study. Post hoc analyses

suggested an expressive deficit for RBDs. Subjective


ratings of dynamically presented whole faces indicated that

the appropriateness of their expressions was not

significantly different. Unlike nearly all other research,

the lesions in the Weddell et al. (1990) study were tumoral,

rather than stroke-related.

Although the above studies suggest a right hemisphere

advantage for the production of emotional behavior, other

research has not been supportive of the RH hypothesis. For

example, Heilman, Watson and Bowers (1983) examined five RBD

patients and five LBD patients and did not find any

differences in their ability to pose facial expressions upon

command. Furthermore, Caltagirone et al. (1989) used both

FACS and subjective ratings of dynamically presented whole

faces, and regardless of the measurement system employed,

did not find any group differences between RBDs, LBDs and

NCs. Finally, Kolb and Taylor (1990) reported a collection

of their studies in which hemispheric differences were

either mild, nonexistent or less compelling than frontal vs.

parietal differences. Thus, the evidence has been mixed in

its support of the RH hypothesis.

Posed Expressions in Normals

The final approach utilizing the facial channel

involves the comparison of expressive asymmetries by normals

during posed emotional expressions. Because the lower face

is contralaterally innervated for voluntary expressions

(Crosby and Dejong, 1963; Kahn, 1964; Rinn, 1984),

inferences about hemispheric involvement can be made more

readily. Consistent with other research suggesting a RH

involvement, the majority of these studies indicate that

asymmetries favor the left side of the face, as

predicted by the RH hypothesis. A summary of studies on

posed expressions by normals is located in the appendix.

The first major study examining facial asymmetries

during posed emotional expressions was conducted by Sackeim

et al. (1978). They presented a series of composite slides

(a whole face constructed of one hemiface and its mirror

image) to 86 students and instructed them to rate the

intensity of the emotional expression. The results

indicated that the composite slides from the left side of

the face were more intense. They considered the left-sided

bias in their study to be the consequence of a right

hemisphere advantage in processing emotional material. In

the same year, Campbell (1978) published a study in which

left-sided composite slides of smiles were rated as happier

and left-sided composite slides of neutral expressions were

rated as sadder. A follow-up study with left-handers as

subjects found the same pattern for smiles only. For the

neutral expression, the opposite results were reported such

that the right composites were rated as being sadder

(Campbell, 1979).

A set of studies published by Borod and colleagues in

the early 1980s were generally consistent with the findings

of Sackeim et al. (1978), although they reported some

evidence of sex differences interacting with the emotional

valence of the expressions. For example, in their first

paper, Borod and Caron (1980) indicated that males displayed

a significantly greater left bias than females for negative

emotions. In their procedure they produced videotaped

stills of the peak expressions and had three raters estimate

the degree of asymmetry on a 15-point scale (-7 to +7, left

vs. right bias). In a follow-up study, Borod et al. (1981)

reported that the intensity of the facial expressions was

also left-biased. In 1983, Borod and colleagues published

two studies which generally indicated that there was an

asymmetrical bias toward the left side of the face and that

morphological features appeared unrelated to the bias (Borod

and Koff, 1983; Borod et al., 1983). Both of the studies

relied on a dynamic presentation as well as multiple

emotions representing both a positive and negative valence.

In one of the studies, only males showed a left-sided bias

for positive emotional expressions (Borod et al., 1983).
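Aggregating ratings on the -7 to +7 laterality scale used in these studies is straightforward. A minimal sketch, with invented scores and the sign convention (negative = left bias) taken as an assumption:

```python
# Hypothetical asymmetry ratings from three judges on the
# -7 (left bias) to +7 (right bias) scale; invented values.
ratings = {
    "poser_1": [-3, -2, -4],
    "poser_2": [1, 0, -1],
}

def mean_asymmetry(scores):
    """Average the judges' ratings; a negative mean indicates
    a left-hemiface bias, a positive mean a right bias."""
    return sum(scores) / len(scores)

poser_means = {p: mean_asymmetry(s) for p, s in ratings.items()}
```

A group-level left-sided bias would then appear as a mean of these per-poser scores reliably below zero.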

In addition to the Borod group, other researchers were

finding results generally consistent with Sackeim et al.

(1978). For example, Rubin and Rubin (1980), Heller and

Levy (1981) and Dopson et al. (1984) all reported a left-

sided bias for posed smiles. In the first study, the left-

sided composite photographs of children were more often

chosen (forced choice) as more intense for both negative and

positive emotional expressions. In the second study,

hemiface chimeric photographs were assembled from happy and

neutral expressions of the same hemiface so as to account

for any hemiface differences at rest. Only a positive

expression was examined and the results indicated a left-

sided bias. Finally, in the third study, composite

photographs of sad expressions were investigated as well as

smiles with similar results. All of the above studies

relied on at least two dozen untrained raters to determine

the degree of emotional expressivity or intensity.

During this time, however, two separate laboratories

conducted research using different methodologies and

generally failed to uncover significant facial asymmetries.

For example, Schwartz and his colleagues published two

studies in which they measured EMG at the corrugator and

zygomatic muscle sites during voluntary emotional facial

production (happy and sad) and found very little evidence of

significant differences between the hemifaces (Schwartz et

al., 1979; Sirota and Schwartz, 1982). In addition, Ekman's

group reported results from two studies which were only

partially favorable to the RH theory and generally unsupportive of it

(Ekman, Hager and Friesen, 1981; Hager and Ekman, 1985). In

the former study, children initiated facial movements

suggestive of an emotion. Approximately one-fifth of the

subjects displayed a left-sided bias. In the latter study

with adults, the overall effect along with most of the

individual muscle sites were rated as symmetrical.

Nevertheless, one muscle pair (zygomatic) which is involved

in smiling did display a left-sided bias.

Following these investigations, Borod and her

colleagues reported further evidence of left-sided asymmetry

in two studies in which male and female posers were studied

separately (Borod et al., 1988; Moreno et al., 1990). For

both experiments, multiple positive and negative emotions

were posed. In the study with female subjects, composite

photographs of women of varying ages were rated on a seven-

point intensity scale. Evidence of greater asymmetry on the

left side of the face was found across all age groups

(Moreno et al., 1990). In the study of male posers, left

and right hemiface video stills were presented in normal and

mirror images. The poses were elicited through command and

mimicry. Once again, a left-sided bias was found (Borod et

al., 1988).

In contrast to these two studies, Wylie and Goodale

(1988) reported no significant differences in hemiface

activity during posed smiles. Their technique for assessing

facial activity relied on the digitized analysis of changes

in spatial positioning of facial markers. Once again, an

"objective" approach failed to find clear evidence of

expressive asymmetries in normals.

Before discussing possible methodological issues, two

additional studies that investigated expressive asymmetries


of brain diseased subjects and normal controls are deserving

of mention. The first study was noteworthy in that it

employed both a subjective rating system and a more

objective or less impressionistic method. It was conducted

by Caltagirone et al. (1989) who examined the hemifaces of

brain impaired subjects and normal controls using both

subjective ratings and FACS. Neither FACS nor the

subjective ratings, however, revealed any evidence of a

left-sided advantage for normals. In addition, both LBDs

and RBDs displayed a small hemiface bias ipsilateral to

their lesion. This bias was expected given a likely

hemiface paresis contralateral to their lesions, and indeed,

facial action scores were significantly correlated with the

degree of contralateral hemiface paresis.

In the following year, however, Borod, St. Clair et al.

(1990) conducted comparable research using only subjective

raters. Similar to the prior study, subjects produced

expressions on command while being videotaped. Trained

raters then examined still images and their mirror reversals

and judged the degree of asymmetry of each on a -7 to +7

left-right scale. Consistent with her previous work, normal

controls were rated as having a left-sided asymmetry. In

addition, LBDs were also determined to have a left-sided

asymmetry which was hypothesized to be primarily related to

a right hemiface paresis. For RBDs, however, no significant

asymmetries were found. Borod, St. Clair et al. (1990)

concluded that the left hemiface paresis of RBDs was

compensated for by a left-sided bias produced by the

remnants of the emotional production center contained in the

right hemisphere, a view consistent with the RH hypothesis.

Morphological Variation

Proponents of the RH hypothesis have argued that left-

sided asymmetries during emotional expressions are the

consequence of greater RH specialization for emotional

production. Others have suggested that morphological

explanations provide a more parsimonious interpretation of

expressive asymmetries. Indeed, following the assertion by

Sackeim et al. (1978) that emotions are more intensely

expressed on the left side of the face, critics have argued

that basic morphological factors (e.g. differences in the

size and structure of the hemifaces) are the likely

determinants of a lateral bias (Ekman, 1980; Nelson and

Horowitz, 1980). Nelson and Horowitz (1980) measured the

widths of the left and right sides of the face and reported

that the right side of the face was smaller than the left. They

believed that the left side appeared more expressive than

the right because the facial movement involved a greater

percentage of the left hemiface. Ekman (1980) asserted

that, in general, one side of the face contains a greater

percentage of fatty deposits or soft tissue matter and that

such a difference may have influenced the subjective ratings of expressivity.



If morphological asymmetries influence one's impression

of expressive intensity, the same bias may also affect one's

perception of a resting face. If a neutral face is simply

the absence of an emotional expression, then perceived

emotional asymmetries on a resting face may be the result of

morphological asymmetries, rather than a hemispheric

specialization of emotion. Campbell (1978)

reported that the resting left hemiface of normals is

perceived to be more expressive (sad) than the right.

Additionally, Schwartz et al. (1979) found that the left

hemiface displayed greater EMG ratings at rest than did the

right, although it was unclear whether the EMG ratings were

the result of soft tissue differences or unintended anxiety

caused by the laboratory setting. In contrast, Hager and

Ekman (1985) did not find any facial asymmetries at rest

when FACS was employed.

As Schwartz et al. (1979) reported, it is debatable

whether a neutral expression is nonemotional. A morphological

explanation may account for perceived emotional asymmetries

during neutral expressions, yet there are alternative

interpretations that are consistent with the RH hypothesis.

First, the production of a neutral expression may reflect an

internal emotional state which requires the involvement of

the same process associated with the production of other

emotional states. If a RH emotional production system were

utilized to initiate a neutral expression, then a left

hemiface asymmetry for a neutral expression would be

consistent with the RH hypothesis. Second, as Schwartz et

al. (1979) recognized, subjects are not devoid of

emotions during an experiment. They may be anxious about

their performance, or the nature of the study, and subtle

emotional expressions may seep into their purportedly

neutral countenance.

Given the argument that a neutral expression may, in

fact, be emotional, a more direct measure of morphological

asymmetries is necessary to address this question.

Consequently, several authors have reported information on

the facial structure of their subjects. Sackeim and Gur

(1980), Borod et al. (1983) and Sackeim, Weiman and Forman

(1984) examined whether morphological biases were creating

the impression of expressive asymmetries. They concluded

that the facial structure of their subjects was generally

symmetrical. Furthermore, Sackeim (1985) conducted a

comprehensive review of the literature on facial structure

which indicated that the face was remarkably symmetrical.

Sackeim's (1985) review of facial structure included

studies which measured both hard and soft tissue factors.

The soft tissue that was most likely accounted for in these

studies was that of muscle and fat. Other soft tissue

factors may influence facial expressivity. Perhaps,

patterns of facial innervation may be involved in expressive

asymmetries. For example, pure muscle strength may not be

as important as one's adeptness in performing discrete

muscle movements. Crisp and precise facial movements may be

responsible for perceived facial expressivity. Asymmetries

in neural involvement might explain a hemiface superiority

in movement precision. In that case, the role of the

peripheral nervous system would be critical and may be the

source of expressive asymmetries (Thompson, 1985; Borod and

Koff, 1990).

Regardless of which factor is most crucial (e.g., muscle

mass, differences in innervation, or other structures), Borod

and Koff (1983) examined the general question of facial

mobility. They instructed subjects to make nonemotional,

unilateral facial movements centered around the mouth and

eyes. Raters compared which side of the face displayed

superior movement and the left side was rated as

significantly more mobile. The Borod and Koff (1983) study

suggests that the left side of the face is more facile than

the right in producing nonemotional as well as emotional

expressions. Consequently, asymmetries for emotional facial

expressions may be secondary to biases in hemiface mobility.

Borod and Koff (1990), however, argued that a left

hemiface advantage for nonemotional expressions does not, in

itself, negate the possibility of a hemispheric influence on

emotional expressions. It may be that the two systems are

relatively unrelated to one another. In the same manner

that there exists a partial dissociation between the spatial


processing and emotional face processing capabilities of the

right hemisphere (Bowers et al., 1985), so too, may

emotional and nonemotional expressivity be partially

dissociated. Borod and Koff (1983) examined this very

question and found that a left-sided bias in mobility was

generally unrelated to a left-sided bias in emotional

expressivity. The statistical methodology in the Borod and

Koff (1983) study has received criticism. For instance,

Campbell (1986) voiced concern about the stringency of their

test of association which might be susceptible to a type II

statistical error. Consequently, Campbell (1986) argued

that a left-sided bias in mobility may be responsible for

the lateral bias in emotional expressions.

Given the research published thus far, it appears that

morphological factors (particularly in regard to facial

mobility) cannot be dismissed as a possible explanation of

facedness during emotional expressions. On the other hand,

there has been a dearth of evidence linking expressive

asymmetries with morphological factors. As a result, the

design of this study will attempt to test for advantages in

facial mobility, the most promising morphologically related explanation.


Methodological Differences

To date, the neuropsychological literature on

asymmetries of voluntary emotional expression for normals

remains inconclusive, although the majority of studies do

favor a left-sided bias for emotional expressions. A

variety of methodological factors exist which may have

contributed to the contradictory results. Clearly, no single

factor completely accounts for the conflicting

data. Factors suggested by Borod and Koff (1990) in their

review of the literature include the following: (a) method

of elicitation (command, imitation, imagery), (b) gender,

(c) process for capturing expression (dynamic vs. static),

(d) subject awareness of camera (naive, alerted but unseen,

completely aware), (e) method of presentation of faces

(composites, mirror reversals, hemifaces, normal images),

(f) method of analysis (subjective impressions, FACS, EMG,

mathematical analysis of movement), (g) dimension of rating

scale (intensity, asymmetry, appropriateness, expressivity)

as well as (h) the number and valence of emotions posed. A

summary of the research on posed expressions by normals is

located in the appendix.

The first factor to be considered is the method of

elicitation. There are a variety of techniques employed to

evoke a target emotion. The most commonly used approach is

the direct "verbal request" or command (e.g., show me a sad

face). Because nearly every study has utilized the command

procedure, it seems unlikely that this variable explains the

discrepant results within the literature. When other

approaches have been utilized to elicit the desired

expression, they have generally been used in conjunction

with direct commands (Borod et al., 1983, 1988; Wylie and

Goodale, 1988). The other approaches include facial

imitation, verbal scenarios and scenic photographs. In

these conditions, subjects are expected to express emotions

that are consistent with verbally presented scenarios or

emotionally laden pictures. Borod et al. (1981) conducted

the only study which did not employ direct commands and

their findings suggested a general left-sided bias. Given

that no other study refrained from employing direct

commands, it is unclear whether expressive biases are more

likely when a poser must determine the target emotion by

interpreting nonverbal stimuli.

Based on research with focal lesion patients, there is

evidence that expressive deficits may be affected by the

method of elicitation. For example, Richardson et al.

(1992) found that impairments in voluntary facial

expressivity were secondary to deficiencies in the

evaluation of nonverbal stimuli. In their study, RHDs were

no different than LHDs or NCs in their ability to produce

accurate emotional expressions when a stimulus was presented

verbally, but were more likely to display expressive

deficits for a stimulus presented nonverbally. Based on

these results, expressive asymmetries with normals may be

the consequence of hemispheric advantages in processing

affective information. Perhaps, nonverbal stimuli more

strongly engage the right hemisphere, which enhances the

likelihood of a left-sided expressive asymmetry. Although

this question deserves further inquiry, the fact that

expressive biases have generally been found under command

conditions suggests that it does not explain the discordant

results of prior research.

Another manner in which the research has varied is in

the number and valence of the emotions posed. For number,

there exists no discernible pattern which indicates that

this factor explains previous discrepancies. The valence of

the target emotions also appears inadequate as an

explanation. For example, some studies have failed to find

an expressive bias for the happy expression (Schwartz et

al., 1979; Sirota and Schwartz, 1982; Wylie and Goodale,

1988), but most research has been associated with left

facedness for positive expressions (Campbell, 1978, 1979;

Heller and Levy, 1981; Borod and Koff, 1983; Braun,

Baribeau, Ethier, Guerette and Proulx, 1988; Moreno et al.,

1990). Furthermore, nearly all of the studies that failed

to find expressive asymmetries included positive and

negative emotions, thus suggesting valence was not the

fundamental issue (Sirota and Schwartz, 1982; Hager and

Ekman, 1985).

For research relying upon subjective raters, an

additional variation has been the nature and dimension of

the rating scales. In regard to dimension, judges have

been asked to rate the levels of asymmetry (Borod and Koff,

1983), intensity (Sackeim et al., 1978; Borod et al., 1981),

expressivity (Dopson et al., 1984) and emotionality

(Cacioppo and Petty, 1981) of facial expressions.

Concerning the nature of the scale, raters have used both

Likert-type scales (Sackeim et al., 1978; Borod and Caron,

1980; Dopson et al., 1984) and forced choice procedures

(Campbell, 1978, 1979; Heller and Levy, 1981; Braun et al.,

1988) in making their judgments of hemiface differences.

Regardless of the scale, the results have nearly always been

associated with left facedness. Thus, it appears that the

type of scale is unrelated to the discrepant findings within

the literature.

Another way in which the research has differed is in

the method of capturing and presenting facial expressions.

Some studies have relied on static means, such as still

photography (Dopson et al., 1984; Braun et al., 1988), while

others have utilized more dynamic methods, such as

videotaping (Borod et al., 1981; Borod and Koff, 1983; Hager

and Ekman, 1985). A combined approach has been to videotape

facial expressions and present the raters a single-frame

video still (Borod and Caron, 1980). For research relying

upon subjective ratings, dynamic vs. still images appears

unrelated to any discrepancies within the research.

Expressive asymmetries have been found with both approaches.

The one study in which subjective raters failed to find a

lateral bias utilized a dynamic method for both the capture

and the presentation of the emotional expressions

(Caltagirone et al., 1989). In general, however, the

dynamic presentation of facial expressions has been

associated with expressive asymmetries (Borod et al., 1981;

Borod and Koff, 1983; Borod et al., 1983). Therefore, the

method of capturing and presenting the emotional stimuli

does not explain the discordant results of prior research.

Another factor that differed between studies is the

strategy employed for the presentation of the target

emotions. For example, judges have been instructed to make

ratings of pictures with normal images (real life

photographs or videotaped images), mirror reversed images

(reversal of hemiface position such that left side appears

on right and vice versa), hemiface composites (whole face

constructed of a hemiface and its mirror reversal) and

hemifaces alone (only one-half of the face). Each of these

methods has its advantages and disadvantages. Nevertheless,

there have been no systematic differences associated with

any of the different presentation procedures.

One methodological factor which may explain some of the

prior discrepancies is that of camera obtrusiveness. In two

of the studies not supporting the RH hypothesis (Hager and

Ekman, 1985; Caltagirone et al., 1989), subjects were either

unaware of being videotaped or the camera was nearly hidden

from view. For most other research, the camera has been

placed in plain view of the subject. It is possible that

the subjects altered their expressions because of the demand

factors associated with being videotaped. Perhaps,

videotaping elicits an inhibitory response in subjects, and

thus, asymmetrical differences appear as a result of the

right hemiface being more inhibited than the left.

The role of inhibitory factors in the production of

emotional facial asymmetries has been debated. Rinn (1984)

argued that the left hemisphere may have an advantage in the

inhibition of emotion. He reasoned that because of the

cognitive complexity of verbal processing, the left

hemisphere has an advantage in modulating affect. A left

hemisphere advantage for inhibiting emotion should result in

greater control of the right hemiface, and consequently, an

appearance of greater left hemiface expressivity. This

effect is predicted to be more salient under conditions in

which the desire to inhibit an emotional expression is

enhanced. Because social demand factors may be greater

during videotaping, Rinn's (1984) inhibitory hypothesis

would predict greater left-sided asymmetries. Conversely,

when the photographs or videotaping occurs without the

subjects' knowledge, lateral biases should be less likely.

Consistent with Rinn's (1984) inhibitory hypothesis, there

was a lack of emotional facial asymmetries found by Hager

and Ekman (1985) and Caltagirone et al. (1989) when either

an unobtrusive or hidden camera was employed. Further

research is needed to explore this possibility.


Gender is the next factor which may be associated with

the presence of an emotional facial asymmetry. When facial

asymmetry research is divided by gender, asymmetries are

more often associated with male subjects, than female

subjects. For example, two of the three studies which

included only female subjects failed to find substantial

evidence of facedness during emotional expressions (Hager

and Ekman, 1985; Sirota and Schwartz, 1982). In addition,

Borod et al. (1983) found no evidence of lateral biases for

females posing positive emotions, although negative

expressions were asymmetrical. Most notably, all three

male-only studies were associated with a significant lateral bias.


The finding that males are more likely to display

expressive asymmetries is consistent with neurological

evidence that the male cortex may be more lateralized and

that the female cortex may have greater interhemispheric

connectivity (McGlone, 1980). Both the enhanced laterality

of the male cortex and the greater interhemispheric

connectivity of the female cortex may contribute to the

relatively greater expressive asymmetries of males. Consequently, research

including a large percentage of female subjects may be less

likely to find expressive asymmetries.

The final factor, method of analysis, appears to be an

important source for much of the discrepancies in prior

research. The methods of analysis are generally divided

into studies emphasizing subjective vs. "objective"

techniques. Nearly all studies supportive of a left-sided

asymmetry have relied on subjective, holistic impressions,

whereas most of the studies which have failed to find

substantial asymmetries have utilized more objective

procedures. Critics of the subjective methods have argued

that impressionistic ratings are less reliable and more

vulnerable to possible errors (Hager and Ekman, 1985).

Indeed, it has been suggested that apparent asymmetries in

emotional production may be the result of a perceiver bias,

whereby symmetrical expressions appear asymmetrical because

human raters have a perceptual advantage for the left visual

field (Borod and Koff, 1990). Research indicates that a

perceiver bias does exist and it directly contributes to the

perception of facial asymmetries (Gilbert and Bakan, 1973;

Bennett et al., 1987).

On the other hand, perceptual biases have been

controlled for in most investigations. In some experiments,

removing the bias has been accomplished by presenting each

test face and its mirror image to the raters (Borod et al.,

1988; Moreno et al., 1990). In other studies, hemiface

composites have been used such that the mirror image of each

hemiface is attached to its original (Sackeim et al., 1978;

Dopson et al., 1984; Braun et al., 1988). In this way, a

left-left composite may be compared to a right-right

composite. In addition, research reporting a perceiver bias

has consistently found a left visual field (LVF) advantage

such that the right hemiface (falling in the perceiver's

LVF) is preferred. A right hemiface bias works against the

hypothesized direction of left facedness, and therefore,

reduces the likelihood of a positive finding. Thus, it

seems unlikely that a LVF bias could create the illusion of

a left-sided expressive asymmetry.
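The hemiface composite construction described above can be sketched as follows. This is a minimal illustration only, assuming the face image is available as a 2-D greyscale array roughly centered on the facial midline; the function name and the NumPy representation are assumptions, not taken from the cited studies.

```python
import numpy as np

def hemiface_composites(face):
    """Build left-left and right-right composites from a face image.

    face: 2-D array (rows x columns) centered on the facial midline.
    Column indexing is from the viewer's perspective, so the poser's
    right hemiface appears on the left side of the image.
    Returns (left_left, right_right) composite images.
    """
    mid = face.shape[1] // 2
    right_hemi = face[:, :mid]   # poser's right hemiface
    left_hemi = face[:, mid:]    # poser's left hemiface
    # Attach each hemiface to its own mirror image
    right_right = np.hstack([right_hemi, right_hemi[:, ::-1]])
    left_left = np.hstack([left_hemi[:, ::-1], left_hemi])
    return left_left, right_right
```

Because each composite is built from a single hemiface and its mirror reversal, comparing the two composites removes any left-versus-right perceptual bias in the raters themselves.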

Composite images and mirror reversals may control for

visual field advantages of the raters; however, they cannot

eliminate the effect of structural asymmetries within the

posers. It is conceivable that biases are elicited by

certain facial features such as bone size and muscle mass.

Such hemiface differences may affect subjective ratings. No

evidence, however, has been found that purported structural

differences are associated with emotional asymmetries (Borod

and Koff, 1990; Sackeim, 1985).

To date, nearly all research which has found

asymmetries in emotional facial expressivity has relied on

the holistic impressions of subjective raters. In contrast,

nearly all of the studies utilizing more "objective" means

have failed to find the association. For example, studies

employing EMG as a measure have generally not supported a

left hemiface bias in expressivity (Schwartz et al., 1979;

Sirota and Schwartz, 1982). Furthermore, no significant

asymmetries were reported using a digitized analysis of

movement of facial markers, although a weak right-sided bias

was found for right-handers only (Wylie and Goodale, 1988).

Finally, those who have employed FACS have generally failed

to find any substantial left-sided bias among normals

(Caltagirone et al., 1989; Hager and Ekman, 1985).

While subjective rating systems may be susceptible to

perceptual error, the more objective measures may have

lacked sensitivity. Perhaps, human raters have a greater

appreciation of the facial gestalt which may be missed by

the objective, more reductionistic techniques (Buck, 1990;

Blonder et al., 1993). For example, Ekman et al. (1981)

reported that only one quarter of the subjects had

asymmetrical smiles, mostly favoring the left hemiface. In

a subsequent study, Hager and Ekman (1985) found a left-

sided asymmetry for the zygomatic muscle during smiles.

Since the zygomatic muscle is quite powerful in altering the

contours of the emotive face, it is not surprising that this

area would be the one action unit where FACS detected an

effect, especially if the technique lacked sensitivity.

As for the two EMG studies, the generally negative

findings may be related to several possibilities. First,

only two of the facial muscle groups were monitored such

that a complete and accurate portrayal of all muscle

activity throughout each hemiface was not available. In

addition, surface EMG, rather than a more invasive EMG

technique, was used, which decreases the accuracy of the

measurement and increases the error variance. Finally, the

mere presence of the electrodes may have altered the facial

expressions of the subjects. Thus, it remains debatable as

to whether EMG is an effective measurement device for

assessing emotional facial asymmetry.

The last "objective" method employed was that of

digital image analysis in which changes in spatial

positioning of several facial markers were calculated.

Wylie and Goodale (1988) marked several points on the left

and right side of a subject's lips along with one mark on

each cheek and another on the nose. They examined smiles in

both spontaneous and posed conditions and reported that

there were significant asymmetries only in the spontaneous

condition. Because changes in just a few facial points were

measured, one might question the sensitivity of this

technique. On the other hand, the fact that positive

results were found in the spontaneous condition suggests

that the method appears to be sensitive enough. Other

factors, however, may have been associated with the

discrepancy between this method and the impressionistic

approaches. For example, the lighting during the experiment

was so bright that dark sunglasses were required. Perhaps,

the discomfort from the bright lighting inhibited facial

expressivity. In addition, wearing dark sunglasses may have

seemed unnatural and interfered with normally posed

expressions. Finally, an interaction effect between the

procedure and a borderline level of sensitivity may have

reduced the likelihood of finding emotional facial asymmetries.


Taken together, peculiar characteristics of each of the

"objective" methods may have reduced their sensitivity. Of

the three approaches, the digitized image analysis holds the

greatest promise as a powerful, nonintrusive measure. By

employing an alternative strategy with the digitized image

analysis, sensitivity can be increased and artificiality can

be decreased. Instead of examining changes in spatial

positioning, it is proposed that measuring changes in

greyness across the face is a more powerful approach since

more of the face is taken into account. Furthermore, the

lighting required for this approach would not have to be so

intensely bright. Consequently, any artificiality produced

by wearing sunglasses would be avoided. For these reasons,

we have introduced the digitized approach for the assessment

of expressive asymmetries.

Evidence supporting the efficacy of this technique was

found in a study conducted by Leonard, Voeller and Kuldau

(1991) who employed both digital image analysis and

subjective ratings of smiles. Although facial asymmetries

were not the focus of their study, they did find strong

agreement between the output of the digital and subjective

approaches. Their results indicated that digitized image

analysis may be sensitive enough to capture signals that

impressionistic raters perceive, but other more objective

measures appear to miss.

In a pilot study conducted in our laboratory, further

evidence was found supporting the utility of digitized image

analysis (Richardson, Bowers, Leonard and Heilman, 1994).

In the study, dynamic emotional facial expressions were

obtained from 20 self-reported right-handed subjects. Five

different emotions were videotaped and digitized using the

Xybion Image Capture Analysis System (XICAS). Change in

pixel intensities between adjacent video frames was

calculated by subtracting the level of greyness at each

pixel location. Level of greyness is a numerical

representation from 0 to 255 of the brightness of each pixel

on the video monitor. A mean difference score from two

adjacent video frames was computed by summing all the

difference scores from each pixel location and dividing by

the total number of pixels utilized. Because 13 frames were

analyzed for each expression, 12 mean difference scores were

summed together to produce a grand total difference score

which reflected the overall change in pixel intensities over

a 400 msec period.
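The frame-differencing computation described above can be sketched as follows. This is a minimal sketch, assuming each frame is available as a 2-D array of 8-bit greyness values for one hemiface; the use of absolute differences and of NumPy is an assumption, since the original XICAS implementation is not specified here.

```python
import numpy as np

def grand_total_difference(frames):
    """Grand total difference score over a sequence of video frames.

    frames: 13 2-D uint8 arrays of greyness levels (0-255) covering
    one hemiface over roughly 400 msec. For each of the 12
    adjacent-frame pairs, the per-pixel greyness changes are summed
    and divided by the number of pixels (a mean difference score);
    the 12 mean scores are then summed into the grand total.
    """
    total = 0.0
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Change in greyness at each pixel location between frames
        diff = np.abs(curr.astype(int) - prev.astype(int))
        total += diff.sum() / diff.size  # mean difference score
    return total
```

Computing this score separately for the left and right hemiface regions yields the per-hemiface movement measures that are compared in the pilot study.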

The grand total difference score for the greyness level

represented a quantified measure of movement and was

calculated for each hemiface across all subjects in the

pilot study. More movement as indicated by the grand total

difference score was detected on the left side of the face

for males while posing angry and frightened faces. A left-

sided trend was also found for males posing sad expressions.

No significant differences were found in the happy and

disgust conditions, nor were any significant differences or

trends found for females in any of the conditions. Thus,

the results of the pilot study (10 males and 10 females)

suggested that digitized image analysis has the sensitivity

to detect facial asymmetries. A larger number of subjects,

however, is required to determine the robustness of these findings.



Several questions were addressed in this study:

1) The first question of interest asked whether emotional

expressions are asymmetrical and whether the valence of the

emotional expression impacts the direction of the asymmetry.

Previous research has been somewhat supportive of an

asymmetrical bias during voluntary emotional expressions.

The two most commonly cited hypotheses in the

neuropsychological literature are referred to as the Right

Hemisphere (RH) hypothesis and the Valence hypothesis. The

former proposes that regardless of emotional valence,

emotional processing and production is primarily a right

hemisphere function. Consequently, a left-sided asymmetry

for voluntary emotional expressions would be consistent with

the RH hypothesis due to the contralateral innervation of

the lower face (Rinn, 1984; Thompson, 1985). The

alternative perspective is the Valence hypothesis, and it

proposes that the right hemisphere is specialized for the

processing of negative emotions, while the left hemisphere

is involved in the processing of positive emotions. Some

researchers have conceptualized the dichotomy of the Valence

hypothesis in terms of approach and withdrawal, rather than

positive and negative valence (Davidson, 1993).

In general, the research on emotional production is

more consistent with the RH hypothesis than the Valence

hypothesis, although there are several studies which have

failed to support either perspective. In order to test

these two hypotheses, subjects were asked to pose both a

positive and negative facial expression. Both digitized

analysis and impressionistic judgments were employed to

determine whether emotional valence interacts with the

direction of a hemiface asymmetry.

2) The second question of interest was whether the pattern

of expressive asymmetry is consistent with our current

knowledge of facial innervation. In other words, which

portion of the face (upper vs. lower) was a lateral bias

more likely. Neuroanatomical research indicates the lower

and upper portions of the face have a different pattern of

innervation. Rinn (1984) reported that there is a

significant amount of both ipsilateral and contralateral

innervation from the brow region of the face down to the

upper eyelids. Below this point, the face is thought to

receive primarily contralateral innervation. If expressive

asymmetries are primarily the result of a hemispheric

specialization, then the lower portion of the face should

display the greatest asymmetry. Results indicating either

no difference in the degree of asymmetry or greater upper

face asymmetry would suggest that other processes were at

work. Morphological factors such as hard or soft tissue

differences have been proposed as alternative sources of

expressive asymmetries (Ekman, 1980; Nelson and Horowitz,

1980). They propose that expressive asymmetries are a

perceptual phenomenon caused by differences in fatty deposits

or bone structure in the posers.

Although structural factors appear to be a credible

explanation for expressive asymmetries, there is little

evidence supporting this position. In a review of the

anatomical literature, Sackeim (1985) concluded that the

face was remarkably symmetrical. Furthermore, he and his

colleagues did not find a correlation between hemiface size

and ratings of emotional expressivity (Sackeim et al.,

1980). Thus, neuropsychological mechanisms appear to be

more capable of accounting for expressive asymmetries than

structural biases. On the other hand, if neuropsychological

hypotheses are correct, then the pattern of expressive

asymmetry should be consistent with our current knowledge of

neuroanatomy. Therefore, the degree of emotional asymmetry

in the upper or lower portions of the face was assessed

utilizing digitized image analysis.
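One way to carry out such a regional assessment with the digitized approach is to partition the per-pixel movement map at the eyelid line and the facial midline before averaging. The sketch below is illustrative only; the split coordinates and function name are assumptions, not part of the method as originally described.

```python
import numpy as np

def quadrant_movement(movement_map, brow_row, midline_col):
    """Mean movement in the four facial quadrants.

    movement_map: 2-D array of absolute greyness changes (e.g.,
    accumulated over adjacent frame pairs). brow_row separates the
    upper face (bilaterally innervated) from the lower face
    (primarily contralaterally innervated); midline_col marks the
    facial midline. Returns means for (upper_left, upper_right,
    lower_left, lower_right), sides from the viewer's perspective.
    """
    upper, lower = movement_map[:brow_row], movement_map[brow_row:]
    return (upper[:, :midline_col].mean(), upper[:, midline_col:].mean(),
            lower[:, :midline_col].mean(), lower[:, midline_col:].mean())
```

A larger left-right difference in the lower quadrants than in the upper quadrants would be the pattern predicted by the hemispheric-specialization account.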

3) The third question of interest asked whether another

morphological characteristic (facial mobility) is correlated

with emotional facial asymmetries. Perhaps, the underlying

mechanism for the laterality of emotional expressions is a

general asymmetry in facial agility. Accordingly, all

expressions, regardless of emotionality, may display a

lateral bias.

There is some evidence consistent with an asymmetry in

facial mobility. Chaurasia and Goswami (1975) and Borod et

al. (1983) reported a left-sided bias for nonemotional

expressions. Borod et al. (1983) indicated, however, that

the left-sided advantage for mobility was uncorrelated with

emotional facial asymmetry. Given that Borod et al. (1983)

found a difference in hemiface mobility, the degree of

hemiface activity during nonemotional facial expressions

was assessed. Likewise, the level of correlation

between emotional and nonemotional facial activity was

calculated to determine whether hemiface mobility is

associated with emotional facial asymmetry.

4) The fourth question of interest asked to what degree

the digitized image analysis corresponds to

impressionistic ratings. Previous research has failed to

find a correlation between impressionistic ratings and more

objective measures of expressive asymmetry. It has been

suggested that the objective methods previously employed may

have lacked the sensitivity required to detect what

subjective raters perceive (Blonder et al., 1993). A

digitized approach may be capable of such sensitivity.

Nonetheless, results from the impressionistic portion

of the pilot study did not indicate a significant level of

agreement between the digitized image analysis and the


subjective ratings (Richardson et al., 1994). In the pilot

study, three untrained raters were presented a left-left and

right-right composite image for each posed emotion and were

asked to select the more intense expression. Results for

male subjects indicated that the left composites were rated

as more intense for disgust, while a trend favoring the

right composites was found for sadness. For females, a

left-sided bias was found only for the happy condition. The

discrepancy between the subjective results and the digitized

analysis was striking. Indeed, a direct comparison between

the two for each individual face indicated that the

agreement was at chance levels.

There were, however, several methodological weaknesses

in the pilot study that likely affected both the digitized

data and the subjective ratings. First, the control of head

movement was not completely secured. Because movement

alters the level of greyness at each pixel, any concurrent

head movement would likely distort the pixel intensities and

possibly provide inaccurate information regarding facial

emotional intensity. Second, the brightness of the lighting

was not specifically measured at each hemiface, thus leaving

open the possibility of a lighting bias which may have

falsely created an asymmetrical bias. Third, because some

studies have found that males tend to be more asymmetrical

in the production of emotional facial expressions, the

detection of a significant hemiface advantage in expressivity may have been

limited by the small number of males (10) included in the

pilot study. Finally, subjective raters were untrained and

intrarater reliability was not measured. There may have

been a substantial degree of variability within the raters' judgments.


Despite weaknesses in the pilot study, it did

demonstrate that digitized imaging is an effective tool for

the measurement of expressive asymmetries. In addition, it

is the first "objective" method to validate the consistent

findings of the impressionistic approaches. Although the

correlation between the digitized analysis and the

subjective ratings in the pilot study suggested no

relationship between the two methods, it is likely that

methodological issues are responsible for the dissociation.

Subjective ratings from nearly all prior studies of

voluntary emotional production have found a left-sided bias.

Given the unusual findings from the subjective raters in the

pilot study, impressionistic ratings were also obtained and

compared to the results of the digitized image analysis.

5) The fifth question of interest was from which portion of

the face (upper vs. lower) subjective raters primarily

base their judgments of emotional facial asymmetry.

Neuroanatomical evidence has suggested that the lower

portion of the face receives more contralateral innervation

than the upper portion. The upper region is thought to

receive predominantly bilateral innervation (Rinn, 1984).

Consequently, greater emotional asymmetry in the lower

portion of the face is consistent with the emotion based

hypotheses. Using the digitized image analysis, the degree

of asymmetrical motion for each portion of the face can be

determined. The digitized image analysis is based on a

dynamic measure of expressivity and the subjective ratings

are made from static images. Nonetheless, one would expect

that the greater the asymmetry in dynamic movement, the more

the static image should reflect that bias. A comparison

between the digitized analysis for the lower and upper

portions of the face and the subjective ratings of the whole

face was made to determine which portion of the face has a

greater influence on the impressionistic ratings.

6) Finally, the sixth question of interest explored to

what degree training raters increases the likelihood of

reliable and consistent findings. For the studies of

emotional facial asymmetry that have employed subjective

ratings, often, only three raters were used to assess

expressivity (Borod et al., 1981; Borod and Koff, 1983;

Borod et al., 1988; Moreno et al., 1990). Generally the

raters were trained in rating emotional expressions. Other

studies have relied upon many untrained raters (Sackeim et

al., 1978; Heller and Levy, 1981; Dopson et al., 1984; Braun

et al., 1988). In the digitized pilot study, three

untrained raters were used and atypical results were found.

Perhaps the reliability of untrained raters is questionable

and can only be offset by obtaining a large number of

raters. Consequently, three raters will receive training

and their judgments will be compared to those of twelve

untrained raters.


In summary, the evidence for voluntary emotional facial

asymmetries is inconclusive. Although many studies indicate

that there is a left-sided bias for emotional facial

expressions, several investigations have failed to find such

a bias. One variable which separates the positive and

negative findings is the rating system employed. Studies

relying on subjective ratings have generally found positive

results, while studies employing more objective methods have

frequently found negative outcomes. It has been proposed

that the more objective approaches have lacked the

sensitivity required to detect emotional facial asymmetries.

If this is the case, then a more sensitive objective measure

is required. A study which includes both approaches is

necessary to address this research problem. As a result of

the apparent lack of sensitivity of other "objective"

methods, digitized image analysis was employed as an

alternative approach in order to test the major hypotheses

presented below.


1) Right Hemisphere hypothesis of emotion: The right

hemisphere is specialized for the evaluation and production

of emotion. Based on the innervation of the lower portion

of the face (greater percentage of contralateral vs.

ipsilateral input from the frontal cortex), a left-sided

hemiface advantage for voluntary emotional expressions would

be consistent with this viewpoint. No expressive

asymmetries would be expected for nonemotional facial expressions.


2) Valence hypothesis of emotion: The right hemisphere is

specialized for the evaluation, production and experience of

negative emotion and the left hemisphere is specialized for

these components of positive emotion. This perspective

would be supported if a left-sided bias were found for

negative emotional expressions and a right-sided bias was

found for positive emotional expressions. No expressive

asymmetries would be expected for nonemotional facial expressions.


3) Facial Mobility hypothesis of facial expression

asymmetries: Expressive asymmetries are the result of a bias

in neural input, arising either through superior peripheral

innervation or a hemispheric specialization for facial

movement; that is, they reflect an underlying advantage in

facial mobility. Individual

differences in the direction of the asymmetry would not be

inconsistent with this viewpoint. Thus, a left-sided,

right-sided or no bias across subjects would all be possible

outcomes. However, a strong correlation between emotional

and nonemotional facial expression asymmetries would be

predicted by this hypothesis.


In order to assess emotional facial asymmetries,

subjects were recruited to produce a variety of emotional

and nonemotional facial expressions. The stimulus of

interest was the video images of their facial expressions.

Digitized image analysis and subjective ratings of intensity

were utilized to assess facial expression asymmetries.

Research assistants were recruited to prepare the stimuli

for digitization and raters were recruited to make

subjective ratings of the intensity of the facial expressions.

Subjects


Forty right-handed male volunteers between the ages of

18 and 31 were recruited from the university student

population. Subjects were either directly asked to

volunteer or were recruited from the General Psychology

Subject Pool. Because there is evidence of greater

variability concerning hemispheric specialization of

cognitive functions among left-handers, only right-handers

were asked to participate in the experiment. Handedness was

assessed with the Briggs and Nebes handedness questionnaire

(1975). Additional exclusionary criteria included: (a)

presence of facial hair which might appear on the digitized

image (e.g. mustaches, beards and long sideburns); (b) self-

report of a current or past mood disorder (e.g. depression,

anxiety); (c) self-reported history of head injury, seizures

or other neurological disorders; (d) self-reported history

of learning disability.

Stimulus Generation

Subjects were told that they were participating in a

study of facial expressions. After obtaining informed

consent, subjects were videotaped while sitting down with

their head placed in a restraining device. The experiment

consisted of three conditions, one involving emotional

facial expressions and two involving nonemotional facial

expressions. The nonemotional expressions were separated

into those that mimicked emotional movements and those which

have been used to test for buccal facial praxis. (Buccal

Facial Apraxia is a neurological disorder in which there is

impairment in the ability to make task-oriented facial

movements such as blowing out a match). The order of the

conditions was randomized and counterbalanced such that one-

third of the Ss were asked to produce each emotional

expression first, one-third of the Ss were requested to

produce the mimicked expressions first and the final third

were asked to make the movements used to test buccal facial

praxis first.

In the voluntary emotional expression condition, Ss

were asked to pose a series of emotional facial expressions.

For example, Ss were asked to pose a given emotion in the

following manner: "Without moving your head, show me the

most intense expression of anger that you can make". After

two emotional faces were elicited, the experimenter

shortened the elicitation instructions for the rest of the

target emotions (e.g. "Show me disgust"). Subjects were not

asked to maintain the expressions since only the first 400

msec of the expressions were utilized for the analysis.

Five different emotions were elicited (happy, angry, sad,

disgust and fear) and the order of elicitation was

randomized and counterbalanced. A negative and positive

emotion (angry, happy) were chosen for the data analysis.

After eliciting all of the emotions once, the procedure was


In the emotional homologue condition, Ss were asked to

display 5 different facial movements. These muscle

movements were selected based on their correspondence with

the muscle movements associated with the five emotions in

the emotional expression condition (Izard, 1977). Ss were

asked to (1) squint their eyes (disgust), (2) show as many

teeth as possible with mouth closed (happy), (3) raise and (4)

knit their eyebrows (fear and anger) and (5) pull both

corners of their lips downwards (sad). Once again, Ss

received requests to produce each facial movement twice and

the order of presentation was randomized and

counterbalanced. The two expressions that mimicked the

angry and happy expressions were chosen for the data


In the buccal praxis condition, Ss were asked to pose

five expressions used to test for buccal facial praxis. The

expressions were (1) wrinkle brow, (2) suck on a straw, (3)

blow out a match, (4) crinkle nose and (5) puff out cheeks. Once

again, Ss were requested to pose each expression twice and

the order of presentation was randomized and

counterbalanced. The wrinkle brow and suck on straw

expressions were chosen for the data analysis.

Stimulus Preparation

A 400 msec, 13 frame portion of each expression was the

stimulus of interest. Each of the 13 frames of each

expression was edited into two partial hemifaces such that

each hemiface contained a rectangular area from the eyebrow

down to the top of the chin and from the outside of the eye

across to the middle of the nose and lip philtrum. The exact

determination of the facial midline was made by calculating

the midpoint of the lip philtrum and the midpoint between the

inner canthi of the eyes. The vertical line which was

equidistant from these two midpoints was used to divide the

face into two hemifaces.
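The midline construction described above can be written as a small routine (a sketch assuming the landmark pixel coordinates were located by hand; the function name and coordinate convention are illustrative, not part of XICAS):

```python
def facial_midline_x(inner_canthi, philtrum_edges):
    # inner_canthi: ((x, y), (x, y)) for the two inner eye corners.
    # philtrum_edges: ((x, y), (x, y)) for both edges of the lip philtrum.
    # The midline is the vertical line equidistant from the canthal
    # midpoint and the philtrum midpoint, i.e. the mean of their x-values.
    canthi_mid_x = (inner_canthi[0][0] + inner_canthi[1][0]) / 2.0
    philtrum_mid_x = (philtrum_edges[0][0] + philtrum_edges[1][0]) / 2.0
    return (canthi_mid_x + philtrum_mid_x) / 2.0
```

Averaging the two landmark midpoints, rather than trusting either one alone, makes the midline robust to a slightly off-center nose or mouth.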

For some of the analyses, the two hemifaces were also

divided into a lower and upper portion. Because the area

from the lower eyelid down receives contralateral

innervation, a horizontal line dividing the upper and lower

portions of the face was constructed. The line dividing the

upper and lower portions of the face was tangential to the

lowest observable edge of each S's eyes.

Apparatus


Video Equipment: An MTI Dage model 68 black and white

videocamera with a 35mm lens (promaster spectrum 7) was used

for videotaping facial expressions onto a Panasonic AG-6200

VHS video cassette recorder. High Standard VHS videotape

(T-120HSN) by TDK was utilized for the videotaping.

Illumination: Two 150 watt Tungsten lightbulbs were

employed as the primary light source in order to provide a

sufficient and balanced level of lighting. The wattage of

lightbulb had been tested previously and had produced a

level of brightness that was effective for measuring changes

in pixel intensity. The optimal level of brightness

produces a bell-shaped distribution of greyness levels for

the digitized image of a light-skinned person. The peak of

this curve is located at about the midpoint on the greyness

scale. Once the optimal distance was determined, the floor

was marked to keep the position of the lightbulbs constant

across subjects. Indirect lighting was produced by

reflecting the two lightbulbs into white, photography

umbrellas. Indirect lighting was used to decrease the level

of shadows on the face. To ensure that both sides of the


face were receiving equal levels of light, the brightness of

the light was measured at the face by a light meter. The

light on each side of the face was determined to be within

one lux (a unit of illuminance).

Head Restraint: A restraining device was used to

eliminate significant head movements. The device consisted

of a three to five foot adjustable shaft which was attached

to a base. Protruding from near the top of the shaft were

two adjustable, padded arms that were designed to be placed

at both sides of the head. In addition, a head rest was

connected to the shaft so the back of the head remained

stable. The adjustable arms and shaft allowed the height

and the width of the head restraint device to be altered as

necessary. Because the head restraint device was subject to

lateral movements, an additional bolt was installed to more

securely connect the headrest portion with the main shaft,

and thereby, increase stability.

Computer Software: For the purposes of editing and

digitizing the videotape images, the Xybion Image Capture

Analysis System (XICAS) was employed. This system allowed

for the capture of a 13 frame, 400 msec sequence of a

videotaped image. It was also capable of creating mirror

reversals of the images so that composite faces (left-left

and right-right) were constructed from each hemiface.

Video Monitor: A 15" x 19" Conrac monochrome monitor

(model 2600) was utilized for viewing the images. In

conjunction with XICAS, the monitor provided approximately

30,000 pixels for the analysis of a given facial area.

Videographic Printer: A series of 2.5" x 2.5"

digitized photographs of the facial expressions were

produced by a Sony videographic printer (UP 701N) onto 110mm

Type I Sony photograph paper (UPP-110S).

Research Assistants

Four undergraduate psychology students were recruited

to assist in the capture and analysis of the digitized

images as well as the creation of hemiface composites from

videotapes of facial expressions. They were blind to the

experimental hypotheses and were trained to utilize the

computer program for digitized image analysis. For their

participation, assistants received three credit hours toward

a course in psychology research.

The training process began with an orientation to the

computer equipment. Assistants were taught the commands

necessary to operate the computer program for the capture

and digitization of videotaped images. In addition, they

were given practice videotapes of facial expressions until

they demonstrated an adeptness at capturing the facial

expressions at the appropriate moment. The appropriate

moment was defined as the initiation of the expression.

From a training perspective, the advantage of defining the

initiation of the expression as the appropriate moment for

capture was that assistants obtained tangible evidence of an

effective capture (changes in pixel intensities).

The criterion for an effective capture was determined by

the rate of increase in the mean difference scores across

the 13 frames. When the mean difference scores are plotted

and the image is captured at the onset, the plot displays a

gradual rise time across the first few frames and then an

acceleration. On the other hand, a rapid rise time from the

first frame is indicative of an expression captured after

its onset. To insure that the target expression was

captured prior to its initiation, the investigator examined

the rise time of the quantitative measure of facial

movement. A capture was considered successful if the sum of

the difference scores for the first 3 frames was less than

one-sixth of the grand total difference score. Upon

completion of three consecutive captures in which a gradual

rise time was displayed, assistants were deemed ready to work with

the experimental facial expressions.
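The capture criterion above amounts to a single comparison (a sketch; the list of inter-frame mean difference scores is assumed to be available, and reading the criterion as "the first three difference scores" is an interpretation of the passage):

```python
def capture_successful(mean_diff_scores):
    # mean_diff_scores: the 12 inter-frame mean difference scores
    # computed from a 13-frame capture.
    # Success: the summed magnitude of the first 3 scores is below
    # one-sixth of the grand total difference score, indicating a
    # gradual rise time at the start of the vignette.
    grand_total = sum(abs(d) for d in mean_diff_scores)
    early = sum(abs(d) for d in mean_diff_scores[:3])
    return early < grand_total / 6.0
```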

In addition to being trained to capture digitized

images, assistants were trained to make composite photos of

the facial expressions. This involved dividing the face in

half and creating whole faces from one hemiface and its

mirror reversal. An important step in this process involved

the selection of a midline that most accurately divided the

face. The experimenter reviewed the practice composite

images and measured the distance between the midline and two

facial referents (the midpoint between the inner canthi of

each eye and the midpoint of both edges of the lip philtrum)

to ensure that the midline was equidistant from these two

points. Assistants were given non-experimental digitized

images to practice the selection of an accurate midline.

Following three consecutive selections in which the position

of the midline chosen by the assistants matched that of the

experimenter, assistants began the creation of composite images.


Digitization of Pixel Intensities

Although Ss posed each facial expression twice, blinded

research assistants were instructed to select from the

videotape only one image per expression. The criteria for

selection included degree of head movement, clarity of onset

and rapidity of the expressions. Assistants made this

determination based on their impression of these factors.

Once an expression was chosen, the assistants examined the

videotape in slow motion to determine the initiation of the

expression. From that point, 13 still frames (30.75 msec

apart) from a 400 msec portion of the expression were

captured for analysis by the experimenter. A 400 msec

portion of the videotape is the maximum duration of an

expression that XICAS is capable of capturing.

To obtain a quantitative measure of expression change,

the difference in greyness level at each pixel location was

calculated for two adjacent frames of the expression

vignette. A mean difference score between two adjacent

frames was computed by summing all the difference scores at

each pixel location and dividing by the total number of

pixels utilized. For each of the target expressions, a

summation of the absolute value of the mean difference

scores across 13 frames (the grand total difference score)

was computed for each hemiface. The grand total difference

score was considered a measure of entropy. Greater entropy

is associated with greater facial movement. An asymmetry

score was calculated using the following formula: (L-R)/

(L+R) with L representing the grand total difference score

for the left hemiface and R representing the grand total

difference score of the right hemiface.
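Assuming each hemiface vignette is stored as a 13-frame array of greyness levels, the grand total difference (entropy) score and the asymmetry score described above might be computed as follows (a sketch; the NumPy array layout and function names are assumptions, not part of XICAS):

```python
import numpy as np

def grand_total_difference(frames):
    # frames: array of shape (13, height, width) of greyness levels
    # for one hemiface. For each of the 12 adjacent-frame pairs, the
    # mean difference score is the per-pixel difference summed over
    # pixels and divided by the pixel count; the grand total is the
    # sum of the absolute values of these mean difference scores.
    frames = np.asarray(frames, dtype=float)
    mean_diffs = np.diff(frames, axis=0).mean(axis=(1, 2))
    return np.abs(mean_diffs).sum()

def asymmetry_score(left_frames, right_frames):
    # (L - R) / (L + R), with positive values indicating a left-sided
    # bias and negative values a right-sided bias.
    L = grand_total_difference(left_frames)
    R = grand_total_difference(right_frames)
    return (L - R) / (L + R)
```

Dividing by (L + R) bounds the score between -1 and +1, which is what makes it comparable across subjects of differing overall expressivity.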

Subjective Ratings

In addition to the digital analysis, subjective ratings

of expressivity were also collected. This was done by first

having the research assistants construct composite images

from the final frame of each facial expression. A composite

image is constructed from one hemiface and its mirror

reversal. Composite images of both the left and right

hemifaces were assembled for each facial expression. Using

the videographic printer, a digitized photograph was

produced for each hemiface composite. Each pair of

composite photographs (one left hemiface composite and one

right hemiface composite) was mounted onto a 5" x 8" index

card. The left and right hemiface composite photographs

were vertically positioned on the card such that one was

directly above the other. The top and bottom positions were

randomly determined.

Raters


Twelve untrained (eight female and four male) and three

trained raters (two female and one male) were employed to

make subjective ratings of facial expressivity. Untrained

raters were obtained from the General Psychology Subject

Pool and their participation satisfied a class requirement.

Trained raters received three credit hours towards a course

in psychology research. The trained raters were screened on

the facial subtests of the Florida Affect Battery (Bowers,

Blonder and Heilman, 1991). The Florida Affect Battery

consists of a series of affect perception tests and is an

assessment instrument of the ability to perceive and

interpret facial emotions. Prospective raters were not

accepted if they scored more than one standard deviation below

the mean on any facial subtest in the battery.

Viewing video images of emotional expressions formed

the core of the training procedure for the trained raters.

The raters were shown 13 separate images of an expression

vignette. This process was repeated with additional

vignettes and included a variety of posers and emotions

(i.e. happy, angry, sad, disgust and fear). Thus, the

trained raters were exposed to a range of emotional

intensities, a collection of individual faces, and a set of

particular emotions.

Raters were instructed to attend to the movement of

various facial features that were associated with given

emotions. For example, they were asked to concentrate on

the lowering of the eyebrows, the narrowing of the eyes and

the widening of the nose for an angry face. For sadness,

raters were directed to examine the lowering of each end of

the lips and the wrinkling of the brow. The experimenter

advised the raters to attend to the widening of the eyes,

raising of the eyebrows and the opening of the mouth for

a fear expression. Raters were also instructed to concentrate

on the crinkling of the nose, the raising of the cheeks, the

lowering of the ends of the lips and the lowering of the

brow for a disgusted face. For happiness, raters were

directed to examine the raising of the cheeks and the

raising of the ends of the lips.

Following this procedure, raters were given a pair of

faces of the same individual posing the same emotional

expression at different points in time. They were

instructed to choose the more intense. They viewed a total

of 20 faces. If they obtained an 85% accuracy rate (selection

of the face at the later point in time), they proceeded to

the experimental facial expressions. Otherwise, they

received additional training and were retested for adequate

accuracy. One rater achieved the cutoff on the first

attempt, a second rater achieved it on the second and the

last rater reached the cutoff on the fourth attempt.

Ratings Procedure

Each rater (trained and untrained) was shown each card

and asked to choose which of the two faces was more

intensely expressive. Because there were 40 subjects

portraying two emotional and four nonemotional movements,

240 target stimulus cards were created. Duplicates of 24

cards (four of each emotional and nonemotional expression)

were included in the rating task in order to assess

intrarater reliability. Thus, a total of 264 stimulus cards

were presented to the raters. All of the cards associated

with a particular expression formed a set. The raters

viewed all of the cards from one set before making ratings

on the next set. The order of the sets was counterbalanced

and randomized.

Results


In the digital analysis, the primary dependent measures

of facial activity were the entropy score and the asymmetry

score. As previously described, an entropy score was

calculated for each expression posed by each subject. The

entropy score was based on the average change in pixel

intensity across all the relevant pixels in an expression.

The change in pixel intensity was calculated by summing the

absolute value of the difference scores for each individual

pixel. A difference score was determined by taking the

difference between the pixel intensities of adjacent frames in a

facial expression. There were 13 frames per expression. An

entropy score can be computed for any portion of the face.

Table 1 depicts whole face entropy scores by expression.


Table 1

Entropy Change for Whole Face

M sd Min Max

Angry 104.37 (52.54) 50.63 268.95

Happy 156.49 (84.33) 39.64 356.64

Lower Eyebrows 100.17 (46.25) 42.99 267.21

Show Teeth 180.71 (62.49) 68.13 358.13

Wrinkle Brow 134.75 (79.91) 46.26 373.76

Suck on Straw 91.22 (36.76) 47.47 247.74


Laterality of Emotional Expressions based on Entropy Score

The first question of interest explored whether there

were lateral asymmetries associated with emotional

expressions, and if so, was the direction of the asymmetry

associated with the valence of the emotion. The second

question asked whether lateral asymmetries interacted with

the area of the face examined (i.e. top, bottom). To

address both of these questions, a 3-way (2x2x2) Repeated

Measures Analysis of Variance (ANOVA) was performed with

entropy score as the dependent measure. The within factors

were emotion (angry, happy), side of face (left, right) and

vertical portion of face (top, bottom). A significant main

effect was found for emotion [F(1,39) = 8.11, p= .007]

indicating that the happy expression (M= 152.00, sd= 91.34)

was more active than the angry expression (M= 108.33, sd=

56.28). The analysis revealed two significant interactions

which will be discussed below.

The first significant interaction was between the side

of face and vertical portion of face [F(1,39) = 5.75, p=

.021]. Post hoc t-tests indicated that the only significant

difference in quadrant activity was between the top right

(M= 143.45, sd= 104.53) and top left (M= 130.35, sd= 98.44)

quadrants with the top right being greater [t(39) = -2.98,

p= .005].

The second significant interaction was between emotion

and vertical portion of face [F(1,39) = 5.33, p= .026]. The

means of the upper and lower portions of the face for the

angry and happy expressions are displayed in Table 2. Post

hoc t-tests suggested that the emotion x vertical

interaction was a consequence of reduced activity in the lower

portion of the face during an angry expression. The bottom

angry expression was less active than the top angry [t(39) =

2.44, p= .019], top happy [t(39) = 2.18, p= .035] and bottom

happy [t(39) = 5.06, p< .001] expressions. No other pairs

were significantly different.


Table 2

Mean Entropy Change

            Angry                Happy

TOP      130.94 (102.31) a    142.86 (149.32) b

BOTTOM    85.73 (52.18) c     161.15 (95.23) d

Post hoc summary: a>c*, b>c*, d>c***, a=b=d

+ p< .10  * p< .05  ** p< .01  *** p< .001

In summary, the results of the entropy score analysis

indicated that the activity level was small in the bottom of

the face of the angry expression. The activity level was

lower there than in the top of the angry expression and both

the top and bottom of the happy expression. In relation to

question #1, the analysis also indicated that a right-sided

bias was found in the upper face for both the angry and

happy expression. No significant lateral effects were found

in the lower face.

Laterality of Emotional Expressions based on Asymmetry Score

One possible confound using raw entropy scores is that

differences in facial asymmetry as measured by changes in

pixel intensity might be weighted in favor of the more

expressive subjects. To address this concern, a second

Repeated Measures ANOVA was conducted using an asymmetry

score as a dependent measure. The asymmetry score was

calculated by subtracting the entropy score on the right

side of the face from the entropy score on the left side of

the face, and then dividing the difference by the sum of the

entropy scores on the left and right side of the face. An

asymmetry score can be determined for the whole face or

portions of the face (i.e. top, bottom). The asymmetry

score was calculated for each expression produced by each subject.


With the asymmetry scores as the dependent variable, a

2x2 Repeated Measures ANOVA was performed. The two within

subject factors were expression (angry, happy) and vertical

portion of face (top, bottom). The side of face variable

was embedded within the dependent measure itself.

Results of the analysis revealed a significant main

effect for vertical portion of face [F(1,39) = 9.25, p=

.004] indicating that the top portion (M= -0.03, sd= 0.08)

of the expression was significantly more right-biased than

the bottom portion (M= 0.02, sd= 0.08), regardless of

emotional expression.

The vertical x emotion interaction was also found to be

significant [F(1,39) = 5.04, p= .031]. The means of the

upper and lower portions of the face for the angry and happy

expressions are displayed in Table 3. Post hoc t-tests

revealed that the bottom angry expression was significantly

more left-biased than the top angry [t(39) = 4.17, p< .001],

top happy [t(39) = 2.55, p= .015] and bottom happy [t(39) =

2.08, p= .045] expressions. The top angry expression was

nearly significantly more right-sided than the bottom happy

expression [t(39) = -1.99, p= .054]. There were no other

significant differences.


Table 3

Asymmetry Scores

            Angry             Happy

TOP      -0.05 (0.10) a    -0.02 (0.11) b

BOTTOM    0.04 (0.10) c     0.00 (0.11) d

Post hoc summary: c>a*, c>b***, c>d*, d>a+, a=b, b=d

+ p< .10  * p< .05  ** p< .01  *** p< .001

The prior analyses measured differences in the

asymmetry scores between two factors. A significant

difference in the asymmetry scores between two factors,

however, does not guarantee that any individual factor is


significantly asymmetrical. For example, two factors which

are not significantly asymmetrical may be significantly

different from one another if one score displays a left-

sided tendency and the other displays a right-sided

tendency. One sample t-tests allow a measurement of

asymmetry for an individual expression. Thus, to determine

whether an individual expression was asymmetrical, one

sample t-tests were conducted for the upper and lower

portions of the face.

The results indicated that only the angry expression

was significantly asymmetrical and the direction of the bias

was dependent upon the vertical position. As suggested in

Table 3, the bottom angry expression (M= 0.04, sd= 0.10) was

significantly left-sided [t(39) = 2.51, p= .016] and the top

angry expression (M= -0.05, sd= 0.11) was significantly

right-sided [t(39) = -2.88, p= .006].

In summary, when controlling for overall expressivity

(by using an asymmetry score as the dependent variable),

there was a significant left-sided bias in the lower portion

of the face for the angry expression. The asymmetry score

analysis also indicated that only the angry expression

displayed a significant right-sided asymmetry in the upper

face. (The asymmetry score of the happy expression in the

upper face was consistent with the direction of the angry

expression). The results from the entropy score analysis

suggested that the right-sided bias in the upper face was

evident regardless of the emotional expression. However, in

contrast to the asymmetry score data, the entropy score

analysis revealed no evidence of a bias in the lower portion

of the face for either expression.

Influence of Lighting on Expressive Asymmetries

Critics have proposed that lighting biases are

responsible for detected expressive asymmetries (Spinrad,

1980). A lighting bias is an asymmetry in light intensity

such that one side of the face receives brighter lighting

than the other. A method utilizing changes in pixel

intensities is particularly susceptible to contamination by

uneven lighting because the baseline pixel intensities would

differ between the sides. As discussed previously, the

pixel intensity is directly correlated with the brightness

of lighting.

Thus, a baseline measure of mean pixel intensity for

each side of the face was calculated to determine the

existence of a possible lighting bias. The baseline measure

was calculated by subtracting the average pixel intensity of

the right side of the face from the average pixel intensity

of the left side of the face. The baseline measure was

taken during a neutral expression prior to the posing of the

target expressions. A one sample t-test was conducted with

the difference in baseline mean pixel intensity as the

dependent measure. The results indicated that the

difference (M= -3.39, sd= 10.49) was significantly biased

towards the right side of the face [t(39) = -2.04, p=.048].
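The baseline lighting check reduces to a one-sample t statistic on the per-subject left-minus-right mean intensities (a sketch; the function name is illustrative, and only the t statistic, not its p value, is computed here):

```python
import math

def lighting_bias_t(baseline_diffs):
    # baseline_diffs: one left-minus-right mean pixel intensity per
    # subject, taken during a neutral expression. A reliably negative
    # t (df = n - 1) indicates brighter lighting on the right hemiface.
    n = len(baseline_diffs)
    mean = sum(baseline_diffs) / n
    var = sum((d - mean) ** 2 for d in baseline_diffs) / (n - 1)
    return mean / math.sqrt(var / n)
```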

Having established the existence of a lighting bias, an

association between the asymmetry scores and the lighting

was evaluated. The asymmetry scores for the angry and happy

expressions (top, bottom and whole face) were correlated

with the lighting difference. Additional expressions

included in the study (lower eyebrows, show teeth, wrinkle

brow and suck on straw) were also correlated with the

difference in lighting.

Correlations between lighting and the asymmetry scores

of the whole face were found for the happy (r= .4520, p=

.003) and show teeth (r= .4071, p= .009) expressions. Only

one correlation between lighting and the asymmetry scores

for the top portion of the face obtained significance and

that was for the suck on straw expression (r= .3666, p=

.046). In the bottom of the face, the happy (r= .5183,

p=.001), lower eyebrow (r= .4122, p= .008), show teeth (r=

.5217, p= .001) and suck on straw (r= .4915, p= .001)

expressions were all significantly correlated with lighting.

Taken together, the results suggest greater light

intensity on the right side of the face, which appeared to

be biasing the asymmetry scores in that direction. The bias

was particularly evident in the lower portion of the face.

In particular, the significant correlations found there

suggest that a right-sided lighting bias may be masking

actual left-sided expressive asymmetries in the lower face.

Table 4 depicts the correlations between the asymmetry

scores of each expression and the lighting bias.


Table 4

Correlations between Asymmetry Scores and the Lighting Bias

Expression Whole Top Bottom

Angry .1857 .0113 .2672+

Happy .4520** .1282 .5183**

Lower Eyebrow .2945+ .1610 .4122**

Show Teeth .4071** .1344 .5217**

Wrinkle Brow .2216 .1905 .2087

Suck on Straw .3007+ .3666* .4915**

+ p< .10 p< .05 ** p< .01 *** p< .001

Emotional Expression Laterality with Correction for Lighting

Given the potential of a lighting confound, a method

was sought to statistically adjust for the bias. Because

baseline differences in the measure of interest were the

potential confound, a statistical method which compensated

for the baseline differences was deemed most appropriate.

One research area which has often encountered the

difficulty of individual baseline differences is that which

utilizes psychophysiological measures. To adjust for

individual baseline differences, Lacey (1956), a notable

psychophysiological researcher, developed a formula for

calculating a residualized gain score. The residualized

gain score is the measure of interest after the individual

baseline differences have been factored out. The formula is

as follows:
Z_d = 50 + 10 [ (Y_d - R X_d) / sqrt(1 - R^2) ]

where Z = the residualized gain score; Y = the dependent measure; X = the baseline level of the dependent measure; R = the correlation between X and Y; and the subscript d denotes the individual subject.
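Assuming, as in Lacey's formulation, that X and Y enter the formula as standardized scores, the computation can be sketched in Python; the data below are illustrative, not the study's:

```python
import math

def residualized_gain(baseline, outcome):
    """Lacey-style residualized gain scores: the portion of each subject's
    outcome not predictable from baseline, rescaled to mean 50, sd 10."""
    n = len(baseline)

    def standardize(v):
        m = sum(v) / n
        sd = math.sqrt(sum((a - m) ** 2 for a in v) / (n - 1))
        return [(a - m) / sd for a in v]

    zx, zy = standardize(baseline), standardize(outcome)
    r = sum(a * b for a, b in zip(zx, zy)) / (n - 1)  # Pearson correlation
    return [50 + 10 * (y - r * x) / math.sqrt(1 - r ** 2)
            for x, y in zip(zx, zy)]

# Hypothetical baseline lighting intensities and entropy scores:
base = [100.0, 105.0, 98.0, 110.0, 102.0]
post = [1200.0, 1500.0, 1100.0, 1700.0, 1350.0]
adjusted = residualized_gain(base, post)
```

A useful property of this score is that, by construction, it is uncorrelated with the baseline measure, which is precisely why it removes the individual baseline differences.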

Using Lacey's formula for calculating the residualized

gain score, the first two questions were re-examined while

statistically adjusting for individual differences in

baseline lighting intensity. A 3-way (2x2x2) Repeated

Measures Analysis of Variance (ANOVA) was then performed

with the adjusted entropy score as the dependent measure.

The within factors were emotion (angry, happy), side of face

(left, right) and vertical portion of face (top, bottom). A

significant main effect was found for emotion [F(1,39) =

8.86, p= .005] indicating that the happy expression (M=

1639.42, sd= 929.33) was more active than the angry

expression (M= 1175.68, sd= 565.12). A significant main

effect was also found for vertical portion of face [F(1,39)

= 4.08, p= .05] suggesting that the top part of the face (M=

1594.08, sd= 1016.68) displayed greater movement than the

bottom portion (M= 1221.02, sd= 589.18). A third main

effect was found for side of face [F(1,39) = 20.87, p< .001]


indicating that right side (M= 1482.17, sd= 609.16) was more

active than the left side (M= 1332.93, sd= 589.94). The

analysis revealed two significant interactions which will be

discussed below.

The first significant interaction was between the side

of face and vertical portion of face [F(1,39) = 24.17, p<

.001]. The means of each quadrant are displayed in Table 5.

Post hoc t-tests were conducted with a correction for

multiple comparisons utilizing the Bonferroni method. The

t-tests suggested that the side x vertical interaction was a

consequence of greater activity in the top right quadrant.

The top right portion of the face was more active than the

top left [t(39) = -7.09, p< .006] and bottom left [t(39) = -

2.80, p= .048]. A trend suggested that the top right was also more active than the bottom right [t(39) = 2.75, p= .054]. No other pairs were significantly different.



Table 5
Mean Adjusted Entropy Change

             LEFT                   RIGHT
TOP          1435.42 (987.77) a     1752.74 (1063.80) b
BOTTOM       1230.44 (600.00) c     1211.61 (619.38) d

Post hoc summary: b>a***, b>c*, b>d+, a=c=d
+ p < .10   * p < .05   ** p < .01   *** p < .006
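The Bonferroni correction used in these post hoc tests amounts to comparing each raw p-value against the family-wise alpha divided by the number of comparisons. A minimal sketch with hypothetical p-values (not the study's):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: compare each raw p against alpha / m
    (equivalently, multiply each p by m and cap at 1)."""
    m = len(p_values)
    adjusted = [min(p * m, 1.0) for p in p_values]
    significant = [p < alpha / m for p in p_values]
    return adjusted, significant

# Hypothetical raw p-values from six pairwise quadrant comparisons:
raw = [0.001, 0.008, 0.009, 0.020, 0.300, 0.700]
adj, sig = bonferroni(raw)
```

This keeps the probability of any false positive across the whole family of comparisons at or below alpha, at the cost of reduced power for each individual test.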


The second significant interaction was between emotion

and side of face [F(1,39) = 4.21, p= .047]. The means of

the left and right portions of the face for the angry and

happy expressions are displayed in Table 6. Bonferroni

corrected t-tests suggested that the emotion x side

interaction was a consequence of greater activity in the

right side of the face during a happy expression. The right

happy expression was more active than the right angry [t(39)

= -3.13, p= .018], left angry [t(39) = -3.97, p< .006] and

left happy [t(39) = -4.50, p< .006] expressions. There was

also a trend for the left happy to display greater entropy

than the left angry [t(39) = -2.70, p= .06]. No other

pairs were significantly different.


Table 6
Mean Adjusted Entropy Change

             LEFT                  RIGHT
ANGRY        1131.20 (524.41) a    1220.16 (630.58) b
HAPPY        1534.66 (930.86) c    1744.18 (950.84) d

Post hoc summary: d>a***, d>b*, d>c***, c>a+, a=b, b=c
+ p < .10   * p < .05   ** p < .01   *** p < .006

Taken together, a comparison of the entropy change data

with and without the adjustment suggests some change in the

results associated with the correction for individual

baseline differences in lighting. For example, only the

adjusted analyses indicated greater activity in the right

side of the happy expression. In addition, the vertical x

emotion interaction disappeared in the adjusted analyses.

Both the adjusted and non-adjusted data indicated that the

top right quadrant displayed the greatest entropy. To

pursue whether the adjustments for lighting significantly

impacted the questions of interest, a conversion of the

corrected entropy score into a corrected asymmetry score was

conducted. The calculation of the adjusted asymmetry score

was performed as previously described with the adjusted

entropy scores inserted in place of the original entropy scores.


Emotional Laterality based on Adjusted Asymmetry Score

With the adjusted asymmetry scores as the dependent

variable, a 2x2 Repeated Measures ANOVA was performed. The

two within subject factors were expression (angry, happy)

and vertical portion of face (top, bottom). The side of

face variable was embedded within the dependent measure


Results of the analysis revealed a significant main

effect for emotion [F(1,39) = 22.69, p< .001] indicating

that the happy expression (M= -0.08, sd= 0.08) was more

right-biased than the angry expression (M= -0.01, sd= 0.07).

A significant main effect was also found for vertical

portion of face [F(1,39) = 69.06, p< .001] suggesting that

the top portion (M= -0.11, sd= 0.07) of the expression was


significantly more right-biased than the bottom portion (M=

0.02, sd= 0.09), regardless of emotional expression.

In addition to the main effects, the vertical x emotion

interaction was found to be significant [F(1,39) = 8.67, p=

.005]. The means of the upper and lower portions of the

face for the angry and happy expressions are displayed in

Table 7. Also included in the table is the number of

subjects determined to be left-biased or right-biased. The

bias was determined by the asymmetry score: any score

greater than zero was considered left-biased and any score

less than zero was considered right-biased.
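The sign rule that generates the left/right counts in the table can be sketched in Python (the scores below are hypothetical, not the study data):

```python
def bias_counts(asymmetry_scores):
    """Count subjects by sign of the asymmetry score:
    > 0 = left-biased, < 0 = right-biased (zero counted as neither)."""
    left = sum(1 for s in asymmetry_scores if s > 0)
    right = sum(1 for s in asymmetry_scores if s < 0)
    return left, right

# Hypothetical adjusted asymmetry scores for one expression/region:
scores = [-0.10, 0.04, -0.02, -0.15, 0.01, -0.07]
left_n, right_n = bias_counts(scores)
```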

An a priori t-test was conducted to examine whether a

difference in asymmetry in the bottom of the face was

associated with the valence of the emotion. The results

indicated that the bottom angry expression was significantly

more left-biased than the bottom happy [t(39) = 5.16, p<

.001] expression. Post hoc t-tests with a Bonferroni

correction also revealed that the bottom angry expression

was significantly more left-biased than the top angry [t(39)

= 8.46, p< .006] and top happy expression [t(39) = 9.51, p<

.006]. The bottom happy expression was also found to be

less right-biased than the top angry [t(39) = 3.04, p= .024]

and top happy [t(39) = 4.17, p< .006] expressions. There

were no other significant differences.


Table 7
Adjusted Asymmetry Scores

                   ANGRY                     HAPPY
             M      sd      L/R        M      sd      L/R
TOP          -0.10  (0.09)   4/36 a    -0.12  (0.09)   4/36 b
BOTTOM        0.07  (0.10)  37/3  c    -0.03  (0.11)  14/26 d

Post hoc summary: c>a***, c>b***, c>d***, d>a*, d>b***, a=b
+ p < .10   * p < .05   ** p < .01   *** p < .006

The prior analyses measured differences in the

asymmetry scores between two factors. As discussed

previously, a significant difference in the asymmetry scores

between two factors does not guarantee that any individual

factor is significantly asymmetrical. Thus, to determine

whether an individual expression was asymmetrical, two a

priori one sample t-tests were conducted for the lower

portion of the face and two post hoc one sample t-tests with

a Bonferroni correction were conducted for the upper portion

of the face.

The results indicated that the direction of the

asymmetry was dependent upon the valence of the emotion and

the vertical position of the face. As suggested in Table 7,

the bottom angry expression was significantly left-sided

[t(39) = 4.51, p< .001], while the top angry [t(39) = -7.20,

p< .006] and top happy [t(39) = -8.54, p< .006] expressions

were significantly right-sided. A trend for right-sidedness

was found for the bottom happy expression [t(39) = -1.92, p=


In summary, when controlling for overall expressivity

(by using an asymmetry score as the dependent variable) and

correcting for baseline differences in lighting (by

calculating the residualized gain score), there was a

significant left-sided bias in the lower portion of the face

for the angry expression. Consistent with the Valence

hypothesis, there was a trend toward right-sidedness in the

bottom portion of the happy expression. The asymmetry score

analysis also indicated that both the angry and happy

expressions displayed a significant right-sided asymmetry in

the upper face. Consistent with the adjusted asymmetry

scores, the results from the entropy score analysis

suggested that the right-sided bias in the upper face was

evident regardless of the emotional expression. However, in

contrast to the asymmetry score data, the entropy score

analysis revealed no evidence of a bias in the lower portion

of the face for either expression.

Asymmetry Scores for Additional Emotional Expressions

Because additional emotional expressions were obtained

from the study subjects, these expressions were analyzed to

determine the consistency of the above findings. To conduct

the analysis, adjusted asymmetry scores were calculated for

the two expressions utilizing the procedure previously

discussed. The two expressions, sadness and fear, are both