
Differences in Psychophysiological Reactivity to Static and Dynamic Displays of Facial Emotion

PAGE 1

DIFFERENCES IN PSYCHOPHYSIOLOGIC REACTIVITY TO STATIC AND DYNAMIC DISPLAYS OF FACIAL EMOTION

By

UTAKA S. SPRINGER

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2005


Copyright 2005 by Utaka S. Springer

ACKNOWLEDGMENTS

This research was supported by R01 MH62539. I am grateful to Dawn Bowers for her patience, availability, and expertise in advising this project. I would like to thank the members of the Cognitive Neuroscience Laboratory for their support throughout this project. I would like to extend special thanks to Shauna Springer, Alexandra Rosas, John McGetrick, Paul Seignourel, Lisa McTeague, and Gregg Selke.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

1 INTRODUCTION
   Perceptual Differences for Static and Dynamic Expressions
      Cognitive Studies
      Neural Systems and the Perception of Movement versus Form
   Dimensional versus Categorical Models of Emotion
      Dimensional Models of Emotion
      Categorical Models of Emotion
   Emotional Responses to Viewing Facial Expressions

2 STATEMENT OF THE PROBLEM
   Specific Aim I
   Specific Aim II

3 METHODS
   Participants
   Materials
      Collection of Facial Stimuli: Video Recording
      Selection of Facial Stimuli
      Digital Formatting of Facial Stimuli
      Dynamic Stimuli
      Final Selection of Stimuli for Psychophysiology Experiment
   Design Overview and Procedures
   Psychophysiologic Measures
      Acoustic Startle Eyeblink Reflex (ASR)
      Skin Conductance Response (SCR)
   Data Reduction of Psychophysiology Measures
   Statistical Analysis

4 RESULTS
   Hypothesis 1: Differences in Reactivity to Dynamic vs. Static Faces
      Startle Eyeblink Response
      Skin Conductance Response (SCR)
      Self-Reported Arousal
   Hypothesis 2: Emotion Modulation of Startle by Expression Categories
   Other Patterns of Emotional Modulation by Viewing Mode
      Skin Conductance Response
      Self-Reported Arousal
      Self-Reported Valence

5 DISCUSSION
   Interpretation and Relationship to Other Findings
   Methodological Issues Regarding Facial Expressions
   Other Considerations of the Present Findings
   Limitations of the Current Study
   Directions for Future Research

APPENDIX
A STATIC STIMULUS SET
B DYNAMIC STIMULUS SET

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Demographic characteristics of experimental participants
3-2 Mean (SD) recognition rates, valence, and arousal of static and dynamic face stimuli
4-1 Mean (SD) dependent variable scores by Viewing Mode
4-2 Mean (SD) dependent variable scores by Viewing Mode and Expression Category

LIST OF FIGURES

1-1 Neuroanatomic circuitry of the startle reflex
3-1 Temporal representation of dynamic and static stimuli
4-1 Startle eyeblink T-scores by expression category
4-2 Self-reported arousal by expression category
4-3 Self-reported valence by expression category

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

DIFFERENCES IN PSYCHOPHYSIOLOGIC REACTIVITY TO STATIC AND DYNAMIC DISPLAYS OF FACIAL EMOTION

By

Utaka S. Springer

May 2005

Chair: Dawn Bowers
Major Department: Clinical and Health Psychology

Rationale. Recent studies suggest that many neurologic and psychiatric disorders are associated with impairments in accurately interpreting facial expressions. These studies have typically used photographic stimuli, yet cognitive and neurobiological research suggests that the perception of moving (dynamic) expressions is different from the perception of static expressions. Moreover, in day-to-day interactions, humans generally view faces while they move. This study had two aims: (1) to elucidate differences in physiological reactivity [i.e., startle eyeblink reflex and the skin conductance response (SCR)] while viewing static versus dynamic facial expressions, and (2) to examine patterns of reactivity across specific facial expressions. It was hypothesized that viewing dynamic faces would be associated with greater physiological reactivity and that expressions of anger would be associated with potentiated startle eyeblink responses relative to other facial expressions.

Methods. Forty young adults viewed two slideshows consisting entirely of static or dynamic facial expressions. Expressions represented the emotions of anger, fear, happiness, and neutrality. Psychophysiological measures included the startle eyeblink reflex and SCR. Self-reported valence and arousal were also recorded for each stimulus.

Results. Data were analyzed using repeated measures analyses of variance. Participants exhibited larger startle eyeblink responses while viewing dynamic versus static facial expressions. Differences in SCR approached significance (p = .059), such that dynamic faces tended to induce greater responses than static ones. Self-reported arousal did not differ significantly between the two viewing conditions. Additionally, the startle reflex was significantly greater for angry expressions, and comparably smaller for fearful, neutral, and happy expressions, across both modes of presentation. Self-reported differences in reactivity between types of facial expressions are discussed in the context of the psychophysiology results.

Conclusions. The current study found evidence of greater psychophysiological reactivity in young adults while they viewed dynamic compared to static facial expressions. Additionally, expressions of anger induced higher startle responses relative to other expressions, including fear. It was concluded that angry expressions, representing personally directed threat, induce a greater motivational propensity to withdraw or escape. These findings highlight an important distinction between initial stimulus processing (i.e., of expressions of fear or anger) and motivated behavior.

CHAPTER 1
INTRODUCTION

The ability to successfully interpret facial expressions is a fundamental aspect of normal life. An immense number of configurations across the landscape of the human face are made possible by 44 pairs of muscles anchored upon the curving surfaces of the skull. A broad smile, a wrinkled nose, widened eyes, a wink: all convey emotional content important for social interactions. Darwin (1872) suggested that successful communication through nonverbal means such as facial expressions has promoted survival of the human species. Indeed, experimental research has demonstrated that infants develop an understanding of their mothers' facial expressions rapidly and automatically, and that they use these signals to guide their safe behavior (Field, Woodson, Greenberg, & Cohen, 1982; Johnson, Dziurawiec, Ellis, & Morton, 1991; Nelson & Dolgin, 1985; Sorce, Emde, Campos, & Klinnert, 1985). The accurate decoding of facial signals, then, can play a protective role as well as a communicative one.

A growing body of empirical research suggests that many conditions are associated with impaired recognition of facial expressions. Neurologic and psychiatric conditions in which studies have found impaired interpretation of facial expressions include autism, Parkinson's disease, Huntington's disease, Alzheimer's disease, schizophrenia, body dysmorphic disorder, attention-deficit/hyperactivity disorder, and social phobia (Buhlmann, McNally, Etcoff, Tuschen-Caffier, & Wilhelm, 2004; Edwards, Jackson, & Pattison, 2002; Gilboa-Schechtman, Foa, & Amir, 1999; Kan, Kawamura, Hasegawa, Mochizuki, & Nakamura, 2002; Singh et al., 1998; Sprengelmeyer et al., 1996; Sprengelmeyer et al., 2003; Teunisse & de Gelder, 2001).

These deficits in processing facial expressions appear to exist above and beyond disturbances in basic visual or facial identity processing and may reflect disruption of cortical and subcortical networks for processing nonverbal affect (Bowers, Bauer, & Heilman, 1993). In many cases, impairments in the recognition of specific facial expressions have been discovered. For example, bilateral damage to the amygdala has been associated with the inability to recognize fearful faces (Adolphs, Tranel, Damasio, & Damasio, 1994).

One potential problem with these clinical studies is that they most often use static, typically photographic, faces as stimuli. This may be problematic for two reasons. First, human facial expressions usually consist of complex patterns of movement. They can flicker across the face in a fleeting and subtle manner, develop slowly, or arise with sudden intensity. The use of static stimuli in research and clinical evaluation, then, has poor ecological validity. Second, mounting evidence suggests that there are fundamental cognitive and neural differences between the perception of static and dynamic facial expressions. These differences, which can be subdivided into evidence from cognitive and more biologically based studies, are described in more detail in the following sections.

The preceding highlights the need to incorporate dynamic facial expression stimuli in the re-evaluation of conditions currently associated with facial expression processing deficits, as argued by Kilts and colleagues (2003). This line of research would greatly benefit from the creation of a standardized battery of dynamic expression stimuli.

Before a more ecologically valid dynamic battery can be developed, it is necessary to more precisely characterize how normal individuals respond to different types of facial expression stimuli. Although cognitive, behavioral, and neural systems have been examined in comparing responses associated with static and dynamic face perception, no studies to date have compared differences in emotional reactivity using psychophysiologic indices of arousal and valence (i.e., startle reflex, skin conductance response). The two major goals of the present study, then, are as follows: first, to empirically characterize psychophysiologic differences in how people respond to dynamic versus static emotional faces, and second, to determine whether psychophysiologic response patterns differ when individuals view different categories of static and dynamic facial expressions (e.g., anger, fear, or happiness).

The following sections provide the background for the current study in three parts: (1) evidence that suggests cognitive and neurobiological differences in the perception of static versus dynamic expressions, (2) dimensional and categorical approaches to studying emotion, and (3) emotional responses to viewing facial expressions. Specific hypotheses and predictions are presented in the next chapter.

Perceptual Differences for Static and Dynamic Expressions

Evidence that individuals respond differently to static and dynamic displays of emotion comes from two major domains of research. The first major domain is cognitive research. With regard to the present study, this refers to the study of the various internal mental processes involved in the perception of emotions in others (i.e., recognition and discrimination), as inferred by overt responses. The second major domain is neurobiological research. Again, specific to the present study, this refers to the physiological and neurological substrates involved during or after emotion perception.

The following sections review the literature from these two domains with regard to differences in perception of static and dynamic expressions.

Cognitive Studies

Recent research suggests that facial motion influences several cognitive aspects of face perception. First, facial motion improves recognition of familiar faces, especially in less-than-optimal visual conditions (Burton, Wilson, Cowan, & Bruce, 1999; Lander, Christie, & Bruce, 1999). For example, in conditions such as low lighting or blurriness, the identity of a friend or a famous actor is more easily discerned through face perception if the face is moving. It is less clear whether this advantage of movement is also conferred to the recognition of unfamiliar faces (Christie & Bruce, 1998; Pike, Kemp, Towell, & Phillips, 1997). As reviewed by O'Toole et al. (2002), there are two prevailing hypotheses on how facial motion enhances face recognition. According to the first, facial movement provides additional visual information that helps the viewer assemble a three-dimensional mental construct of the face (e.g., Pike et al., 1997). A second view is that certain movement patterns may be unique and characteristic of a particular individual (i.e., movement signatures). These unique movement signatures, such as Elvis Presley's lip curl, are thought to supplement the available structural information of the face (e.g., Lander & Bruce, 2004). Either or both hypotheses can account for observations that familiar individuals are more readily recognized from dynamic than static pictures.

One question that naturally arises is whether facial motion also increases recognition and discrimination of discrete types of emotional expressions. Like familiar faces, emotional expressions on the face have been shown to be similar across individuals and even across cultures (Ekman, 1973; Ekman & Friesen, 1976).

Leonard and colleagues (1991) found that categorical judgments of happiness during the course of a smile occurred at the point of most rapid movement change in the actor's facial configuration. Wehrle and colleagues (2000) reported that recognition of discrete emotions was enhanced through the use of dynamic versus static synthetic facial stimuli. Other research extended the findings of Wehrle et al. by finding that certain speeds of facial expressions are optimal for recognition, depending on the specific expression type (Kamachi et al., 2001). Altogether, these studies suggest that motion does facilitate the recognition of facial expressions.

Some research suggests that the subjectively rated intensity of emotional displays might also be influenced by a motion component. For example, a study by Atkinson and colleagues (2004) suggested that the perceived intensity of emotional displays is dependent on motion rather than on form. Participants in this study judged actors posing full-body expressions of anger, disgust, fear, happiness, and sadness, both statically and dynamically. Dynamic displays of emotion were judged as more intense than static ones, both in normal lighting and in degraded lighting (i.e., in darkness with points of light attached to the actors' joints and faces). Although this evidence suggests that dynamic expressions of emotion are indeed perceived as more intense than static ones, research on this topic has been sparse.

Neural Systems and the Perception of Movement versus Form

Previous research also suggests that distinct neural systems are involved in the perception of static and dynamic faces. A large body of evidence convincingly supports the existence of two anatomically distinct visual pathways in the cerebral cortex (Ungerleider & Mishkin, 1982). One visual pathway is involved in motion detection (V5), while the other is involved in processing form or shape information (V3, V4, inferotemporal cortex) [for review, see Zeki (1992)].

As one example of evidence that visual form is processed relatively independently, microelectrode recordings in the inferotemporal cortex of monkeys have shown that individual neurons respond preferentially to simple, statically presented shapes (Tanaka, 1992). Preferential single-cell responses to more complex types of statically presented stimuli, such as faces, have also been shown (Desimone, 1991). An example of evidence for the existence of a specialized motion pathway is provided by a fascinating case study describing a patient with a brain lesion later found to be restricted to area V5 [Zihl et al., 1983; as discussed in Eysenck (2000)]. This woman was adequate at locating stationary objects by sight, she had good color discrimination, and her stereoscopic depth perception was normal; however, her perception of motion was severely impaired. The patient perceived visual events as if they were still photographs. People would suddenly appear here or there, and when she poured her tea, the fluid appeared to be frozen, like a glacier.

Humphreys and colleagues (1993) described findings from two brain-impaired patients who displayed different patterns of performance during the perception of static and dynamic facial expressions. One patient was impaired at discriminating facial expressions from still photographs of faces, but performed normally when asked to make judgments of facial expressions depicted by moving dots of light. This patient had suffered a stroke that involved the bilateral occipital lobes and extended anteriorly towards the temporal lobes (i.e., the form visual pathway). The second patient was poor at judging emotional expressions from both the static and dynamic displays despite being relatively intact in other visual-perceptual tasks of comparable complexity. This patient had two parietal lobe lesions, one in each cerebral hemisphere.

Taken together, the different patterns of performance from these two patients suggest dissociable neural pathways for the recognition of static and dynamic facial expressions. Additional work with microelectrode recordings in non-human primates suggests that static and dynamic facial stimuli are processed by visual form and visual motion pathways, respectively, and converge at the area of the superior temporal sulcus (STS) (Puce & Perrett, 2003). A functional imaging study indicates that the STS region serves the same purpose in humans (Puce et al., 2003). In monkeys, specific responses in individual neurons of the STS region have shown sensitivity to static facial details such as eye gaze and the shape of the mouth, as well as movement-based facial details, such as different types of facial motion (Puce & Perrett, 2003). The amalgamation of data from biological studies indicates that static and dynamic components of facial expressions appear to be processed by separable visual streams that eventually converge within the region of the STS.

The next section provides a background for two major conceptual models of emotion. This information is then used as a backdrop for the current study.

Dimensional versus Categorical Models of Emotion

Dimensional Models of Emotion

Historically, there have been two major approaches in the study of emotion. In what is often described as a dimensional model, emotions are characterized using chiefly two independent, bipolar dimensions (e.g., Schlosberg, 1952; Wundt, 1897). The first dimension, valence, has been described in different ways (i.e., pleasant to unpleasant, positive to negative, appetitive to aversive); however, it generally refers to a range of positive to negative feeling.

The second dimension, arousal, represents a continuum ranging from very low (e.g., calm, disinterest, or a lack of enthusiasm) to very high (e.g., extreme alertness, nervousness, or excitement). These two orthogonal scales create a two-dimensional affective space, across which emotions and emotional responses might be characterized.

Other dimensional approaches have included an additional scale in order to more fully define the range of emotional judgments. This third scale has been variously identified as preparation for action, aggression, attention-rejection, dominance, and potency, and has been helpful for differentiating emotional concepts (Averill, 1975; Bush, 1973; Heilman, 1987, February; Russell & Mehrabian, 1977; Schlosberg, 1952). For instance, fear and anger might be indistinguishable within a two-dimensional affective space: both may be considered negative/unpleasant emotions high in arousal. A third dimension such as dominance or action separates these two emotions in three-dimensional affective space. Briefly, dominance refers to the range from feeling dominant (i.e., having total power, control, and influence) to feeling submissive (i.e., feeling a lack of control or unable to influence a situation). This construct has been discovered statistically through factor analytic methods based on the work of Osgood, Suci, and Tannenbaum (1957). Action (preparation for action to non-preparation for action), on the other hand, was proposed by Heilman [1987; from Bowers et al. (1993)]. This construct was based on neuropsychological evidence and processing differences between the anterior portions of the right and left hemispheres (e.g., Morris, Bradley, Bowers, Lang, & Heilman, 1991). Thus, in the present example for differentiating fear and anger, anger is associated with feelings of dominance or preparation for action, whereas fear is associated with feelings of submission (lack of dominance) or a lack of action (i.e., the freezing response in rats with a sudden onset of fear).

In this way, then, a third dimension can sometimes help distinguish between emotional judgments that appear similar in two-dimensional affective space. Generally, however, the third dimension has not been a replicable factor across studies or cultures (Russell, 1978; Russell & Ridgeway, 1983). The present study incorporates only the dimensions of valence and arousal.

Emotion researchers have measured emotional valence and arousal in several ways, including: (1) overt behaviors (e.g., EMG activity of facial expression muscles such as the corrugator or zygomatic muscles), (2) conscious thoughts or self-reports about one's emotional experience, usually measured by ordinal scales, and (3) central and physiologic arousal and activation, such as electrodermal activity, heart rate, and the magnitude of the startle reflex (Bradley & Lang, 2000). All three components of emotion have been measured reliably in laboratory settings. Among the physiological markers of emotion, the startle eyeblink typically is used as an indicator of the valence of an emotional response (Lang, Bradley, & Cuthbert, 1990). The startle reflex is an automatic withdrawal response to a sudden, intense stimulus, such as a flash of light or a loud burst of noise. More intense eyeblink responses, measured from electrodes over the orbicularis oculi muscles, have been found in association with negative/aversive emotional material relative to neutral material. Less intense responses have been found for positive/appetitive material, relative to neutral material. Palm sweat, or SCR, is another physiological marker of emotion and typically is used as an indicator of sympathetic arousal (Bradley & Lang, 2000). Higher SCR has been shown to be associated with higher self-reported emotional arousal, relatively independent of valence (e.g., Lang, Greenwald, Bradley, & Hamm, 1993).
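To make the two-dimensional scheme concrete, the short sketch below (Python) places a handful of SAM-style ratings in valence-arousal space. The numbers are invented for illustration only, not data from this study, but they show why fear and anger, both unpleasant and arousing, can be hard to separate without a third dimension.

```python
# Hypothetical SAM-style ratings on 1-9 scales, invented for
# illustration; not data from the present study.
RATINGS = {
    "happiness": (7.2, 6.0),   # (valence, arousal)
    "neutral":   (4.8, 3.5),
    "fear":      (3.9, 6.0),
    "anger":     (3.3, 5.3),
}

def quadrant(valence: float, arousal: float) -> str:
    """Locate a rating in two-dimensional affective space,
    splitting each bipolar scale at its midpoint (5)."""
    v = "pleasant" if valence > 5 else "unpleasant"
    a = "high arousal" if arousal > 5 else "low arousal"
    return f"{v} / {a}"

for emotion, (val, aro) in RATINGS.items():
    print(f"{emotion:>9}: {quadrant(val, aro)}")
# Fear and anger both land in "unpleasant / high arousal"; a third
# dimension (e.g., dominance or preparation for action) would be
# needed to separate them.
```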

Categorical Models of Emotion

A second major approach to the study of emotion posits that emotions are actually represented by basic, fundamental categories (e.g., Darwin, 1872; Izard, 1994). Support for the discrete emotions view comes from two major lines of evidence: cross-cultural studies and neurobiological findings [although cognitive studies have also been conducted, e.g., Young et al. (1997)]. With regard to the former line of evidence, Darwin (1872) argued that specific emotional states are evidenced by specific, categorical patterns of facial expressions. He suggested that these expressions contain universal configurations that are displayed by people throughout the world. Ekman and Friesen (1976) developed this idea further and created an atlas describing the precise muscular configurations associated with each of six basic emotional expressions (e.g., surprise, fear, disgust, anger, happiness, and sadness). In a cross-cultural study, Ekman (1972) found that members of a preliterate tribe in the highlands of New Guinea were able to recognize the meaning of these expressions with a high degree of accuracy. Further, photographs of tribal members who had been asked to pose various emotions were shown to college students in the United States. The college students were able to recognize the meanings of the New Guineans' emotions, also with a high degree of accuracy.

Additional evidence supporting the categories-of-emotion conceptualization is derived from the neurobiological literature. For instance, electrical stimulation of highly specific regions of the brain has been associated with distinct emotional states. Hess and Brügger [1943; from Oatley & Jenkins (1996)] discovered that angry behavior in cats, dubbed "sham rage" (Cannon, 1931), was elicited with direct stimulation of the hypothalamus. Fearful behavior and autonomic changes have been induced (both in rats and humans) with stimulation of the amygdala, an almond-shaped limbic structure within the anterior temporal lobe.

These changes include subjective feelings of fear and anxiety as well as freezing, increased heart rate, and increased levels of stress hormones [for review, see Davis & Whalen (2001)]. Positive feelings have also been elicited with direct stimulation of a specific neural area. Okun and colleagues (2004) described a patient exuding smiles and feelings of euphoria in association with deep brain stimulation of the nucleus accumbens region. These studies of electrical stimulation in highly focal areas of the brain appear to lend credence to the hypothesis that emotions can be categorized into discrete subtypes.

The case for categorical emotions has been further bolstered by evidence that different emotional states are associated with characteristic psychophysiologic responses. Several studies conducted by Ekman, Levenson, and Friesen (Ekman, Levenson, & Friesen, 1983; Levenson, Carstensen, Friesen, & Ekman, 1991; Levenson, Ekman, & Friesen, 1990) involved participants reliving emotional memories and/or receiving coaching to reconstruct their facial muscles to precisely match the configurations associated with Ekman's six major emotions (Ekman & Friesen, 1976). The results of these studies indicated that the response pattern from several indices of autonomic nervous system activity (specifically, heart rate, finger temperature, skin conductance, and somatic activity) could reliably distinguish between positive and negative emotions, and even among the negative emotions of disgust, fear, and anger (Ekman et al., 1983; Levenson et al., 1991; Levenson et al., 1990). Sadness was associated with a distinctive, but less reliable, pattern. Other researchers also have described characteristic psychophysiologic response patterns associated with discrete emotions (Roberts & Weerts, 1982; Schwartz, Weinberger, & Singer, 1981).

Emotional Responses to Viewing Facial Expressions

Emotion-specific psychophysiologic responses have been elicited in individuals viewing facial displays of different types of emotions. For instance, Balaban and colleagues (1995) presented photographic slides of angry, neutral, and happy facial expressions to 5-month-old infants. During the presentation of each slide, a brief acoustic noise burst was presented to elicit the eyeblink component of the startle reflex. Angry expressions were associated with significantly stronger startle responses than happy expressions, suggesting that, at least in babies, positive and negative facial expressions could emotionally modulate the startle reflex.

This phenomenon was explored in a recent study using an adult sample, but with the addition of fearful expressions as a category (Bowers et al., 2002). Thirty-six young adults viewed static images of faces displaying angry, fearful, happy, and neutral expressions. Acoustic startle probes elicited the eyeblink reflex during the presentation of each emotional face. Similar to Balaban's (1995) study, responses to angry faces were associated with significantly stronger startle reflexes than responses to other types of expressions. Startle eyeblinks during the presentation of neutral, happy, and fearful expressions did not significantly differ in this study.

The observation that fear expressions failed to prime or enhance startle reactivity seems counterintuitive for two reasons (Bowers et al., 2002). First, many studies have indicated that the amygdala appears to play a role in danger detection and processing fearful material. Stimulation of the amygdala induces auras of fear (Gloor, Olivier, Quesney, Andermann, & Horowitz, 1982), while bilateral removal or damage of the amygdala is characterized by behavioral placidity and blunted fear for threatening material (Adolphs et al., 1994; Klüver & Bucy, 1937).

A few studies have even suggested that the amygdala is particularly important for identification of fearful facial expressions (Adolphs et al., 1994; J. S. Morris et al., 1998). A second reason why the null effect of facial fear on startle seems counterintuitive is derived from the amygdala's role in the startle reflex. Davis and colleagues mapped the neural circuitry of the startle reflex using an animal model [see Figure 1-1; for a review, see Davis (1992)]. Their work has shown that, through direct neural projections, the amygdala serves to amplify the startle circuitry in the brainstem under conditions of fear and aversion. In light of this research, the finding that fearful faces exerted no significant modulation effects on the startle circuitry (Bowers et al., 2002) does appear counterintuitive, at least from an initial standpoint.

[Figure 1-1. Neuroanatomic circuitry of the startle reflex (adapted from Lang et al., 1997). The diagram traces stimulus input through the sensory thalamus and sensory cortex to the lateral and central nuclei of the amygdala, with projections to the nucleus reticularis pontis caudalis (potentiated startle), the ventral central gray (freezing), the dorsal central gray (fight/flight), and the lateral hypothalamus (autonomic nervous system: heart rate, blood pressure).]

The authors, however, provided a plausible explanation for this result (Bowers et al., 2002). They underscored the importance of the amygdala's role in priming the subcortical startle circuitry during threat-motivated behavior. Angry faces represent personally directed threat and, as demonstrated by the relatively robust startle response they found, induce a motivational propensity to withdraw or escape from that threat. Fearful faces, on the other hand, reflect potential threat to the actor, rather than to the perceiver.

It is perhaps unsurprising in this light that fearful faces exerted significantly less potentiation of the startle reflex. The preparation-for-action dimension (Heilman, 1987) might account for this difference between responses to fearful and angry faces: perhaps the perception of fear in another's face involves less propensity or motivation to act than personally directed threat. Regardless of the interpretation, these findings suggest that different types of emotional facial expressions are associated with different, unique patterns of reactivity as measured by the startle reflex (also referred to as emotional modulation of the startle reflex).

The question remains as to whether the pattern of startle reflex responses across facial expressions differs when viewing dynamic versus static emotional faces; this has previously been evaluated only for static facial expressions. It seems reasonable to hypothesize that the two patterns of modulation will be similar, as both dynamic and static visual information must travel from their separate pathways to converge on the area of the cortex that enables one to apply meaning (the STS region). Across emotions, the question also remains as to whether overall differences in physiologic reactivity exist. These questions are tested empirically in the present study.

CHAPTER 2
STATEMENT OF THE PROBLEM

Historically, the characterization of expression perception impairments in neurologic and psychiatric populations has been largely based on research using static face stimuli. The preceding literature suggests this may be problematic, as fundamental cognitive and neurobiological differences exist in the perception of static and dynamic displays of facial emotion. A long-term goal is to develop a battery of dynamic face stimuli that would enable investigators and clinicians to better evaluate facial expression interpretation in neurologic and psychiatric conditions. Before this battery can be developed, however, an initial step must be taken to characterize differences and similarities in the perception of static and dynamic expressions. To date, no study has used psychophysiological methods to investigate this question.

This study investigates the emotional responses that occur in individuals as a result of perceiving the emotions of others via facial expressions. The two major aims of the present study are to empirically determine in normal, healthy adults (1) whether dynamic versus static faces induce greater psychophysiologic reactivity and self-reported arousal and (2) whether reactions to specific types of facial expressions (e.g., anger, fear, happiness) resolve into distinct patterns of emotional modulation based on the mode of presentation (i.e., static, dynamic). To examine these aims, normal individuals were shown a series of static or dynamically presented facial expressions (fearful, angry, happy, neutral) while psychophysiologic measures (skin conductance, startle eyeblink) were simultaneously acquired.

Following presentation of each facial stimulus, subjective ratings of valence and arousal were obtained. Thus, the primary dependent variables included: (a) skin conductance as a measure of psychophysiologic arousal; (b) startle eyeblink as a measure of valence; and (c) subjective ratings of valence and arousal.

Specific Aim I

To test the hypothesis that dynamically presented emotional faces will induce greater psychophysiologic reactivity and self-reported arousal than statically presented faces. Based on the reviewed literature, it is hypothesized that the perception of dynamic facial expressions will be associated with greater overall physiological reactivity than will the perception of static facial expressions. This hypothesis is based on evidence suggesting that dynamic displays of emotion are judged as more intense, as well as the fact that the perception of motion in facial expressions appears to provide more visual information to the viewer, such as three-dimensional structure or movement signatures. The following specific predictions are made: (a) the skin conductance response will be significantly larger when subjects view dynamic than static faces; (b) overall startle magnitude will be greater when subjects view dynamic versus static faces; and (c) subjective ratings of arousal will be significantly greater for dynamic versus statically presented faces.

Specific Aim II

To test the hypothesis that the pattern of physiologic reactivity (i.e., emotional modulation) to discrete facial emotions (i.e., fear, anger, happiness, neutral) will be similar for both static and dynamically presented facial expressions. Based on preliminary findings from our laboratory, we expected that anger expressions would induce greater reactivity (as indexed by the startle eyeblink reflex) than fear, happiness, or neutral expressions.

We hypothesized that this pattern of emotion modulation will be similar for both static and dynamic expressions, since both modes of presentation presumably gain access to neural systems that underlie interpretation of emotional meaning. The following specific prediction is made: for both static and dynamic modes of presentation, the startle response (as indexed by T-scores) for anger expressions will be significantly larger than those for fear, happy, and neutral ones, while magnitudes for fear, happy, and neutral expressions will not be significantly different from each other.
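In analysis terms, this prediction amounts to a planned contrast of anger against the average of the other three categories. The sketch below (Python with NumPy/SciPy) illustrates that comparison on randomly generated stand-in T-scores; it is a simplified paired contrast, not the full repeated-measures ANOVA with Greenhouse-Geisser correction described in the Methods chapter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in data: per-subject mean startle T-scores, one column per
# expression category (anger, fear, happy, neutral). Real values
# would come from the data reduction described in Chapter 3.
t_scores = rng.normal(loc=50.0, scale=5.0, size=(40, 4))

# Weights for "anger vs. the mean of the other three categories,"
# the first step of a Helmert-style comparison over four levels.
weights = np.array([1.0, -1 / 3, -1 / 3, -1 / 3])

contrast = t_scores @ weights            # one contrast score per subject
t_stat, p_val = stats.ttest_1samp(contrast, popmean=0.0)
print(f"anger vs. others: t = {t_stat:.2f}, p = {p_val:.3f}")
```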

CHAPTER 3
METHODS

Participants

Participants consisted of 51 (27 females, 24 males) healthy, right-handed adults recruited from the University of Florida campus. Exclusion criteria included: (1) a history of significant neurologic trauma or disorder, (2) a history of any psychiatric or mood disorder, (3) a current prescription for mood- or anxiety-altering medication, (4) a history of learning disability, and (5) clinical elevations on the Beck Depression Inventory (BDI) (Beck, 1978) or the State-Trait Anxiety Inventory (STAI) (Spielberger, 1983). Participants gave written informed consent according to university and federal regulations. All participants who completed the research protocol received $25.

Eleven of the 51 subjects were excluded from the final data analyses. They included 8 subjects whose psychophysiology data were corrupted due to excessive artifact and/or absence of measurable blink responses. The data from 3 subjects were not analyzed due to clinical elevations on mood questionnaires [BDI (N=2; scores of 36 and 20); STAI (N=1; State score = 56, Trait score = 61)].

Demographic variables for the remaining 40 participants are given in Table 3-1. As shown, subjects ranged in age from 18 to 43 years (M=22.6, SD=4.3) and had 12 to 20 years of education (M=15.3, SD=1.7). BDI scores ranged from 0 to 9 (M=3.8, SD=2.9), STAI-State scores ranged from 20 to 46 (M=29.2, SD=6.9), and STAI-Trait scores ranged from 21 to 47 (M=31.0, SD=6.9).

The racial representation was 52.5% Caucasian, 17.5% African American, 12.5% Hispanic/Latino, 12.5% Asian, 2.5% Native American, and 2.5% Multiracial.

Table 3-1. Demographic characteristics of experimental participants

    Measure      Mean (SD)    Range
    Age          22.6 (4.3)   18-43
    Education    15.3 (1.7)   12-20
    GPA          3.48 (0.49)  2.70-3.96
    BDI          3.8 (2.9)    0-9
    STAI-State   29.2 (6.9)   20-46
    STAI-Trait   31.0 (6.9)   21-47

    Note. BDI = Beck Depression Inventory; GPA = Grade Point Average; STAI = State-Trait Anxiety Inventory.

Materials

Static and dynamic versions of angry, fearful, happy, and neutral facial expressions from 12 untrained actors (6 males, 6 females) were used as stimuli in this study. These emotions were chosen based on previous findings from our laboratory (Bowers et al., 2002). The following sections describe the procedure used for eliciting, recording, and digitally standardizing these stimuli.

Collection of Facial Stimuli: Video Recording

The stimulus set for the present study was originally drawn from 15 University of Florida graduate students (Clinical and Health Psychology) and undergraduates who were asked to pose various facial expressions. These untrained actors ranged in age from 19 to 32 years and represented Caucasian, African American, Hispanic, and Asian ethnicities. All provided informed consent to allow their faces to be used as stimuli in research studies.

The videorecording session took place in the Cognitive Neuroscience Laboratory, where the actor sat comfortably in a chair in front of a continuously recording black-and-white Pulnix videocamera. The camera was connected to a Sony videorecorder and located approximately 2 meters in front of the actor. The visual field of the videocamera was adjusted to include only the face of the actor. A Polaris light meter was used to uniformly balance the incident light upon the actor's left and right sides to within 1 lux of brightness. To minimize differences in head position and angle between captured facial expressions, the actor's head was held in one position by a rigid immobilization cushion (Med-Tec, Inc.) during the entirety of the recording session. Prior to the start of videorecording, the experimenter verified that the actor was comfortable and that the cushion did not obstruct the view of the actor's face.

A standardized format was followed for eliciting the facial expressions. The actor was asked to pose 6 emotional expressions (i.e., anger, disgust, fear, happiness, sadness, and neutral) and to make each expression intense enough so that others could easily decipher the intended emotion. For neutral, the actor was told to look into the camera lens with a relaxed expression and blink once. Before each expression type was recorded, visual examples from Ekman and Friesen's Pictures of Facial Affect (Ekman & Friesen, 1976) and Bowers and colleagues' Florida Affect Battery (Bowers, Blonder, & Heilman, 1992) were shown to the actor. At least three trials were recorded for each of the six expression types.

Selection of Facial Stimuli

Once all the face stimuli were recorded, three naïve raters from the Cognitive Neuroscience Laboratory reviewed all trials of each expression made by the 15 actors.

The purpose of this review was to select the most easily identifiable exemplar from each emotion category (anger, disgust, fear, happiness, sadness, neutral) that was free of artifact (blinking, head movement) and most closely matched the stimuli from the Ekman series (Ekman & Friesen, 1976) and the Florida Affect Battery (Bowers et al., 1992). Selection was based on consensus by the three raters. The expressions from 3 actors (2 female, 1 male) were discarded due to movement artifact, occurrence of eyeblinks, and lack of consensus regarding at least half of the intended expression types. This resulted in 72 selected expressions (6 expressions x 12 actors) stored in videotape format.

Digital Formatting of Facial Stimuli

Each of the videotaped facial expressions was digitally formatted and standardized. Dynamic versions were created first. Each previously selected expression (the best exemplar from each emotion category) was digitally captured onto a PC using a FlashBus MV Pro framegrabber (Integral Technologies) and VideoSavant 4.0 (IO Industries) software. The resulting digital movie clips (videosegments) consisted of a 5.0-second sequence of 150 digitized images or frames (30 frames per second). Each segment began with the actor's face in a neutral pose that then moved to peak expression. The temporal sequence of each stimulus was standardized such that the first visible movement of the face (the start of each expression) occurred at 1.5 seconds and that the peak intensity was visible and unchanging for at least 3.0 seconds at the end of the videosegment. To standardize the point of the observer's gaze at the onset of each stimulus, 30 frames (1 s) of white crosshairs over a black background were inserted before the first frame of the videosegment, such that the crosshairs marked the point of intersection over each actor's nose. In total, each final, processed videosegment consisted of 180 frames (6.0 seconds).

All videosegments were stored in 8-bit greyscale (256 levels) with a resolution of 640 x 480 pixels and exported to a digital MPEG movie file (Moving Picture Experts Group) to comprise the dynamic set of face stimuli.

Unmoving, or static, correlates of these stimuli were then created by using the frame representing the peak intensity of each facial expression. Peak intensity was defined as the last visible frame in the dynamic expression sequence of frames. This frame was multiplied to create a sequence of 150 identical frames (5.0 seconds). As with the dynamic stimuli, 1.0 second of crosshairs was inserted into the sequence prior to the first frame. The digital specifications of this stimulus set were identical to those of the dynamic stimulus set. Figure 3-1 graphically compares the content and timing of both versions of these stimuli.

[Figure 3-1. Temporal representation of dynamic and static stimuli by time (s) and frame number. Each stimulus frame rate is 30 frames/s. Dynamic stimuli: crosshairs (0-1.0 s; frames 0-30), neutral expression (1.0-2.5 s; frames 30-75), moving expression (2.5-~3.0 s; frames 75-90), peak expression (~3.0-6.0 s; frames 90-180). Static stimuli: crosshairs (0-1.0 s; frames 0-30), peak expression (1.0-6.0 s; frames 30-180).]
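The frame arithmetic just described is simple enough to express directly. The sketch below (Python with NumPy) assembles both stimulus types from a captured 150-frame clip. It is a minimal illustration under stated assumptions: greyscale frames are assumed to be available as arrays, and the crosshair is drawn at the frame center rather than registered to each actor's nose as in the actual stimuli.

```python
import numpy as np

FPS = 30          # frames per second
H, W = 480, 640   # stimulus resolution

def crosshair_frames(n: int = FPS) -> np.ndarray:
    """1 s of white crosshairs on black. Simplified: centered,
    not aligned to the actor's nose as in the actual stimuli."""
    frame = np.zeros((H, W), dtype=np.uint8)
    frame[H // 2, :] = 255
    frame[:, W // 2] = 255
    return np.repeat(frame[None, :, :], n, axis=0)

def build_stimuli(clip: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """clip: (150, H, W) greyscale array running from neutral pose
    to peak expression (5.0 s at 30 frames/s).

    Returns (dynamic, static) stimuli, each 180 frames (6.0 s)."""
    assert clip.shape == (150, H, W)
    lead_in = crosshair_frames()                 # frames 0-29 (1.0 s)
    dynamic = np.concatenate([lead_in, clip])    # 180 frames total
    peak = clip[-1]                              # last frame = peak intensity
    static = np.concatenate(
        [lead_in, np.repeat(peak[None, :, :], 150, axis=0)]
    )
    return dynamic, static
```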

After dynamic and static digital versions of the facial stimuli were created, an independent group of 21 naïve individuals rated each face according to emotion category, valence, and arousal. Table 3-2 provides the overall mean ratings for each emotion category by viewing mode (static or dynamic). Ratings by individual actor are given in Appendixes A (static) and B (dynamic).

Table 3-2. Mean (SD) recognition rates, valence, and arousal of static and dynamic face stimuli

    Dynamic Faces (n = 12)
    Measure     Anger        Disgust      Fear         Happiness    Neutral      Sadness
    % Correct   78.2 (16.7)  79.0 (17.5)  94.4 (6.5)   99.6 (1.4)   92.0 (4.2)   93.5 (10.0)
    Valence     3.34 (.40)   3.58 (.43)   4.12 (.29)   7.23 (.39)   4.68 (.65)   3.51 (.52)
    Arousal     5.28 (.38)   5.19 (.56)   6.00 (.47)   6.00 (.51)   3.63 (.50)   4.55 (.64)

    Static Faces (n = 12)
    Measure     Anger        Disgust      Fear         Happiness    Neutral      Sadness
    % Correct   68.2 (21.3)  77.4 (16.6)  95.2 (5.0)   99.2 (1.9)   89.3 (8.1)   91.3 (11.0)
    Valence     3.04 (.39)   3.39 (.55)   3.60 (.41)   7.18 (.52)   4.95 (.41)   3.45 (.40)
    Arousal     5.13 (.61)   5.31 (.64)   5.96 (.53)   5.84 (.56)   3.26 (.39)   4.48 (.56)

Final Selection of Stimuli for Psychophysiology Experiment

The emotional categories of anger, fear, happiness, and neutral were selected for the present study based on previous results from our laboratory (Bowers et al., 2002). Thus, the final set of stimuli used in the present study consisted of static and dynamic versions of 12 actors' (6 female, 6 male) facial expressions representing these four emotion categories. The total number of facial stimuli was 96 (i.e., 48 dynamic, 48 static).

Design Overview and Procedures

Each subject participated in two experimental conditions, one involving dynamic face stimuli and the other involving static face stimuli. During both conditions, psychophysiologic data (i.e., skin conductance, startle eyeblink responses) were collected along with the participant's ratings of each face stimulus according to valence (unpleasantness to pleasantness) and arousal. There was a 5-minute rest interval between the two conditions.

Half the participants viewed the dynamic faces first, whereas the remaining half viewed the static faces first. The order of these conditions was randomized but counterbalanced across subjects. Testing took place within the Cognitive Neuroscience Lab of the McKnight Brain Institute at the University of Florida. Informed consent was obtained according to university and federal regulations. Prior to beginning the experiment, the participant completed several questionnaires including a demographic form, the BDI, the STAI, and a payment form.

The skin from both hands and the areas under each eye were cleaned and dried thoroughly. A pair of 3 mm Ag/AgCl sensory electrodes was filled with a conducting gel (Medical Associates, Inc., Stock # TD-40) and attached adjacently over the bottom arc of each orbicularis oculi muscle via lightly adhesive electrode collars. Two 12 mm Ag/AgCl sensory electrodes were filled with conducting gel (K-Y Brand Jelly, McNeil-PPC, Inc.) and were attached adjacently via electrode collars on the thenar and hypothenar surfaces of each palm. Throughout testing, the participant sat in a reclining chair in a dimly lit, sound-attenuated 12 x 12 ft room with copper-mediated electric shielding.

An initial period was used to calibrate the palmar electrodes and to familiarize the participant with the startle probes. The lights were dimmed, and twelve 95-dB white noise bursts were presented to the subject via stereo Telephonics (TD-591c) headphones. The noise bursts were presented at a rate of about once per 30 seconds. After the initial calibration period, the participant was given instructions about the experimental protocol. They were told they would see different emotional faces, one face per trial, and were asked to carefully watch each face and ignore the brief noises that would be heard over the headphones.

During each trial, the dynamic or static face stimuli were presented on a 21-inch PC monitor positioned 1 meter directly in front of the participant. Each face stimulus was shown for six seconds on the monitor. While viewing the face stimulus, the participant heard a white noise burst (95 dB, 50 ms) that was delivered via headphones. The white noise startle probes were randomly presented at 4200 ms, 5000 ms, or 5800 ms after the onset of the face stimulus. At the end of each trial, the participant was asked to rate each face stimulus along the dimensions of valence and arousal. The ratings took place approximately six seconds following the offset of the face stimulus, when a Self-Assessment Manikin (SAM; Bradley & Lang, 1994) was shown on the computer monitor. Valence ratings ranged from extremely positive, pleasant, or good (9) to extremely negative, unpleasant, or bad (1). Arousal ratings ranged from extremely excited, nervous, or active (9) to extremely calm, disinterested, or unenthusiastic (1). The participant reported their valence and arousal ratings out loud, and their responses were recorded by an experimenter in the next room, listening via a baby monitor. A new trial began 6 to 8 seconds after the ratings were made.

Each experimental condition (i.e., dynamic, static) consisted of 48 trials that were divided into 6 blocks of 8 trials each. A different actor represented each trial within a given block; half were male, and half female. One male actor and one female actor represented each of four emotions (neutral, happiness, anger, fear) to total the 8 trials per block. To reduce habituation of the startle reflex over the course of the experiment, 8 trials representing male and female versions of each expression category did not contain a startle probe. These trials were spread evenly throughout each slideshow.
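The block structure above can be summarized as a small scheduling sketch (Python). Actor identifiers here are hypothetical, and the study's exact randomization and no-probe assignment are not specified beyond what the text states, so this illustrates the stated constraints rather than reproducing the lab's actual script.

```python
import random

EMOTIONS = ("neutral", "happiness", "anger", "fear")
PROBE_ONSETS_MS = (4200, 5000, 5800)   # startle probe SOAs after face onset

def build_block(male_actors, female_actors):
    """One 8-trial block: each emotion posed by one male and one
    female actor, a different actor on every trial."""
    assert len(male_actors) == len(female_actors) == 4
    trials = []
    for emotion, male, female in zip(EMOTIONS, male_actors, female_actors):
        for actor in (male, female):
            trials.append({
                "actor": actor,
                "emotion": emotion,
                # In the study, 8 of the 48 trials per condition carried
                # no probe (to limit habituation); here every trial gets one.
                "probe_ms": random.choice(PROBE_ONSETS_MS),
            })
    random.shuffle(trials)
    return trials

# Hypothetical IDs standing in for the 12 actors (6 male, 6 female).
example_block = build_block(["M1", "M2", "M3", "M4"],
                            ["F1", "F2", "F3", "F4"])
```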

Following administration of both slideshows, the experimenter removed all electrodes from the participant, who was then debriefed on the purpose of the experiment, thanked, and released.

Psychophysiologic Measures

Acoustic Startle Eyeblink Reflex (ASR)

Startle eyeblinks were measured via EMG activity from the orbicularis oculi muscle beneath each eye. This measure was used as a dependent measure because of its sensitivity to valence, with larger startle eyeblinks associated with negative/aversive emotional states and smaller eyeblinks associated with positive emotional states (Lang, Bradley, & Cuthbert, 1990). The raw EMG signal was amplified, and frequencies below 90 Hz and above 1000 Hz were filtered out using a Coulbourn bioamplifier. Amplification of acoustic startle was set at 30,000 with post-experimental multiplication to equate gain factors (Bradley et al., 1990). The raw signal was then rectified and integrated using a Coulbourn Contour Following Integrator with a time constant of 10 ms. Digital sampling began at 20 Hz 3 s prior to stimulus onset. The sampling rate increased to 1000 Hz 50 ms prior to the onset of the startle probe and continued at this rate for 250 ms after probe onset. Sampling then resumed at 20 Hz until 2 s after stimulus offset. The startle data were reduced off-line using custom software, which evaluates trials for unstable baseline and scores each trial for amplitude in arbitrary A-D units and onset latency in milliseconds. The program yields measures of startle response magnitude in arbitrary A-D units that express responses during positive, neutral, and negative materials on the same scale.
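A digital analogue of this rectify-and-integrate stage is sketched below (Python with NumPy/SciPy). The filter design is an assumption: the original processing used Coulbourn analog hardware, and the 1000 Hz low-pass corner lies above the 500 Hz Nyquist limit of the 1000 Hz sampling window, so it is treated as applied in analog before digitization and only the 90 Hz high-pass appears here. A first-order exponential smoother stands in for the contour-following integrator's 10 ms time constant, and peak latency is scored instead of the onset latency used by the study's custom software.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000.0   # Hz; the high-rate window around the startle probe

def integrate_emg(raw: np.ndarray, tau: float = 0.010) -> np.ndarray:
    """High-pass (90 Hz), full-wave rectify, then smooth with a
    first-order filter whose time constant mimics the analog
    contour-following integrator."""
    sos = butter(4, 90.0, btype="highpass", fs=FS, output="sos")
    rectified = np.abs(sosfiltfilt(sos, raw))
    alpha = 1.0 - np.exp(-1.0 / (FS * tau))   # per-sample smoothing factor
    smoothed = np.empty_like(rectified)
    level = rectified[0]
    for i, sample in enumerate(rectified):
        level += alpha * (sample - level)
        smoothed[i] = level
    return smoothed

def score_blink(smoothed: np.ndarray, probe_idx: int):
    """Blink magnitude (peak minus the 50 ms pre-probe baseline) and
    peak latency in ms within the 250 ms post-probe window; at
    1000 Hz, one sample corresponds to 1 ms."""
    baseline = smoothed[probe_idx - 50:probe_idx].mean()
    window = smoothed[probe_idx:probe_idx + 250]
    return window.max() - baseline, float(np.argmax(window))
```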

PAGE 36
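The scoring logic can be illustrated with a short sketch. This is not the authors' custom software; the baseline-stability criterion and the use of peak latency are assumptions made for illustration only.

    import numpy as np

    FS = 1000  # sampling rate around the probe (Hz), per the Methods

    def score_startle_trial(emg, probe_onset):
        """Score one trial of rectified-and-integrated EMG sampled at 1000 Hz.
        Returns (magnitude, peak_latency_ms), or None when the 50-ms pre-probe
        baseline looks unstable (the rejection criterion here is assumed)."""
        baseline = emg[probe_onset - 50:probe_onset]      # 50 ms pre-probe window
        if baseline.std() > 0.5 * baseline.mean():        # crude stability check
            return None                                   # reject trial
        window = emg[probe_onset:probe_onset + 250]       # 250 ms scoring window
        peak = int(np.argmax(window))
        magnitude = float(window[peak]) - float(baseline.mean())  # peak minus baseline
        return magnitude, peak * 1000.0 / FS              # latency of the peak, in ms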

Skin Conductance Response (SCR)

The SCR was measured from electrodes attached to the palms with adhesive collars. This measure was used because it is an index of sympathetic arousal, correlates with self-reports of emotional arousal, and is relatively independent of valence (Bradley & Lang, 2000). Skin conductance data were sampled at 20 Hz using two Coulbourn Isolated Skin Conductance couplers in DC mode (a constant-voltage system in which 0.5 V is passed across the palm during recording). The SC couplers output to a Scientific Solutions A/D board integrated within a custom PC. The skin conductance response (SCR) was defined as the difference between the peak conductance during the 6-second viewing period and the mean conductance during the last pre-stimulus second, derived independently for each hand. SCR was expressed in microsiemens (µS).

Data Reduction of Psychophysiology Measures

After the collection of the psychophysiologic data, the eyeblink and skin conductance data were reduced using custom condensing software (an illustrative sketch of this reduction appears at the end of this chapter). For startle eyeblink, data from trials without startle probes and from the initial two practice trials were excluded from the statistical analyses. Trials containing obvious physiological artifacts were also removed. For the remaining data, the peak magnitude of the EMG activity elicited by each startle probe within the recorded time window was measured (peak minus baseline, in microvolts). Peak startle magnitudes from both eyes were averaged into a composite score when data from both eyes were available; if data from only one eye were available, those data were used in place of the composite score. Peak startle magnitudes were additionally translated into T-scores, which were then averaged for each expression type (i.e., happy, neutral, fear, and anger) and mode of presentation (i.e., static and dynamic stimuli). For both startle magnitudes and T-scores, the four expression categories were represented by no fewer than four trials each.

For the skin conductance response, condensing consisted of measuring the peak magnitude of change relative to baseline activity at the start of each trial. Again, trials containing obvious physiological artifacts were removed. The magnitude of change for each trial was measured and averaged across both hands, unless the data from one of the palms contained excessive artifact; in those cases, the data from the other hand were used in place of the composite data.

Statistical Analysis

Separate analyses were conducted for startle eyeblink, skin conductance, SAM valence ratings, and SAM arousal ratings. Repeated-measures ANOVAs with adjusted degrees of freedom (Greenhouse-Geisser correction) were used, with a between-subjects factor of Order of Slideshows (dynamic, then static; static, then dynamic) and within-subjects factors of Expression Category (anger, fear, neutral, happiness) and Viewing Mode (dynamic, static). Analyses corresponding to a priori predictions were conducted using planned contrasts (Helmert) between the four expression categories. A significance level of alpha = 0.05 was used for all analyses.

We predicted three changes corresponding to indices of greater psychophysiologic reactivity to dynamic versus static expressions: (1) greater magnitude of the startle reflex, (2) greater percent change in skin conductance, and (3) higher self-reported SAM arousal ratings during perception of dynamic facial expressions. Additionally, we predicted that the pattern of T-scores for both dynamic and static facial expressions would show emotional modulation across the four categories of facial expressions used in the experiment. That is, startle reflexes measured during the perception of anger would be larger than those measured during the perception of fear, neutral, and happy expressions; startle responses to the latter three categories would not be appreciably different from one another; and, finally, this pattern of modulation would not be significantly different between static and dynamic viewing modes.
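As a concrete illustration of the reduction described above, the sketch below computes a single-hand SCR as the peak conductance during viewing minus the mean of the last pre-stimulus second, and converts one participant's raw startle magnitudes to T-scores (mean 50, SD 10). It is an assumed reconstruction for illustration, not the condensing software itself.

    import numpy as np

    FS = 20  # skin conductance sampling rate (Hz)

    def scr_for_trial(sc, view_s=6):
        """sc: conductance in µS for one hand, beginning with the last
        pre-stimulus second (FS samples) followed by the viewing period."""
        baseline = sc[:FS].mean()              # mean of last pre-stimulus second
        peak = sc[FS:FS + view_s * FS].max()   # peak during the 6-s viewing period
        return peak - baseline                 # SCR in µS

    def to_t_scores(magnitudes):
        """Convert one participant's trial-by-trial startle magnitudes to T-scores."""
        m = np.asarray(magnitudes, dtype=float)
        z = (m - m.mean()) / m.std(ddof=1)     # standardize within participant
        return 50.0 + 10.0 * z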

CHAPTER 4
RESULTS

The primary dependent measures were the acoustic startle eyeblink response (ASR), the skin conductance response (SCR), and self-reported arousal from the Self-Assessment Manikin (arousal). As previously described, the ASR was quantified by measuring the change in EMG activity (mV) following the onset of the startle probes (i.e., peak minus baseline EMG). The SCR was calculated as the difference between the peak conductance in microsiemens (µS) during the 6-second period of stimulus presentation and the mean level of conductance during the 1-s period immediately prior to the onset of the stimulus. Finally, self-reported arousal encompassed a range of 1 to 9, with higher numbers representing greater arousal levels. Table 4-1 gives the means and standard deviations of each of these dependent variables by viewing mode.

Table 4-1. Mean (SD) dependent variable scores by Viewing Mode

Measure    Dynamic          Static
ASR-M      .0062 (.0054)    .0048 (.0043)
SCR        .314 (.514)      .172 (.275)
Arousal    5.27 (.535)      5.30 (.628)

Note. ASR-M = Acoustic Startle Eyeblink Response, Magnitude (mV); SCR = Skin Conductance Response (µS); Arousal = Self-Assessment Manikin, Arousal Scale (1-9).

Hypothesis 1: Differences in Reactivity to Dynamic vs. Static Faces

An initial set of analyses addressed the first hypothesis and investigated whether psychophysiologic reactivity (startle eyeblink, SCR) and/or self-reported arousal differed during the perception of dynamic versus static emotional faces.

The results of the analyses for each of the three dependent variables are described below.

Startle Eyeblink Response

The first analysis examined whether the overall size of the startle eyeblink responses differed when participants viewed dynamic versus static facial expressions. A repeated-measures ANOVA was conducted using Viewing Mode (dynamic, static) as the within-subjects factor and Order of Presentation (dynamic then static, or static then dynamic) as the between-subjects factor.¹ The ANOVA revealed a significant main effect of Viewing Mode [F(1, 38) = 9.003, p = .005, ηp² = .192, power = .832]. As shown in Table 4-1, startle eyeblink responses were greater during dynamic than static expressions. The main effect of Order of Presentation was not significant [F(1, 38) = 1.175, p = .285, ηp² = .030, power = .185], nor was the Viewing Mode X Order of Presentation interaction [F(1, 38) = .895, p = .350, ηp² = .023, power = .152].

Skin Conductance Response (SCR)

The second analysis examined whether the perception of the different types of facial emotions induced different SCR patterns between modes of viewing. A repeated-measures ANOVA was conducted with Viewing Mode (dynamic, static) and Expression Category (anger, fear, happy, neutral) as the within-subjects factors and Order of Presentations (dynamic first, static first) as the between-subjects factor. The results revealed that the main effect of Viewing Mode approached significance [F(1, 35) = 3.796, p = .059, ηp² = .098, power = .474], such that SCR tended to be larger when participants viewed dynamic versus static faces (see Table 4-1).

¹ Expression Category was not used as a factor in this analysis. Examination of emotional effects on startle eyeblink is traditionally done using T-scores as the dependent variable rather than raw magnitude. Raw startle magnitude is more appropriate as an index of reactivity, whereas T-scores are more appropriate for examining patterns of emotional effects on startle.
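The thesis does not name its statistics software; the sketch below shows how the 2 (Viewing Mode) X 2 (Order of Presentation) analysis of raw startle magnitude could be run with the pingouin library. The file and column names ('startle_magnitude_long.csv', 'subj', 'order', 'mode', 'asr') are invented for illustration.

    import pandas as pd
    import pingouin as pg  # pip install pingouin

    # Long-format data: one row per participant x viewing mode, with columns
    # 'subj', 'order' (between-subjects), 'mode' (within-subjects), 'asr'.
    df = pd.read_csv("startle_magnitude_long.csv")  # hypothetical file

    aov = pg.mixed_anova(data=df, dv="asr", within="mode",
                         between="order", subject="subj")
    print(aov[["Source", "F", "p-unc", "np2"]])     # np2 = partial eta squared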

No other main effects or interactions reached trend level or significance {Order of Presentations [F(1, 35) = .511, p = .479, ηp² = .014, power = .107]; Viewing Mode X Order of Presentations [F(1, 35) = 1.559, p = .220, ηp² = .043, power = .229]; Expression Category X Order of Presentations [F(1.832, 64.114) = .942, p = .423, ηp² = .026, power = .251]}.

Self-Reported Arousal

The third analysis examined whether self-reported arousal ratings differed when participants viewed static versus dynamic facial expressions. Again, a 2 (Viewing Mode) X 4 (Expression Category) X 2 (Order of Presentation) repeated-measures ANOVA was conducted. This ANOVA revealed that no main effects or interactions were significant {Viewing Mode [F(1, 38) = .072, p = .789, ηp² = .002, power = .058]; Order of Presentations [F(1, 38) = 2.912, p = .096, ηp² = .071, power = .384]; Viewing Mode X Order of Presentations [F(1, 38) = .479, p = .493, ηp² = .012, power = .104]}. The effects related to Expression Category are described in the next section.

In summary, viewing dynamic facial stimuli was associated with significantly larger acoustic startle eyeblink responses and a tendency (trend, p = .059) toward larger skin conductance responses than viewing static stimuli. There was no significant difference in self-reported arousal ratings between dynamic and static stimuli.

Hypothesis 2: Emotion Modulation of Startle by Expression Categories

An additional set of analyses addressed the second hypothesis, investigating emotional modulation of the startle eyeblink response by distinct categories of facial expressions (i.e., anger, fear, neutral, and happy). Because of individual variability in the size of basic eyeblink responses, the startle magnitude scores for each individual were converted to T-scores on a trial-by-trial basis.

These T-scores were analyzed in a 4 (Expression Category: anger, fear, neutral, happy) X 2 (Viewing Mode: dynamic, static) X 2 (Order of Presentations: dynamic then static, or static then dynamic) repeated-measures ANOVA. Table 4-2 gives the means and standard deviations of these scores and the other dependent variables by Viewing Mode and Expression Category.

Table 4-2. Mean (SD) dependent variable scores by Viewing Mode and Expression Category

Viewing Mode  Measure   Anger          Fear           Neutral        Happy
Dynamic       ASR-M     .0053 (.0052)  .0049 (.0046)  .0045 (.0037)  .0046 (.0042)
              ASR-T     51.06 (3.43)   49.47 (3.01)   49.77 (3.47)   49.68 (3.14)
              SCR       .1751 (.2890)  .1489 (.2420)  .1825 (.3271)  .1768 (.3402)
              Valence   3.10 (.89)     3.44 (.99)     4.76 (.54)     7.19 (.84)
              Arousal   5.39 (1.05)    6.43 (.98)     3.41 (1.33)    5.96 (.88)
Static        ASR-M     .0066 (.0061)  .0059 (.0051)  .0061 (.0051)  .0061 (.0057)
              ASR-T     50.99 (3.79)   49.43 (3.92)   49.57 (4.30)   49.88 (3.21)
              SCR       .3247 (.5200)  .3583 (.8070)  .2515 (.3911)  .3212 (.5457)
              Valence   3.17 (1.00)    3.65 (1.21)    4.69 (.84)     7.17 (.84)
              Arousal   5.51 (1.05)    6.35 (.95)     3.29 (1.36)    5.95 (.87)

Note. ASR-M = Acoustic Startle Response, Magnitude (mV); ASR-T = Acoustic Startle Response, T-score; SCR = Skin Conductance Response (µS); Valence = Self-Assessment Manikin, Valence Scale (1-9); Arousal = Self-Assessment Manikin, Arousal Scale (1-9).

The main effect of Expression Category approached but did not reach significance [F(3, 117) = 2.208, p = .091, ηp² = .055, power = .548]. No other main effects or interactions reached trend level or significance {Viewing Mode: [F(1, 114) = .228, p = .636, ηp² = .006, power = .075]; Order of Presentations: [F(1, 38) = .336, p = .566, ηp² = .009, power = .087]; Viewing Mode X Order of Presentations: [F(1, 38) = .457, p = .503, ηp² = .012, power = .101]; Expression Category X Order of Presentations: [F(3, 114) = .596, p = .619, ηp² = .015, power = .171]; Expression Category X Viewing Mode: [F(3, 114) = .037, p = .991, ηp² = .001, power = .056]; Expression Category X Viewing Mode X Order of Presentations: [F(3, 114) = .728, p = .537, ηp² = .019, power = .201]}.

The a priori predictions regarding the expected pattern of emotion modulation of the startle response [i.e., Anger > (Fear = Neutral = Happy)] warranted a series of planned (Helmert) comparisons on Expression Category. These comparisons revealed that (a) startle responses were significantly larger for faces of anger than for the other expressions [F(1, 38) = 8.217, p = .007, ηp² = .178, power = .798], and (b) there were no significant differences among the remaining emotional expressions [Fear vs. (Neutral and Happy): F(1, 38) = .208, p = .651, ηp² = .005, power = .073; Neutral vs. Happy: F(1, 38) = .022, p = .882, ηp² = .001, power = .052]. Figure 4-1 graphically displays the pattern of startle reactivity (T-scores) among the four expression categories.

Figure 4-1. Startle eyeblink T-scores by expression category [A > (F = N = H)].

To summarize these results, viewing angry facial expressions was associated with significantly larger acoustic startle eyeblink responses than the other types of facial expressions (i.e., fear, neutral, and happy), and the responses to the other expressions were not significantly different from each other.
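The key planned comparison can be reproduced in a few lines: in a within-subject design, the Helmert contrast of anger against the mean of the remaining categories reduces to a one-sample t-test on per-subject contrast scores, with F = t². The sketch below assumes per-subject mean T-scores keyed by category; it names a convenience function (anger_vs_rest) not present in the original analysis.

    import numpy as np
    from scipy import stats

    def anger_vs_rest(t_scores):
        """t_scores: dict mapping category name -> array of per-subject mean T-scores.
        Returns (F, p) for the first Helmert contrast, Anger vs. (Fear, Neutral, Happy)."""
        contrast = (np.asarray(t_scores["anger"])
                    - (np.asarray(t_scores["fear"])
                       + np.asarray(t_scores["neutral"])
                       + np.asarray(t_scores["happy"])) / 3.0)
        t, p = stats.ttest_1samp(contrast, 0.0)
        return t ** 2, p   # the contrast F equals the squared t, with the same p-value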

Additionally, the non-significant Expression Category X Viewing Mode interaction (p = .991) indicates that this response pattern was similar for static and dynamic facial expressions.

Other Patterns of Emotional Modulation by Viewing Mode

The response pattern among different expression categories was also examined for SCR and self-reported arousal, as well as for self-reported valence. Like arousal, valence was measured on a scale of 1-9, with higher numbers representing greater positive feeling, pleasure, or appetitiveness, and lower numbers representing greater negative feeling, displeasure, or aversiveness. For all three variables, the analyses were separate three-way (4 X 2 X 2) repeated-measures analyses of variance, using the within-subjects factors of Expression Category (anger, fear, neutral, happy) and Viewing Mode (dynamic, static) and the between-subjects factor of Order of Presentations (dynamic then static, or static then dynamic). For SCR and arousal, these analyses were described in the preceding section (Differences in Reactivity to Dynamic vs. Static Faces); for these two measures, this section therefore provides only the results for the Expression Category main effect and its associated interactions. The results for self-reported valence are provided in full, as this is a novel analysis. Table 4-2 gives the means and standard deviations for each dependent variable by Viewing Mode and Expression Category.

Skin Conductance Response

For the skin conductance response, the main effect of Expression Category and all associated interactions were non-significant: Expression Category [F(1.832, 64.114) = .306, p = .821, ηp² = .009, power = .107]; Expression Category X Viewing Mode [F(2.012, 70.431) = 1.345, p = .264, ηp² = .037, power = .349]²; Expression Category X Viewing Mode X Order of Presentations [F(2.012, 70.431) = 1.341, p = .265, ηp² = .037, power = .348]. Thus, differences in SCR for discrete expressions were not found.

² Mauchly's test was significant for both Expression Category [W = .273, χ²(5) = 43.762, p < .001] and the Expression Category X Viewing Mode interaction [W = .451, χ²(5) = 26.850, p < .001]; thus, degrees of freedom for these effects were adjusted using the Greenhouse-Geisser method.

Self-Reported Arousal

For self-reported arousal, the main effect of Expression Category was significant [F(2.144, 81.487) = 81.836, p < .001, ηp² = .683, power = 1.000],³ indicating that arousal ratings differed while viewing different types of facial expressions. The results of Bonferroni-corrected post-hoc comparisons are displayed graphically in Figure 4-2. Fearful faces (M = 6.39, SD = .91) were associated with significantly higher (p < .001) intensity ratings than angry faces (M = 5.45, SD = .96), which were in turn rated as higher (p < .001) in intensity than neutral faces (M = 3.35, SD = 1.22). Differences in intensity ratings associated with happy faces (M = 5.96, SD = .76) approached significance when compared with fearful (p = .082) and angry (p = .082) faces, but happy faces were rated as significantly higher (p < .001) in intensity than neutral faces.

³ Mauchly's test was significant for both Expression Category [W = .507, χ²(5) = 24.965, p < .001] and the Expression Category X Viewing Mode interaction [W = .403, χ²(5) = 33.335, p < .001]; thus, degrees of freedom for these effects were adjusted using the Greenhouse-Geisser method.
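The degrees-of-freedom adjustments reported in these footnotes follow a standard recipe: test sphericity with Mauchly's W, and if it fails, apply the Greenhouse-Geisser correction. Below is a hedged sketch using pingouin (assumed tooling; the file and column names 'sam_arousal_long.csv', 'subj', 'emotion', 'arousal' are illustrative, not from the thesis).

    import pandas as pd
    import pingouin as pg

    # Long-format ratings: one row per participant x emotion category.
    df = pd.read_csv("sam_arousal_long.csv")  # hypothetical file

    spher, W, chi2, dof, pval = pg.sphericity(data=df, dv="arousal",
                                              within="emotion", subject="subj")
    if not spher:  # Mauchly's test significant -> sphericity violated
        aov = pg.rm_anova(data=df, dv="arousal", within="emotion",
                          subject="subj", correction=True)
        print(aov[["Source", "F", "p-GG-corr"]])  # Greenhouse-Geisser corrected p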

Figure 4-2. Self-reported arousal by expression category (F > A > N; H > N).

Self-Reported Valence

The final analysis explored the pattern of self-reported valence ratings for each of the facial emotion subtypes and viewing modes. The ANOVA revealed a significant effect of Expression Category [F(2.153, 81.822) = 205.467, p < .001, ηp² = .844, power = 1.00],⁴ indicating that valence ratings differed according to expression category. Bonferroni-corrected pairwise comparisons among the four facial expression types indicated that faces of happiness (M = 7.18, SD = .78) were rated as significantly more pleasant than neutral faces (M = 4.73, SD = .59; p < .001), fearful faces (M = 3.54, SD = 1.03; p < .001), and angry faces (M = 3.14, SD = .84; p < .001). Additionally, neutral faces were rated as significantly more pleasant than fearful (p < .001) or angry faces (p < .001).

⁴ A significant Mauchly's test for Expression Category [W = .566, χ²(5) = 20.903, p = .001] and the Expression Category X Viewing Mode interaction [W = .504, χ²(5) = 25.146, p < .001] necessitated the use of Greenhouse-Geisser adjusted degrees of freedom.

Finally, anger faces were rated as significantly more negative than fearful faces (p = .014). This pattern is displayed graphically in Figure 4-3. No other main effects or interactions reached trend level or significance {Viewing Mode: [F(1, 38) = .646, p = .426, ηp² = .017, power = .123]; Order of Presentations: [F(1, 38) = 1.375, p = .248, ηp² = .035, power = .208]; Viewing Mode X Order of Presentations: [F(1, 38) = .047, p = .829, ηp² = .001, power = .055]; Expression Category X Order of Presentations: [F(2.153, 81.822) = 1.037, p = .363, ηp² = .027, power = .233]; Expression Category X Viewing Mode: [F(2.015, 76.554) = .933, p = .398, ηp² = .024, power = .207]; Expression Category X Viewing Mode X Order of Presentations: [F(2.015, 76.554) = 1.435, p = .244, ηp² = .036, power = .300]}.

Figure 4-3. Self-reported valence by expression category (H > N > F > A).

To summarize, these analyses revealed that skin conductance responses to the different categories of emotional expressions were not different from one another.

By contrast, both self-report measures did distinguish among the emotion categories. With regard to self-reported arousal, fearful faces were rated highest, significantly more so than anger faces, which were in turn rated as significantly more arousing than neutral ones. The differences in arousal between happy and angry faces, as well as between happy and fearful ones, approached but did not reach significance (p = .082 in both cases). Happy faces were, however, rated as significantly more arousing than neutral ones. For self-reported valence, each expression category was rated as significantly different from the others, such that angry expressions were rated as most negative, followed by fearful, neutral, and then happy faces.

CHAPTER 5
DISCUSSION

The present study examined two hypotheses. The first was that the perception of dynamic versus static faces would be associated with greater physiological reactivity in normal, healthy adults. Specifically, it was predicted that individuals would exhibit significantly stronger startle eyeblink reflexes, higher skin conductance responses (SCR), and higher levels of self-reported arousal when viewing dynamic expressions. These predictions were based on evidence from previous research suggesting that movement in facial expression (a) provides more visual information to the viewer, (b) increases recognition of and discrimination between specific types of emotion, and (c) may make the facial expressions appear more intense.

The second hypothesis was that the perception of different categories of facial expressions would be associated with a distinct pattern of emotional modulation, and that this pattern would not differ for static and dynamic faces. In other words, it was hypothesized that the level of physiological reactivity while viewing facial expressions would depend on the type of expression viewed, regardless of the viewing mode. Specifically, the prediction was that normal adults would show increased startle eyeblink responses during the perception of angry faces, and that responses to fearful, happy, and neutral faces would not be significantly different from each other. Moreover, it was predicted that this pattern of responses would be similar for both statically and dynamically presented expressions.

The first hypothesis was partially supported by the data. Participants exhibited larger startle eyeblink responses while viewing dynamic versus static facial expressions. Differences in SCR while viewing the expressions in these two modes reached trend level (p = .059), such that dynamic faces tended to induce greater responses than static ones. Self-reported arousal was not significantly different between conditions. Thus, the perception of moving emotional faces versus still pictures was associated with greater startle eyeblink responses, but not with greater SCR or self-reported arousal.

The second hypothesis was supported by the data. That is, the startle reflex was significantly greater for angry faces, and comparably smaller for fearful, neutral, and happy faces. The data suggested that this pattern of emotional modulation was similar during both static and dynamic viewing conditions.

In summary, participants demonstrated greater psychophysiological reactivity to dynamic faces compared with static faces, as indexed by the startle eyeblink response and, partially, by SCR. Participants did not, on the other hand, report differences in perceived arousal. Emotional modulation of the startle response was similar for both modes of presentation, such that angry faces induced greater negative or aversive responses than did happy, neutral, and fearful faces.

Interpretation and Relationship to Other Findings

The finding that viewing faces of anger increased the strength of the startle eyeblink reflex is consistent with other results. Currently, only two other studies are known to have measured the magnitude of this reflex during the perception of different facial emotions. Balaban (1995) conducted one of these studies, measuring the size of startle eyeblinks in 5-month-old infants viewing photographic slides of happy, neutral, and angry faces.

Her results were similar to those of the current study, in that the magnitudes of startle eyeblinks measured in the infants were augmented while they viewed faces of anger versus faces of happiness. The other study was conducted by Bowers and colleagues (2002). Similar to the present experiment, participants were young adults (n = 36) who viewed facial expressions of anger, fear, neutrality, and happiness. These stimuli, however, consisted solely of static photographs and were sampled from standardized batteries (the Florida Affect Battery: Bowers et al., 1992; Pictures of Facial Affect: Ekman & Friesen, 1976). The startle eyeblink responses measured while viewing these pictures reflected the pattern produced in the present study: greater negative or aversive responses were associated with angry faces than with happy, neutral, or fearful faces. Happy, neutral, and fearful faces yielded relatively reduced responses that did not differ from each other in magnitude.

The augmentation of the startle reflex during the perception of angry versus other emotional faces appears to be a robust phenomenon for several reasons. First, the findings from the present study were similar to those of previous studies (Balaban, 1995; Bowers et al., 2002). Second, this pattern of emotional modulation was replicated using a different set of facial stimuli; thus, the previous findings were not restricted to faces from specific sources. Third, the counterbalanced design of the present study minimized the possibility that the anger effect was due to some imbalance of factors other than the portrayed facial emotion. Within each experimental condition, for example, both genders and each actor were equally represented within each expression category.

Although the current results were made more convincing for these reasons, the implication that the startle circuitry is not enhanced in response to fearful expressions was unexpected for several reasons. The amygdala has been widely implicated in states of fear and in processing fearful material (Davis & Whalen, 2001; Gloor et al., 1982; Klüver & Bucy, 1937), and some investigators have directly implicated the amygdala in the processing of facial expressions of fear (Adolphs et al., 1994; Morris et al., 1998). Additionally, the work of Davis and colleagues (Davis, 1992) uncovered direct neural projections from the amygdala to the subcortical startle circuitry, which have been shown to prime the startle mechanism under fearful or aversive conditions. This body of research suggests that fearful expressions might potentiate the startle reflex relative to other types of facial expressions; however, the study by Bowers and colleagues (2002) as well as the present one provide evidence suggesting otherwise. No other studies are known to have directly compared startle reactivity patterns between fearful and other emotionally expressive faces. Additionally, imaging and lesion studies have shown mixed results with respect to the role of the amygdala in the processing of fearful and angry faces per se. For instance, Sprengelmeyer and colleagues (1998) found no fMRI activation in the amygdala in response to fearful relative to neutral faces. Young and colleagues (1995) attributed a deficit in recognition of fear faces to bilateral amygdala damage, but much of the surrounding neural tissue was also damaged.

So, how might one account for the relatively reduced startle response to fearful faces? Bowers and colleagues (2002) provided a plausible explanation, implicating the role of motivated behavior [i.e., Heilman's (1987) 'preparation for action' scale] in these results.

As previously described, angry faces represent personally directed threat and, as might be reflected by the increased startle found in the present study, induce a motivational propensity to withdraw or escape from that threat. Fearful expressions, on the other hand, reflect some potential environmental threat to the actor rather than to the observer. This would entail less motivational propensity for action and might account for the reduced startle response.

Methodological Issues Regarding Facial Expressions

Before discussing the implications of this study more broadly, several methodological issues that potentially influenced the present findings must be addressed. The first relates to the reliability of the facial expression stimuli in depicting specific emotions. Anger was the emotion that elicited the greatest startle response overall; at the same time, anger expressions were the least accurately categorized by a group of independent naïve raters (see Table 3-2).⁵ Whether there is a connection between these findings is unclear, particularly since the raters viewed a wider variety of emotions (i.e., 6 expressions) than the participants in this study (4 expressions). For example, the raters were shown facial expressions of anger, disgust, fear, sadness, happiness, and neutrality. Their accuracy in identifying anger expressions was around 78%, and when errors were made, raters typically (i.e., 95% of the time) judged the anger expressions to be disgust. In the psychophysiology study, the participants were shown only four expressions, and it seems unlikely that they easily confused anger, fear, happiness, and neutral expressions. This could, however, be addressed by examining the ratings made by the psychophysiology participants.

Nevertheless, elevated startle reactivity for facial expressions that were less reliably categorized might occur for several reasons: (1) differences in attention between relatively poorly and accurately recognized stimuli, and (2) differences in perceived arousal levels between relatively poorly and accurately recognized stimuli. Regarding attention, previous researchers have suggested that visual attention inhibits the startle response when the modalities of the startle probe and the stimulus of interest are mismatched (e.g., Ornitz et al., 1996). In this case, acoustic startle probes were used in conjunction with visual stimuli. Since anger was associated with the strongest startle reflexes, the response was not likely inhibited; thus, attention was probably not a mediating factor between lower recognition rates and this effect. Regarding arousal, researchers such as Cuthbert and colleagues (1996) indicated that potentiation of the startle response occurs with more arousing stimuli when the stimuli are of negative valence. Anger was rated as the most negatively valenced expression, significantly more so than fear, while happy was rated most positively. Since anger was rated most negatively, the only way arousal could have influenced anger's potentiated startle response was if anger was more arousing than the other expressions; however, it was rated as significantly less arousing than both fear and happiness.

⁵ A 2 (Viewing Mode: dynamic, static) X 6 (Expression Category: anger, disgust, fear, happy, neutral, sad) repeated-measures ANOVA was conducted with an alpha criterion of .05 and Bonferroni-corrected post-hoc comparisons. Results showed that dynamic expressions (M = .89, SD = .06) were rated significantly more accurately than static expressions (M = .87, SD = .07). Additionally, the main effect of Expression Category was significant, but not the interaction between Expression Category and Viewing Mode. Specific to the emotion categories used in the present study, happy faces were rated significantly more accurately (M = .99, SD = .01) than neutral (M = .91, SD = .06) and angry (M = .73, SD = .18) faces, while fear (M = .95, SD = .05) recognition rates were not significantly different from the other three. Comparing each emotion across viewing modes, only anger was rated significantly more accurately in dynamic (M = .78, SD = .17) versus static (M = .68, SD = .21) mode, while the advantage for dynamic neutral faces (M = .92, SD = .04) over static versions (M = .89, SD = .08) only approached significance (p = .055). A static version of an emotional expression was never rated significantly more accurately than its dynamic version.

To conclude, it seems unlikely that ambiguity of the angry facial expressions contributed significantly to the current findings. However, examination of the ratings made by the participants themselves might better clarify the extent to which anger expressions were less accurately categorized than other expressions.

Other Considerations of the Present Findings

One explanation for the failure to uncover more robust findings using the skin conductance response might relate to several of this measure's attributes. First, although SCR can be a useful measure of emotional arousal, it has considerable limitations. It is estimated that 15-20% of healthy individuals are skin conductance non-responders; some individuals do not exhibit a discernible difference in this response to different categories of emotional stimuli, while others exhibit very weak responses overall (Bradley & Lang, 2000; O'Gorman, 1990). Moreover, the sensitive electrical signal that records SCR is vulnerable to the effects of idle, unconscious motor activity, especially since the electrodes are positioned on the palms of both hands. Because participants sat alone during these recordings, it was impossible to determine whether they followed instructions to keep still. These factors suggest that the potential for interference over the course of the two slideshows was not insignificant and may have contributed to the null SCR findings, both for reactivity across viewing modes and for response differences between emotions. As such, this study uncovered only weak evidence that dynamic faces induced stronger skin conductance responses than static faces; only a trend toward significance was found. A significant difference might have emerged with more statistical power (power = .47). Numerically, dynamic faces were associated with larger mean SCR values (.314) than static faces (.172).

Therefore, a larger sample would be required to increase confidence about the actual relationship between SCR and these two viewing modes.

Several explanations might account for the finding that self-reported arousal ratings were not significantly different for static and dynamic expressions (contrary to one prediction of the current study). First, it is possible that the similar ratings between these two experimental conditions were the product of an insensitive scale. The choice between integers ranging only from 1 to 9 may have prohibited sufficient response variability for drawing out differences between viewing modes. Also, it is possible that subjects rated the arousal of each expression relative to the expressions immediately preceding it, and failed to consider their responses relative to the previously seen presentation. If this were the case, the viewed expressions would have been rated in arousal relative to the average within the current presentation, and the mean arousal ratings from the two presentations would be virtually identical.

Limitations of the Current Study

It is important to acknowledge some of the limitations of the current study. One limitation is that the specific interactions between participant and actor variables of gender, race, and attractiveness were not analyzed. It is likely that the emotional response of a given individual to a specific face depends on these factors through the individual's unique experiences. In addition, the meaning of some facial expressions may be ambiguous when they are viewed in isolation. Depending on the current situation, for instance, a smile might communicate any number of messages, including contentment, peer acceptance, sexual arousal, relief, mischief, or even contempt (i.e., a smirk). Taken together, averaging potentially variable responses due to highly specific interactions with non-expressive facial features, or to varying interpretations of facial stimuli between subjects, might have contributed to certain non-significant effects or created artificial ones.

Second, the facial expression stimuli may have been perceived as somewhat artificial, which potentially reduced the overall emotional responses (and consequently, physiologic reactivity). The actors were recorded using black-and-white video with their heads surrounded on either side by an immobilization cushion. In addition, despite some pre-training, the actors deliberately posed the facial expressions; these were not the product of authentic emotion per se. Previous research has determined that emotion-driven and posed expressions are mediated by different neural mechanisms and muscular response patterns (Monrad-Krohn, 1924; for review, see Rinn, 1984). It is likely that some expressions were correctly recognized by emotional category but not necessarily believed to have an emotional origin. The extent to which emotional reactivity is associated with perceiving genuine versus posed emotion in others remains a topic for future research. It is reasonable to conjecture, however, based on everyday social interactions, that the perception of posed expressions would be less emotionally arousing and would therefore be associated with reduced emotional reactivity.

Directions for Future Research

There are many avenues for future research. Further investigation into the effects of, and interactions between, factors of gender, race, age, and attractiveness, and the characterization of these effects on patterns of startle modulation, is warranted. The effects of these factors would need to be determined to clearly dissociate expression-specific differences in emotion perception. One of these factors may prove more influential than facial expressivity in physiological reactivity to facial stimuli.

Further, the use of more genuine, spontaneous expressions as stimuli might be considered as a way to introduce greater levels of emotional arousal into studies of social emotion perception. Greater ecological validity might be gained via this route, as well as through the use of color stimuli and actors given free range of head movement. Also, patterns of startle modulation to facial expressions should be further studied across different age groups to help uncover the development of emotional recognition and social cognition over the lifespan. This is especially warranted given the difference between the findings of the present study (i.e., increased startle response to anger, with attenuated responses associated with fearful, happy, and neutral expressions) and those of Balaban's (1995) study of infants, in which fearful expressions yielded significantly greater responses than neutral ones, and neutral ones yielded greater responses than happy ones. Continued research with different age groups would help disentangle the ontogenetic responsiveness to the meaning conveyed through facial emotional signals and help determine the reliability of the few studies that have been conducted.

To conclude, despite the limitations of the current study, dynamic and static faces appear to elicit qualitatively different psychophysiological responses; specifically, dynamic faces induce greater startle eyeblink responses than static versions. This observation has not been previously described in the literature. Because they appear to differentially influence motivational systems, these two types of stimuli cannot be treated interchangeably. The results of this and future studies will likely play an important role in the development of a dynamic facial affect battery and aid in efforts to delineate more precisely the social cognition impairments of certain neurologic, psychiatric, and brain-injured populations.

APPENDIX A
STATIC STIMULUS SET

Actor     Measure         Anger      Disgust    Fear       Happiness  Neutrality  Sadness
Male 1    % Recognition   47.6       66.7       90.5       100        100         85.7
          Valence M (SD)  3.0 (1.6)  3.9 (1.5)  4.4 (1.7)  7.4 (1.3)  5.2 (0.9)   3.7 (1.2)
          Arousal M (SD)  5.5 (1.4)  5.4 (1.7)  5.8 (1.5)  6.3 (1.3)  3.5 (1.8)   4.6 (1.4)
Male 2    % Recognition   90.5       85.7       100        100        90.5        95.2
          Valence         2.8 (1.3)  3.5 (1.1)  4.5 (1.8)  7.2 (1.4)  4.2 (1.2)   2.6 (1.3)
          Arousal         5.1 (2.1)  5.0 (1.9)  6.8 (1.7)  5.7 (1.7)  3.7 (1.8)   5.0 (1.8)
Male 3    % Recognition   71.4       81         90.5       100        100         --
          Valence         3.2 (1.5)  3.2 (0.9)  4.2 (1.7)  7.3 (0.9)  4.7 (1.4)   --
          Arousal         5.2 (2.0)  5.1 (1.7)  6.3 (1.5)  5.9 (1.6)  3.7 (1.9)   --
Male 4    % Recognition   57.1       71.4       85.7       100        95.2        95.2
          Valence         3.3 (1.5)  3.6 (1.7)  3.8 (1.6)  7.0 (2.2)  4.6 (0.7)   3.1 (1.2)
          Arousal         5.4 (1.4)  5.5 (1.2)  6.0 (0.8)  6.7 (1.4)  3.3 (1.7)   4.5 (1.6)
Male 5    % Recognition   57.1       76.2       95.2       95.2       81          100
          Valence         4.1 (1.2)  4.6 (0.8)  4.5 (1.2)  7.0 (1.3)  5.4 (1.2)   4.1 (1.3)
          Arousal         4.6 (1.3)  4.0 (1.6)  5.5 (1.4)  5.4 (1.7)  3.9 (1.8)   4.1 (1.7)
Male 6    % Recognition   71.4       61.9       95.2       100        90.5        76.2
          Valence         3.1 (1.6)  3.0 (1.8)  3.6 (1.6)  6.9 (1.3)  4.6 (1.7)   3.5 (1.5)
          Arousal         5.1 (1.6)  6.1 (2.3)  5.8 (1.6)  5.3 (2.1)  3.9 (2.2)   5.3 (1.3)
Female 1  % Recognition   61.9       76.2       100        100        85.7        90.5
          Valence         3.3 (1.5)  3.3 (1.6)  3.9 (1.7)  6.7 (1.1)  4.5 (1.3)   2.9 (1.2)
          Arousal         6.1 (1.8)  5.3 (2.0)  6.3 (1.9)  6.0 (1.3)  3.4 (1.6)   4.7 (1.6)
Female 2  % Recognition   28.6       100        100        100        76.2        66.7
          Valence         3.2 (1.6)  3.5 (1.0)  3.9 (1.5)  7.1 (1.1)  3.3 (1.3)   4.4 (1.0)
          Arousal         5.5 (1.5)  4.7 (1.4)  5.9 (1.9)  5.8 (1.7)  2.8 (1.6)   2.9 (1.6)
Female 3  % Recognition   95.2       71.4       95.2       100        90.5        100
          Valence         3.9 (1.0)  3.6 (2.0)  4.0 (1.1)  7.7 (1.3)  4.4 (1.0)   3.4 (1.5)
          Arousal         5.0 (1.5)  6.0 (1.7)  5.5 (1.2)  6.4 (1.5)  3.5 (1.8)   4.8 (1.5)
Female 4  % Recognition   95.2       100        100        100        95.2        100
          Valence         2.9 (1.4)  3.7 (1.3)  4.3 (1.1)  7.1 (0.9)  4.8 (0.5)   3.7 (1.4)
          Arousal         5.6 (2.3)  5.5 (1.9)  5.9 (1.7)  5.9 (2.0)  3.3 (1.7)   4.6 (1.2)
Female 5  % Recognition   90.5       95.2       100        95.2       90.5        95.2
          Valence         3.8 (1.7)  3.3 (1.0)  4.1 (1.8)  7.2 (1.1)  4.5 (1.1)   3.7 (1.2)
          Arousal         5.5 (1.7)  5.2 (1.3)  7.0 (1.5)  5.7 (1.5)  4.1 (1.9)   4.8 (1.5)
Female 6  % Recognition   52.4       42.9       90.5       100        76.2        100
          Valence         3.5 (1.6)  3.9 (1.4)  4.1 (1.1)  8.1 (0.9)  5.9 (1.1)   3.7 (1.1)
          Arousal         5.0 (1.5)  4.9 (1.8)  5.6 (1.8)  7.1 (2.0)  4.8 (2.4)   5.1 (1.6)

Note. The sad expression for Male 3 was not created because of videotape corruption.

APPENDIX B
DYNAMIC STIMULUS SET

Actor     Measure         Anger      Disgust    Fear       Happiness  Neutrality  Sadness
Male 1    % Recognition   76.2       52.4       90.5       100        95.2        95.2
          Valence M (SD)  2.9 (1.2)  4.1 (1.3)  4.5 (1.9)  7.5 (0.9)  5.4 (0.7)   4.1 (1.1)
          Arousal M (SD)  5.7 (2.0)  5.1 (1.5)  6.1 (2.0)  6.1 (1.4)  3.2 (2.1)   3.4 (1.9)
Male 2    % Recognition   95.2       85.7       100        100        95.2        100
          Valence         3.2 (1.3)  3.7 (1.1)  3.6 (1.9)  7.0 (1.0)  4.9 (0.7)   3.1 (1.4)
          Arousal         4.0 (1.3)  4.6 (2.1)  6.3 (2.1)  5.6 (1.6)  3.1 (1.9)   4.9 (1.6)
Male 3    % Recognition   71.4       85.7       95.2       100        95.2        --
          Valence         2.9 (1.1)  3.1 (0.8)  3.7 (1.6)  6.5 (1.2)  4.7 (0.9)   --
          Arousal         5.3 (1.5)  4.8 (1.9)  6.2 (1.4)  5.4 (1.4)  3.2 (2.0)   --
Male 4    % Recognition   95.2       85.7       90.5       100        90.5        100
          Valence         3.6 (0.9)  3.3 (1.7)  4.0 (1.8)  6.9 (2.1)  5.0 (0.9)   3.3 (1.0)
          Arousal         4.5 (1.3)  5.8 (1.5)  5.9 (1.9)  6.4 (1.8)  3.6 (2.6)   4.4 (1.4)
Male 5    % Recognition   71.4       52.4       95.2       100        85.7        100
          Valence         3.2 (1.4)  4.1 (0.9)  3.8 (1.6)  6.9 (1.1)  4.9 (0.4)   3.2 (1.3)
          Arousal         5.2 (1.3)  4.5 (1.9)  5.8 (1.5)  5.2 (2.0)  3.1 (1.9)   4.7 (1.7)
Male 6    % Recognition   66.7       85.7       100        95.2       95.2        90.5
          Valence         3.0 (0.8)  2.9 (1.5)  4.1 (1.2)  6.9 (1.7)  4.8 (0.7)   3.3 (1.5)
          Arousal         5.4 (1.8)  5.9 (1.5)  4.6 (2.2)  5.8 (2.1)  2.9 (2.0)   5.1 (2.0)
Female 1  % Recognition   57.1       57.1       100        100        95.2        85.7
          Valence         2.7 (1.6)  2.1 (1.1)  3.2 (1.3)  6.9 (1.5)  4.5 (1.3)   3.1 (0.9)
          Arousal         5.7 (2.0)  5.8 (2.1)  6.3 (1.6)  5.9 (0.9)  3.3 (2.1)   4.8 (1.3)
Female 2  % Recognition   52.4       100        100        100        85.7        66.7
          Valence         2.6 (1.3)  3.6 (0.9)  3.4 (1.5)  7.3 (1.2)  4.3 (0.9)   4.2 (0.9)
          Arousal         5.1 (2.0)  4.4 (1.8)  5.8 (1.7)  5.5 (1.6)  2.8 (1.7)   3.5 (2.2)
Female 3  % Recognition   100        81         80.1       100        90.5        100
          Valence         3.5 (1.3)  3.7 (2.1)  3.1 (1.1)  7.9 (1.2)  4.9 (0.5)   3.2 (1.0)
          Arousal         4.3 (1.8)  6.4 (1.9)  5.6 (1.8)  6.8 (1.9)  3.3 (2.0)   4.6 (1.4)
Female 4  % Recognition   100        100        95.2       100        95.2        100
          Valence         2.3 (1.1)  3.4 (2.2)  3.5 (1.3)  7.3 (1.4)  5.1 (0.7)   3.1 (1.0)
          Arousal         6.1 (1.9)  5.5 (1.8)  6.1 (1.6)  5.9 (1.8)  3.0 (1.9)   4.9 (1.1)
Female 5  % Recognition   85.7       95.2       100        100        95.2        95.2
          Valence         3.4 (1.7)  3.3 (1.0)  3.2 (1.8)  6.9 (1.6)  5.14        3.6 (1.6)
          Arousal         5.0 (2.0)  5.4 (1.8)  6.8 (2.0)  4.9 (1.8)  3.2 (2.1)   4.2 (1.4)
Female 6  % Recognition   66.7       66.7       85.7       100        85.7        95.2
          Valence         3.2 (1.3)  3.5 (1.3)  3.3 (1.3)  8.3 (0.9)  6.0 (1.0)   3.5 (1.1)
          Arousal         5.1 (1.5)  5.6 (1.3)  6.1 (1.7)  6.7 (2.1)  4.3 (2.2)   4.8 (2.1)

Note. The sad expression for Male 3 was not created because of videotape corruption.

LIST OF REFERENCES

Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372(6507), 669-672.

Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception, 33(6), 717-746.

Averill, J. R. (1975). A semantic atlas of emotional concepts. JSAS Catalogue of Selected Documents in Psychology, 5, 330. (Ms. No. 421).

Balaban, M. T. (1995). Affective influences on startle in five-month-old infants: reactions to facial expressions of emotion. Child Development, 66(1), 28-36.

Beck, A. T. (1978). Depression inventory. Philadelphia: Center for Cognitive Therapy.

Bowers, D., Bauer, R., & Heilman, K. M. (1993). The Nonverbal Affect Lexicon: theoretical perspectives from neuropsychological studies of affect perception. Neuropsychology, 7(4), 433-444.

Bowers, D., Blonder, L. X., & Heilman, K. M. (1992). Florida Affect Battery. University of Florida.

Bowers, D., Parkinson, B., Gober, T., Bauer, M. C., White, E., & Bongiolatti, S. (2002, November). Two faces of emotion: patterns of startle modulation depend on facial expressions and on knowledge of evil. Poster presented at the Society for Neuroscience, Orlando, FL.

Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: the Self-Assessment Manikin and the Semantic Differential. Journal of Behavioral Therapy and Experimental Psychiatry, 25(1), 49-59.

Bradley, M. M., & Lang, P. J. (2000). Measuring emotion: behavior, feeling, and physiology. In R. D. Lane & L. Nadel (Eds.), Cognitive Neuroscience of Emotion (pp. 242-276). New York: Oxford University.

Buhlmann, U., McNally, R. J., Etcoff, N. L., Tuschen-Caffier, B., & Wilhelm, S. (2004). Emotion recognition deficits in body dysmorphic disorder. Journal of Psychiatric Research, 38(2), 201-206.

Burton, A. M., Wilson, S., Cowan, M., & Bruce, V. (1999). Face recognition in poor-quality video: evidence from security surveillance. Psychological Science, 10(3), 243-248.

Bush, L. E., II. (1973). Individual differences in multidimensional scaling of adjectives denoting feelings. Journal of Personality and Social Psychology, 25, 50-57.

Cannon, W. B. (1931). Again the James-Lange and the thalamic theories of emotion. Psychological Review, 38, 281-295.

Christie, F., & Bruce, V. (1998). The role of dynamic information in the recognition of unfamiliar faces. Memory and Cognition, 26(4), 780-790.

Cuthbert, B. N., Bradley, M. M., & Lang, P. J. (1996). Probing picture perception: activation and emotion. Psychophysiology, 33(2), 103-111.

Darwin, C. (1872). The expression of the emotions in man and animals. Chicago: University of Chicago Press.

Davis, M. (1992). The role of the amygdala in fear-potentiated startle: implications for animal models of anxiety. Trends in Pharmacological Science, 13(1), 35-41.

Davis, M., & Whalen, P. J. (2001). The amygdala: vigilance and emotion. Molecular Psychiatry, 6(1), 13-34.

DeSimone, R. (1991). Face-selective cells in the temporal cortex of monkeys. Journal of Cognitive Neuroscience, 3, 1-8.

Edwards, J., Jackson, H. J., & Pattison, P. E. (2002). Emotion recognition via facial expression and affective prosody in schizophrenia: a methodological review. Clinical Psychology Review, 22(6), 789-832.

Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. In J. Cole (Ed.), Nebraska symposium on motivation, 1971 (pp. 207-283). Lincoln, NE: University of Nebraska Press.

Ekman, P. (1973). Darwin and facial expression; a century of research in review. New York: Academic Press.

Ekman, P. (1980). The face of man: expressions of universal emotions in a New Guinea village. New York: Garland STPM Press.

Ekman, P. (1982). Emotion in the human face (2nd ed.). New York: Cambridge University Press; Editions de la Maison des Sciences de l'Homme.

Ekman, P., & Davidson, R. J. (1994). The nature of emotion: fundamental questions. New York: Oxford University Press.

Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto: Consulting Psychologists Press.

Ekman, P., Levenson, R. W., & Friesen, W. V. (1983). Autonomic nervous system activity distinguishes among emotions. Science, 221(4616), 1208-1210.

Ekman, P., & Rosenberg, E. L. (1997). What the face reveals: basic and applied studies of spontaneous expression using the facial action coding system (FACS). New York: Oxford University Press.

Eysenck, M. W., & Keane, M. (2000). Cognitive psychology: a student's handbook. Philadelphia: Taylor & Francis.

Field, T. M., Woodson, R., Greenberg, R., & Cohen, D. (1982). Discrimination and imitation of facial expression by neonates. Science, 218(4568), 179-181.

Gilboa-Schechtman, E., Foa, E. B., & Amir, N. (1999). Attentional biases for facial expressions in social phobia: the face-in-the-crowd paradigm. Cognition and Emotion, 13(3), 305-318.

Gloor, P., Olivier, A., Quesney, L. F., Andermann, F., & Horowitz, S. (1982). The role of the limbic system in experiential phenomena of temporal lobe epilepsy. Annals of Neurology, 12(2), 129-144.

Hargrave, R., Maddock, R. J., & Stone, V. (2002). Impaired recognition of facial expressions of emotion in Alzheimer's disease. Journal of Neuropsychiatry and Clinical Neurosciences, 14(1), 64-71.

Hariri, A. R., Tessitore, A., Mattay, A., Frea, F., & Weinberger, D. (2001). The amygdala response to emotional stimuli: a comparison of faces and scenes. Neuroimage, 17, 317-323.

Heilman, K. M. (1987, February). Syndromes of facial affect processing. Paper presented at the International Neuropsychological Society, Washington, DC.

Hess, W. R., & Brugger, M. (1943). Subcortical center of the affective defense reaction. In K. Akert (Ed.), Biological order and brain organization: selected works of W. R. Hess (pp. 183-202). Berlin: Springer-Verlag.

Humphreys, G. W., Donnelly, N., & Riddoch, M. J. (1993). Expression is computed separately from facial identity, and it is computed separately for moving and static faces: neuropsychological evidence. Neuropsychologia, 31(2), 173-181.

Izard, C. E. (1994). Innate and universal facial expressions: evidence from developmental and cross-cultural research. Psychological Bulletin, 115(2), 288-299.

Johnson, M. H., Dziurawiec, S., Ellis, H., & Morton, J. (1991). Newborns' preferential tracking of face-like stimuli and its subsequent decline. Cognition, 40(1-2), 1-19.

Kamachi, M., Bruce, V., Mukaida, S., Gyoba, J., Yoshikawa, S., & Akamatsu, S. (2001). Dynamic properties influence the perception of facial expressions. Perception, 30(7), 875-887.

Kan, Y., Kawamura, M., Hasegawa, Y., Mochizuki, S., & Nakamura, K. (2002). Recognition of emotion from facial, prosodic and written verbal stimuli in Parkinson's disease. Cortex, 38(4), 623-630.

Kilts, C. D., Egan, G., Gideon, D. A., Ely, T. D., & Hoffman, J. M. (2003). Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. Neuroimage, 18(1), 156-168.

Klüver, H., & Bucy, P. C. (1937). "Psychic blindness" and other symptoms following bilateral temporal lobectomy. American Journal of Physiology, 119, 352-353.

Kohler, C. G., Bilker, W., Hagendoorn, M., Gur, R. E., & Gur, R. C. (2000). Emotion recognition deficit in schizophrenia: association with symptomatology and cognition. Biological Psychiatry, 48(2), 127-136.

Lander, K., & Bruce, V. (2004). Repetition priming from moving faces. Memory and Cognition, 32(4), 640-647.

Lander, K., Christie, F., & Bruce, V. (1999). The role of movement in the recognition of famous faces. Memory and Cognition, 27(6), 974-985.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1990). Emotion, attention, and the startle reflex. Psychological Review, 97(3), 377-395.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1997). Motivated attention: affect, activation and action. In P. J. Lang, R. F. Simons, & M. T. Balaban (Eds.), Attention and orienting: sensory and motivational processes. Hillsdale, NJ: Lawrence Erlbaum.

Lang, P. J., Bradley, M. M., Cuthbert, B. N., & Patrick, C. J. (1993). Emotion and psychopathology: a startle probe analysis. Progress in Experimental, Personality, and Psychopathological Research, 16, 163-199.

Lang, P. J., Greenwald, M. K., Bradley, M. M., & Hamm, A. O. (1993). Looking at pictures: affective, facial, visceral, and behavioral reactions. Psychophysiology, 30, 261-273.

Leonard, C., Voeller, K. K. S., & Kuldau, J. M. (1991). When's a smile a smile? Or how to detect a message by digitizing the signal. Psychological Science, 2, 166-172.

Levenson, R. W., Carstensen, L. L., Friesen, W. V., & Ekman, P. (1991). Emotion, physiology, and expression in old age. Psychology and Aging, 6(1), 28-35.

Levenson, R. W., Ekman, P., & Friesen, W. V. (1990). Voluntary facial action generates emotion-specific autonomic nervous system activity. Psychophysiology, 27(4), 363-384.

Monrad-Krohn, G. H. (1924). On the dissociation of voluntary and emotional innervation in facial paralysis of central origin. Brain, 47, 22-35.

Morris, J. S., Friston, K. J., Buchel, C., Frith, C. D., Young, A. W., Calder, A. J., et al. (1998). A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain, 121(Pt 1), 47-57.

Morris, M., Bradley, M. M., Bowers, D., Lang, P. J., & Heilman, K. M. (1991). Valence specific hypoarousal following right temporal lobectomy [Abstract]. Journal of Clinical and Experimental Neuropsychology, 14, 105.

Nelson, C. A., & Dolgrin, K. G. (1985). The generalized discrimination of facial expressions by seven-month-old infants. Child Development, 56, 58-61.

Oatley, K., & Jenkins, J. M. (1996). Understanding emotions. Cambridge: Blackwell Publishers.

O'Gorman, J. G. (1990). Individual differences in the orienting response: nonresponding in nonclinical samples. Pavlov Journal of Biological Science, 25(3), 104-108; discussion 109-110.

Okun, M. S., Bowers, D., Springer, U., Shapira, N., Malone, D., Rezai, A., Nuttin, B., Heilman, K. M., Morecraft, R., Rasmussen, S., Greenberg, B., Foote, K., & Goodman, W. (2004). What's in a "smile?" Intra-operative observations of contralateral smiles induced by deep brain stimulation. Neurocase, 10(4), 271-279.

Ornitz, E. M., Russell, A. T., Yuan, H., & Liu, M. (1996). Autonomic, electroencephalographic, and myogenic activity accompanying startle and its habituation during mid-childhood. Psychophysiology, 33(5), 507-513.

Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning. Chicago: University of Illinois Press.

O'Toole, A. J., Roark, D. A., & Abdi, H. (2002). Recognizing moving faces: a psychological and neural synthesis. Trends in Cognitive Science, 6(6), 261-266.

Pike, G. E., Kemp, R. I., Towell, N. A., & Phillips, K. C. (1997). Recognizing moving faces: the relative contribution of motion and perspective view information. Visual Cognition, 4(4), 409-438.

Puce, A., & Perrett, D. (2003). Electrophysiology and brain imaging of biological motion. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 358(1431), 435-445.

Puce, A., Syngeniotis, A., Thompson, J. C., Abbott, D. F., Wheaton, K. J., & Castiello, U. (2003). The human temporal lobe integrates facial form and motion: evidence from fMRI and ERP studies. Neuroimage, 19(3), 861-869.

Rinn, W. E. (1984). The neuropsychology of facial expression: a review of the neurological and psychological mechanisms for producing facial expressions. Psychological Bulletin, 95(1), 52-77.

Roberts, R. J., & Weerts, T. C. (1982). Cardiovascular responding during anger and fear imagery. Psychology Report, 50(1), 219-230.

Rosen, J. B., & Davis, M. (1988). Enhancement of the acoustic startle by electrical stimulation of the amygdala. Behavioral Neuroscience, 102(2), 195-202.

Russell, J. A. (1978). Evidence of convergent validity on the dimensions of affect. Journal of Personality and Social Psychology, 36, 1152-1168.

Russell, J. A., & Mehrabian, A. (1977). Evidence for a three-factor theory of emotions. Journal of Research in Personality, 11, 273-294.

Russell, J. A., & Ridgeway, D. (1983). Dimensions underlying children's emotion concepts. Developmental Psychology, 19, 785-804.

Schlosberg, H. (1952). The description of facial expressions in terms of two dimensions. Journal of Experimental Psychology, 44(4), 229-237.

Schwartz, G. E., Weinberger, D. A., & Singer, J. A. (1981). Cardiovascular differentiation of happiness, sadness, anger, and fear following imagery and exercise. Psychosomatic Medicine, 43(4), 343-364.

Singh, S. D., Ellis, C. R., Winton, A. S., Singh, N. N., Leung, J. P., & Oswald, D. P. (1998). Recognition of facial expressions of emotion by children with attention-deficit hyperactivity disorder. Behavior Modification, 22(2), 128-142.

Sorce, J., Emde, R., Campos, J., & Klinnert, M. (1985). Maternal emotional signaling: its effect on the visual cliff behavior of 1-year-olds. Developmental Psychology, 21(1), 195-200.

Spielberger, C. D. (1983). State-Trait Anxiety Inventory. Palo Alto, CA: Mind Garden.

Sprengelmeyer, R., Rausch, M., Eysel, U. T., & Przuntek, H. (1998). Neural structures associated with recognition of facial expressions of basic emotions. Proceedings of the Royal Society of London Series B Biological Sciences, 265(1409), 1927-1931.

Sprengelmeyer, R., Young, A. W., Calder, A. J., Karnat, A., Lange, H., Homberg, V., et al. (1996). Loss of disgust. Perception of faces and emotions in Huntington's disease. Brain, 119(Pt 5), 1647-1665.

59 Sprengelmeyer, R., Young, A. W., Mahn, K., Schroeder, U., Woitalla, D., Buttner, T., et al. (2003). Facial expression recognition in people with medicated and unmedicated Parkinson's disease. Neuropsychologia, 41(8), 1047-1057. Tanaka, K. (1992). Inferotemporal cortex and higher visual functions. Current Opinion in Neurobiology, 2, 502-505. Teunisse, J. P., & de Gelder, B. (2001). Impaired categorical perception of facial expressions in high-functioning adolescents with autism. Neuropsychology, Development, and Cognition. Section C, Child Neuropsychology, 7(1), 1-14. Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale & R. J. W. Mansfield (Eds.), Analysis of Visual Behavior (pp. 549-586). Cambridge: MIT Press. Walker, D. L., & Davis, M. (2002). Quantifying fear potentiated startle using absolute versus proportional increase scoring methods: implications for the neurocircuitry of fear and anxiety. Psychopharmacology(164), 318-328. Wehrle, T., Kaiser, S., Schmidt, S., & Scherer, K. R. (2000). Studying the dynamics of emotional expression using synthesized facial muscle movements. Journal of Personality and Social Psychology, 78(1), 105-119. Wundt, W. (1897). Outlines of psychology (C. H. Judd, Trans.). New York: Gustav E. Stetchert. Young, A. W., Aggleton, J. P., Hellawell, D. J., Johnson, M., Broks, P., & Hanley, J. R. (1995). Face processing impairments after amygdalotomy. Brain, 118 (Pt 1), 15-24. Young, A. W., Rowland, D., Calder, A. J., Etcoff, N. L., Seth, A., & Perrett, D. I. (1997). Facial expression megamix: tests of dimensional and category accounts of emotion recognition. Cognition, 63(3), 271-313. Zeki, S. (1992). The visual image in mind and brain. Scientific American, 267(3), 68-76. Zihl, J., von Cramon, D., & Mai, N. (1983). Selective disturbance of movement vision after bilateral brain damage. Brain, 106 (Pt 2), 313-340.

PAGE 69

BIOGRAPHICAL SKETCH Utaka Springer was born in Menomonie, WI, and received his B.S. in biology from Harvard University. After gaining research experience in cognitive neuroscience at the McKnight Brain Institute in Gainesville, FL, he entered the doctoral program in clinical psychology at the University of Florida, specializing in neuropsychology. 60


DIFFERENCES IN PSYCHOPHYSIOLOGIC REACTIVITY TO STATIC AND
DYNAMIC DISPLAYS OF FACIAL EMOTION

By

UTAKA S. SPRINGER

A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2005

Copyright 2005

by

Utaka S. Springer

ACKNOWLEDGMENTS

This research was supported by R01 MH62539. I am grateful to Dawn Bowers for her patience, availability, and expertise in advising this project. I would like to thank the members of the Cognitive Neuroscience Laboratory for their support throughout this project. I would like to extend special thanks to Shauna Springer, Alexandra Rosas, John McGetrick, Paul Seignourel, Lisa McTeague, and Gregg Selke.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

1 INTRODUCTION
    Perceptual Differences for Static and Dynamic Expressions
        Cognitive Studies
        Neural Systems and the Perception of Movement versus Form
    Dimensional versus Categorical Models of Emotion
        Dimensional Models of Emotion
        Categorical Models of Emotion
    Emotional Responses to Viewing Facial Expressions

2 STATEMENT OF THE PROBLEM
    Specific Aim I
    Specific Aim II

3 METHODS
    Participants
    Materials
        Collection of Facial Stimuli: Video Recording
        Selection of Facial Stimuli
        Digital Formatting of Facial Stimuli
        Dynamic Stimuli
        Final Selection of Stimuli for Psychophysiology Experiment
    Design Overview and Procedures
    Psychophysiologic Measures
        Acoustic Startle Eyeblink Reflex (ASR)
        Skin Conductance Response (SCR)
    Data Reduction of Psychophysiology Measures
    Statistical Analysis

4 RESULTS
    Hypothesis 1: Differences in Reactivity to Dynamic vs. Static Faces
        Startle Eyeblink Response
        Skin Conductance Response (SCR)
        Self-Reported Arousal
    Hypothesis 2: Emotion Modulation of Startle by Expression Categories
    Other Patterns of Emotional Modulation by Viewing Mode
        Skin Conductance Response
        Self-Reported Arousal
        Self-Reported Valence

5 DISCUSSION
    Interpretation and Relationship to Other Findings
    Methodological Issues Regarding Facial Expressions
    Other Considerations of the Present Findings
    Limitations of the Current Study
    Directions for Future Research

APPENDIX
    A STATIC STIMULUS SET
    B DYNAMIC STIMULUS SET

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

Table
3-1 Demographic characteristics of experimental participants
3-2 Mean (SD) recognition rates, valence, and arousal of static and dynamic face stimuli
4-1 Mean (SD) dependent variable scores by Viewing Mode
4-2 Mean (SD) dependent variable scores by Viewing Mode and Expression Category

LIST OF FIGURES

Figure
1-1 Neuroanatomic circuitry of the startle reflex
3-1 Temporal representation of dynamic and static stimuli
4-1 Startle eyeblink T-scores by expression category
4-2 Self-reported arousal by expression category
4-3 Self-reported valence by expression category

Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

DIFFERENCES IN PSYCHOPHYSIOLOGIC REACTIVITY TO STATIC AND
DYNAMIC DISPLAYS OF FACIAL EMOTION

By

Utaka S. Springer

May 2005

Chair: Dawn Bowers
Major Department: Clinical and Health Psychology

Rationale. Recent studies suggest that many neurologic and psychiatric disorders are associated with impairments in accurately interpreting facial expressions. These studies have typically used photographic stimuli, yet cognitive and neurobiological research suggests that the perception of moving (dynamic) expressions differs from the perception of static expressions. Moreover, in day-to-day interactions, humans generally view faces while they move. This study had two aims: (1) to elucidate differences in physiological reactivity [i.e., startle eyeblink reflex and the skin conductance response (SCR)] while viewing static versus dynamic facial expressions, and (2) to examine patterns of reactivity across specific facial expressions. It was hypothesized that viewing dynamic faces would be associated with greater physiological reactivity and that expressions of anger would be associated with potentiated startle eyeblink responses relative to other facial expressions.

Methods. Forty young adults viewed two slideshows consisting entirely of static or dynamic facial expressions. Expressions represented the emotions of anger, fear, happiness, and neutrality. Psychophysiological measures included the startle eyeblink reflex and SCR. Self-reported valence and arousal were also recorded for each stimulus.

Results. Data were analyzed using repeated measures analyses of variance. Participants exhibited larger startle eyeblink responses while viewing dynamic versus static facial expressions. Differences in SCR approached significance (p = .059), such that dynamic faces tended to induce greater responses than static ones. Self-reported arousal did not differ significantly between conditions. Additionally, the startle reflex was significantly greater for angry expressions, and comparably smaller for fearful, neutral, and happy expressions, across both modes of presentation. Self-reported differences in reactivity between types of facial expressions are discussed in the context of the psychophysiology results.

Conclusions. The current study found evidence of greater psychophysiological reactivity in young adults viewing dynamic compared to static facial expressions. Additionally, expressions of anger induced higher startle responses relative to other expressions, including fear. It was concluded that angry expressions, representing personally directed threat, induce a greater motivational propensity to withdraw or escape. These findings highlight an important distinction between initial stimulus processing (i.e., expressions of fear or anger) and motivated behavior.

CHAPTER 1
INTRODUCTION

The ability to successfully interpret facial expressions is a fundamental aspect of normal life. An immense number of configurations across the landscape of the human face are made possible by 44 pairs of muscles anchored upon the curving surfaces of the skull. A broad smile, a wrinkled nose, widened eyes, a wink: all convey emotional content important for social interactions. Darwin (1872) suggested that successful communication through nonverbal means such as facial expressions has promoted survival of the human species. Indeed, experimental research has demonstrated that infants develop an understanding of their mother's facial expressions rapidly and automatically, and that they use these signals to guide their safe behavior (Field, Woodson, Greenberg, & Cohen, 1982; Johnson, Dziurawiec, Ellis, & Morton, 1991; Nelson & Dolgrin, 1985; Sorce, Emde, Campos, & Klinnert, 1985). The accurate decoding of facial signals, then, can play a protective role as well as a communicative one.

A growing body of empirical research suggests that many conditions are associated with impaired recognition of facial expressions. Neurologic and psychiatric conditions within which studies have associated impaired interpretation of facial expressions include autism, Parkinson's disease, Huntington's disease, Alzheimer's disease, schizophrenia, body dysmorphic disorder, attention-deficit/hyperactivity disorder, and social phobia (Buhlmann, McNally, Etcoff, Tuschen-Caffier, & Wilhelm, 2004; Edwards, Jackson, & Pattison, 2002; Gilboa-Schechtman, Foa, & Amir, 1999; Kan, Kawamura, Hasegawa, Mochizuki, & Nakamura, 2002; Singh et al., 1998; Sprengelmeyer et al., 1996; Sprengelmeyer et al., 2003; Teunisse & de Gelder, 2001). These deficits in processing facial expressions appear to exist above and beyond disturbances in basic visual or facial identity processing and may reflect disruption of cortical and subcortical networks for processing nonverbal affect (Bowers, Bauer, & Heilman, 1993). In many cases, impairments in the recognition of specific facial expressions have been discovered. For example, bilateral damage to the amygdala has been associated with the inability to recognize fearful faces (Adolphs, Tranel, Damasio, & Damasio, 1994).

One potential problem with these clinical studies is that they most often use static, typically photographic, faces as stimuli. This may be problematic for two reasons. First, human facial expressions usually consist of complex patterns of movement. They can flicker across the face in a fleeting and subtle manner, develop slowly, or arise with sudden intensity. The use of static stimuli in research and clinical evaluation, then, has poor ecological validity. Second, mounting evidence suggests that there are fundamental cognitive and neural differences between the perception of static and dynamic facial expressions. These differences, which can be subdivided into evidence from cognitive and more biologically based studies, are described in more detail in the following sections.

The preceding highlights the need to incorporate dynamic facial expression stimuli in the re-evaluation of conditions currently associated with facial expression processing deficits, as argued by Kilts and colleagues (2003). This line of research would greatly benefit from the creation of a standardized battery of dynamic expression stimuli. Before a more ecologically valid dynamic battery can be developed, it is necessary to more precisely characterize how normal individuals respond to different types of facial expression stimuli. Although cognitive, behavioral, and neural systems have been examined in comparing responses associated with static and dynamic face perception, no studies to date have compared differences in emotional reactivity using psychophysiologic indices of arousal and valence (i.e., startle reflex, skin conductance response). The two major goals of the present study, then, are as follows: first, to empirically characterize psychophysiologic differences in how people respond to dynamic versus static emotional faces, and second, to determine whether psychophysiologic response patterns differ when individuals view different categories of static and dynamic facial expressions (e.g., anger, fear, or happiness).

The following sections provide the background for the current study in three parts: (1) evidence that suggests cognitive and neurobiological differences in the perception of static versus dynamic expressions, (2) "dimensional" and "categorical" approaches to studying emotion, and (3) emotional responses to viewing facial expressions. Specific hypotheses and predictions are presented in the next chapter.

Perceptual Differences for Static and Dynamic Expressions

Evidence that individuals respond differently to static and dynamic displays of emotion comes from two major domains of research. The first is cognitive research, which here refers to the study of the internal mental processes involved in the perception of emotions in others (i.e., recognition and discrimination), as inferred from overt responses. The second is neurobiological research, which here refers to the physiological and neurological substrates involved during or after emotion perception. The following sections review the literature from these two domains with regard to differences in perception of static and dynamic expressions.

Cognitive Studies

Recent research suggests that facial motion influences several cognitive aspects of face perception. First, facial motion improves recognition of familiar faces, especially in less-than-optimal visual conditions (Burton, Wilson, Cowan, & Bruce, 1999; Lander, Christie, & Bruce, 1999). For example, in conditions such as low lighting or blurriness, the identity of a friend or a famous actor is more easily discerned if the face is moving. It is less clear whether this advantage of movement is also conferred to the recognition of unfamiliar faces (Christie & Bruce, 1998; Pike, Kemp, Towell, & Phillips, 1997). As reviewed by O'Toole et al. (2002), there are two prevailing hypotheses on how facial motion enhances face recognition. According to the first, facial movement provides additional visual information that helps the viewer assemble a three-dimensional mental construct of the face (e.g., Pike et al., 1997). A second view is that certain movement patterns may be unique and characteristic of a particular individual (i.e., "movement signatures"). These unique movement signatures, such as Elvis Presley's lip curl, are thought to supplement the available structural information of the face (e.g., Lander & Bruce, 2004). Either or both hypotheses can account for observations that familiar individuals are more readily recognized from dynamic than static pictures.

One question that naturally arises is whether facial motion also increases recognition and discrimination of discrete types of emotional expressions. Like familiar faces, emotional expressions have been shown to be similar across individuals and even across cultures (Ekman, 1973; Ekman & Friesen, 1976). Leonard and colleagues (1991) found that categorical judgments of "happiness" during the course of a smile occurred at the point of most rapid movement change in the actor's facial configuration. Wehrle and colleagues (2000) reported that recognition of discrete emotions was enhanced through the use of dynamic versus static synthetic facial stimuli. Other research extended these findings by showing that certain speeds of facial expressions are optimal for recognition, depending on the specific expression type (Kamachi et al., 2001). Altogether, these studies suggest that motion does facilitate the recognition of facial expressions.

Some research suggests that the subjectively rated intensity of emotional displays might also be influenced by a motion component. For example, a study by Atkinson and colleagues (2004) suggested that the perceived intensity of emotional displays depends on motion rather than on form. Participants in this study judged actors posing full-body expressions of anger, disgust, fear, happiness, and sadness, both statically and dynamically. Dynamic displays of emotion were judged as more intense than static ones, both in normal lighting and in degraded lighting (i.e., in darkness with points of light attached to the actors' joints and faces). Although this evidence suggests that dynamic expressions of emotion are indeed perceived as more intense than static ones, research on this topic has been sparse.

Neural Systems and the Perception of Movement versus Form

Previous research also suggests that distinct neural systems are involved in the perception of static and dynamic faces. A large body of evidence convincingly supports the existence of two anatomically distinct visual pathways in the cerebral cortex (Ungerleider & Mishkin, 1982). One visual pathway is involved in motion detection (V5), while the other is involved in processing form or shape information (V3, V4, inferotemporal cortex) [for review, see Zeki (1992)]. As one example of evidence that visual form is processed relatively independently, microelectrode recordings in the inferotemporal cortex of monkeys have shown individual neurons that respond preferentially to simple, statically presented shapes (Tanaka, 1992). Preferential single-cell responses to more complex types of statically presented stimuli, such as faces, have also been shown (DeSimone, 1991). An example of evidence for the existence of a specialized "motion" pathway is provided by a fascinating case study describing a patient with a brain lesion later found to be restricted to area V5 [Zihl et al., 1983; as discussed in Eysenck (2000)]. This woman was adequate at locating stationary objects by sight, she had good color discrimination, and her stereoscopic depth perception was normal; however, her perception of motion was severely impaired. The patient perceived visual events as if they were still photographs. People would suddenly appear here or there, and when she poured her tea, the fluid appeared to be frozen, like a glacier.

Humphreys and colleagues (1993) described findings from two brain-impaired patients who displayed different patterns of performance during the perception of static and dynamic facial expressions. One patient was impaired at discriminating facial expressions from still photographs of faces, but performed normally when asked to make judgments of facial expressions depicted by moving dots of light. This patient had suffered a stroke that involved the bilateral occipital lobes and extended anteriorly towards the temporal lobes (i.e., the "form" visual pathway). The second patient was poor at judging emotional expressions from both static and dynamic displays despite being relatively intact on other visual-perceptual tasks of comparable complexity. This patient had two parietal lobe lesions, one in each cerebral hemisphere. Taken together, the different patterns of performance from these two patients suggest dissociable neural pathways for recognition of static and dynamic facial expressions.

Additional work with microelectrode recordings in non-human primates suggests that static and dynamic facial stimuli are processed by visual form and visual motion pathways, respectively, which converge at the superior temporal sulcus (STS) (Puce & Perrett, 2003). A functional imaging study indicates that the STS region serves the same purpose in humans (Puce et al., 2003). In monkeys, individual neurons of the STS region have shown sensitivity to static facial details, such as eye gaze and the shape of the mouth, as well as movement-based facial details, such as different types of facial motion (Puce & Perrett, 2003).

The amalgamation of data from biological studies indicates that static and dynamic components of facial expressions appear to be processed by separable visual streams that eventually converge within the region of the STS. The next section provides a background for two major conceptual models of emotion. This information is then used as a backdrop for the current study.

Dimensional versus Categorical Models of Emotion

Dimensional Models of Emotion

Historically, there have been two major approaches to the study of emotion. In what is often described as a dimensional model, emotions are characterized using chiefly two independent, bipolar dimensions (e.g., Schlosberg, 1952; Wundt, 1897). The first dimension, "valence," has been described in different ways (i.e., pleasant to unpleasant, positive to negative, appetitive to aversive); it generally refers to a range of positive to negative feeling. The second dimension, arousal, represents a continuum ranging from very low (e.g., calm, disinterest, or a lack of enthusiasm) to very high (e.g., extreme alertness, nervousness, or excitement). These two orthogonal scales create a two-dimensional affective space across which emotions and emotional responses might be characterized.

Other dimensional approaches have included an additional scale in order to more fully define the range of emotional judgments. This third scale has been variously identified as "preparation for action," "aggression," "attention-rejection," "dominance," and "potency," and has been helpful for differentiating emotional concepts (Averill, 1975; Bush, 1973; Heilman, 1987, February; Russell & Mehrabian, 1977; Schlosberg, 1952). For instance, fear and anger might be indistinguishable within a two-dimensional affective space; both may be considered negative/unpleasant emotions high in arousal. A third dimension such as dominance or action separates these two emotions in three-dimensional affective space. Briefly, dominance refers to the range from feeling dominant (i.e., having total power, control, and influence) to feeling submissive (i.e., feeling a lack of control or unable to influence a situation). This construct was discovered statistically through factor analytic methods based on the work of Osgood, Suci, and Tannenbaum (1957). Action (preparation for action to non-preparation for action), on the other hand, was proposed by Heilman [1987; from Bowers et al. (1993)]. This construct was based on neuropsychological evidence and processing differences between the anterior portions of the right and left hemispheres (e.g., Morris, Bradley, Bowers, Lang, & Heilman, 1991). Thus, in the present example for differentiating fear and anger, anger is associated with feelings of dominance or preparation for action, whereas fear is associated with feelings of submission (lack of dominance) or a lack of action (i.e., the "freezing" response in rats with a sudden onset of fear). In this way, a third dimension can sometimes help distinguish between emotional judgments that appear similar in two-dimensional affective space. Generally, however, the third dimension has not been a replicable factor across studies or cultures (Russell, 1978; Russell & Ridgeway, 1983). The present study incorporates only the dimensions of valence and arousal.
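
To make the geometry of this argument concrete, the sketch below places fear and anger in two- and three-dimensional affective space. The coordinate values are illustrative assumptions on SAM-style 1-9 scales, not data from this study.

```python
import math

# Hypothetical (valence, arousal, dominance) coordinates on 1-9 scales;
# the numbers are illustrative assumptions, not data from this study.
emotions = {
    "fear":  (3.0, 7.0, 2.0),   # unpleasant, high arousal, low dominance
    "anger": (3.0, 7.0, 7.5),   # unpleasant, high arousal, high dominance
}

def distance(a, b, dims):
    """Euclidean distance between two emotions over the chosen dimensions."""
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in dims))

fear, anger = emotions["fear"], emotions["anger"]
# In valence-arousal space alone, the two emotions coincide...
print(distance(fear, anger, dims=(0, 1)))     # -> 0.0
# ...but adding a third (dominance) axis separates them.
print(distance(fear, anger, dims=(0, 1, 2)))  # -> 5.5
```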

Emotion researchers have measured emotional valence and arousal in several ways, including: (1) overt behaviors (e.g., EMG activity of facial expression muscles such as the corrugator or zygomatic muscles), (2) conscious thoughts or self-reports about one's emotional experience, usually measured by ordinal scales, and (3) central and physiologic arousal and activation, such as electrodermal activity, heart rate, and the magnitude of the startle reflex (Bradley & Lang, 2000). All three components of emotion have been measured reliably in laboratory settings. Among the physiological markers of emotion, the startle eyeblink typically is used as an indicator of the valence of an emotional response (Lang, Bradley, & Cuthbert, 1990). The startle reflex is an automatic withdrawal response to a sudden, intense stimulus, such as a flash of light or a loud burst of noise. More intense eyeblink responses, measured from electrodes over the orbicularis oculi muscles, have been found in association with negative/aversive emotional material relative to neutral material, and less intense responses have been found for positive/appetitive material relative to neutral material. Palm sweat, or the skin conductance response (SCR), is another physiological marker of emotion and typically is used as an indicator of sympathetic arousal (Bradley & Lang, 2000). Higher SCR has been shown to be associated with higher self-reported emotional arousal, relatively independent of valence (e.g., Lang, Greenwald, Bradley, & Hamm, 1993).

Categorical Models of Emotion

A second major approach to the study of emotion posits that emotions are represented by basic, fundamental categories (e.g., Darwin, 1872; Izard, 1994). Support for the discrete emotions view comes from two major lines of evidence: cross-cultural studies and neurobiological findings [although cognitive studies have also been conducted, e.g., Young et al. (1997)]. With regard to the former, Darwin (1872) argued that specific emotional states are evidenced by specific, categorical patterns of facial expressions. He suggested that these expressions contain universal configurations that are displayed by people throughout the world. Ekman and Friesen (1976) developed this idea further and created an atlas describing the precise muscular configurations associated with each of six basic emotional expressions (i.e., surprise, fear, disgust, anger, happiness, and sadness). In a cross-cultural study, Ekman (1972) found that members of a preliterate tribe in the highlands of New Guinea were able to recognize the meaning of these expressions with a high degree of accuracy. Further, photographs of tribal members who had been asked to pose various emotions were shown to college students in the United States. The college students were also able to recognize the meanings of the New Guineans' emotions with a high degree of accuracy.

Additional evidence supporting the "categories of emotion" conceptualization is derived from the neurobiological literature. For instance, electrical stimulation of highly specific regions of the brain has been associated with distinct emotional states. Hess and Brugger [1943; from Oatley & Jenkins (1996)] discovered that angry behavior in cats, dubbed "sham rage" (Cannon, 1931), was elicited by direct stimulation of the hypothalamus. Fearful behavior and autonomic changes have been induced (both in rats and humans) with stimulation of the amygdala, an almond-shaped limbic structure within the anterior temporal lobe. These changes include subjective feelings of fear and anxiety as well as freezing, increased heart rate, and increased levels of stress hormones [for review, see Davis & Whalen (2001)]. Positive feelings have also been elicited with direct stimulation of a specific neural area: Okun and colleagues (2004) described a patient exuding smiles and feelings of euphoria in association with deep brain stimulation of the nucleus accumbens region. These studies of electrical stimulation in highly focal areas of the brain lend credence to the hypothesis that emotions can be categorized into discrete subtypes.

The case for categorical emotions has been further bolstered by evidence that different emotional states are associated with characteristic psychophysiologic responses. Several studies conducted by Ekman, Levenson, and Friesen (Ekman, Levenson, & Friesen, 1983; Levenson, Carstensen, Friesen, & Ekman, 1991; Levenson, Ekman, & Friesen, 1990) involved participants reliving emotional memories and/or receiving coaching to configure their facial muscles to precisely match the configurations associated with Ekman's six major emotions (Ekman & Friesen, 1976). The results of these studies indicated that the response pattern from several indices of autonomic nervous system activity (specifically, heart rate, finger temperature, skin conductance, and somatic activity) could reliably distinguish between positive and negative emotions, and even among the negative emotions of disgust, fear, and anger (Ekman et al., 1983; Levenson et al., 1991; Levenson et al., 1990). Sadness was associated with a distinctive but less reliable pattern. Other researchers have also described characteristic psychophysiologic response patterns associated with discrete emotions (Roberts & Weerts, 1982; Schwartz, Weinberger, & Singer, 1981).

Emotional Responses to Viewing Facial Expressions

Emotion-specific psychophysiologic responses have been elicited in individuals viewing facial displays of different types of emotions. For instance, Balaban and colleagues (1995) presented photographic slides of angry, neutral, and happy facial expressions to 5-month-old infants. During the presentation of each slide, a brief acoustic noise burst was presented to elicit the eyeblink component of the startle reflex. Angry expressions were associated with significantly stronger startle responses than happy expressions, suggesting that, at least in babies, positive and negative facial expressions can emotionally modulate the startle reflex. This phenomenon was explored in a recent study using an adult sample, but with the addition of fearful expressions as a category (Bowers et al., 2002). Thirty-six young adults viewed static images of faces displaying angry, fearful, happy, and neutral expressions. Acoustic startle probes elicited the eyeblink reflex during the presentation of each emotional face. Similar to Balaban's (1995) study, angry faces were associated with significantly stronger startle reflexes than other types of expressions. Startle eyeblinks during the presentation of neutral, happy, and fearful expressions did not significantly differ in this study.

The observation that fear expressions failed to prime or enhance startle reactivity seems counterintuitive for two reasons (Bowers et al., 2002). First, many studies have indicated that the amygdala plays a role in danger detection and processing fearful material. Stimulation of the amygdala induces auras of fear (Gloor, Olivier, Quesney, Andermann, & Horowitz, 1982), while bilateral removal or damage of the amygdala is characterized by behavioral placidity and blunted fear for threatening material (Adolphs et al., 1994; Kluver & Bucy, 1937). A few studies have even suggested that the amygdala is particularly important for identification of fearful facial expressions (Adolphs et al., 1994; J. S. Morris et al., 1998). A second reason why the null effect of facial fear on startle probes seems counterintuitive is derived from the amygdala's role in the startle reflex. Davis and colleagues mapped the neural circuitry of the startle reflex using an animal model [see Figure 1-1; for a review, see Davis (1992)]. Their work has shown that, through direct neural projections, the amygdala serves to amplify the startle circuitry in the brainstem under conditions of fear and aversion. In light of this research, the finding that fearful faces exerted no significant modulation effects on the startle circuitry (Bowers et al., 2002) does appear counterintuitive, at least from an initial standpoint.

Figure 1-1. Neuroanatomic circuitry of the startle reflex (adapted from Lang et al., 1997). [Schematic: stimulus input reaches the lateral and central nuclei of the amygdala via sensory cortex and thalamus; the amygdala projects to the lateral hypothalamus (autonomic responses: heart rate, blood pressure), the dorsal and ventral central gray (fight/flight), and the nucleus reticularis pontis caudalis (potentiated startle).]


The authors, however, provided a plausible explanation for this result (Bowers et al., 2002). They underscored the importance of the amygdala's role in priming the subcortical startle circuitry during threat-motivated behavior. Angry faces represent personally directed threat and, as demonstrated by the relatively robust startle responses they found, induce a motivational propensity to withdraw or escape from that threat. Fearful faces, on the other hand, reflect potential threat to the actor, rather than to the perceiver. It is perhaps unsurprising in this light that fearful faces exerted significantly less potentiation of the startle reflex. The "preparation for action" dimension (Heilman, 1987) might account for this difference between responses to fearful and angry faces; perhaps the perception of fear in another's face involves less propensity or motivation to act than personally directed threat. Regardless of the interpretation, these findings suggest that different types of emotional facial expressions are associated with unique patterns of reactivity as measured by the startle reflex (also referred to as "emotional modulation of the startle reflex"). The question remains as to whether the pattern of startle reflex responses across facial expressions differs when viewing dynamic versus static faces; this has previously been evaluated only for static facial expressions. It seems reasonable to hypothesize that the two patterns of modulation will be similar, as both dynamic and static visual information must travel from their separate pathways to converge on the area of the cortex that enables one to apply meaning (the STS). Across emotions, the question also remains as to whether overall differences in physiologic reactivity exist. These questions are tested empirically in the present study.

CHAPTER 2
STATEMENT OF THE PROBLEM

Historically, the characterization of expression perception impairments in neurologic and psychiatric populations has been largely based on research using static face stimuli. The preceding literature suggests this may be problematic, as fundamental cognitive and neurobiological differences exist in the perception of static and dynamic displays of facial emotion. A long-term goal is to develop a battery of dynamic face stimuli that would enable investigators and clinicians to better evaluate facial expression interpretation in neurologic and psychiatric conditions. Before this battery can be developed, however, an initial step must be taken to characterize differences and similarities in the perception of static and dynamic expressions. To date, no study has used psychophysiological methods to investigate this question.

This study investigates the emotional responses that occur in individuals as a result of perceiving the emotions of others via facial expressions. The two major aims of the present study are to determine empirically, in normal, healthy adults, (1) whether dynamic versus static faces induce greater psychophysiologic reactivity and self-reported arousal and (2) whether reactions to specific types of facial expressions (e.g., anger, fear, happiness) resolve into distinct patterns of emotional modulation based on the mode of presentation (i.e., static, dynamic). To examine these aims, normal individuals were shown a series of statically or dynamically presented facial expressions (fear, anger, happy, neutral) while psychophysiologic measures (skin conductance, startle eyeblink) were simultaneously acquired. Following presentation of each facial stimulus, subjective ratings of valence and arousal were obtained. Thus, the primary dependent variables were: (a) skin conductance as a measure of psychophysiologic arousal; (b) startle eyeblink as a measure of valence; and (c) subjective ratings of valence and arousal.

Specific Aim I

To test the hypothesis that dynamically presented emotional faces will induce greater psychophysiologic reactivity and self-reported arousal than statically presented faces. Based on the reviewed literature, it is hypothesized that the perception of dynamic facial expressions will be associated with greater overall physiological reactivity than the perception of static facial expressions. This hypothesis is based on evidence suggesting that dynamic displays of emotion are judged as more intense, as well as the fact that the perception of motion in facial expressions appears to provide more visual information to the viewer, such as three-dimensional structure or "movement signatures." The following specific predictions are made: (a) the skin conductance response will be significantly larger when subjects view dynamic rather than static faces; (b) overall startle magnitude will be greater when subjects view dynamic versus static faces; and (c) subjective ratings of arousal will be significantly greater for dynamic versus statically presented faces.

Specific Aim II

To test the hypothesis that the pattern of physiologic reactivity (i.e., emotional modulation) to discrete facial emotions (i.e., fear, anger, happiness, neutral) will be similar for both static and dynamically presented facial expressions. Based on preliminary findings from our laboratory, we expected that anger expressions would induce heightened reactivity (as indexed by the startle eyeblink reflex) relative to fear, happiness, or neutral expressions. We hypothesized that this pattern of emotion modulation would be similar for both static and dynamic expressions, since both modes of presentation presumably gain access to the neural systems that underlie interpretation of emotional meaning. The following specific prediction is made: for both static and dynamic modes of presentation, the startle response (as indexed by T-scores) for anger expressions will be significantly larger than those for fear, happy, and neutral expressions, while magnitudes for fear, happy, and neutral expressions will not differ significantly from each other.
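
The thesis does not spell out the T-score computation at this point. A minimal sketch, assuming the conventional within-subject standardization used in startle research (z-scoring each subject's raw blink magnitudes across all probed trials, then rescaling to mean 50 and SD 10), is shown below; the function and variable names are illustrative, and the exact procedure used here may differ in details.

```python
import numpy as np

def startle_t_scores(raw_magnitudes):
    """Convert one subject's raw blink magnitudes (arbitrary A-D units)
    into within-subject T-scores (mean 50, SD 10).

    A sketch of the conventional standardization; handling of
    zero-magnitude or artifact trials is omitted.
    """
    x = np.asarray(raw_magnitudes, dtype=float)
    z = (x - x.mean()) / x.std()  # z-score across the subject's trials
    return 50.0 + 10.0 * z

# Example: T-scores for a handful of hypothetical trials
print(startle_t_scores([120, 95, 140, 80, 105]))
```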














CHAPTER 3
METHODS

Participants

Participants were 51 (27 female, 24 male) healthy, right-handed adults recruited from the University of Florida campus. Exclusion criteria included: (1) a history of significant neurologic trauma or disorder, (2) a history of any psychiatric or mood disorder, (3) a current prescription for mood- or anxiety-altering medication, (4) a history of learning disability, and (5) clinical elevations on the Beck Depression Inventory (BDI; Beck, 1978) or the State-Trait Anxiety Inventory (STAI; Spielberger, 1983). Participants gave written informed consent according to university and federal regulations. All participants who completed the research protocol received $25.

Eleven of the 51 subjects were excluded from the final data analyses: 8 subjects whose psychophysiology data were corrupted by excessive artifact and/or an absence of measurable blink responses, and 3 subjects with clinical elevations on the mood questionnaires [BDI (N=2; scores of 36 and 20); STAI (N=1; State score = 56, Trait score = 61)].

Demographic variables for the remaining 40 participants are given in Table 3-1. As shown, subjects ranged in age from 18 to 43 years (M=22.6, SD=4.3) and had 12 to 20 years of education (M=15.3, SD=1.7). BDI scores ranged from 0 to 9 (M=3.8, SD=2.9), STAI-State scores ranged from 20 to 46 (M=29.2, SD=6.9), and STAI-Trait scores ranged from 21 to 47 (M=31.0, SD=6.9). The racial representation was 52.5% Caucasian, 17.5% African American, 12.5% Hispanic/Latino, 12.5% Asian, 2.5% Native American, and 2.5% Multiracial.

Table 3-1
Demographic characteristics of experimental participants

Measure      Mean (SD)     Range
Age          22.6 (4.3)    18-43
Education    15.3 (1.7)    12-20
GPA          3.48 (0.49)   2.70-3.96
BDI          3.8 (2.9)     0-9
STAI-State   29.2 (6.9)    20-46
STAI-Trait   31.0 (6.9)    21-47

Note. BDI = Beck Depression Inventory; GPA = Grade Point Average; STAI = State-Trait Anxiety Inventory.


Materials

Static and dynamic versions of angry, fearful, happy, and neutral facial expressions from 12 "untrained" actors (6 male, 6 female) were used as stimuli in this study. These emotions were chosen based on previous findings from our laboratory (Bowers et al., 2002). The following sections describe the procedures used for eliciting, recording, and digitally standardizing these stimuli.

Collection of Facial Stimuli: Video Recording

The stimulus set for the present study was originally drawn from 15 University of Florida graduate students (Clinical and Health Psychology) and undergraduates who were asked to pose various facial expressions. These untrained actors ranged in age from 19 to 32 years and represented Caucasian, African American, Hispanic, and Asian ethnicities. All provided informed consent to allow their faces to be used as stimuli in research studies.

The videorecording session took place in the Cognitive Neuroscience Laboratory, where the actor sat comfortably in a chair in front of a continuously recording black-and-white Pulnix videocamera. The camera was connected to a Sony videorecorder and located approximately 2 meters in front of the actor. The visual field of the videocamera was adjusted to include only the face of the actor. A Polaris light meter was used to uniformly balance the incident light upon the actor's left and right sides to within 1 lux of brightness. To minimize differences in head position and angle between captured facial expressions, the actor's head was held in one position by a rigid immobilization cushion (Med-Tec, Inc.) during the entire recording session. Prior to the start of videorecording, the experimenter verified that the actor was comfortable and that the cushion did not obstruct the view of the actor's face.

A standardized format was followed for eliciting the facial expressions. The actor was asked to pose 6 emotional expressions (i.e., anger, disgust, fear, happiness, sadness, and neutral) and to make each expression intense enough that others could easily decipher the intended emotion. For 'neutral', the actor was told to look into the camera lens with a relaxed expression and blink once. Before each expression type was recorded, visual examples from Ekman and Friesen's Pictures of Facial Affect (Ekman & Friesen, 1976) and Bowers and colleagues' Florida Affect Battery (Bowers, Blonder, & Heilman, 1992) were shown to the actor. At least three trials were recorded for each of the six expression types.

Selection of Facial Stimuli

Once all the face stimuli were recorded, three naive raters from the Cognitive Neuroscience Laboratory reviewed all trials of each expression made by the 15 actors. The purpose of this review was to select the most easily identifiable exemplar from each emotion category (anger, disgust, fear, happiness, sadness, neutral) that was free of artifact (blinking, head movement) and most closely matched the stimuli from the Ekman series (Ekman & Friesen, 1976) and the Florida Affect Battery (Bowers et al., 1992). Selection was based on consensus by the three raters. The expressions from 3 actors (2 female, 1 male) were discarded due to movement artifact, occurrence of eyeblinks, and lack of consensus regarding at least half of the intended expression types. This resulted in 72 selected expressions (6 expressions x 12 actors) stored in videotape format.

Digital Formatting of Facial Stimuli

Each of the videotaped facial expressions was digitally formatted and standardized. Dynamic versions were created first. Each previously selected expression (the best exemplar from each emotion category) was digitally captured onto a PC using a FlashBus MV Pro framegrabber (Integral Technologies) and VideoSavant 4.0 (IO Industries) software. The resulting digital "movie clips" (videosegments) consisted of a 5.0-second sequence of 150 digitized images or frames (30 frames per second). Each segment began with the actor's face in a neutral pose that then moved to peak expression. The temporal sequence of each stimulus was standardized such that the first visible movement of the face (the start of each expression) occurred at 1.5 seconds and the peak intensity was visible and unchanging for at least 3.0 seconds at the end of the videosegment. To standardize the point of the observer's gaze at the onset of each stimulus, 30 frames (1 s) of white crosshairs over a black background were inserted before the first frame of the videosegment, such that the crosshairs marked the point of intersection over each actor's nose. In total, each final, processed videosegment consisted of 180 frames (6.0 seconds). All videosegments were stored in greyscale (256 levels) at a resolution of 640 x 480 pixels and exported to a digital MPEG (Moving Picture Experts Group) movie file to comprise the dynamic set of face stimuli.

Unmoving, or static, correlates of these stimuli were then created by using the frame representing the peak intensity of each facial expression. "Peak intensity" was defined as the last visible frame in the dynamic expression sequence. This frame was duplicated to create a sequence of 150 identical frames (5.0 seconds). As with the dynamic stimuli, 1.0 second of crosshairs was inserted into the sequence prior to the first frame. The digital specifications of this stimulus set were identical to those of the dynamic stimulus set. Figure 3-1 graphically compares the content and timing of both versions of these stimuli.

Dynamic Stimuli
Image:    Crosshairs (0-1.0 s; frames 0-30) | Neutral Expression (1.0-2.5 s; frames 30-75) | Moving Expression (2.5-3.0 s; frames 75-90) | Peak Expression (3.0-6.0 s; frames 90-180)

Static Stimuli
Image:    Crosshairs (0-1.0 s; frames 0-30) | Peak Expression (1.0-6.0 s; frames 30-180)

Figure 3-1. Temporal representation of dynamic and static stimuli by time (s) and frame number. Each stimulus frame rate is 30 frames / s.
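
As an illustration of this timing scheme, a minimal sketch that assembles the 180-frame sequences from a captured clip, assuming the frame boundaries shown in Figure 3-1; the crosshair_frame helper is a hypothetical placeholder, not part of the study's software:

```python
FPS = 30  # all stimuli play at 30 frames per second

def crosshair_frame():
    # Hypothetical placeholder for a 640x480 frame of white crosshairs
    # over a black background; a real implementation would render pixels.
    return "CROSSHAIRS"

def dynamic_sequence(clip_frames):
    """1.0 s of crosshairs + the 5.0 s captured clip (neutral -> moving
    -> peak) = 180 frames total."""
    assert len(clip_frames) == 150  # 5.0 s at 30 fps
    return [crosshair_frame()] * (1 * FPS) + list(clip_frames)

def static_sequence(clip_frames):
    """1.0 s of crosshairs + the clip's final (peak) frame held for 5.0 s."""
    peak = clip_frames[-1]  # last visible frame = peak intensity
    return [crosshair_frame()] * (1 * FPS) + [peak] * (5 * FPS)

clip = [f"frame{i}" for i in range(150)]
assert len(dynamic_sequence(clip)) == 180 == len(static_sequence(clip))
```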


After dynamic and static digital versions of the facial stimuli were created, an independent group of 21 naive individuals rated each face according to emotion category, valence, and arousal. Table 3-2 provides the overall mean ratings for each emotion category by viewing mode (static or dynamic). Ratings by individual actor are given in Appendixes A (static) and B (dynamic).

Table 3-2
Mean (SD) recognition rates, valence, and arousal of static and dynamic face stimuli

Measure    Anger        Disgust      Fear        Happiness   Neutral     Sadness
Dynamic Faces (n = 12)
% Correct  78.2 (16.7)  79.0 (17.5)  94.4 (6.5)  99.6 (1.4)  92.0 (4.2)  93.5 (10.0)
Valence    3.34 (.40)   3.58 (.43)   4.12 (.29)  7.23 (.39)  4.68 (.65)  3.51 (.52)
Arousal    5.28 (.38)   5.19 (.56)   6.00 (.47)  6.00 (.51)  3.63 (.50)  4.55 (.64)
Static Faces (n = 12)
% Correct  68.2 (21.3)  77.4 (16.6)  95.2 (5.0)  99.2 (1.9)  89.3 (8.1)  91.3 (11.0)
Valence    3.04 (.39)   3.39 (.55)   3.60 (.41)  7.18 (.52)  4.95 (.41)  3.45 (.40)
Arousal    5.13 (.61)   5.31 (.64)   5.96 (.53)  5.84 (.56)  3.26 (.39)  4.48 (.56)


Final Selection of Stimuli for Psychophysiology Experiment

The emotional categories of anger, fear, happiness, and neutral were selected for the present study based on previous results from our laboratory (Bowers et al., 2002). Thus, the final set of stimuli used in the present study consisted of static and dynamic versions of 12 actors' (6 female, 6 male) facial expressions representing these four emotion categories. The total number of facial stimuli was 96 (i.e., 48 dynamic, 48 static).

Design Overview and Procedures

Each subject participated in two experimental conditions, one involving dynamic

face stimuli and the other involving static face stimuli. During both conditions,

psychophysiologic data (i.e., skin conductance, startle eyeblink responses) were collected

along with the participant's ratings of each face stimulus according to valence

(unpleasantness to pleasantness) and arousal. There was a 5-minute rest interval between

the two conditions. Half the participants viewed the dynamic faces first, whereas the









remaining viewed the static faces first. The order of these conditions was randomized but

counterbalanced across subjects.

Testing took place within the Cognitive Neuroscience Lab of the McKnight Brain

Institute at the University of Florida. Informed consent was obtained according to

University and Federal regulations. Prior to beginning the experiment, the participant

completed several questionnaires including a demographic form, the BDI, the STAI, and

a payment form. The skin from both hands and areas under each eye were cleaned and

dried thoroughly. A pair of 3 mm Ag/AgCl sensory electrodes was filled with a

conducting gel (Medical Associates, Inc., Stock # TD-40) and attached adjacently over

the bottom arc of each orbicularis oculi muscle via lightly adhesive electrode collars.

Two 12 mm Ag/AgCl sensory electrodes were filled with conducting gel (K-Y Brand

Jelly, McNeil-PPC, Inc.) and were attached adjacently via electrode collars on the thenar

and hypothenar surfaces of each palm.

Throughout testing, the participant sat in a reclining chair in a dimly lit sound-

attenuated 12' x 12' room with copper-mediated electric shielding. An initial period was

used to calibrate the palmar electrodes and to familiarize the participant with the startle

probes. The lights were dimmed, and twelve 95-dB white noise bursts were presented to

the subject via stereo Telephonics (TD-591c) headphones. The noise bursts were

presented at a rate of about once per 30 seconds.

After the initial calibration period, the participant was given instructions about the

experimental protocol. They were told they would see different emotional faces, one face

per trial, and were asked to carefully watch each face and ignore the brief noises that

would be heard over the headphones. During each trial, the dynamic or static face stimuli









were presented on a 21" PC monitor, positioned 1 meter directly in front of the

participant. Each face stimulus was shown for six seconds on the monitor. While

viewing the face stimulus, the participant heard a white noise burst (95 db, 50 ms) that

was delivered via headphones. The white noise startle probes were randomly presented

at 4200 ms, 5000 ms, or 5800 ms after the onset of the face stimulus.

At the end of each trial, the participant was asked to rate each face stimulus along

the dimensions of valence and arousal. The ratings took place approximately six seconds

following the offset of the face stimulus, when a Self-Assessment Manikin SAM;

Bradley & Lang, 1994) was shown on the computer monitor. Valence ratings ranged

from extremely positive, pleasant, or good (9) to extremely negative, unpleasant, or bad

(1). Arousal ratings ranged from extremely excited, nervous, or active (9) to extremely

calm, disinterested, or unenthusiastic (1). The participant reported their valence and

arousal ratings out loud, and their responses were recorded by an experimenter in the next

room, listening via a baby monitor. A new trial began 6 to 8 seconds after the ratings

were made.

Each experimental condition (i.e., dynamic, static) consisted of 48 trials that were

divided into 6 blocks of 8 trials each. A different actor represented each trial within a

given block. Half were males, and half females. One male actor and one female actor

represented each of four emotions (neutral, happiness, anger, fear) to total the 8 trials per

block. To reduce habituation of the startle reflex over the course of the experiment, 8

trials representing male and female versions of each expression category did not contain a

startle probe. These trials were spread evenly throughout each slideshow.
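
For concreteness, the block structure just described can be sketched in a few lines of Python. This is a hypothetical illustration, not the original presentation software; the function and variable names are ours, and the rule for spreading the probe-free trials evenly is an assumption.

    import random

    EMOTIONS = ("neutral", "happiness", "anger", "fear")

    def build_block(male_actors, female_actors):
        # One 8-trial block: one male and one female actor portray each of
        # the four emotions, a different actor on every trial. The caller
        # decides which 4 male and 4 female actors to supply for the block.
        trials = [{"actor": actor, "emotion": emotion}
                  for emotion in EMOTIONS
                  for actor in (male_actors.pop(), female_actors.pop())]
        random.shuffle(trials)
        return trials

    def mark_probe_free(trials_48, n_probe_free=8):
        # Withhold the startle probe on 8 of the 48 trials, spaced evenly
        # across the slideshow (the even-spacing rule is an assumption;
        # balancing those trials across emotion and gender is omitted).
        spacing = len(trials_48) // n_probe_free        # = 6
        for i, trial in enumerate(trials_48):
            trial["probe"] = (i % spacing != spacing // 2)
        return trials_48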









Following administration of both slideshows, the experimenter removed all

electrodes from the participant, who was then debriefed on the purpose of the experiment,

thanked, and released.

Psychophysiologic Measures

Acoustic Startle Eyeblink Reflex (ASR)

Startle eye blinks were measured via EMG activity from the orbicularis oculi

muscle beneath each eye. This measure was used as a dependent measure because of its

sensitivity to valence, with larger startle eyeblinks associated with negative/aversive

emotional states and smaller eyeblinks associated with positive emotional states (Lang,

Bradley, & Cuthbert, 1990). The raw EMG signal was amplified and frequencies below

90 Hz and above 1000 Hz were filtered using a Coulbourn bioamplifier. Amplification

of acoustic startle was set at 30000 with post-experimental multiplication to equate gain

factors (Bradley et al., 1990). The raw signal was then rectified and integrated using a

Coulbourn Contour Following Integrator with a time constant of 10 ms. Digital sampling

began at 20 Hz 3 s prior to stimulus onset. The sampling rate increased to 1000 Hz 50 ms

prior to the onset of the startle probe and continued at this rate for 250 ms after probe

onset. Sampling then resumed at 20 Hz until 2 s after stimulus offset. The startle data

were reduced off-line using custom software which evaluates trials for unstable baseline

and which scores each trial for amplitude in arbitrary A-D units and onset latency in

milliseconds. The program yields measures of startle response magnitude in arbitrary A-

D units that express responses during positive, neutral, and negative materials on the

same scale.
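
The filtering and integration described above were performed in hardware; as a software analogue, the scoring step might be sketched as follows. This is a minimal sketch assuming a digitized EMG trace in microvolts; the first-order exponential filter stands in for the Coulbourn contour-following integrator, and the 20-ms lower edge of the scoring window is our assumption rather than a parameter taken from the text.

    import numpy as np

    def score_startle_trial(emg_uv, probe_idx, fs=1000, tau_s=0.010):
        # Rectify the EMG, smooth it with a first-order filter whose time
        # constant matches the 10-ms analog integrator, then score the
        # blink as peak minus baseline (in microvolts).
        rectified = np.abs(emg_uv - np.mean(emg_uv))
        alpha = 1.0 - np.exp(-1.0 / (fs * tau_s))
        smoothed = np.empty_like(rectified)
        level = 0.0
        for i, sample in enumerate(rectified):
            level += alpha * (sample - level)
            smoothed[i] = level
        # baseline: the 50 ms of fast-rate sampling before probe onset
        baseline = smoothed[probe_idx - int(0.05 * fs):probe_idx].mean()
        # search the 250 ms of fast-rate sampling after probe onset,
        # skipping the first 20 ms (assumed minimum blink onset latency)
        window = smoothed[probe_idx + int(0.02 * fs):probe_idx + int(0.25 * fs)]
        return window.max() - baseline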









Skin Conductance Response (SCR)

The SCR was measured from electrodes attached to the palms with adhesive

collars. This measure was used because it is an index of sympathetic arousal, correlates

with self-reports of emotional arousal, and is relatively independent of valence (Bradley

& Lang, 2000). Skin conductance data were sampled at 20 Hz using two Coulbourn

Isolated Skin Conductance couplers in DC mode (this is a constant voltage system in

which .5 V is passed across the palm during recording). The SC couplers output to a

Scientific Solutions A/D board integrated within a custom PC. The skin conductance

response (SCR) was defined as the difference between the peak conductance during the

6-second viewing period and the mean conductance achieved during the last pre-stimulus

second, derived independently for each hand. SCR was represented in microsiemens

(µS) units.
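
Expressed as code, this scoring rule reads as follows. The sketch assumes the 20-Hz sampling rate stated above; the names and the None-for-artifactual-record convention are illustrative, and the two-hand averaging anticipates the data-reduction rule described below.

    import numpy as np

    FS = 20  # skin conductance sampling rate (Hz)

    def scr_one_hand(trace_us, onset_idx, view_s=6.0):
        # Peak conductance during the 6-s viewing period minus the mean of
        # the last pre-stimulus second, for one palm (microsiemens).
        baseline = trace_us[onset_idx - FS:onset_idx].mean()
        peak = trace_us[onset_idx:onset_idx + int(view_s * FS)].max()
        return peak - baseline

    def scr_composite(left_us, right_us, onset_idx):
        # Average the two palms, falling back to a single hand when the
        # other record is excessively artifactual (passed as None here).
        hands = [t for t in (left_us, right_us) if t is not None]
        return float(np.mean([scr_one_hand(t, onset_idx) for t in hands]))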

Data Reduction of Psychophysiology Measures

After the collection of the psychophysiologic data, the eyeblink and skin

conductance data were reduced using custom condensing software. For startle eyeblink,

data from trials without startle probes and the initial two practice trials were excluded

from the statistical analyses. Trials whose physiological data contained obvious

artifacts were also removed. For the remaining data, the peak magnitude of the EMG

activity elicited by each startle probe within the recorded time window was measured

(peak minus baseline, in microvolts). Peak startle magnitudes were averaged for both eyes into

a composite score when data from both eyes were available. If data from only one eye

were available, those data were used in place of the composite score. Peak startle

magnitudes were additionally translated into T-scores, which were then averaged for each

expression type (i.e., happy, neutral, fear, and anger) and mode of presentation (i.e., static









and dynamic stimuli). For both startle magnitudes and T-scores, the four expression

categories were represented by no fewer than four trials each.
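
The trial-by-trial T-score conversion can be made concrete with a short pandas sketch. Column names are assumptions; each participant's own distribution of trial magnitudes supplies the mean and standard deviation, so T-scores have mean 50 and SD 10 within each participant.

    import pandas as pd

    def t_score_cell_means(df):
        # df: one row per startle trial, with columns 'subject',
        # 'expression', 'mode', and 'magnitude'. Convert magnitudes to
        # T-scores within each participant, then average within each
        # expression-by-viewing-mode cell.
        z = df.groupby("subject")["magnitude"].transform(
            lambda m: (m - m.mean()) / m.std(ddof=1))
        df = df.assign(T=50 + 10 * z)
        return (df.groupby(["subject", "mode", "expression"])["T"]
                  .mean().reset_index())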

For the skin conductance response, condensing consisted of measuring the peak

magnitude of change relative to baseline activity at the start of each trial. Again, trials

whose physiological data contained obvious artifacts were removed. The magnitude

of change for each trial was measured and averaged for both hands, unless the data from

one of the palms contained excessive artifact. In these cases, the data from the other hand

was used in place of the composite data.

Statistical Analysis

Separate analyses were conducted for startle-blink, skin conductance, SAM

Valence ratings, and SAM Arousal ratings. Repeated-measures ANOVA with adjusted

degrees of freedom (Greenhouse-Geisser correction) were used, with a between-subjects

factor of Order of Slideshows (dynamic, then static; static, then dynamic) and within-

subjects factors of Expression Category (anger, fear, neutral, happiness) and Viewing

Mode (dynamic, static). Analyses corresponding to a priori predictions were conducted

using planned contrasts (Helmert) between the four expression categories. A significance

level of alpha = 0.05 was used for all analyses.
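
As an illustration of how such an analysis might be run today, the following sketch uses the pingouin package (not the software used in the study). It shows a one-way repeated-measures ANOVA on Expression Category with the Greenhouse-Geisser correction, followed by the first planned contrast (anger versus the mean of the remaining categories) run as a paired comparison; the file and column names are assumptions.

    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format file: one row per subject x viewing mode x
    # expression, with the mean startle T-score in column 'T'
    cell_means = pd.read_csv("startle_t_scores.csv")

    # One-way repeated-measures ANOVA on Expression Category; with
    # correction=True pingouin also reports the Greenhouse-Geisser
    # corrected p-value when sphericity is violated
    aov = pg.rm_anova(data=cell_means, dv="T", within="expression",
                      subject="subject", correction=True)
    print(aov)

    # First planned (Helmert-style) contrast: anger vs. the mean of the
    # remaining three categories, as a paired test on subject-level means
    wide = cell_means.pivot_table(index="subject", columns="expression",
                                  values="T")
    others = wide[["fear", "neutral", "happy"]].mean(axis=1)
    print(pg.ttest(wide["anger"], others, paired=True))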

We predicted three changes corresponding to indices of greater psychophysiologic

reactivity to dynamic expressions versus static expressions. These indices were: (1)

greater magnitude of the startle reflex, (2) greater percent change in skin conductance,

and (3) higher self-reported SAM arousal ratings during perception of dynamic facial

expressions. Additionally, we predicted that the pattern of T-scores for both dynamic and

static facial expressions would show emotional modulation to the four different

categories of facial expressions incorporated in the experimental study. That is, startle








reflexes measured during the perception of anger would show larger startle reflexes than

those measured during the perception of fear, neutral, and happy expressions. Startle

responses measured during the perception of facial expressions represented by the latter

three emotional categories would not be appreciably different. Finally, this pattern of

modulation would not be significantly different between static and dynamic viewing

modes.














CHAPTER 4
RESULTS

The primary dependent measures were the acoustic startle eyeblink response

(ASR), the skin conductance response (SCR), and self-reported arousal from the Self-

Assessment Manikin (arousal). As previously described, the ASR was quantified by

measuring the change in EMG activity (mV) following the onset of the startle probes

(i.e., peak minus baseline EMG). The SCR was calculated by the difference between the

peak conductance in microsiemens (µS) during the 6-second period of stimulus

presentation and the mean level of conductance during a 1-s period immediately prior to

the onset of the stimulus. Finally, self-reported arousal encompassed a range of 1 to 9,

with higher numbers representing greater arousal levels. Table 4-1 gives the means and

standard deviations of each of these dependent variables by viewing mode.

Table 4-1
Mean (SD) dependent variable scores by Viewing Mode
Viewing Mode
Measure Dynamic Static
ASR-M .0062 (.0054) .0048 (.0043)
SCR .314 (.514) .172 (.275)
Arousal 5.27 (.535) 5.30 (.628)
Note. ASR = Acoustic Startle Eyeblink Response, Magnitude
(mV); SCR = Skin Conductance Response (µS); Arousal = Self-
Assessment Manikin, Arousal Scale (1-9).


Hypothesis 1: Differences in Reactivity to Dynamic vs. Static Faces

An initial set of analyses addressed the first hypothesis and investigated whether

psychophysiologic reactivity (startle eyeblink, SCR) and/or self-reported arousal differed









during the perception of dynamic versus static emotional faces. The results of the

analyses for each of the three dependent variables are described below.

Startle Eyeblink Response

The first analysis examined whether the overall size of the startle eyeblink

responses differed when participants viewed dynamic versus static facial expressions. A

repeated-measures ANOVA was conducted using Viewing Mode (dynamic, static) as the

within-subjects factor and Order of Presentation (dynamic then static, or static then

dynamic) as the between-subjects factor.1 The results of the ANOVA revealed a

significant main effect for Viewing Mode [F(1, 38) = 9.003, p = .005, ηp² = .192, power =

.832]. As shown in Table 4-1, startle eyeblink responses were greater during dynamic

versus static expressions. The main effect of Order of Presentations was not significant

[F(1, 38) = 1.175, p = .285, ηp² = .030, power = .185], nor was the Viewing Mode X

Order of Presentations interaction [F(1, 38) = .895, p = .350, ηp² = .023, power = .152].

Skin Conductance Response (SCR)

The second analysis examined whether the perception of the different types of

facial emotions induced different SCR patterns between modes of viewing. A repeated-

measures ANOVA was conducted with Viewing Mode (dynamic, static) and Expression

Category (anger, fear, happy, neutral) as the within-subjects factors and Order of

Presentations (dynamic first, static first) as the between-subjects factor. The results of

the ANOVA revealed that the main effect of Viewing Mode approached significance

[F(1, 35) = 3.796, p = .059, ηp² = .098, power = .474], such that SCR tended to be larger


1 Expression Category was not used as a factor in this analysis. Examination of emotional effects on startle
eyeblink is traditionally done using T-scores as the dependent variable rather than raw magnitude. Raw
startle magnitude is more appropriate as an index of reactivity, whereas T-scores are more appropriate for
examining patterns of emotional effects on startle.









when participants viewed dynamic versus static faces (see Table 4-1). No other main

effects or interactions reached trend level or significance {Order of Presentations [F(1,

35) = .511, p = .479, ηp² = .014, power = .107]; Viewing Mode X Order of Presentations

[F(1, 35) = 1.559, p = .220, ηp² = .043, power = .229]; Expression Category X Order of

Presentations [F(1.832, 64.114) = .942, p = .423, ηp² = .026, power = .251]}.

Self-Reported Arousal

The third analysis examined whether self-reported arousal ratings differed when

participants viewed static versus dynamic facial expressions. Again, a 2 (Viewing Mode)

X 4 (Expression Category) X 2 (Order of Presentations) repeated-measures ANOVA was

conducted. The results of this ANOVA revealed that no main effects or interactions were

significant: {Viewing Mode [F(1, 38) = .072, p = .789, ηp² = .002, power = .058]; Order

of Presentations [F(1, 38) = 2.912, p = .096, ηp² = .071, power = .384]; Viewing Mode X

Order of Presentations [F(1, 38) = .479, p = .493, ηp² = .012, power = .104]}. The effects

related to Expression Category will be described in the next section (page 39).

In summary, viewing dynamic facial stimuli was associated with significantly

larger acoustic startle eyeblink responses and a tendency (trend, p = .059) for larger skin

conductance responses than viewing static stimuli. There was no significant difference in

self-reported arousal ratings between dynamic and static stimuli.

Hypothesis 2: Emotion Modulation of Startle by Expression Categories

An additional set of analyses addressed the second hypothesis, investigating

emotional modulation of the startle eyeblink response via distinct categories of facial

expressions (i.e., anger, fear, neutral, and happy). Because of individual variability in the

size of basic eyeblink responses, the startle magnitude scores for each individual were

converted to T-scores on a trial-by-trial basis. These T-scores were analyzed in a









repeated-measures 4 (Expression Category: anger, fear, neutral, happy) X 2 (Viewing

Mode: dynamic, static) X 2 (Order of Presentations: dynamic then static, or static then

dynamic) ANOVA. Table 4-2 gives the means and standard deviations of these scores and

other dependent variables by Viewing Mode and Expression Category.

Table 4-2
Mean (SD) Dependent variable scores by Viewing Mode and Expression Category
Expression Category
Viewing Mode Measure Anger Fear Neutral Happy
Dynamic
ASR-M .0053 (.0052) .0049 (.0046) .0045 (.0037) .0046 (.0042)
ASR-T 51.06 (3.43) 49.47 (3.01) 49.77 (3.47) 49.68 (3.14)
SCR .1751 (.2890) .1489 (.2420) .1825 (.3271) .1768 (.3402)
Valence 3.10 (.89) 3.44 (.99) 4.76 (.54) 7.19 (.84)
Arousal 5.39 (1.05) 6.43 (.98) 3.41 (1.33) 5.96 (.88)
Static
ASR-M .0066 (.0061) .0059 (.0051) .0061 (.0051) .0061 (.0057)
ASR-T 50.99 (3.79) 49.43 (3.92) 49.57 (4.30) 49.88 (3.21)
SCR .3247 (.5200) .3583 (.8070) .2515 (.3911) .3212 (.5457)
Valence 3.17(1.00) 3.65 (1.21) 4.69 (.84) 7.17 (.84)
Arousal 5.51(1.05) 6.35 (.95) 3.29 (1.36) 5.95 (.87)
Note. ASR=Acoustic Startle Response (mV); SCR=Skin Conductance Response (µS); Valence=Self-
Assessment Manikin, Valence Scale (1-9); Arousal=Self-Assessment Manikin, Arousal Scale (1-9).


The main effect of Expression Category approached but did not reach

significance [F(3, 117) = 2.208, p = .091, ηp² = .055, power = .548]. No other main

effects or interactions reached trend level or significance {Viewing Mode: [F(1, 114) =

.228, p = .636, ηp² = .006, power = .075]; Order of Presentations: [F(1, 38) = .336, p =

.566, ηp² = .009, power = .087]; Viewing Mode X Order of Presentations: [F(1, 38) =

.457, p = .503, ηp² = .012, power = .101]; Expression Category X Order of Presentations:

[F(3, 114) = .596, p = .619, ηp² = .015, power = .171]; Expression Category X Viewing

Mode: [F(3, 114) = .037, p = .991, ηp² = .001, power = .056]; Expression Category X

Viewing Mode X Order of Presentations: [F(3, 114) = .728, p = .537, ηp² = .019, power =

.201]}.

The a priori predictions regarding the expected pattern of emotion modulation of

the startle response [i.e., Anger > (Fear = Neutrality = Happiness)] warranted a series of

planned comparisons (Helmert) on Expression Category. Results of these comparisons

revealed that: (a) startle responses were significantly larger for faces of anger than for the

other expressions [F(1, 38) = 8.217, p = .007, ηp² = .178, power = .798]; and (b) there were no

significant differences among the remaining emotional expressions [i.e., Fear = (Neutral

and Happy): F(1, 38) = .208, p = .651, ηp² = .005, power = .073; and Neutral = Happy:

F(1, 38) = .022, p = .882, ηp² = .001, power = .052]. Figure 4-1 graphically displays the

pattern of startle reactivity with T-scores among the four expression categories.



Figure 4-1. Startle eyeblink T-scores by expression category [A > (F = N = H)].



To summarize these results, viewing angry facial expressions was associated with

significantly larger acoustic startle eyeblink responses than other types of facial









expressions (i.e., fear, neutral, and happy), and the responses between the other

expressions were not significantly different from each other. Additionally, the non-

significant Expression Category X Viewing Mode interaction (p = .991) indicates that this

response pattern was similar for both static and dynamic facial expressions.

Other Patterns of Emotional Modulation by Viewing Mode

The response pattern among different expression categories was also examined for

SCR and self-reported arousal, as well as self-reported valence. Like arousal, valence

was measured on a scale of 1-9, with higher numbers representing greater positive

feeling, pleasure, or appetitiveness, and lower numbers representing greater negative

feeling, displeasure, or aversiveness. For all three variables, the analyses were separate

3-way (4 X 2 X 2) repeated-measures analyses of variance, using the within-subjects factors

of Expression Category (anger, fear, neutral, happy) and Viewing Mode (dynamic, static),

and the between-subjects factor of Order of Presentations (dynamic then static, or static

then dynamic). For SCR and arousal, these analyses were reported in a preceding

section ("Differences in Reactivity to Dynamic vs. Static Faces", page 39). As such, for

these two measures, this section provides only the results for the Expression Category

main effect and associated interactions. The results for self-reported valence, however,

are provided in full, as this is a novel analysis. Table 4-2 gives the means and standard

deviations for each dependent variable by Viewing Mode and Expression Category.

Skin Conductance Response

For the skin conductance response, the main effect of Expression Category and all

associated interactions were non-significant: Expression Category [F(1.832, 64.114) =

.306, p = .821, ηp² = .009, power = .107]; Expression Category X Viewing Mode

[F(2.012, 70.431) = 1.345, p = .264, ηp² = .037, power = .349];2 Expression Category X

Viewing Mode X Order of Presentations [F(2.012, 70.431) = 1.341, p = .265, ηp² = .037,

power = .348]. Thus, differences in SCR for discrete expressions were not found.

Self-Reported Arousal

For self-reported arousal, the main effect of Expression Category was significant

[F(2.144, 81.487) = 81.836, p < .001, ηp² = .683, power = 1.000],3 indicating that arousal

ratings were different while viewing different types of facial expressions. The results of

Bonferroni-corrected post-hoc comparisons are provided graphically in Figure 4-2.

Fearful faces (M = 6.39, SD = .91) were associated with significantly higher (p < .001)

intensity ratings than angry faces (M = 5.45, SD = .96), which were in turn rated as higher

(p < .001) in intensity than neutral faces (M = 3.35, SD = 1.22). Differences in intensity

ratings associated with happy faces (M = 5.96, SD = .76) approached significance when

compared to fearful (p = .082) and angry (p = .082) faces, and happy faces were rated as

significantly higher (p < .001) than neutral faces.















2 Mauchly's test was significant for both Expression Category [W = .273, χ²(5) = 43.762, p < .001] and the
Expression Category X Viewing Mode interaction [W = .451, χ²(5) = 26.850, p < .001]; thus, degrees of
freedom for these effects were adjusted using the Greenhouse-Geisser method.

3 Mauchly's test was significant for both Expression Category [W = .507, χ²(5) = 24.965, p < .001] and
the Expression Category X Viewing Mode interaction [W = .403, χ²(5) = 33.335, p < .001]; thus, degrees of
freedom for these effects were adjusted using the Greenhouse-Geisser method.














Figure 4-2. Self-reported arousal by expression category (F > A > N; H > N).



Self-Reported Valence

The final analysis explored the pattern of self-reported valence ratings for each of

the facial emotion subtypes and viewing modes. The results of the ANOVA revealed a

significant effect for Expression Category [F(2.153, 81.822) = 205.467, p < .001, ηp² =

.844, power = 1.00],4 indicating that valence ratings differed according to expression

categories. Bonferroni-corrected pairwise comparisons among the four facial expression

types indicated that faces of happiness (M = 7.18, SD = .78) were rated as significantly

more pleasant than neutral faces (M = 4.73, SD = .59; p < .001), fearful faces (M = 3.54,

SD = 1.03; p < .001), and angry faces (M = 3.14, SD = .84; p < .001). Additionally,

neutral faces were rated as significantly more pleasant than fearful (p < .001) or angry

4 A significant Mauchly's test for Expression Category [W = .566, χ²(5) = 20.903, p = .001] and the
Expression Category X Viewing Mode interaction [W = .504, χ²(5) = 25.146, p < .001] necessitated the use
of Greenhouse-Geisser adjusted degrees of freedom.









faces (p < .001). Finally, angry faces were rated as significantly more negative than

fearful faces (p = .014). This pattern is displayed graphically in Figure 4-3. No other

main effects or interactions reached trend level or significance {Viewing Mode: [F(1, 38)

= .646, p = .426, ηp² = .017, power = .123]; Order of Presentations: [F(1, 38) = 1.375, p =

.248, ηp² = .035, power = .208]; Viewing Mode X Order of Presentations: [F(1, 38) =

.047, p = .829, ηp² = .001, power = .055]; Expression Category X Order of Presentations:

[F(2.153, 81.822) = 1.037, p = .363, ηp² = .027, power = .233]; Expression Category X

Viewing Mode: [F(2.015, 76.554) = .933, p = .398, ηp² = .024, power = .207]; Expression

Category X Viewing Mode X Order of Presentations: [F(2.015, 76.554) = 1.435, p =

.244, ηp² = .036, power = .300]}.



Figure 4-3. Self-reported valence by expression category (H > N > F > A).



To summarize, these analyses revealed that the skin conductance responses for

different categories of emotional expressions were not different from one another. By









contrast, both self-report measures did distinguish among the emotion categories. With

regard to self-reported arousal, fearful faces were rated highest, significantly more so than

anger faces, which were in turn rated as significantly more arousing than neutral ones.

The difference in arousal between happy and angry faces, as well as between happy and

fearful ones, approached but did not reach significance (p = .082, p = .082, respectively).

Happy faces were, however, rated as significantly more arousing than neutral ones. For

self-reported valence, each expression category was rated as significantly different from

the others, such that angry expressions were rated as most negative, followed by fearful,

neutral, and then happy faces.














CHAPTER 5
DISCUSSION

The present study examined two hypotheses. The first was that the perception of

dynamic versus static faces would be associated with greater physiological reactivity in

normal, healthy adults. Specifically, it was predicted that individuals would exhibit

significantly stronger startle eyeblink reflexes, higher skin conductance responses (SCR),

and higher levels of self-reported arousal when viewing dynamic expressions. These

predictions were based on evidence from previous research suggesting that movement in

facial expression (a) provides more visual information to the viewer, (b) increases

recognition of and discrimination between specific types of emotion, and (c) may make

the facial expressions appear more intense.

The second hypothesis was that the perception of different categories of facial

expressions would be associated with a distinct pattern of emotional modulation, and that

this pattern would not be different for static and dynamic faces. In other words, it was

hypothesized that the level of physiological reactivity while viewing facial expressions

would be dependent on the type of expression viewed, regardless of the viewing mode.

Specifically, the prediction was that normal adults would have increased startle eyeblink

responses during the perception of angry faces, and that responses to fearful, happy, and

neutral faces would not be significantly different from each other. Moreover, it was

predicted that this pattern of responses would be similar for both static and dynamically

presented expressions.









The first hypothesis was partially supported by the data. The participants tested in

the study sample exhibited larger startle eyeblink responses while viewing dynamic

versus static facial expressions. Differences in SCR while viewing the expressions in

these two modes reached trend level (p = .059), such that dynamic faces tended to induce

greater responses than static ones. Self-reported arousal was not significantly different

during either condition. Thus, the perception of moving emotional faces versus still

pictures was associated with greater startle eyeblink responses, but not SCR or self-

reported arousal.

The second hypothesis was supported by the data. That is, the startle reflex was

significantly greater for angry faces, and comparably smaller for the fearful, neutral, and

happy faces. The data suggested that this pattern of emotional modulation was similar

during both static and dynamic viewing conditions.

In summary, participants demonstrated greater psychophysiological reactivity to

dynamic faces compared to static faces, as indexed by the startle eyeblink response, and

partially by SCR. Participants did not, on the other hand, report differences in perceived

arousal. Emotional modulation of the startle response was similar for both modes of

presentation, such that angry faces induced greater negative or aversive responses in the

participants than did happy, neutral, and fearful faces.

Interpretation and Relationship to Other Findings

The finding that viewing faces of anger increased the strength of the

startle eyeblink reflex is consistent with other results. Currently, only two other studies

are known that measured the magnitude of this reflex during the perception of different

facial emotions. Balaban and colleagues (1995) conducted one of these studies. They

measured the size of startle eyeblinks in 5-month-old infants viewing photographic slides









of happy, neutral, and angry faces. Their results were similar to those of the current

study, in that the magnitudes of startle eyeblinks measured in the infants were augmented

while they viewed faces of anger versus faces of happiness.

The other study was conducted by Bowers and colleagues (2002). Similar to the

present experiment, participants were young adults (n = 36) who viewed facial

expressions of anger, fear, neutral, and happiness. These stimuli, however, consisted

solely of static photographs and were sampled from standardized batteries (The Florida

Affect Battery: Bowers et al., 1992; Pictures of Facial Affect: Ekman & Friesen, 1976).

The startle eyeblink responses that were measured while viewing these pictures reflected

the pattern produced in the present study: greater negative or aversive responses were

associated with angry faces than happy, neutral, or fearful faces. Responses to happy,

neutral, and fearful faces yielded relatively reduced responses and were not different

from each other in magnitude.

The augmentation of the startle reflex during the perception of angry versus other

emotional faces appears to be a robust phenomenon for several reasons. First, the

findings from the present study were similar to those of previous studies (Balaban et al.,

1995; Bowers et al., 2002). Second, this pattern of emotional modulation was replicated

using a different set of facial stimuli. Thus, the previous findings were not restricted to

faces from specific sources. Third, the counterbalanced design of the present study

minimized the possibility that the anger effect was due to some imbalance of factors other

than the portrayed facial emotion. Within each experimental condition, for example, both

genders and each actor were equally represented within each expression category.









Although the current results were made more convincing for these reasons, the

implication that the startle circuitry is not enhanced in response to fearful expressions

was unexpected for several reasons. The amygdala has been widely implicated in states

of fear and processing fearful material (Davis & Whalen, 2001; Gloor et al., 1982;

Klüver & Bucy, 1937), and some investigators have even directly implicated the amygdala

for processing facial expressions of fear (Adolphs et al., 1994; Morris et al., 1998).

Additionally, the work of Davis (1992) uncovered direct

neural projections from the amygdala to the subcortical startle circuitry, which have been

shown to prime the startle mechanism under fearful or aversive conditions.

This body of research suggests that fearful expressions might potentiate the startle

reflex relative to other types of facial expressions; however, Bowers and colleagues'

study (2002) as well as the present one provide evidence that suggests otherwise. No

other studies are known to have directly compared startle reactivity patterns among

fearful and other emotionally expressive faces. Additionally, imaging and lesion studies

have shown mixed results with respect to the role of the amygdala and the processing of

fearful and angry faces per se. For instance, Sprengelmeyer and colleagues (1998)

showed no fMRI activation in the amygdala in response to fearful relative to neutral

faces. Young and colleagues (1995) attributed a deficit in recognition of fear faces to

bilateral amygdala damage, but much of the surrounding neural tissue was also

damaged.

So, how might one account for the relatively reduced startle response to fearful

faces? Bowers and colleagues (2002) provided a plausible explanation, implicating the

role of motivated behavior [i.e., Heilman's (1987) preparation for action scale] on these










results. As previously described, angry faces represent personally directed threat, and, as

might be reflected by the increased startle found in the present study, induce a

motivational propensity to withdraw or escape from that threat. Fearful expressions, on

the other hand, reflect some potential environmental threat to the actor, rather than to the

observer. Thus, this would reflect less motivational propensity for action and might

account for the reduced startle response.

Methodological Issues Regarding Facial Expressions

Before discussing the implications of this study more broadly, several

methodological issues must be addressed that potentially influenced the present findings.

The first relates to the reliability of the facial expression stimuli in depicting specific

emotions. Anger was the emotion that elicited the greatest startle response overall. At

the same time, anger facial expressions were least accurately categorized by a group of

independent naive raters (see Table 3-2, page 23).5 Whether there is a connection

between these findings is unclear, particularly since the emotions that the raters viewed

included a wider variety of options (i.e., 6 expressions) than those viewed by the

participants in this study (4 expressions). For example, the raters were shown facial

expressions of anger, disgust, fear, sadness, happiness, and neutrality. Their accuracy in



5 A 2 (Viewing Mode: dynamic, static) X 6 (Expression Category: anger, disgust, fear, happy, neutral, sad)
repeated-measures ANOVA was conducted with an alpha criterion of .05 and Bonferroni-corrected post-
hoc comparisons. Results showed that dynamic expressions (M = .89, SD = .06) were rated significantly
more accurately than static expressions (M = .87, SD = .07). Additionally, Expression Category was found
to be significant, but not the interaction between Expression Category and Viewing Mode. Specific to the
emotion categories used in the present study, it was also found that happy faces were rated significantly
more accurately (M = .99, SD = .01) than neutral (M = .91, SD = .06) and angry (M = .73, SD = .18) faces,
while fear (M = .95, SD = .05) recognition rates were not significantly different from the other three.
Comparing each emotion across viewing modes, only anger was rated significantly more accurately in
dynamic (M = .78, SD = .17), versus static (M = .68, SD = .21), modes, while the advantage for dynamic
neutral faces (M = .92, SD = .04) over static versions (M = .89, SD = .08) only approached significance (p
= .055). A static version of an emotional expression was never rated significantly more accurately than its
dynamic version.









identifying anger expressions was around 78%. When errors were made, raters typically

(i.e., 95% of the time) judged the anger expressions as being 'disgust.' In the

psychophysiology study, the participants were shown only four expressions. It seems

unlikely that participants in the psychophysiology study easily confused anger, fear,

happiness, and neutral expressions. However, this could be addressed by examining the

ratings that were made by the psychophysiology participants.

Nevertheless, elevated startle reactivity for facial expressions that were less reliably

categorized might occur for several reasons: (1) differences in attention between

relatively poorly and accurately recognized stimuli, and (2) differences in perceived

arousal levels between relatively poorly and accurately recognized stimuli.

Regarding attention, previous researchers have suggested that visual attention

inhibits the startle response when the modalities between the startle probe and stimulus of

interest are mismatched (e.g., Ornitz, 1996). In this case, acoustic startle probes were

used in conjunction with visual stimuli. Since anger was associated with the strongest

startle reflexes, it was not likely inhibited. Thus, attention was probably not a mediating

factor between lower recognition rates and this effect. Regarding arousal, researchers

such as Cuthbert and colleagues (1996) indicated that potentiation of the startle response

occurs with more arousing stimuli when the stimuli are of negative valence. Anger was

rated as the most negatively valenced, significantly more so than fear. Happy was rated

most positively. Since anger was rated most negatively, the only way arousal could have

been an influencing factor on anger's potentiated startle response was if anger was more

arousing than the other two expressions. However, it was rated as significantly less

arousing than both fear and happiness.









To conclude, it seems unlikely that ambiguity of the angry facial expressions

significantly contributed to the current findings. However, examination of ratings made

by the participants themselves might better clarify the extent to which anger expressions

were less accurately categorized than other expressions.

Other Considerations of the Present Findings

One explanation for the failure to uncover more robust findings using the skin

conductance response might relate to several of this measure's attributes. First, although

SCR can be a useful measure of emotional arousal, it does have considerable limitations.

It is estimated that 15-20% of healthy individuals are skin conductance "non-

responders"; some individuals do not exhibit a discernable difference in this response to

different categories of emotional stimuli, while others exhibit very weak responses

overall (Bradley & Lang, 2000; O'Gorman, 1990). Moreover, the sensitive electrical

signal that records SCR is vulnerable to the effects of idle, unconscious motor activity,

especially considering that the electrodes are positioned on the palms of both hands.

Because participants sat alone during these recordings, it was impossible to determine

whether they followed instructions for keeping still. These factors suggest that the

potential for interference during the course of the two slideshows in the present study is

not insignificant and may have contributed to the null SCR findings, both for reactivity

across emotions, and response differences between emotions. As such, this study

uncovered only weak evidence that dynamic faces induced stronger skin conductance

responses than static faces; only a trend towards significance was found. A significant

difference might have emerged with more statistical power (observed power = .47).

Numerically, dynamic faces were associated with larger mean SCR values (.314) than









static faces (.172). Therefore, a larger sample size would be required to increase our

confidence about the actual relationship of SCR for these two visual modes.
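
For illustration, a prospective calculation of the sample size needed to detect such a difference could look like the following sketch using statsmodels; the effect size d = 0.35 is an assumed value for a paired dynamic-versus-static comparison, not an estimate from these data.

    from statsmodels.stats.power import TTestPower

    # Participants needed for 80% power in a paired (dynamic vs. static)
    # comparison at alpha = .05, under the assumed effect size d = 0.35
    n_required = TTestPower().solve_power(effect_size=0.35,
                                          alpha=0.05, power=0.80)
    print(round(n_required))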

Several explanations might account for the finding that self-reported arousal ratings

were not significantly different for static and dynamic expressions (contrary to one

prediction in the current study). First, it is possible that the similar ratings between these

two experimental conditions were the product of an insensitive scale. The choice

between integers ranging only from 1 to 9 may have prohibited sufficient response

variability for drawing out differences between viewing modes. Also, it is possible that

subjects rated each expression's arousal relative to the expressions immediately

preceding the currently rated one, and failed to consider their responses relative to the

previously seen presentation. If this were the case, the viewed expressions might have

been rated in arousal relative to the average score within the current presentation, and the

means of arousal ratings from both presentations would be virtually identical.

Limitations of the Current Study

It is important to acknowledge some of the limitations of the current study. One

limitation is that the specific interactions between participant and actor variables of

gender, race, and attractiveness were not analyzed. It is likely that the emotional

response of a given individual to a specific face is dependent upon these factors due to

the individual's unique experiences. In addition, the meaning of some facial expressions

may be ambiguous when they are viewed in isolation. Depending on the current

situation, for instance, a smile might communicate any number of messages, including

contentment, peer acceptance, sexual arousal, relief, mischief, or even contempt (i.e., a

smirk). Taken together, averaging potentially variable responses due to highly specific

interactions with non-expressive facial features or varying interpretations of facial stimuli









between subjects might have contributed to certain non-significant effects, or created

artificial ones.

Secondly, the facial expression stimuli may have been perceived as somewhat

artificial, which potentially reduced the overall emotional responses (and consequently,

physiologic reactivity). The actors were recorded using black and white video with their

heads surrounded on either side with an immobilization cushion. In addition, despite

some pre-training, the actors deliberately posed the facial expressions; these were not the

product of authentic emotion per se. Previous research has determined that emotion-

driven and posed expressions are mediated by different neural mechanisms and muscular

response patterns (Monrad-Krohn, 1924; for review, see Rinn, 1984). It is likely that

some expressions might have been correctly recognized by emotional category, but not

necessarily believed as having an emotional origin. The extent to which emotional

reactivity is associated with perceiving genuine versus posed emotion in others remains

the topic of future research. It is reasonable to conjecture, however, that based on

everyday social interactions, the perception of posed expressions would be less

emotionally arousing and would therefore be associated with reduced emotional

reactivity.

Directions for Future Research

There are many avenues for future research. Further investigation into the effects

of and interactions between factors of gender, race, age, and attractiveness and the

characterization of these effects on patterns of startle modulation is warranted. The

effects of these factors would need to be determined to clearly dissociate expression-

specific differences in emotion perception. One of these factors may be implicated as

being more influential than facial expressivity in physiological reactivity to facial stimuli.









Further, the use of more genuine, spontaneous expressions as stimuli might be considered

to potentially introduce greater levels of emotional arousal into studies of social emotion

perception. Greater ecological validity might be gained via this route, as well as the use

of color stimuli and actors given free range of head movement.

Also, patterns of startle modulation to facial expressions should be further studied

over different age groups to help uncover the development of emotional recognition and

social cognition over the lifespan. This is especially warranted given the difference in the

findings of the present study (i.e., increased startle response to anger with attenuated

responses being associated with fearful, happy, and neutral expressions) in relation to

those of Balaban's (1995) study, which tested infants. In her study, fearful expressions

yielded significantly greater responses than neutral ones, and neutral ones yielded greater

responses than happy ones. Continued research with different age groups would help

disentangle the ontogenetic responsiveness to the meaning conveyed through facial

emotional signals and help determine the reliability of these few studies that have been

conducted.

To conclude, despite the limitations of the current study, dynamic and static faces

appear to elicit qualitatively different psychophysiological responses; specifically, that

dynamic faces induce greater startle eyeblink responses than static versions. This

observation has not been previously described in the literature. Because they appear to

differentially influence motivational systems, these two types of stimuli cannot be treated

interchangeably. The results of this and future studies will likely play an important role

in the development of a dynamic facial affect battery and aid in efforts to delineate more








precisely the social cognition impairments in certain neurologic, psychiatric, and brain-

injured populations.

















APPENDIX A
STATIC STIMULUS SET

Actor Measure Anger Disgust Fear Happiness Neutrality Sadness
Male 1 % Recognition 47.6 66.7 90.5 100 100 85.7
Valence M (SD) 3.0 (1.6) 3.9 (1.5) 4.4 (1.7) 7.4 (1.3) 5.2 (0.9) 3.7 (1.2)
Arousal M (SD) 5.5 (1.4) 5.4 (1.7) 5.8 (1.5) 6.3 (1.3) 3.5 (1.8) 4.6 (1.4)
Male 2 % Recognition 90.5 85.7 100 100 90.5 95.2
Valence 2.8 (1.3) 3.5(1.1) 4.5 (1.8) 7.2 (1.4) 4.2 (1.2) 2.6 (1.3)
Arousal 5.1(2.1) 5.0 (1.9) 6.8 (1.7) 5.7 (1.7) 3.7 (1.8) 5.0 (1.8)
Male 3 % Recognition 71.4 81 90.5 100 100
Valence 3.2(1.5) 3.2(0.9) 4.2(1.7) 7.3(0.9) 4.7(1.4)
Arousal 5.2 (2.0) 5.1(1.7) 6.3 (1.5) 5.9 (1.6) 3.7 (1.9)
Male 4 % Recognition 57.1 71.4 85.7 100 95.2 95.2
Valence 3.3 (1.5) 3.6 (1.7) 3.8(1.6) 7.0 (2.2) 4.6 (0.7) 3.1 (1.2)
Arousal 5.4 (1.4) 5.5 (1.2) 6.0 (0.8) 6.7 (1.4) 3.3 (1.7) 4.5 (1.6)
Male 5 % Recognition 57.1 76.2 95.2 95.2 81 100
Valence 4.1 (1.2) 4.6 (0.8) 4.5 (1.2) 7.0 (1.3) 5.4 (1.2) 4.1 (1.3)
Arousal 4.6(1.3) 4.0 (1.6) 5.5 (1.4) 5.4 (1.7) 3.9 (1.8) 4.1(1.7)
Male 6 % Recognition 71.4 61.9 95.2 100 90.5 76.2
Valence 3.1 (1.6) 3.0 (1.8) 3.6 (1.6) 6.9 (1.3) 4.6 (1.7) 3.5 (1.5)
Arousal 5.1(1.6) 6.1(2.3) 5.8(1.6) 5.3 (2.1) 3.9 (2.2) 5.3 (1.3)
Female 1 % Recognition 61.9 76.2 100 100 85.7 90.5
Valence 3.3 (1.5) 3.3 (1.6) 3.9(1.7) 6.7(1.1) 4.5 (1.3) 2.9 (1.2)
Arousal 6.1(1.8) 5.3 (2.0) 6.3 (1.9) 6.0 (1.3) 3.4 (1.6) 4.7 (1.6)
Female 2 % Recognition 28.6 100 100 100 76.2 66.7
Valence 3.2 (1.6) 3.5 (1.0) 3.9(1.5) 7.1 (1.1) 3.3 (1.3) 4.4 (1.0)
Arousal 5.5 (1.5) 4.7 (1.4) 5.9(1.9) 5.8 (1.7) 2.8 (1.6) 2.9 (1.6)
Female 3 % Recognition 95.2 71.4 95.2 100 90.5 100
Valence 3.9 (1.0) 3.6 (2.0) 4.0(1.1) 7.7 (1.3) 4.4 (1.0) 3.4 (1.5)
Arousal 5.0(1.5) 6.0 (1.7) 5.5 (1.2) 6.4 (1.5) 3.5 (1.8) 4.8 (1.5)
Female 4 % Recognition 95.2 100 100 100 95.2 100
Valence 2.9 (1.4) 3.7(1.3) 4.3(1.1) 7.1 (0.9) 4.8 (0.5) 3.7 (1.4)
Arousal 5.6 (2.3) 5.5 (1.9) 5.9(1.7) 5.9 (2.0) 3.3 (1.7) 4.6 (1.2)
Female 5 % Recognition 90.5 95.2 100 95.2 90.5 95.2
Valence 3.8 (1.7) 3.3 (1.0) 4.1 (1.8) 7.2(1.1) 4.5(1.1) 3.7 (1.2)
Arousal 5.5 (1.7) 5.2 (1.3) 7.0 (1.5) 5.7 (1.5) 4.1(1.9) 4.8 (1.5)
Female 6 % Recognition 52.4 42.9 90.5 100 76.2 100
Valence 3.5 (1.6) 3.9 (1.4) 4.1 (1.1) 8.1 (0.9) 5.9(1.1) 3.7(1.1)
Arousal 5.0(1.5) 4.9 (1.8) 5.6 (1.8) 7.1(2.0) 4.8 (2.4) 5.1(1.6)
Note. The sad expression for male 3 was not created because of videotape corruption.


















APPENDIX B
DYNAMIC STIMULUS SET


Actor Measure Anger Disgust Fear Happiness Neutrality Sadness
Male 1 % Recognition 76.2 52.4 90.5 100 95.2 95.2
Valence M (SD) 2.9 (1.2) 4.1 (1.3) 4.5 (1.9) 7.5 (0.9) 5.4 (0.7) 4.1 (1.1)
Arousal M (SD) 5.7 (2.0) 5.1 (1.5) 6.1 (2.0) 6.1 (1.4) 3.2 (2.1) 3.4 (1.9)
Male 2 % Recognition 95.2 85.7 100 100 95.2 100
Valence 3.2 (1.3) 3.7 (1.1) 3.6 (1.9) 7.0 (1.0) 4.9 (0.7) 3.1 (1.4)
Arousal 4.0 (1.3) 4.6 (2.1) 6.3 (2.1) 5.6 (1.6) 3.1 (1.9) 4.9 (1.6)
Male 3 % Recognition 71.4 85.7 95.2 100 95.2
Valence 2.9 (1.1) 3.1 (0.8) 3.7 (1.6) 6.5 (1.2) 4.7 (0.9)
Arousal 5.3 (1.5) 4.8 (1.9) 6.2 (1.4) 5.4 (1.4) 3.2 (2.0)
Male 4 % Recognition 95.2 85.7 90.5 100 90.5 100
Valence 3.6 (0.9) 3.3 (1.7) 4.0 (1.8) 6.9 (2.1) 5.0 (0.9) 3.3 (1.0)
Arousal 4.5 (1.3) 5.8 (1.5) 5.9 (1.9) 6.4 (1.8) 3.6 (2.6) 4.4 (1.4)
Male 5 % Recognition 71.4 52.4 95.2 100 85.7 100
Valence 3.2 (1.4) 4.1 (0.9) 3.8 (1.6) 6.9 (1.1) 4.9 (0.4) 3.2 (1.3)
Arousal 5.2 (1.3) 4.5 (1.9) 5.8 (1.5) 5.2 (2.0) 3.1 (1.9) 4.7 (1.7)
Male 6 % Recognition 66.7 85.7 100 95.2 95.2 90.5
Valence 3.0 (0.8) 2.9 (1.5) 4.1 (1.2) 6.9 (1.7) 4.8 (0.7) 3.3 (1.5)
Arousal 5.4 (1.8) 5.9 (1.5) 4.6 (2.2) 5.8 (2.1) 2.9 (2.0) 5.1 (2.0)
Female 1 % Recognition 57.1 57.1 100 100 95.2 85.7
Valence 2.7 (1.6) 2.1 (1.1) 3.2 (1.3) 6.9 (1.5) 4.5 (1.3) 3.1 (0.9)
Arousal 5.7 (2.0) 5.8 (2.1) 6.3 (1.6) 5.9 (0.9) 3.3 (2.1) 4.8 (1.3)
Female 2 % Recognition 52.4 100 100 100 85.7 66.7
Valence 2.6 (1.3) 3.6 (0.9) 3.4 (1.5) 7.3 (1.2) 4.3 (0.9) 4.2 (0.9)
Arousal 5.1 (2.0) 4.4 (1.8) 5.8 (1.7) 5.5 (1.6) 2.8 (1.7) 3.5 (2.2)
Female 3 % Recognition 100 81 80.1 100 90.5 100
Valence 3.5 (1.3) 3.7 (2.1) 3.1 (1.1) 7.9 (1.2) 4.9 (0.5) 3.2 (1.0)
Arousal 4.3 (1.8) 6.4 (1.9) 5.6 (1.8) 6.8 (1.9) 3.3 (2.0) 4.6 (1.4)
Female 4 % Recognition 100 100 95.2 100 95.2 100
Valence 2.3 (1.1) 3.4 (2.2) 3.5 (1.3) 7.3 (1.4) 5.1 (0.7) 3.1 (1.0)
Arousal 6.1 (1.9) 5.5 (1.8) 6.1 (1.6) 5.9 (1.8) 3.0 (1.9) 4.9 (1.1)
Female 5 % Recognition 85.7 95.2 100 100 95.2 95.2
Valence 3.4 (1.7) 3.3 (1.0) 3.2 (1.8) 6.9 (1.6) 5.14 3.6 (1.6)
Arousal 5.0 (2.0) 5.4 (1.8) 6.8 (2.0) 4.9 (1.8) 3.2 (2.1) 4.2 (1.4)
Female 6 % Recognition 66.7 66.7 85.7 100 85.7 95.2
Valence 3.2 (1.3) 3.5 (1.3) 3.3 (1.3) 8.3 (0.9) 6.0 (1.0) 3.5 (1.1)
Arousal 5.1 (1.5) 5.6 (1.3) 6.1 (1.7) 6.7 (2.1) 4.3 (2.2) 4.8 (2.1)


Note. The sad expression for male 3 was not created because of videotape corruption.















LIST OF REFERENCES


Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994). Impaired recognition of
emotion in facial expressions following bilateral damage to the human amygdala.
Nature, 372(6507), 669-672.

Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion
perception from dynamic and static body expressions in point-light and full-light
displays. Perception, 33(6), 717-746.

Averill, J. R. (1975). A semantic atlas of emotional concepts. JSAS Catalogue of Selected
Documents in Psychology, 5, 330. (Ms. No. 421).

Balaban, M. T. (1995). Affective influences on startle in five-month-old infants: reactions
to facial expressions of emotion. Child Development, 66(1), 28-36.

Beck, A. T. (1978). Depression inventory. Philadelphia: Center for Cognitive Therapy.

Bowers, D., Bauer, R., & Heilman, K. M. (1993). The Nonverbal Affect Lexicon:
theoretical perspectives from neuropsychological studies of affect perception.
Neuropsychology, 7(4), 433-444.

Bowers, D., Blonder, L. X., & Heilman, K. M. (1992). Florida Affect Battery. University
of Florida.

Bowers, D., Parkinson, B., Gober, T., Bauer, M. C., White, E., & Bongiolatti, S. (2002,
November). Two faces of emotion: patterns of startle modulation depend on facial
expressions and on knowledge of evil. Poster presented at the Society for
Neuroscience, Orlando, FL.

Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: the Self-Assessment Manikin
and the Semantic Differential. Journal of Behavioral Therapy and Experimental
Psychiatry, 25(1), 49-59.

Bradley, M. M., & Lang, P. J. (2000). Measuring emotion: behavior, feeling, and
physiology. In R. D. Lane & L. Nadel (Eds.), Cognitive Neuroscience of Emotion
(pp. 242-276). New York: Oxford University.

Buhlmann, U., McNally, R. J., Etcoff, N. L., Tuschen-Caffier, B., & Wilhelm, S. (2004).
Emotion recognition deficits in body dysmorphic disorder. Journal of Psychiatric
Research, 38(2), 201-206.








Burton, A. M., Wilson, S., Cowan, M., & Bruce, V. (1999). Face recognition in poor-
quality video: evidence from security surveillance. Psychological Science, 10(3),
243-248.

Bush, L. E., II. (1973). Individual differences in multidimensional scaling of adjectives
denoting feelings. Journal of Personality and Social Psychology, 25, 50-57.

Cannon, W. B. (1931). Again the James-Lange and the thalamic theories of emotion.
Psychological Review, 38, 281-295.

Christie, F., & Bruce, V. (1998). The role of dynamic information in the recognition of
unfamiliar faces. Memory and Cognition, 26(4), 780-790.

Cuthbert, B. N., Bradley, M. M., & Lang, P. J. (1996). Probing picture perception:
activation and emotion. Psychophysiology, 33(2), 103-111.

Darwin, C. (1872). The expression of the emotions in man and animals. Chicago:
University of Chicago Press.

Davis, M. (1992). The role of the amygdala in fear-potentiated startle: implications for
animal models of anxiety. Trends in Pharmacological Science, 13(1), 35-41.

Davis, M., & Whalen, P. J. (2001). The amygdala: vigilance and emotion. Mol.
Psychiatry, 6(1), 13-34.

DeSimone, R. (1991). Face-selective cells in the temporal cortex of monkeys. Journal of
Cognitive Neuroscience, 3, 1-8.

Edwards, J., Jackson, H. J., & Pattison, P. E. (2002). Emotion recognition via facial
expression and affective prosody in schizophrenia: a methodological review.
Clinical Psychology Review, 22(6), 789-832.

Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. In
J. Cole (Ed.), Nebraska symposium on motivation, 1971 (pp. 207-283). Lincoln,
NE: University of Nebraska Press.

Ekman, P. (1973). Darwin and facial expression; a century of research in review. New
York: Academic Press.

Ekman, P. (1980). The face of man: expressions of universal emotions in a New Guinea
village. New York: Garland STPM Press.

Ekman, P. (1982). Emotion in the human face (2nd ed.). New York: Cambridge
University Press. Editions de la Maison des Sciences de l'Homme.

Ekman, P., & Davidson, R. J. (1994). The nature of emotion: fundamental questions.
New York: Oxford University Press.









Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto: Consulting
Psychologists Press.

Ekman, P., Levenson, R. W., & Friesen, W. V. (1983). Autonomic nervous system
activity distinguishes among emotions. Science, 221(4616), 1208-1210.

Ekman, P., & Rosenberg, E. L. (1997). What the face reveals: basic and applied studies
of spontaneous expression using the facial action coding system (FACS). New
York: Oxford University Press.

Eysenck, M. W., & Keane, M. (2000). Cognitive Psychology: A Student's Handbook.
Philadelphia: Taylor & Francis.

Field, T. M., Woodson, R., Greenberg, R., & Cohen, D. (1982). Discrimination and
imitation of facial expression by neonates. Science, 218(4568), 179-181.

Gilboa-Schechtman, E., Foa, E. B., & Amir, N. (1999). Attentional biases for facial
expressions in social phobia: the face-in-the-crowd paradigm. Cognition and
Emotion, 13(3), 305-318.

Gloor, P., Olivier, A., Quesney, L. F., Andermann, F., & Horowitz, S. (1982). The role of
the limbic system in experiential phenomena of temporal lobe epilepsy. Annals of
Neurology, 12(2), 129-144.

Hargrave, R., Maddock, R. J., & Stone, V. (2002). Impaired recognition of facial
expressions of emotion in Alzheimer's disease. Journal of Neuropsychiatry and
Clinical Neurosciences, 14(1), 64-71.

Hariri, A. R., Tessitore, A., Mattay, A., Frea, F., & Weinberger, D. (2001). The amygdala
response to emotional stimuli: a comparison of faces and scenes. Neuroimage,
17, 317-323.

Heilman, K. M. (1987, February). Syndromes of facial affect processing. Paper presented
at the International Neuropsychological Society, Washington, DC.

Hess, W. R., & Brugger, M. (1943). Subcortical center of the affective defense reaction.
In K. Akert (Ed.), Biological order and brain organization: selected works of W. R.
Hess (pp. 183-202). Berlin: Springer-Verlag.

Humphreys, G. W., Donnelly, N., & Riddoch, M. J. (1993). Expression is computed
separately from facial identity, and it is computed separately for moving and static
faces: neuropsychological evidence. Neuropsychologia, 31(2), 173-181.

Izard, C. E. (1994). Innate and universal facial expressions: evidence from developmental
and cross-cultural research. Psychological Bulletin, 115(2), 288-299.

Johnson, M. H., Dziurawiec, S., Ellis, H., & Morton, J. (1991). Newborns' preferential
tracking of face-like stimuli and its subsequent decline. Cognition, 40(1-2), 1-19.









Kamachi, M., Bruce, V., Mukaida, S., Gyoba, J., Yoshikawa, S., & Akamatsu, S. (2001).
Dynamic properties influence the perception of facial expressions. Perception,
30(7), 875-887.

Kan, Y., Kawamura, M., Hasegawa, Y., Mochizuki, S., & Nakamura, K. (2002).
Recognition of emotion from facial, prosodic and written verbal stimuli in
Parkinson's disease. Cortex, 38(4), 623-630.

Kilts, C. D., Egan, G., Gideon, D. A., Ely, T. D., & Hoffman, J. M. (2003). Dissociable
neural pathways are involved in the recognition of emotion in static and dynamic
facial expressions. Neuroimage, 18(1), 156-168.

Klüver, H., & Bucy, P. C. (1937). "Psychic blindness" and other symptoms following
bilateral temporal lobectomy. American Journal of Physiology, 119, 352-353.

Kohler, C. G., Bilker, W., Hagendoorn, M., Gur, R. E., & Gur, R. C. (2000). Emotion
recognition deficit in schizophrenia: association with symptomatology and
cognition. Biological Psychiatry, 48(2), 127-136.

Lander, K., & Bruce, V. (2004). Repetition priming from moving faces. Memory and
Cognition, 32(4), 640-647.

Lander, K., Christie, F., & Bruce, V. (1999). The role of movement in the recognition of
famous faces. Memory and Cognition, 27(6), 974-985.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1990). Emotion, attention, and the startle
reflex. Psychological Review, 97(3), 377-395.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1997). Motivated attention: affect,
activation and action. In P. J. Lang, R. F. Simons & M. T. Balaban (Eds.), Attention
and orienting: sensory and motivational processes. Hillsdale, NJ: Lawrence
Erlbaum.

Lang, P. J., Bradley, M. M., Cuthbert, B. N., & Patrick, C. J. (1993). Emotion and
psychopathology: a startle probe analysis. Progress in Experimental Personality and
Psychopathology Research, 16, 163-199.

Lang, P. J., Greenwald, M. K., Bradley, M. M., & Hamm, A. O. (1993). Looking at
pictures: affective, facial, visceral, and behavioral reactions. Psychophysiology, 30,
261-273.

Leonard, C., Voeller, K. K. S., & Kuldau, J. M. (1991). When's a smile a smile? Or how
to detect a message by digitizing the signal. Psychological Science, 2, 166-172.

Levenson, R. W., Carstensen, L. L., Friesen, W. V., & Ekman, P. (1991). Emotion,
physiology, and expression in old age. Psychology and Aging, 6(1), 28-35.









Levenson, R. W., Ekman, P., & Friesen, W. V. (1990). Voluntary facial action generates
emotion-specific autonomic nervous system activity. Psychophysiology, 27(4),
363-384.

Monrad-Krohn, G. H. (1924). On the dissociation of voluntary and emotional innervation
in facial paralysis of central origin. Brain, 47, 22-35.

Morris, J. S., Friston, K. J., Buchel, C., Frith, C. D., Young, A. W., Calder, A. J., et al.
(1998). A neuromodulatory role for the human amygdala in processing emotional
facial expressions. Brain, 121 (Pt 1), 47-57.

Morris, M., Bradley, M. M., Bowers, D., Lang, P. J., & Heilman, K. M. (1991). Valence
specific hypoarousal following right temporal lobectomy [Abstract]. Journal of
Clinical and Experimental Neuropsychology, 14, 105.

Nelson, C. A., & Dolgrin, K. G. (1985). The generalized discrimination of facial
expressions by seven-month-old infants. Child Development, 56, 58-61.

Oatley, K., & Jenkins, J. M. (1996). Understanding emotions. Cambridge: Blackwell
Publishers.

O'Gorman, J. G. (1990). Individual differences in the orienting response: nonresponding
in nonclinical samples. Pavlov Journal of Biological Science, 25(3), 104-108;
discussion 109-110.

Okun, M. S., Bowers, D., Springer, U., Shapira, N., Malone, D., Rezai, A., Nuttin, B.,
Heilman, K. M., Morecraft, R., Rasmussen, S., Greenberg, B., Foote, K.,
Goodman, W. (2004). What's in a "smile?" Intra-operative observations of
contralateral smiles induced by deep brain stimulation. Neurocase, 10(4), 271-279.

Ornitz, E. M., Russell, A. T., Yuan, H., & Liu, M. (1996). Autonomic,
electroencephalographic, and myogenic activity accompanying startle and its
habituation during mid-childhood. Psychophysiology, 33(5), 507-513.

Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning.
Urbana: University of Illinois Press.

O'Toole, A. J., Roark, D. A., & Abdi, H. (2002). Recognizing moving faces: a
psychological and neural synthesis. Trends in Cognitive Sciences, 6(6), 261-266.

Pike, G. E., Kemp, R. I., Towell, N. A., & Phillips, K. C. (1997). Recognizing moving
faces: the relative contribution of motion and perspective view information. Visual
Cognition, 4(4), 409-438.

Puce, A., & Perrett, D. (2003). Electrophysiology and brain imaging of biological
motion. Philosophical Transactions of the Royal Society of London. Series B,
Biological Sciences, 358(1431), 435-445.

Puce, A., Syngeniotis, A., Thompson, J. C., Abbott, D. F., Wheaton, K. J., & Castiello,
U. (2003). The human temporal lobe integrates facial form and motion: evidence
from fMRI and ERP studies. Neuroimage, 19(3), 861-869.

Rinn, W. E. (1984). The neuropsychology of facial expression: a review of the
neurological and psychological mechanisms for producing facial expressions.
Psychological Bulletin, 95(1), 52-77.

Roberts, R. J., & Weerts, T. C. (1982). Cardiovascular responding during anger and fear
imagery. Psychological Reports, 50(1), 219-230.

Rosen, J. B., & Davis, M. (1988). Enhancement of the acoustic startle by electrical
stimulation of the amygdala. Behavioral Neuroscience, 102(2), 195-202.

Russell, J. A. (1978). Evidence of convergent validity on the dimensions of affect.
Journal of Personality and Social Psychology, 36, 1152-1168.

Russell, J. A., & Mehrabian, A. (1977). Evidence for a three-factor theory of emotions.
Journal of Research in Personality, 11, 273-294.

Russell, J. A., & Ridgeway, D. (1983). Dimensions underlying children's emotion
concepts. Developmental Psychology, 19, 785-804.

Schlosberg, H. (1952). The description of facial expressions in terms of two dimensions.
Journal of Experimental Psychology, 44(4), 229-237.

Schwartz, G. E., Weinberger, D. A., & Singer, J. A. (1981). Cardiovascular
differentiation of happiness, sadness, anger, and fear following imagery and
exercise. Psychosomatic Medicine, 43(4), 343-364.

Singh, S. D., Ellis, C. R., Winton, A. S., Singh, N. N., Leung, J. P., & Oswald, D. P.
(1998). Recognition of facial expressions of emotion by children with attention-
deficit hyperactivity disorder. Behavior Modification, 22(2), 128-142.

Sorce, J., Emde, R., Campos, J., & Klinnert, M. (1985). Maternal emotional signaling: its
effect on the visual cliff behavior of 1-year-olds. Developmental Psychology, 21(1),
195-200.

Spielberger, C. D. (1983). State-Trait Anxiety Inventory. Palo Alto, CA: Mind Garden.

Sprengelmeyer, R., Rausch, M., Eysel, U. T., & Przuntek, H. (1998). Neural structures
associated with recognition of facial expressions of basic emotions. Proceedings of
the Royal Society of London. Series B, Biological Sciences, 265(1409), 1927-1931.

Sprengelmeyer, R., Young, A. W., Calder, A. J., Karnat, A., Lange, H., Homberg, V., et
al. (1996). Loss of disgust. Perception of faces and emotions in Huntington's
disease. Brain, 119 (Pt 5), 1647-1665.

Sprengelmeyer, R., Young, A. W., Mahn, K., Schroeder, U., Woitalla, D., Buttner, T., et
al. (2003). Facial expression recognition in people with medicated and unmedicated
Parkinson's disease. Neuropsychologia, 41(8), 1047-1057.

Tanaka, K. (1992). Inferotemporal cortex and higher visual functions. Current Opinion in
Neurobiology, 2, 502-505.

Teunisse, J. P., & de Gelder, B. (2001). Impaired categorical perception of facial
expressions in high-functioning adolescents with autism. Neuropsychology,
Development, and Cognition. Section C, Child Neuropsychology, 7(1), 1-14.

Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M.
A. Goodale & R. J. W. Mansfield (Eds.), Analysis of Visual Behavior (pp. 549-
586). Cambridge, MA: MIT Press.

Walker, D. L., & Davis, M. (2002). Quantifying fear-potentiated startle using absolute
versus proportional increase scoring methods: implications for the neurocircuitry of
fear and anxiety. Psychopharmacology, 164, 318-328.

Wehrle, T., Kaiser, S., Schmidt, S., & Scherer, K. R. (2000). Studying the dynamics of
emotional expression using synthesized facial muscle movements. Journal of
Personality and Social Psychology, 78(1), 105-119.

Wundt, W. (1897). Outlines of psychology (C. H. Judd, Trans.). New York: Gustav E.
Stechert.

Young, A. W., Aggleton, J. P., Hellawell, D. J., Johnson, M., Broks, P., & Hanley, J. R.
(1995). Face processing impairments after amygdalotomy. Brain, 118 (Pt 1), 15-24.

Young, A. W., Rowland, D., Calder, A. J., Etcoff, N. L., Seth, A., & Perrett, D. I. (1997).
Facial expression megamix: tests of dimensional and category accounts of emotion
recognition. Cognition, 63(3), 271-313.

Zeki, S. (1992). The visual image in mind and brain. Scientific American, 267(3), 68-76.

Zihl, J., von Cramon, D., & Mai, N. (1983). Selective disturbance of movement vision
after bilateral brain damage. Brain, 106 (Pt 2), 313-340.

BIOGRAPHICAL SKETCH

Utaka Springer was born in Menomonie, WI, and received his B.S. in biology from
Harvard University. After gaining research experience in cognitive neuroscience at the
McKnight Brain Institute in Gainesville, FL, he entered the doctoral program in clinical
psychology at the University of Florida, specializing in neuropsychology.